Planet Russell


Charles StrossUpcoming Attractions!

As you know by now, my next novel, Dead Lies Dreaming, comes out next week—on Tuesday the 27th in the US and Thursday the 29th in the UK (because I've got different publishers in different territories).

Signed copies can be ordered from Transreal Fiction in Edinburgh via the Hive online mail order service.

(You can also order it via Big River co and all good bookshops, but they don't stock signed copies; links: Amazon US, Amazon UK. Ebooks are available too, and I gather the audiobook—again, there's a different version in the US, from Audible, and the UK, from Hachette Digital—should be released at the same time.)

COVID-19 has put a brake on any plans I might have had to promote the book in public, but I'm doing a number of webcast events over the next few weeks. Here are the highlights:

Outpost 2020 is a virtual SF convention taking place from Friday 23rd (tomorrow!) to Sunday 25th. I'm on a discussion panel on Saturday 24th at 4pm (UK time), on the subject of "Reborn from the Apocalypse": Both history and current events teach that a Biblical-proportioned apocalypse is not necessarily confined to the realms of fiction. How can we reinvent ourselves, and more importantly, will we? (Panelists: Charlie Stross, Gabriel Partida, David D. Perlmutter. Moderator: Mike Fatum.)

Orbit Live! As part of a series of Crowdcast events, at 8pm GMT on Thursday 27th RJ Barker is going to host myself and Luke Arnold in conversation about our new books: sign up for the crowdcast here.

Reddit AmA: No book launch is complete these days without an Ask me Anything on Reddit, which in my case is booked for Tuesday 3rd, starting at 5pm, UK time (9am on the US west coast, give or take an hour—the clocks change this weekend in the UK but I'm not sure when the US catches up).

The Nürnberg Digital Festival is a community-driven festival with about 20,000 attendees in Nuremberg, discussing the future, change, and everything that comes with it. Obviously this year it's an extra-digital (i.e. online-only) festival, which has the silver lining of enabling the organizers to invite guests to connect from a long way away. Which is why I'm doing an interview/keynote on Monday November 9th at 5pm (UK time). You can find out more about the Festival here (as well as buying tickets for any or all days' events). It's titled "Are we in dystopian times?", which seems to be an ongoing theme of most of the events I'm being invited to these days, and probably gives you some idea of what my answer is likely to be ...

Anyway, that's all for now: I'll add to this post if new events show up.

Charles StrossThe Laundry Files: an updated chronology

I've been writing Laundry Files stories since 1999, and there's now about 1.4 million words in that universe. That's a lot of stuff: a typical novel these days is 100,000 words, but these books trend long, and this count includes 11 novels (of which, #10 comes out later this month) and some shorter work. It occurs to me that while some of you have been following them from the beginning, a lot of people come to them cold in the shape of one story or another.

So below the fold I'm going to explain the Laundry Files time line, the various sub-series that share the setting, and give a running order for the series—including short stories as well as novels.

(The series title, "The Laundry Files", was pinned on me by editorial fiat at a previous publisher whose policy was that any group of 3 or more connected novels had to have a common name. It wasn't my idea: my editor at the time also published Jim Butcher, and Bob—my sole protagonist at that point in the series—worked for an organization disparagingly nicknamed "the Laundry", so the inevitable happened. Using a singular series title gives the impression that it has a singular theme, which would be like calling Terry Pratchett's Discworld books "the Unseen University series". Anyway ...)

TLDR version: If you just want to know where to start reading, pick one of: The Atrocity Archives, The Rhesus Chart, The Nightmare Stacks, or Dead Lies Dreaming. These are all safe starting points for the series, that don't require prior familiarity. Other books might leave you confused if you dive straight in, so here's an exhaustive run-down of all the books and short stories.

Typographic conventions: story titles are rendered in italics (like this). Book titles are presented in boldface (thus).

Publication dates are presented like this: (pub: 2016). The year in which a story is set is presented like so: (set: 2005).

The list is sorted in story order rather than publication order.

The Atrocity Archive (set: 2002; pub: 2002-3)

  • The short novel which started it all. Originally published in an obscure Scottish SF digest-format magazine called Spectrum SF, it ran from 2002 to 2003, and introduced our protagonist Bob Howard, his (eventual) love interest Mo O'Brien, and a bunch of eccentric minor characters and tentacled horrors. It's a kinda-sorta tribute to spy thriller author Len Deighton.

The Concrete Jungle (set: 2003; pub: see below)

  • Novella, set a year after The Atrocity Archive, in which Bob is awakened in the middle of the night to go and count the concrete cows in Milton Keynes. Winner of the 2005 Hugo award for best SF/F novella.

The Atrocity Archives (set: 2002-03; pub: 2003 (hbk), 2006 (trade ppbk))

  • Start reading here! A smaller US publisher, Golden Gryphon, liked The Atrocity Archive and wanted to publish it, but considered it to be too short on its own. So The Concrete Jungle was written, and along with an afterword they were published together as a two-story collection/episodic novel, The Atrocity Archives (note the added 's' at the end). A couple of years later, Ace (part of Penguin group) picked up the US trade and mass market paperback rights and Orbit published it in the UK. (Having won a Hugo award in the meantime really didn't hurt; it's normally quite rare for a small press item such as TAA to get picked up and republished like this.)

The Jennifer Morgue (set: 2005, pub: 2007 (hbk), 2008 (trade ppbk))

  • Golden Gryphon asked for a sequel, hence the James Bond episode in what was now clearly going to be a trilogy of comedy Lovecraftian/spy books. Note that it's riffing off the Broccoli movie franchise version of Bond, not Ian Fleming's original psychopathic British government assassin. Orbit again took UK rights, while Ace picked up the paperbacks. Because I wanted to stick with the previous book's two-story format, I wrote an extra short story:

Pimpf (set: 2006, pub: collected in The Jennifer Morgue)

  • A short story set in what I think of as the Chibi-Laundry continuity; Bob ends up inside a computer running a Neverwinter Nights server (hey, this was before World of Warcraft got big). Chibi-Laundry stories are self-parodies and probably shouldn't be thought of as canonical. (Ahem: there's a big continuity blooper tucked away in this one what comes back to bite me in later books because I forgot about it.)

Down on the Farm (novelette; set: 2007, pub: 2008)

  • Novelette: Bob has to investigate strange goings-on at a care home for Laundry agents whose minds have gone. Introduces Krantzberg Syndrome, which plays a major role later in the series.

Equoid (novella; set: 2007, pub: 2013)

  • A novella set between The Jennifer Morgue and The Fuller Memorandum; Bob is married to Mo and working for Iris Carpenter. Bob learns why Unicorns are Bad News. Won the 2014 Hugo award for best SF/F novella. Also published as the hardback novella edition Equoid by Subterranean Press.

The Fuller Memorandum (set: 2008, pub: 2010 (US hbk/UK ppbk))

  • Third novel, first to be published in hardback by Ace, published in paperback in the UK by Orbit. The title is an intentional call-out to Adam Hall (aka Elleston Trevor), author of the Quiller series of spy thrillers—but it's actually an Anthony Price homage. This is where we begin to get a sense that there's an overall Laundry Files story arc, and where I realized I wasn't writing a trilogy. Didn't have a short story trailer or afterword because I flamed out while trying to come up with one before the deadline. Bob encounters skullduggery within the organization and has to get to the bottom of it before something really nasty happens: also, what and where is the misplaced "Teapot" that the KGB's London resident keeps asking him about?

Overtime (novelette; set: 2009, pub: 2009)

  • A heart-warming Christmas tale of Terror. Shortlisted for the Hugo award for best novelette, 2010.

Three Tales from the Laundry Files (ebook-only collection)

  • Collection consisting of Down on the Farm, Overtime, and Equoid, published together as an ebook.

The Apocalypse Codex (set: 2010, pub: 2012 (US hbk/UK ppbk))

  • Fourth novel, and a tribute to the Modesty Blaise comic strip and books by Peter O'Donnell. A slick televangelist is getting much too cosy with the Prime Minister, and the Laundry—as a civil service agency—is forbidden from investigating. We learn about External Assets, and Bob gets the first inkling that he's being fast-tracked for promotion. Won the Locus Award for best fantasy novel in 2013.

A Conventional Boy (set: ~2011-12, not yet written)

  • Projected interstitial novella, introducing Derek the DM (The Nightmare Stacks) and Camp Sunshine (The Delirium Brief). Not yet written.

The Rhesus Chart (set: spring 2013, pub: 2014 (US hbk/UK hbk))

  • Fifth novel, a new series starting point if you want to bypass the early novels. First of a new cycle remixing contemporary fantasy sub-genres (I got bored with British spy thriller authors). Subject: Banking, Vampires, and what happens when an agile programming team inside a merchant bank develops PHANG syndrome. First to be published in hardcover in the UK by Orbit.

  • Note that the books are now set much closer together. This is a key point: the world of the Laundry Files has now developed its own parallel and gradually diverging history as the supernatural incursions become harder to cover up. Note also that Bob is powering up (the Bob of The Atrocity Archive wouldn't exactly be able to walk into a nest of vampires and escape with only minor damage to his dignity). This is why we don't see much of Bob in the next two novels.

The Annihilation Score (set: summer/autumn 2013, pub: 2015 (US hbk/UK ppbk))

  • Sixth novel, first with a non-Bob viewpoint protagonist—it's told by Mo, his wife, and contains spoilers for The Rhesus Chart. Deals with superheroes, mid-life crises, nervous breakdowns, and the King in Yellow. We're clearly deep into ahistorical territory here as we have a dress circle box for the very last Last Night of the Proms, and Orbit's lawyers made me very carefully describe the female Home Secretary as clearly not being one of her non-fictional predecessors, not even a little bit.

Escape from Puroland (set: March-April 2014, pub: summer 2021, forthcoming)

  • Interstitial novella, explaining why Bob wasn't around in the UK during the events described in The Nightmare Stacks. He was on an overseas liaison mission, nailing down the coffin lid on one of Angleton's left-over toxic waste sites—this time, it's near Tokyo.

The Nightmare Stacks (set: March-April 2014, pub: June 2016 (US hbk/UK ppbk))

  • Seventh novel, and another series starting point if you want to dive into the most recent books in the series. Viewpoint character: Alex the PHANG. Deals with, well ... the Laundry has been so obsessed by CASE NIGHTMARE GREEN that they're almost completely taken by surprise when CASE NIGHTMARE RED happens. Implicitly marks the end of the Masquerade. Features a Manic Pixie Dream Girl and the return of Bob's Kettenkrad from The Atrocity Archive. Oh, and it also utterly destroys the major British city I grew up in, because revenge is a dish best eaten cold.

The Delirium Brief (set: May-June 2014, pub: June 2017 (US hbk/UK ppbk))

  • Eighth novel, primary viewpoint character: Bob again, but with an ensemble of other viewpoints cropping up in their own chapters. And unlike the earlier Bob books it no longer pastiches other works or genres. Deals with the aftermath of The Nightmare Stacks; opens with Bob being grilled live on Newsnight by Jeremy Paxman and goes rapidly downhill from there. (I'm guessing that if the events of the previous novel had just taken place, the BBC's leading current affairs news anchor might have deferred his retirement for a couple of months ...)

The Labyrinth Index (set: winter 2014/early 2015, pub: October 2018, (US hbk/UK ppbk))

  • Ninth novel, viewpoint character: Mhari, working for the New Management in the wake of the drastic governmental changes that took place at the end of "The Delirium Brief". The shit has well and truly hit the fan on a global scale, and the new Prime Minister holds unreasonable expectations ...

Dead Lies Dreaming (set: December 2016; pub: Oct 2020 (US hbk/UK hbk))

  • New spin-off series, new starting point! The marketing blurb describes it as "book 10 in the Laundry Files" but by the time this book is set—after CASE NIGHTMARE GREEN and the end of the main Laundry story arc (some time in 2015-16)—the Laundry no longer exists. We meet a cast of entirely new characters: civilians (with powers) living under the aegis of the New Management, ruled by his Dread Majesty, the Black Pharaoh. The start of a new trilogy, Dead Lies Dreaming riffs heavily off "Peter and Wendy", the original grimdark version of Peter Pan (before Walt Disney made him twee).

In His House (set: December 2016, pub: probably 2022)

  • Second book in the Dead Lies Dreaming trilogy: continues the story, riffs off Sweeney Todd and Mary Poppins—again, the latter was much darker than the Disney musical implies. (The book is written, but COVID-19 has done weird things to publishers' schedules and it's provisionally in the queue behind Invisible Sun, the final Empire Games book, which is due out in September 2021.)

Bones and Nightmares (set: December 2016 and summer of 1820, pub: possibly 2023)

  • Third book in the Dead Lies Dreaming trilogy: finishes the story, riffs off The Prisoner and Jane Austen: also Kingsley's The Water Babies (with Deep Ones). In development.

Further novels are planned but not definite: there need to be 1-2 more books to finish the main Laundry Files story arc with Bob et al, filling in the time line before Dead Lies Dreaming, but the Laundry is a civil service security agency and the current political madness gripping the UK makes it really hard to satirize HMG, so I'm off on a side-quest following the tribulations of Imp, Eve, Wendy, and the gang (from Dead Lies Dreaming) until I figure out how to get back to the Laundry proper.

That's all for now. I'll attempt to update this entry as I write/publish more material.

Sam VargheseAustralian sports writer’s predictions prove to be those of a false prophet

After the first match in the Bledisloe Cup series ended in a 16-all draw, Australian sports writers were on a giddy high, predicting that the dominance of the All Blacks had more or less ended and the big boys had been caught with their pants down.

Well before this hype began, at the end of the game, there was a gesture by the Australian team which showed that its mental state was still very fragile. When the final whistle blew, the ball was still live, so the referee let play proceed.

A thrilling nine minutes ensued, with first Australia, and then New Zealand, threatening to score. Strangely though, neither team thought of attempting a drop-goal to win the game. After one of the New Zealand forays, the Australians regained the ball and fly-half James O’Connor kicked it into touch, ending the game.

Now O’Connor could have continued play by running the ball from his own end. The All Blacks never took the option of ending the game when they got the ball during that nine-minute stretch. O’Connor’s gesture gave the game away: for Australia, a draw was as good as a win. It indicated the extent to which he ranked his team against the All Blacks, despite the heroics they had shown.

With that kind of mental attitude, it was only to be expected that Australia would lose at Eden Park the following week. As they did, by a 27-7 margin, at a venue where they have not beaten New Zealand since 1986.

During the week, there were several triumphal essays in the Australian press; one, by Jamie Pandaram, a senior sports writer at The Daily Telegraph, gives an insight into the type of shallow understanding that sports writers on this side of the Tasman have, and the nationalistic fervour that surrounds sport (as it does everything else).

The headline gave an indication of the bombast that was to follow, reading: “Why All Blacks are finally vulnerable.” It started off saying that while facing a beaten All Blacks team was a dangerous exercise, “there’s a different feel about 2020”.

And he cited concerns about the coaching, selection, tactics and loss of seniority in All Black ranks as factors that had contributed to what he called a decline in their ranks.

He cited the absence of four players – Kieran Read, Brodie Retallick, Sonny Bill Williams and Ryan Crotty – as making the team unable to produce those moments of inspiration for which they have become famous. And he claimed that Ian Foster, who took over as coach from Steve Hansen after the 2019 World Cup, was not the best coach among those who could be chosen.

As evidence that stress was allegedly mounting on New Zealand, Pandaram cited the complaints made by the assistant coach John Plumtree about illegal tactics employed by Australia in taking out players.

The first game was unusual in that it was not refereed by a neutral referee – the first time this has happened in a long time, mainly due to the travel issues caused by the coronavirus pandemic.

The referee, New Zealand’s Paul Williams, had to tread a difficult path; he had to ensure that his rulings could not be criticised as being partial to his own country, and at the same time he had to police Australia’s thuggery properly without accusations of bias. Having watched the match twice, I can say with confidence that Williams only erred once, in not calling Rieko Ioane for stepping on the sideline boundary when he began what ended up as a try. This was the fault of the Australian Angus Gardner, who was the linesman on the side concerned.

“It’s a common Kiwi play; turn the referees’ and public’s attention to perceived cheating by their opposition – we’ve seen them previously call out the Wallabies’ scrumming and breakdown play – to take the spotlight away from their own,” wrote Pandaram, completely forgetting that this was exactly what former Wallabies coach Michael Cheika did after every game.

He opined that Foster would be under “intense scrutiny” during the second game, as many people in New Zealand felt that the job of head coach should have gone to Scott Robertson instead, who has taken the Crusaders to four Super Rugby titles in his first four years as their coach.

And Pandaram went on and on, outlining what he perceived to be issues with the team, about playing this player and that in this position or that.

I haven’t seen anything he wrote after the second game when Australia was competitive for just one half, and unable to score in the second half. All those perceived “problems” he pointed out were gone.

One crucial factor that he forgot was that this was the first game of the year for both teams. Normally, Australia and New Zealand each play two or three Tests before the Bledisloe Cup and Rugby Championship – the annual competition against each other, South Africa and Argentina – begins. Both teams were quite rusty.

In 2015, New Zealand lost more talent after the World Cup than they did in 2019; that time Daniel Carter, Richie McCaw, Ma’a Nonu, Conrad Smith, Tony Woodcock and Keven Mealamu all retired from international rugby. But the side picked up and carried on.

A lot of Pandaram’s moaning comes out of nationalism; Australians are the most one-sided sports writers I have seen. When one is shown up like this, they tend to lie quiet until the public forgets. As Pandaram is doing now.

Planet DebianSandro Tosi: Multiple git configurations depending on the repository path

For my work on Debian, i want to use my Debian email address, while for my personal projects i want to use my personal address.

One way to change the git config value is to run git config --local in every repo, but that's tedious, error-prone, and doesn't scale very well with many repositories (and the chances of forgetting to set the right one on a new repo are ~100%).

The solution is to use the git-config ability to include extra configuration files, based on the repo path, by using includeIf:

Content of ~/.gitconfig:

[user]
    name = Sandro Tosi
    email = <personal.address>

[includeIf "gitdir:~/deb/"]
    path = ~/.gitconfig-deb

Every time the git path is in ~/deb/ (which is where i have all Debian repos) the file ~/.gitconfig-deb will be included; its content:

[user]
    name = Sandro Tosi
    email = <debian.address>

That results in my personal address being used on all repos outside of ~/deb/, while my Debian address is used for everything under it. This approach can be extended to any other git configuration value.
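A quick way to convince yourself the resolution works is a throwaway shell sketch like the following (my own illustration; it builds a scratch HOME so nothing real is touched, and the example addresses are placeholders for the real ones):

```shell
# Throwaway demo of includeIf resolution; addresses are placeholders.
tmp=$(mktemp -d)
tmp=$(cd "$tmp" && pwd -P)   # resolve symlinks so gitdir matching works
export HOME="$tmp"

cat > "$HOME/.gitconfig" <<'EOF'
[user]
    name = Sandro Tosi
    email = personal@example.com
[includeIf "gitdir:~/deb/"]
    path = ~/.gitconfig-deb
EOF

cat > "$HOME/.gitconfig-deb" <<'EOF'
[user]
    email = debian@example.com
EOF

mkdir -p "$HOME/deb/pkg" "$HOME/src/proj"
git -C "$HOME/deb/pkg" init -q
git -C "$HOME/src/proj" init -q

git -C "$HOME/deb/pkg" config user.email    # debian@example.com
git -C "$HOME/src/proj" config user.email   # personal@example.com
```

Note that a trailing slash in the gitdir pattern ("gitdir:~/deb/") makes git match every repository anywhere under that directory.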

Planet DebianJelmer Vernooij: Debian Janitor: Hosters used by Debian packages

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The Janitor knows how to talk to different hosting platforms. For each hosting platform, it needs to support the platform-specific API for creating and managing merge proposals. For each hoster it also needs to have credentials.

At the moment, it supports the GitHub API, Launchpad API and GitLab API. Both GitHub and Launchpad have only a single instance; the GitLab instances it supports are and

This provides coverage for the vast majority of Debian packages that can be accessed using Git. More than 75% of all packages are available on salsa - although in some cases, the Vcs-Git header has not yet been updated.

Of the other 25%, the majority either does not declare where it is hosted using a Vcs-* header (10.5%), or have not yet migrated from alioth to another hosting platform (9.7%). A further 2.3% are hosted somewhere on GitHub (2%), Launchpad (0.18%) or (0.15%), in many cases in the same repository as the upstream code.

The remaining 1.6% are hosted on many other hosts, primarily people’s personal servers (which usually don’t have an API for creating pull requests).

Packages per hoster
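As a quick sanity check, the shares quoted above for the non-salsa quarter are mutually consistent (figures copied from the preceding paragraphs):

```python
# Shares (percent) of Debian packages not hosted on salsa, as quoted above.
non_salsa_shares = {
    "no Vcs-* header": 10.5,
    "still on alioth": 9.7,
    "GitHub, Launchpad and other known hosts": 2.3,
    "personal servers and miscellaneous hosts": 1.6,
}

total = sum(non_salsa_shares.values())
print(f"{total:.1f}% of packages are not on salsa")  # 24.1% — roughly the "other 25%"
```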

Outdated Vcs-* headers

It is possible that the 20% of packages that do not have a Vcs-* header, or whose Vcs-* header says they are on alioth, are actually hosted elsewhere. However, it is hard to know where they are until a version with an updated Vcs-Git header is uploaded.

The Janitor primarily relies on vcswatch to find the correct locations of repositories. vcswatch looks at Vcs-* headers but has its own heuristics as well. For about 2,000 packages (6%) that still have Vcs-* headers that point to alioth, vcswatch successfully finds their new home on salsa.

Merge Proposals by Hoster

These proportions are also visible in the number of pull requests created by the Janitor on various hosters. The vast majority so far has been created on Salsa.

Merge proposal statistics per hoster (columns: Open, Merged & Applied, Closed)

In this graph, “Open” means that the pull request has been created but likely nobody has looked at it yet. Merged means that the pull request has been marked as merged on the hoster, and applied means that the changes have ended up in the packaging branch but via a different route (e.g. cherry-picked or manually applied). Closed means that the pull request was closed without the changes being incorporated.

Note that this excludes ~5,600 direct pushes, all of which were to salsa-hosted repositories.

See also:

For more information about the Janitor's lintian-fixes efforts, see the landing page.


Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.3: New features and much more docs

A good month after the initial two releases, we are thrilled to announce release 0.0.3 of RcppSpdlog. This brings us release 1.8.1 of spdlog as well as a few local changes (more below).

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

This version of RcppSpdlog brings a new top-level function setLogLevel to control what events get logged, updates the main example to show this and to also make the R-aware logger the default logger, and adds both an extended vignette showing several key features and a new (external) package documentation site.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.3 (2020-10-23)

  • New function setLogLevel with R accessor in exampleRsink example

  • Updated exampleRsink to use default logger instance

  • Upgraded to upstream release 1.8.1 which contains the finalised upstream switch to REprintf() if R compilation is detected

  • Added new vignette with extensive usage examples, added compile-time logging switch example

  • A package documentation website was added

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBirger Schacht: An Analysis of 5 Million OpenPGP Keys

In July I finished my Bachelor’s Degree in IT Security at the University of Applied Sciences in St. Poelten. During the studies I did some elective courses, one of which was about Data Analysis using Python, Pandas and Jupyter Notebooks. I found it very interesting to do calculations on different data sets and to visualize them. Towards the end of the Bachelor I had to find a topic for my Bachelor Thesis and as a long time user of OpenPGP I thought it would be interesting to do an analysis of the collection of OpenPGP keys that are available on the keyservers of the SKS keyserver network.

So in June 2019 I fetched a copy of one of the key dumps from one of the keyservers (some keyservers publish these copies of their key database so people who want to join the SKS keyserver network can do an initial import). At that time the copy of the key database contained 5,499,675 keys and was around 12GB. Using the hockeypuck keyserver software I imported the keys into a PostgreSQL database. Hockeypuck uses a table called keys to store the keys, and in there the column doc stores the OpenPGP keys in JSON format (always with a data field containing the original unparsed data).

For the thesis I split the analysis in three parts, first looking at the Public Key packets, then analysing the User ID packets and finally studying the Signature Packets. To analyse the respective packets I used SQL to export the data to CSV files and then used the pandas read_csv method to create a dataframe of the values. In a couple of cases I did some parsing before converting to a DataFrame to make the analysis step faster. The parsing was done using the pgpdump python library.

Together with my advisor I decided to submit the thesis to a journal, so we revised and compressed the whole paper, and the outcome was published in the Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA).

I think the work gives some valuable insight into the development of the use of OpenPGP in the last 30 years. Looking at the public key packets we were able to compare the different public key algorithms and for example visualize how DSA was the most used algorithm until around 2010, when it was replaced by RSA. When looking at the less used algorithms a trend towards ECC-based cryptography is visible.

What we also noticed was an increase of RSA keys with algorithm ID 3 (RSA Sign-Only), which are deprecated. When we took a deeper look at those keys we realized that most of those keys used a specific User ID string in the User ID packets which allowed us to attribute those keys to two software projects both using the Bouncy Castle Java Cryptographic API (resp. the Spongy Castle version for Android). We also stumbled over a tutorial on how to create RSA keys with Bouncycastle which also describes how to create RSA keys with code that produces RSA Sign-Only keys. In one of those projects, this was then fixed.

By looking at the User ID packets we did some statistics about the most used email providers among OpenPGP users. One domain stood out, because it is not the domain of an email provider: it is used in around 45,000 keys. Tellfinder is a Big Data analysis software, and the UID of all but two of those keys is TellFinder Page Archiver- Signing Key <>.

We also looked at the comments used in OpenPGP User ID fields. In 2013 Daniel Kahn Gillmor published a blog post titled OpenPGP User ID Comments considered harmful in which he pointed out that most of the comments in the User ID field of OpenPGP keys duplicate information that is already present somewhere in the User ID or the key itself. In our dataset 3,133 comments were exactly the same as the name, 3,346 were the same as the domain, and 18,246 comments were similar to the local part of the email address.
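That duplicate-comment check can be sketched like so (my own illustrative reconstruction, not the thesis code; it assumes UIDs of the conventional `Name (comment) <email>` shape, and approximates "similar" by a case-insensitive match):

```python
import re

# Conventional OpenPGP UID shape: "Name (comment) <email>"
UID_RE = re.compile(r"^(?P<name>[^(<]*?)\s*\((?P<comment>[^)]*)\)\s*<(?P<email>[^>]+)>$")

def classify_comment(uid):
    """Return which part of the UID the comment duplicates, if any."""
    m = UID_RE.match(uid)
    if m is None:
        return None  # no comment present, or unconventional UID shape
    name, comment, email = m.group("name"), m.group("comment"), m.group("email")
    local, _, domain = email.partition("@")
    if comment == name:
        return "same as name"
    if comment == domain:
        return "same as domain"
    if comment.lower() == local.lower():
        return "similar to local part"
    return "other"

print(classify_comment("John Doe (John Doe) <jd@example.com>"))   # same as name
print(classify_comment("Jane (example.com) <jane@example.com>"))  # same as domain
```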

Last but not least we looked at the signature subpackets and the development of some of the preferences (Preferred Symmetric Algorithm, Preferred Hash Algorithm) that are being published using signature packets.

Analysing this huge dataset of cryptographic keys of the last 20 to 30 years was very interesting and I learned a lot about the history of PGP resp. OpenPGP and the evolution of cryptography overall. I think it would be interesting to look at even more properties of OpenPGP keys and I also think it would be valuable for the OpenPGP ecosystem if these kinds of analyses could be done regularly. An approach like Tor Metrics could lead to interesting findings and could also help to back decisions regarding future developments of the OpenPGP standard.

Planet DebianEnrico Zini: Hetzner build machine

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

Building Qt5 takes a long time. The build server I was using had CPUs and RAM, but was very slow on I/O. I was very frustrated by that, and I started evaluating alternatives. I ended up setting up scripts to automatically provision a throwaway cloud server at Hetzner.

Initial setup

I got an API key from my customer's Hetzner account.

I installed hcloud-cli, currently only in testing and unstable:

apt install hcloud-cli

Then I configured hcloud with the API key:

hcloud context create

Spin up

I wrote a quick and dirty script to spin up a new machine, which grew a bit with little tweaks:


# Create the server
hcloud server create --name buildqt --ssh-key … --start-after-create \
                     --type cpx51 --image debian-10 --datacenter …

# Query server IP
IP="$(hcloud server describe buildqt -o json | jq -r .public_net.ipv4.ip)"

# Update ansible host file
echo "buildqt ansible_user=root ansible_host=$IP" > hosts

# Remove old host key
ssh-keygen -f ~/.ssh/known_hosts -R "$IP"

# Update login script
echo "#!/bin/sh" > login
echo "ssh root@$IP" >> login
chmod 0755 login

I picked a datacenter in the same location as where we have other servers, to get quicker data transfers.

I like that CLI tools have JSON output that I can cleanly pick at with jq. Sadly, my ISP doesn't do IPv6 yet.

Since the server just got regenerated, I remove a possibly cached host key.

Provisioning the machine

One git server I need is behind HTTP authentication. Here's a quick hack to pass the relevant .netrc credentials to ansible before provisioning:


import subprocess
import netrc
import tempfile
import json

login, account, password = netrc.netrc().authenticators("…")

with tempfile.NamedTemporaryFile(mode="wt", suffix=".json") as fd:
    # Write the credentials as extra-vars for ansible
    json.dump({
        "repo_user": login,
        "repo_password": password,
    }, fd)
    fd.flush()
    subprocess.run([
        "ansible-playbook",
        "-i", "hosts",
        "-l", "buildqt",
        "--extra-vars", f"@{fd.name}",
        "provision.yml",  # playbook file name reconstructed; not preserved in the original
        ], check=True)

And here's the ansible playbook:

#!/usr/bin/env ansible-playbook

- name: Install and configure buildqt
  hosts: all
  tasks:
   - name: Update apt cache
     apt:
        update_cache: yes
        cache_valid_time: 86400

   - name: Create build user
     user:
        name: build
        comment: QT5 Build User
        shell: /bin/bash

   - name: Create sources directory
     become: yes
     become_user: build
     file:
        path: ~/sources
        state: directory
        mode: 0755

   - name: Download sources
     become: yes
     become_user: build
     get_url:
        url: "https://…/{{item}}"
        dest: "~/sources/{{item}}"
        mode: 0644
     with_items:
      - "qt-everywhere-src-5.15.1.tar.xz"
      - "qt-creator-enterprise-src-4.13.2.tar.gz"

   - name: Populate home directory
     become: yes
     become_user: build
     copy:
        src: build
        dest: ~/
        mode: preserve

   - name: Write .netrc
     become: yes
     become_user: build
     copy:
        dest: ~/.netrc
        mode: 0600
        content: |
           machine …
           login {{repo_user}}
           password {{repo_password}}

   - name: Write .screenrc
     become: yes
     become_user: build
     copy:
        dest: ~/.screenrc
        mode: 0644
        content: |
           hardstatus alwayslastline
           hardstatus string '%{= cw}%-Lw%{= KW}%50>%n%f* %t%{= cw}%+Lw%< %{= kK}%-=%D %Y-%m-%d %c%{-}'
           startup_message off
           defutf8 on
           defscrollback 10240

   - name: Install base packages
     apt:
        name: git,mc,ncdu,neovim,eatmydata,devscripts,equivs,screen
        state: present

   - name: Clone git repo
     become: yes
     become_user: build
     git:
        repo: https://…@…/….git
        dest: ~/…

   - name: Copy Qt license
     become: yes
     become_user: build
     copy:
        src: qt-license.txt
        dest: ~/.qt-license
        mode: 0600
Now everything is ready for a 16-core, 32 GB RAM build on SSD storage.

Tear down

When done:

hcloud server delete buildqt

The whole spin up plus provisioning takes around a minute, so I can do it when I start a work day, and take it down at the end. The build machine wasn't that expensive to begin with, and this way it will even be billed by the hour.

A first try on a CPX51 machine has just built the full Qt5 Everywhere Enterprise including QtWebEngine and all its frills, for amd64, in under 1 hour and 40 minutes.

Cryptogram New Report on Police Decryption Capabilities

There is a new report on police decryption capabilities: specifically, mobile device forensic tools (MDFTs). Short summary: it’s not just the FBI that can do it.

This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.

Lots of details in the report. And in this news article:

At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does.


The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan’s Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

Planet DebianMolly de Blanc: Endorsements

Transparency is essential to trusting a technology. Through transparency we can understand what we’re using and build trust. When we know what is actually going on, what processes are occurring and how it is made, we are able to decide whether interacting with it is something we actually want, and we’re able to trust it and use it with confidence.

This transparency could mean many things, though it most frequently refers to the technology itself: the code or, in the case of hardware, the designs. We could also apply it to the overall architecture of a system. We could think about the decision making, practices, and policies of whomever is designing and/or making the technology. These are all valuable in some of the same ways, including that they allow us to make a conscious choice about what we are supporting.

When we choose to use a piece of technology, we are supporting those who produce it. This could be because we are directly paying for it, however our support is not limited to direct financial contributions. In some cases this is because of things hidden within a technology: tracking mechanisms or backdoors that could allow companies or governments access to what we’re doing. When creating different types of files on a computer, these files can contain metadata that says what software was used to make it. This is an implicit endorsement, and you can also explicitly endorse a technology by talking about that or how you use it. In this, you have a right (not just a duty) to be aware of what you’re supporting. This includes, for example, organizational practices and whether a given company relies on abusive labor policies, indentured servitude, or slave labor.

Endorsements inspire others to choose a piece of technology. Most of my technology is something I investigate purely for functionality, and the pieces I investigate are based on what people I know use. The people I trust in these cases are more inclined than most to do this kind of research, to perform technical interrogations, and to be aware of what producers of technology are up to.

This is how technology spreads and becomes common or the standard choice. In one sense, we all have the responsibility (one I am shirking) to investigate our technologies before we choose them. However, we must acknowledge that not everyone has the resources for this – the time, the skills, the knowledge, and therein endorsements become even more important to recognize.

Those producing a technology have the responsibility of making all of these angles something one could investigate. Understanding cannot only be the realm of experts. It should not require an extensive background in research and investigative journalism to find out whether a company punishes employees who try to unionize or pay non-living wages. Instead, these must be easy activities to carry out. It should be standard for a company (or other technology producer) to be open and share with people using their technology what makes them function. It should be considered shameful and shady to not do so. Not only does this empower those making choices about what technologies to use, but it empowers others down the line, who rely on those choices. It also respects the people involved in the processes of making these technologies. By acknowledging their role in bringing our tools to life, we are respecting their labor. By holding companies accountable for their practices and policies, we are respecting their lives.

Worse Than FailureError'd: Errors by the Pound

"I can understand selling swiss cheese by the slice, but copier paper by the pound?" Dave P. wrote.


Amanda R. writes, "Ok, that's fine, but can the 1% correctly spell 'people'?"


"In this form, language is quite variable as is when you are able to cancel your reservation ...which are in fact, actual variables," wrote Jean-Pierre M.


Barry M. wrote, "Hey, Royal Caribbean, you know what? I'll take the win-win: total control AND save $7!"


"Oh wow! The secret on how to write good articles is out!" writes Barry L.


[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityThe Now-Defunct Firms Behind 8chan, QAnon

Some of the world’s largest Internet firms have taken steps to crack down on disinformation spread by QAnon conspiracy theorists and the hate-filled anonymous message board 8chan. But according to a California-based security researcher, those seeking to de-platform these communities may have overlooked a simple legal solution to that end: Both the Nevada-based web hosting company owned by 8chan’s current figurehead and the California firm that provides its sole connection to the Internet are defunct businesses in the eyes of their respective state regulators.

In practical terms, what this means is that the legal contracts which granted these companies temporary control over large swaths of Internet address space are now null and void, and American Internet regulators would be well within their rights to cancel those contracts and reclaim the space.

The IP address ranges in the upper-left portion of this map of QAnon and 8kun-related sites — some 21,000 IP addresses beginning in “206.” and “207.” — are assigned to N.T. Technology Inc. Image source:

That idea was floated by Ron Guilmette, a longtime anti-spam crusader who recently turned his attention to disrupting the online presence of QAnon and 8chan (recently renamed “8kun”).

On Sunday, 8chan and a host of other sites related to QAnon conspiracy theories were briefly knocked offline after Guilmette called 8chan’s anti-DDoS provider and convinced them to stop protecting the site from crippling online attacks (8Chan is now protected by an anti-DDoS provider in St. Petersburg, Russia).

The public face of 8chan is Jim Watkins, a pig farmer in the Philippines who many experts believe is also the person behind the shadowy persona of “Q” at the center of the conspiracy theory movement.

Watkins owns and operates a Reno, Nev.-based hosting firm called N.T. Technology Inc. That company has a legal contract with the American Registry for Internet Numbers (ARIN), the non-profit which administers IP addresses for entities based in North America.

ARIN’s contract with N.T. Technology gives the latter the right to use more than 21,500 IP addresses. But as Guilmette discovered recently, N.T. Technology is listed in Nevada Secretary of State records as under an “administrative hold,” which according to Nevada statute is a “terminated” status indicator meaning the company no longer has the right to transact business in the state.

N.T. Technology’s listing in the Nevada Secretary of State records. Click to Enlarge.

The same is true for Centauri Communications, a Fremont, Calif.-based Internet Service Provider that serves as N.T. Technology’s colocation provider and sole connection to the larger Internet. Centauri was granted more than 4,000 IPv4 addresses by ARIN more than a decade ago.

According to the California Secretary of State, Centauri’s status as a business in the state is “suspended.” It appears that Centauri hasn’t filed any business records with the state since 2009, and the state subsequently suspended the company’s license to do business in Aug. 2012. Separately, the California State Franchise Tax Board (FTB) suspended this company as of April 1, 2014.

Centauri Communications’ listing with the California Secretary of State’s office.

Neither Centauri Communications nor N.T. Technology responded to repeated requests for comment.

KrebsOnSecurity shared Guilmette’s findings with ARIN, which said it would investigate the matter.

“ARIN has received a fraud report from you and is evaluating it,” a spokesperson for ARIN said. “We do not comment on such reports publicly.”

Guilmette said apart from reclaiming the Internet address space from Centauri and NT Technology, ARIN could simply remove each company’s listings from the global WHOIS routing records. Such a move, he said, would likely result in most ISPs blocking access to those IP addresses.

“If ARIN were to remove these records from the WHOIS database, it would serve to de-legitimize the use of these IP blocks by the parties involved,” he said. “And globally, it would make it more difficult for the parties to find people willing to route packets to and from those blocks of addresses.”

Planet DebianBastian Blank: Salsa updated to GitLab 13.5

Today, GitLab released the version 13.5 with several new features. Also Salsa got some changes applied to it.

GitLab 13.5

GitLab 13.5 includes several new features. See the upstream release post for a full list.

Shared runner builds on larger instances

It's been way over two years since we started to use Google Compute Engine (GCE) for Salsa. Since then, all the jobs running on the shared runners run within a n1-standard-1 instance, providing a fresh set of one vCPU and 3.75GB of RAM for each and every build.

GCE supports several new instance types, featuring better and faster CPUs, including current AMD EPYCs. However, as it turns out, GCE does not support any single-vCPU instances for those types. So jobs will run on n2d-standard-2 instances for the time being, providing two vCPUs and 8GB of RAM.

Builds run with IPv6 enabled

All builds run with IPv6 enabled in the Docker environment. This means the lo network device got the IPv6 loopback address ::1 assigned. So tests that need minimal IPv6 support can succeed. It however does not include any external IPv6 connectivity.
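For illustration (this sketch is not from the post), "minimal IPv6 support" of the kind such tests rely on boils down to being able to bind a socket to the IPv6 loopback address, which can be checked like this:

```python
# Check whether the IPv6 loopback (::1) is usable inside a build job,
# without needing any external IPv6 connectivity.
import socket

def ipv6_loopback_available() -> bool:
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::1", 0))  # bind to an ephemeral port on the IPv6 loopback
            return True
    except OSError:
        return False

print(ipv6_loopback_available())
```

A test suite that only needs the loopback will pass in such an environment, while anything that tries to reach an external IPv6 address will still fail.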

Charles StrossAll Glory to the New Management!

Dead Lies Dreaming - UK cover

Today is September 27th, 2020. On October 27th, Dead Lies Dreaming will be published in the USA and Canada: the British edition drops on October 29th. (Yes, there will be audio editions too, via the usual outlets.)

This book is being marketed as the tenth Laundry Files novel. That's not exactly true, though it's not entirely wrong, either: the tenth Laundry book, about the continuing tribulations of Bob Howard and his co-workers, hasn't been written yet. (Bob is a civil servant who by implication deals with political movers and shakers, and politics has turned so batshit crazy in the past three years that I just can't go there right now.)

There is a novella about Bob coming next summer. It's titled Escape from Puroland and it will be published as an ebook and hardcover in the USA. (No UK publication is scheduled as yet, but we're working on it.) I've got one more novella planned, about Derek the DM, and then either one or two final books: I'm not certain how many it will take to wrap the main story arc yet, but rest assured that the tale of SOE's Q-Division, the Laundry, reaches its conclusion some time in 2015. Also rest assured that at least one of our protagonists survives ... as does the New Management.

All Glory to the Black Pharaoh! Long may he rule over this spectred isle!

(But what's this book about?)

Dead Lies Dreaming - US cover

Dead Lies Dreaming is the first book in a project I dreamed up in (our world's) 2017, with the working title Tales of the New Management. It came about due to an unhappy incident: I found out the hard way that writing productively while one of your parents is dying is rather difficult. The first time it happened, it took down a promising space opera project. I plan to pick it up and re-do it next year, but it was the kind of learning experience I could happily have done without. The second time it happened, I had to stop work on Invisible Sun, the third and final Empire Games novel—I just couldn't get into the right head-space. (Empire Games is now written and in the hands of the production folks at Tor. It will almost certainly be published next September, if the publishing industry survives the catastrophe novel we're all living through right now.)

Anyway, I was unable to work on a project with a fixed deadline, but I couldn't not write: so I gave myself license to doodle therapeutically. The therapeutic doodles somehow colonized the abandoned first third of a magical realist novel I pitched in 2014, and turned into an unexpected attack novel titled Lost Boys. (It was retitled Dead Lies Dreaming because a cult comedy movie from 1987 got remade for TV in 2020—unless you're a major bestseller you do not want your book title to clash with an unrelated movie—but it's still Lost Boys in my headcanon.)

Lost Boys—that is, Dead Lies Dreaming—riffs heavily off Peter and Wendy, the original taproot of Peter Pan, a stage play and novel by J. M. Barrie that predates the more familiar, twee, animated Disney version of Peter Pan from 1953 by some decades. (Actually Peter and Wendy recycled Barrie's character from an earlier work, The Little White Bird, from 1902, but let's not get into the J. M. Barrie arcana at this point.) Peter and Wendy can be downloaded from Project Gutenberg here. And if you only know Pan from Disney, you're in for a shock.

Barrie was writing in an era when antibiotics hadn't been discovered, and far fewer vaccines were available for childhood diseases. Almost 20% of children died before reaching their fifth birthday, and this was a huge improvement over the earlier decades of the 19th century: parents expected some of their babies to die, and furthermore, had to explain infant deaths to toddlers and pre-tweens. Disney's Peter is a child of the carefree first flowering of the antibiotic age, and thereby de-fanged, but the original Peter Pan isn't a twee fairy-like escapist fantasy. He's a narcissistic monster, a kidnapper and serial killer of infants who is so far detached from reality that his own shadow can't keep up. Barrie's story is a metaphor designed to introduce toddlers to the horror of a sibling's death. And I was looking at it in this light when I realized, "hey, what if Peter survived the teind of infant mortality, only to grow up under the dictatorship of the New Management?"

This led me down all sorts of rabbit holes, only some of which are explored in Dead Lies Dreaming. The nerdish world-building impulse took over: it turns out that civilian life under the rule of N'yar lat-Hotep, the Black Pharaoh (in his current incarnation as Fabian Everyman MP), is harrowing and gruesome in its own right—there's a Tzompantli on Marble Arch: indications that Lovecraft's Elder Gods were worshipped under other names by other cultures: oligarchs and private equity funds employ private armies: and Brexit is still happening—but nevertheless, ordinary life goes on. There are jobs for cycle couriers, administrative assistants, and ex-detective constables-turned-security guards. People still need supermarkets and high street banks and toy shops. The displays of severed heads on the traffic cameras on the M25 don't stop drivers trying to speed. Boys who never grew up are still looking for a purpose in life, at risk of their necks, while their big sisters try to save them. And so on.

Dead Lies Dreaming is the first of the Tales of the New Management, which are being positioned as a continuation of the Laundry Files (because Marketing). There will be more. A second novel, In His House, already exists in first draft. It's a continuation of the story, remixed with Sweeney Todd and Mary Poppins—who in the original form is, like Peter Pan, much more sinister than the Disney whitewash suggests. A third novel, Bones and Nightmares, is planned. (However, I can't give you a publication date, other than to say that In His House can't be published before late 2022: COVID-19 has royally screwed up publishers' timetables.)

Anyway, you probably realized that instead of riffing off classic British spy thrillers or urban fantasy tropes, I'm now perverting beloved childhood icons for my own nefarious purposes—and I'm having a gas. Let's just hope that the December of 2016 in which Dead Lies Dreaming is set doesn't look impossibly utopian and optimistic by the time we get to the looming and very real December of 2020! I really hate it when reality front-runs my horror novels ...

Planet DebianVincent Fourmond: QSoas tips and tricks: generating smooth curves from a fit

Often, one would want to generate smooth data from a fit over a small number of data points. As an example, take the data in the following file. It contains (fake) experimental data points that obey Michaelis-Menten kinetics: $$v = \frac{v_m}{1 + K_m/s}$$ in which \(v\) is the measured rate (the y values of the data), \(s\) the concentration of substrate (the x values of the data), \(v_m\) the maximal rate and \(K_m\) the Michaelis constant. To fit this equation to the data, just use the fit-arb fit:
QSoas> l michaelis.dat
QSoas> fit-arb vm/(1+km/x)
After running the fit, the window should look like this:
Now, with the fit, we have reasonable values for \(v_m\) (vm) and \(K_m\) (km). But, for publication, one would want to generate a "smooth" curve going through the points... Saving the curve from "Data.../Save all" doesn't help, since the data has as many points as the original data and looks very "jaggy" (like on the screenshot above)... So one needs a curve with more data points.

Maybe the most natural solution is simply to use generate-buffer together with apply-formula using the formula and the values of km and vm obtained from the fit, like:

QSoas> generate-buffer 0 20
QSoas> apply-formula y=3.51742/(1+3.69767/x)
By default, generate-buffer generates 1000 evenly spaced x values, but you can change their number using the /samples option. The two above commands can be combined into just one call to generate-buffer:
QSoas> generate-buffer 0 20 3.51742/(1+3.69767/x)
This works, but it is quite cumbersome and it is not going to work well for complex formulas or the results of differential equations or kinetic systems...
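To make the dense-sampling idea concrete, here is a sketch in plain Python of what the generate-buffer + apply-formula combination computes, using the fitted values quoted above (QSoas itself does all of this internally):

```python
# Sample the fitted Michaelis-Menten curve on a dense, evenly spaced grid.
vm, km = 3.51742, 3.69767   # fitted parameters from the text
n = 1000                    # generate-buffer's default number of samples

# Evenly spaced x values in (0, 20]; x = 0 is skipped to avoid dividing by zero.
xs = [20 * (i + 1) / n for i in range(n)]
vs = [vm / (1 + km / x) for x in xs]   # v = vm / (1 + km/x)
```

The resulting curve rises monotonically and saturates towards vm, which is the smooth line one would overlay on the experimental points.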

This is why to each fit- command corresponds a sim- command that computes the result of the fit using a "saved parameters" file (here, michaelis.params, but you can also save it yourself) and buffers as "models" for X values:

QSoas> generate-buffer 0 20
QSoas> sim-arb vm/(1+km/x) michaelis.params 0
This strategy works with every single fit ! As an added benefit, you even get the fit parameters as meta-data, which are displayed by the show command:
QSoas> show 0
Dataset generated_fit_arb.dat: 2 cols, 1000 rows, 1 segments, #0
Meta-data:
	commands =	 sim-arb vm/(1+km/x) michaelis.params 0
	fit =	 arb (formula: vm/(1+km/x))
	km =	 3.69767
	vm =	 3.5174
They also get saved as comments if you save the data.

Important note: the sim-arb command will be available only in the 3.0 release, although you can already enjoy it if you use the github version.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

Planet DebianSteinar H. Gunderson: plocate in testing

plocate hit testing today, so it's officially on its way to bullseye :-) I'd love to add a backport to stable, but bpo policy says only to backport packages with a “notable userbase”, and I guess 19 installations in popcon isn't that :-) It's also hit Arch Linux, obviously Ubuntu universe, and seemingly also other distributions like Manjaro. No Fedora yet, but hopefully, some Fedora maintainer will pick it up. :-)

Also, pabs pointed out another possible use case, although this is just a proof-of-concept:

pannekake:~/dev/plocate/obj> time apt-file search bin/updatedb                 
locate: /usr/bin/updatedb.findutils       
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/
apt-file search bin/updatedb  1,19s user 0,58s system 163% cpu 1,083 total

pannekake:~/dev/plocate/obj> time ./plocate -d apt-file.plocate.db bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/
./plocate -d apt-file.plocate.db bin/updatedb  0,00s user 0,01s system 79% cpu 0,012 total

Things will probably be quieting down now; there's just not that many more logical features to add.

Worse Than FailureCodeSOD: Query Elegance

It’s generally hard to do worse than a SQL injection vulnerability. Data access is fundamental to pretty much every application, and every programming environment has some set of rich tools that make it easy to write powerful, flexible queries without leaving yourself open to SQL injection attacks.

And yet, and yet, they’re practically a standard feature of bad code. I suppose that’s what makes it bad code.

Gidget W inherited a PHP application which, unsurprisingly, is rife with SQL injection vulnerabilities. But, unusually, it doesn’t leverage string concatenation to get there. Gidget’s predecessor threw a little twist on it.

$fields = ",, UNIX_TIMESTAMP( as stamp, ";
$fields .= "t2.idT1, t2.otherDate, t2.otherId";
$join = "table1 as t1 join table2 as t2 on";
$where = "where t1.lastModified > $val && t2.lastModified = '$val2'";
$query = "select $field from $join $where";

This pattern appears all through the code. Because it leverages string interpolation, the same core structure shows up again and again, almost copy/pasted, with one line repeated each time.

$query = "select $field from $join $where";

What goes into $field and $join and $where may change each time, but "select $field from $join $where" is eternal, unchanging, and omnipresent. Every database query is constructed this way.

It’s downright elegant, in its badness. It simultaneously shows an understanding of how to break up a pattern into reusable code, but also no understanding of why all of this is a bad idea.

But we shouldn’t let that distract us from the little nuances of the specific query that highlight more WTFs.

t1.lastModified > $val && t2.lastModified = '$val2'

lastModified in both of these tables is a date, as one would expect. Which raises the question: why does one of these conditions get quotes and why does the other one not? It implies that $val probably has the quotes baked in?

Gidget also asks: “Why is the WHERE keyword part of the $where variable instead of inline in the query, but that isn’t the case for SELECT or FROM?”

That, at least, I can answer. Not every query has a filter condition. Since you can’t have WHERE followed by nothing, just make the $where variable contain that.

See? Elegant in its badness.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


LongNowThe Data of Long-lived Institutions

The following transcript has been edited for length and clarity. 

I want to lead you through some of the research that I’ve been doing on a meta-level around long-lived institutions, as well as some observations of the ways various systems have lasted for hundreds or thousands of years. 

Long Now as a Long-lived Institution

This is one of the early projects I worked with Stewart Brand on at Long Now. We were trying to define our problem space and explore the ways we think on different timescales. Generally, companies are working in the “nowadays,” although that’s been shortening to some extent, with more quarterly thinking than decade-level thinking.

It was Peter Schwartz who suggested this 10,000 year timeframe. Danny Hillis’ original idea for what would ultimately become The 10,000 Year Clock was that it would be a Millennium Clock: it would tick once a year, bong once a century, and the cuckoo would come out once a millennium. He didn’t really have an end date. 

We use the 10,000 year time frame to orient our efforts at Long Now because global civilization arose when the last glacial period ended 10,000 years ago. It was only then, around 8,000 BC, that we had the emergence of agriculture and the first cities. If we can look back that far, we should be able to look forward that far. Thinking about ourselves as in the middle of a 20,000 year story is very different than thinking about ourselves as at the end of a 10,000 year story. 

This pace layers diagram is the very first thing I worked on at Long Now. The notion of pace layers came out of a discussion between Stewart and Long Now co-founder Brian Eno. They were trying to tease apart these layers of human time. 

Institutions can be mapped across the pace layers diagram as well. Take Apple Computer, for example. They’re coming out with new iPhones every six months, which is the fashion layer. The commerce layer is Apple selling these devices. The infrastructure layer is the cell phone networks and chip fabs that it’s all built on. The governance layer—and note that it is governance, not government; they’re mostly working with governments, but they also have to work with general governing systems. Some of these companies are hitting walls against different types of governments who have different ideas of privacy, different ideas of commercialization, and they’re now having to shape their companies around that. And then obviously, culture is moving slower underneath all of this, but Apple is starting to affect culture. And then there’s the last pace layer, nature, moving the slowest. At some point, Apple is going to have to come to terms with the level of environmental damage and problems that are happening on the nature pace layer if it is going to be a company that lasts for hundreds or a thousand years. So we could imagine any large institution mapped across this and I think it’s a useful tool for that. 

Also very early on in Long Now’s history, in 01997, Kees van der Heijden, who wrote the book on scenario planning, came to a charrette that Long Now organized to come up with business ideas for our organization. He formulated a business plan that was strangely prophetic:

The squares are areas where we have core competencies. The dotted lines indicate temporary competencies, like the founders. The other items indicate all the things we hadn’t really gotten to yet or figured out: we didn’t have a way of funding ourselves; we didn’t have a membership program; we didn’t have a large community of donors; we didn’t have an endowment; and we didn’t have people willing to give their estates to us. We still don’t have an endowment or people willing to give us their estates, but we’ve achieved the rest. And now that we’ve been around for 22 years, we can imagine how those two items are going to start to happen next.  

I also want to point out the cyclical nature of this diagram. There’s no system in the world that I’ve found that is linear that has lasted on these timescales. You need to have a cyclical business model, not a linear business model.

The Longest-Lived Institutions in the World

I’ve been collecting data on all of the longest lived institutions in the world. As you look at these, there’s a few things that stick out. Notice: brewery, brewery, winery, hotel, bar, pub, right? And also notice that a lot of them are in Japan. There’s been a rough system of government there for over 2,000 years (the Royal Family) that’s held together enough to enable something like the Royal Confectioner of Japan to be one of the oldest companies in the world. But there’s also temple builders and things like that.

In the West, most of the companies that have survived for a very long time are basically service companies. It’s a lot easier to reinvent yourself as a service-oriented company than it is as a commodity company when that particular commodity goes out of use.

Colgate Palmolive (founded 01806) and DuPont (founded 01802) are commodity companies that are broad enough to change the kinds of products they sell over time. I’m interested in learning more about all these companies, as they probably all have some kind of special sauce in their stories of longevity. 

Something else that came out of this research is the fact that the length of companies’ lives is shrinking by almost one year per year. In 01950, the average company on the Fortune 500 had been around for 61 years. Now it’s 18 years. Companies’ lives are getting shorter.

As I mentioned, most of the oldest companies in the world are in Japan. In a survey of 5,500 companies over 200 years old, 3,100 are based in Japan. The rest are in some of the older countries in Europe. 

But—and this was a fact I found curious, and one that speaks to the cyclical nature of things—90% of the companies that are over 200 years old have 300 employees or less; they’re not mega companies. 

In surveying 1,000 companies over 300 years old, you find a huge amount of disparity concerning which industries they’re a part of. But there were a few big groupings that I found interesting. 23% are in the alcohol industry, and this doesn’t even include pubs and restaurants and hotels that may sell alcohol. 

Patrick McGovern, a biomolecular archeologist I talked to when we were building The Interval, has done DNA analysis on grape vines, which are a clonal species. From that analysis, we know that civilization started cultivating wine around 8,000 BC. McGovern points out that it’s not at all clear whether people stopped being nomadic in order to ferment things, or stopped being nomadic because they had started fermenting things. It’s an intriguing correlation, and notable that this overwhelmingly large segment of the oldest companies in the world deals in alcohol.

Long-term Thinking is Not Inherently Good

A quick word about values: long-term thinking, and aspiring to be a long-term institution, is not inherently good. At Long Now, we’ve always emphasized the importance of long-term thinking without trying to ascribe a lot of values to it. But I don’t think that’s intellectually honest. We have to ask ourselves what we’re trying to perpetuate. We have to step back far enough and ensure that the kinds of things we’re perpetuating are generally good for society. 

How to Build Things That Last

One way that things have lasted for a really long time is to just take a really long time to build them. Cathedrals are a famous example of this. The most dangerous time for anything that’s lasting is really just one generation after it was built. It’s no longer a new, cool thing; it’s the thing that your parents did, and it’s not cool anymore. And it’s not until another couple generations later where everyone values it, largely because it is old and venerable and has become a kind of cultural icon. 

And we already see this with this cathedral: the Sagrada Familia in Barcelona.  It’s still under construction, 125 years into its build process, and it’s already a UNESCO World Heritage Site. 

The other way things last for a really long time, and this is the Japanese model, is that they’re just extremely well-maintained. 

At about 1,400 years old, these are the two oldest continuously standing wooden structures in the world. And they’ve replaced a lot of parts of them. They keep the roofs on them, and even in a totally humid and rainy environment, the central timbers of these buildings have stayed true. Interestingly, this temple was also the place where, over a thousand years ago, a Japanese princess had a vision that she needed to send a particular prayer out to the world to make sure that it survived into the future. And so she had, literally, a million wooden pagodas made with the prayer put inside them, and distributed these little pagodas as far and wide as she could. You can still buy these on eBay right now. It’s an early example of the philosophy of “Lots of Copies Keep Stuff Safe” (LOCKSS).
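The “Lots of Copies Keep Stuff Safe” idea is really just probability: independent copies fail independently, so survival odds compound. A back-of-the-envelope sketch (the per-copy survival figure here is invented for illustration, not a historical estimate):

```python
# LOCKSS in one formula: with n independent copies, each surviving with
# probability s, the chance that at least one survives is 1 - (1 - s)^n.
def survival_probability(n_copies, per_copy_survival):
    return 1 - (1 - per_copy_survival) ** n_copies

# Even if a single wooden pagoda has only a 1-in-100,000 chance of
# lasting a millennium, a million copies make survival near-certain.
p_one = survival_probability(1, 1e-5)
p_million = survival_probability(1_000_000, 1e-5)
```

The exponent does all the work: the princess’s million pagodas turn an individually hopeless bet into a safe one, which is exactly why a handful survive to be sold on eBay today.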

Another Japanese example that uses a totally different strategy is this Shinto shrine.

Shinto is an animist religion whose adherents believe that spirits are in everything, unlike Buddhism, which came to Japan later. In the Shinto belief system, shrines have this renewing technology, if you will, where they’re rebuilt on adjacent sites at different periodicities. This one, the most famous in Japan, is the Ise Shrine, which is rebuilt every 20 years. A few years ago, I was fortunate enough to attend the rebuilding ceremony. (One of the oldest companies in the world, I should add, is the Japanese temple-building company that builds these shrines.)

The emphasis here is not on maintenance, but renewal. These temples made of thatch and wood—totally ephemeral materials—have lasted for 2,000 years. They have documented evidence of exact rebuilding for 1,600 years, but this site was founded in 4 AD—also by a visionary Japanese princess. And every 20 years, with the Japanese princess in attendance, they move the treasures from one temple to the other. And the kami—the spirits—follow. Then they deconstruct the old temple and send all those parts out to various Shinto shrines in Japan as religious artifacts.

I think the most important thing about this particular example is that each generation gets to have this moment where the older master teaches the apprentice the craft. So you get this handing off of knowledge that’s extremely physical and real and has a deliverable. It’s got to ship, right? “The princess is coming to move the stuff; we have to do this.” It’s an eight-year process, with tons of ritual, that goes into rebuilding these temples.

I think an interesting counterexample is what happens when long-lasting things are tied to particular ideologies. It’s curious: one of our longest-lived institutions is the Catholic Church, and an ideology like the one surrounding the Buddhas of Bamiyan has lasted, but the artifacts themselves become targets for people who don’t believe in that ideology. The Taliban spent weeks dynamiting and using artillery to destroy these Buddhas. You would think that Buddhism, a relatively innocuous religion, would be unthreatening—but not so much to the Taliban.

This is the University of Bologna, which is largely credited as the earliest university in the world. It’s almost 1,000 years old at this point. Oxford was not far behind it. And there are another 40 or so universities over 500 years old.

Universities have this ability to do a kind of continual refresh where every four years, especially in undergraduate programs, you have a whole new set of people. And so they have to sell themselves to a new generation every single year. Their customer is a whole class. And we see universities now struggle when they aren’t teaching relevant things to people and they have to adjust. And that has kept them around as some of the longest lived institutions in the world. 

I think the idea of communities of practice is a really interesting one. In these communities, knowledge of practice is handed down from generation to generation. Such is the case with martial arts, which we have evidence for dating back at least 2,000 or 3,000 years.

There are several strategies in nature that allow systems to last for thousands of years. There are clonal strategies, like the aspen. We’ve measured mesquite rings in the desert: individual stems die, and new ones grow up in a ring from the same root structure, and the shared DNA indicates that a single mesquite ring has persisted, effectively, for 50,000 years. These clonal forests have definitely been around for thousands of years, even though each individual stem may only last a few years in some cases.

In other cases, things are cultivated. Going back to the wine example: because grapes are clonal, we effectively still have the DNA of vines from ancient Rome, clippings taken and cultivated from generation to generation. So there’s been this kind of interplay between humans and the natural world, and we also see it in a lot of tree-caring practices.

The bristlecone pine exemplifies how an existential crisis gives you practice in surviving. The bristlecone is the oldest continuously living single organism that we know of in the world. And the funny thing about the bristlecone is that it was not discovered by coring to be the oldest living tree in the world; its age was predicted. A particular tree scientist had cored other pine species, and found that the ones in the worst environments were the oldest. He said: “If you can find the pines living in the absolute worst environment, you will find the oldest pines in the world.” He coined the phrase “adversity breeds longevity.” And so people went looking for the pines in the worst environments, and up at the top of the White Mountains, in the Snake Range in Nevada, and in Colorado as well, they found three different species of bristlecone, which have been dated to over 5,000 years at this point.

Taking the Future into Account

If any of us are to build an institution that’s going to last for the next few hundred or 1,000 years, changes in demographics and climate are going to be a big part of it. This is the projected impact of climate change on agricultural yields through the 02080s. As you can see, agricultural yields in the global south are going to go down, while in the far north—Canada and Russia—they’re going to get a lot better. This is going to change world markets, world populations, and what we war over for the next 100 years.

In all natural systems you see these sigmoid curves: things go up, and eventually they come down. We assume that things like our population and our economies will always go up, but that is not the way the world works; it has never been that way, and we always get these kinds of corrections. In this case, the predator population follows the prey as a lagging sigmoid: once its prey runs out, the predators start dying off.
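The predator-lags-prey pattern described here is the classic Lotka-Volterra dynamic, and a few lines of code are enough to reproduce the lag. This is an illustrative sketch with made-up parameters, not a model fitted to any real lynx or hare population:

```python
# Minimal discrete-time Lotka-Volterra sketch: predator numbers track
# prey numbers with a lag, so the predator curve peaks after the prey
# curve does. All coefficients are illustrative.
def simulate(steps=2000, dt=0.01):
    prey, pred = 10.0, 5.0
    history = []
    for _ in range(steps):
        d_prey = prey * (1.0 - 0.1 * pred) * dt    # prey grows, eaten by predators
        d_pred = pred * (0.075 * prey - 1.5) * dt  # predators grow by eating prey
        prey, pred = prey + d_prey, pred + d_pred
        history.append((prey, pred))
    return history

def first_peak(series):
    """Index of the first local maximum in a series."""
    for i in range(1, len(series) - 1):
        if series[i - 1] < series[i] >= series[i + 1]:
            return i
    return None

history = simulate()
prey_peak = first_peak([p for p, _ in history])
pred_peak = first_peak([q for _, q in history])
```

Running it, the first predator peak lands after the first prey peak, which is the lagging-sigmoid shape the talk describes: the correction arrives only after the resource has already turned down.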

How do we get good at failing, but not totally dying out? The lynx never dies out totally, but companies that do one thing are bad at recovering when that one thing is no longer the big commodity. It wasn’t record companies that invented iTunes; it was an outsider company. Record companies were adept at selling plastic circles and when there were no plastic circles to sell music on, they didn’t know how to adjust for that. The crux of anything that’s going to last for a long time is: how do you get good at reinvention and retooling?

There’s no scenario I’ve seen where the world population doesn’t start going down within the next hundred years, if not within the next 50.

So even the median projection, that red line in the middle, tapers off. But this data is a couple of years old, and it’s increasingly looking a lot more like that dotted blue line at the bottom. And the world has really never lived through a time, except for a few short plague seasons, where the world population was going down—and, by extension, where the customer base was going down.

Even more dangerous than the population going down is that the population is changing. The red line here is the number of 15 to 64 year olds. And the blue line is the zero to 14 year olds. If the world is made up largely of older people who hoard wealth, don’t work hard, and don’t make huge contributions of creativity to the world the way 20 year olds do, that world is a world that I don’t think we’re prepared to live in right now.

We’re seeing this now happening in a lot of the developed world and most notably in Japan. Those of you who remember the 01980s recall that there was no scenario where Japan was not an absolute dominant part of the economy of the world. And now they’re struggling just to be relevant in a lot of ways, and it’s largely because this population change happened and the young people were not there. They wouldn’t allow any immigration, and that creativity, and that thrust of civilization, went out of a country that was a dominant world economic power.

Watch the video of Alexander Rose’s talk on the Data of Long-lived Institutions:

Planet DebianChristian Kastner: RStudio is a refreshingly intuitive IDE

I currently need to dabble with R for a smallish thing. I have previously dabbled with R only once, for an afternoon, and that was about a decade ago, so I had no prior experience to speak of regarding the language and its surrounding ecosystem.

Somebody recommended that I try out RStudio, a popular IDE for R. I was happy to see that an open-source community edition exists, in the form of a .deb package no less, so I installed it and gave it a try.

It's remarkable how intuitive this IDE is. My first guess at doing something has so far been correct every. single. time. I didn't have to open the help, or search the web, for any solutions, either -- they just seem to offer themselves up.

And it's not just my inputs; it's the output, too. The RStudio window has multiple tiles, and each tile has multiple tabs. I found this quite confusing and intimidating on first impression, but once I started doing some work, I was surprised to see that whenever I did something that produced output in one or more of the tabs, it was (again) always in an intuitive manner. There's a fine line between informing with relevant context and distracting with irrelevant context, but RStudio seems to have placed itself on the right side of it.

This, and many other features that pop up here and there, like the live-rendering of LaTeX equations, contributed to what has to be one of the most positive experiences with an IDE that I've had so far.

LongNowA Long Now Drive-in Double Feature at Fort Mason

Join the Long Now Community for a night of films that inspire long-term thinking. On October 27, 02020, we’ll screen Samsara followed by 2001: A Space Odyssey at Fort Mason.


Drive-in Screening on Tuesday October 27, 02020 at 6:00pm PT

SAMSARA is a Sanskrit word that means “the ever turning wheel of life” and is the point of departure for the filmmakers as they search for the elusive current of interconnection that runs through our lives.  SAMSARA transports us to sacred grounds, disaster zones, industrial sites, global gatherings and natural wonders. By dispensing with dialogue and descriptive text, the film subverts our expectations of a traditional documentary, instead encouraging our own inner interpretations inspired by images and music that infuses the ancient with the modern. 

Filmed over five years in twenty-five countries, SAMSARA (02011) is a non-verbal documentary from filmmakers Ron Fricke and Mark Magidson, the creators of BARAKA. It is one of only a handful of films shot on 70mm in the last forty years. Through powerful images, the film illuminates the links between humanity and the rest of nature, showing how our life cycle mirrors the rhythm of the planet.

2001: A Space Odyssey

Drive-in Screening on Tuesday October 27, 02020 at 8:45pm PT

The genius is not in how much Stanley Kubrick does in “2001: A Space Odyssey,” but in how little. This is the work of an artist so sublimely confident that he doesn’t include a single shot simply to keep our attention. He reduces each scene to its essence, and leaves it on screen long enough for us to contemplate it, to inhabit it in our imaginations. Alone among science-fiction movies, “2001″ is not concerned with thrilling us, but with inspiring our awe. 

What Kubrick had actually done was make a philosophical statement about man’s place in the universe, using images as those before him had used words, music or prayer. And he had made it in a way that invited us to contemplate it — not to experience it vicariously as entertainment, as we might in a good conventional science-fiction film, but to stand outside it as a philosopher might, and think about it.

 Roger Ebert

Ticket & Event Information:

  • Tickets are $30 per vehicle for members; general public tickets are $60 per vehicle.
  • Separate tickets must be purchased for each of the screenings.
  • Parking opens at 5:00pm for the 6:00pm showing, and 7:45pm for the 8:45pm showing.
  • Please have your ticket printed out or on your phone so we can check you in.
  • Parking location will be chosen by the venue to ensure that everyone can best see the screen.
  • The film audio will be through your FM radio receiver. 
  • Concessions will be available at the event! The Interval will be open for to-go drinks, and there will be a food truck, plus popcorn, candy and other snacks for sale.

COVID-19 Safety Information:

  • This is a socially distant event. Please do not attend if you are experiencing any symptoms of COVID-19. 
  • Bathrooms will be cleaned throughout the evening.
  • Masks are required when outside of your vehicle. Masks with exhalation valves are not allowed.
  • Attendees must remain inside their vehicles except to use the restroom facilities or pick up concessions.
  • Each vehicle may only be occupied by members of a “pod” who have already been in close contact with each other.
  • Attendees who fail to follow safe distancing at the request of staff will be subject to ejection from the event. No refund will be given.


With drive-in theaters experiencing a renaissance around the country, Fort Mason Center for Arts & Culture (FMCAC) announces FORT MASON FLIX, a pop-up drive-in theater launching September 18, 2020. Housed on FMCAC’s historic waterfront campus, FORT MASON FLIX will present a cornucopia of film programming, from family favorites and cult classics to blockbusters and arthouse cinema.

Cryptogram NSA Advisory on Chinese Government Hacking

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.6


A new release of RcppZiggurat, now at version 0.1.6, is on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release brings a corrected seed setter and getter which now correctly take care of all four state variables, and not just one. It also corrects a few typos in the vignette. Both were fixed quite a while back, but we somehow managed to not ship this to CRAN for two years.
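For readers wondering why a seed setter that captures only part of the state matters: resuming a simulation requires saving and restoring the generator's entire internal state, not just a seed. A language-agnostic sketch of the same idea in Python (RcppZiggurat's actual state lives in four variables on the C++ side):

```python
import random

# Start a seeded stream and draw a few values.
rng = random.Random(42)
first_run = [rng.random() for _ in range(3)]

# Capture the full internal state mid-stream...
state = rng.getstate()
continuation_a = [rng.random() for _ in range(3)]

# ...and restore it later to resume exactly the same stream.
rng.setstate(state)
continuation_b = [rng.random() for _ in range(3)]
```

If the getter missed part of the state, `continuation_b` would diverge from `continuation_a`, which is precisely the bug class this release fixes.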

The NEWS file entry below lists all changes.

Changes in version 0.1.6 (2020-10-18)

  • Several typos were corrected in the vignette (Blagoje Ivanovic in #9).

  • New getters and setters for internal state were added to resume simulations (Dirk in #11 fixing #10).

  • Minor updates to cleanup script and Travis CI setup (Dirk).

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppZiggurat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: Debian donation for Peertube development

The Debian project is happy to announce a donation of 10,000 € to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming.

This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while it was a resounding success, it made clear to the project our need for a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us.

We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future.

Debian thanks the commitment of numerous Debian donors and DebConf sponsors, particularly all those that contributed to DebConf20 online's success (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform.

The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen.

This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone.

The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia.

Worse Than FailureCodeSOD: Delete This

About three years ago, Consuela inherited a giant .NET project. It was… not good. To communicate how “not good” it was, Consuela had a lot of possible submissions. Sending the worst code might be the obvious choice, but it wouldn’t give a good sense of just how bad the whole thing was, so they opted instead to find something that could roughly be called the “median” quality.

This is a stored procedure that is roughly the median sample of the overall code. Half of it is better, but half of it gets much, much worse.

CREATE proc [dbo].[usermgt_DeleteUser]
    @ssoid uniqueidentifier
as
    declare @username nvarchar(64)
    select @username = Username from Users where SSOID = @ssoid
    if (not exists(select * from ssodata where ssoid = @ssoid))
    begin
        insert into ssodata (SSOID, UserName, email, givenName, sn)
        values (@ssoid, @username, '', 'Firstname', 'Lastname')
        delete from ssodata where ssoid = @ssoid
    end
    else begin
      RAISERROR ('This user still exists in sso', 10, 1)
    end

Let’s talk a little bit about names. As you can see, they’re using an “internal” schema naming convention: usermgt clearly defines a role for a whole class of stored procedures. Already, that’s annoying, but what does this procedure promise to do? DeleteUser.

But what exactly does it do?

Well, first, it checks to see if the user exists. If the user does exist… it raises an error? That’s an odd choice for deleting. But what does it do if the user doesn’t exist?

It creates a user with that ID, then deletes it.

Not only is this method terribly misnamed, it also seems to be utterly useless. At best, I think they’re trying to route around some trigger nonsense, where certain things happen ON INSERT and then different things happen ON DELETE. That’d be a WTF on its own, but that’s possibly giving this more credit than it deserves, because that assumes there’s a reason why the code is this way.

Consuela adds a promise, which hopefully means some follow-ups:

If you had access to the complete codebase, you would not EVER run out of new material for codesod. It’s basically a huge collection of “How Not To” on all possible layers, from single lines of code up to the complete architecture itself.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianReproducible Builds: Supporter spotlight: Civil Infrastructure Platform

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do.

This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at

However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.

Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical?

A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source ‘base layer’ of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society’s lifelines, and CIP aims to contribute to and support these important pillars of modern society.

Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today?

A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this by refining the same tools, and by cooperating on the hardening and maintenance of standard components such as the Linux operating system.

Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals?

A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.
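The check described in this answer boils down to comparing cryptographic digests of the archived artifact against a fresh rebuild from the same source. A minimal sketch in Python (the artifact bytes here are placeholders, not a real binary):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of a build artifact's contents."""
    return hashlib.sha256(data).hexdigest()

# With a reproducible build, the archived binary and a rebuild from the
# same source hash identically; any difference means the source,
# toolchain, or build environment has drifted in the intervening years.
archived = b"\x7fELF...archived build"   # placeholder artifact bytes
rebuilt  = b"\x7fELF...archived build"   # fresh rebuild, years later

reproducible = digest(archived) == digest(rebuilt)
```

Only once that equality holds can you trust that changes you make to the decade-old source are changes to the code actually running in the field.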

Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific ‘success stories’ that the CIP is particularly proud of?

A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of necessity: to establish an open source framework, and a subsequent software foundation, that deliver services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems.

In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.

Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency?

A: Although the members have different products, they share requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts, or those that focus on narrower domains or markets.

Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims?

A: Collaboration with other projects is an essential part of how CIP operates — we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an ‘upstream first’ policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them.

Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context?

A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.

Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look?

A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.

For more about the Reproducible Builds project, please see our website at If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing


Planet DebianDirk Eddelbuettel: RcppArmadillo

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab. RcppArmadillo integrates this library with the R environment and language—and is widely used by (currently) 786 other packages on CRAN.

A little while ago, Conrad released version 10.1.0 of Armadillo, a new major release. As before, given his initial heads-up we ran two full reverse-depends checks, and as a consequence contacted four package authors (two by email, two via PR) about a minuscule required change (as Armadillo now defaults to C++11, an old existing setting of avoiding C++11 led to an error). Our thanks to those who promptly updated their packages—truly appreciated. As it turns out, Conrad also softened the error by the time the release came around.

But despite our best efforts, the release was delayed considerably by CRAN. We had made several Windows test builds, but luck had it that on the uploaded package CRAN got itself a (completely spurious) segfault—which can happen on a busy machine building many things at once. Sadly it took three or four days for CRAN to reply to our email. After which it took another number of days for them to ponder the behaviour of a few new ‘deprecated’ messages tickled by at most ten or so (out of 786) packages. Oh well. So here we are, eleven days after I emailed the rcpp-devel list about the new package being on CRAN but possibly delayed (due to that segfault). But during all that time the package was of course available via the Rcpp drat.

The changes in this release are summarized below as usual. They are mostly upstream, along with an improved Travis CI setup using the bspm package for binaries on Travis.

Changes in RcppArmadillo version (2020-10-09)

  • Upgraded to Armadillo release 10.1.0 (Orchid Ambush)

    • C++11 is now the minimum required C++ standard

    • faster handling of compound expressions by trimatu() and trimatl()

    • faster sparse matrix addition, subtraction and element-wise multiplication

    • expanded sparse submatrix views to handle the non-contiguous form of X.cols(vector_of_column_indices)

    • expanded eigs_sym() and eigs_gen() with optional fine-grained parameters (subspace dimension, number of iterations, eigenvalues closest to specified value)

    • deprecated form of reshape() removed from Cube and SpMat classes

    • ignore and warn on use of the ARMA_DONT_USE_CXX11 macro

  • Switch Travis CI testing to focal and BSPM

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Charles StrossEntanglements!

Entanglements Cover.jpg

Many thanks to Charlie for giving me the chance to write about editing and my latest project. I'm very excited about the publication of Entanglements. The book has received a starred review from Publishers Weekly and terrific reviews in Lightspeed, Science, and the Financial Times. MIT Press has created a very nice "Pubpub" page about Entanglements, with information about the book and its various contributors. The "On the Stories" section has an essay by Nick Wolven about his amazing story, "Sparkly Bits," and a fun Zoom conversation with James Patrick Kelly, Nancy Kress, and Sam J. Miller. I think the site is well worth checking out, and here's the Pubpub description of the book:

Science fiction authors offer original tales of relationships in a future world of evolving technology.

In a future world dominated by the technological, people will still be entangled in relationships--in romances, friendships, and families. This volume in the Twelve Tomorrows series considers the effects that scientific and technological discoveries will have on the emotional bonds that hold us together.

The strange new worlds in these stories feature AI family therapy, floating fungitecture, and a futuristic love potion. A co-op of mothers attempts to raise a child together, lovers try to resolve their differences by employing a therapeutic sexbot, and a robot helps a woman dealing with Parkinson's disease. Contributions include Xia Jia's novelette set in a Buddhist monastery, translated by the Hugo Award-winning writer Ken Liu; a story by Nancy Kress, winner of six Hugos and two Nebulas; and a profile of Kress by Lisa Yaszek, Professor of Science Fiction Studies at Georgia Tech. Stunning artwork by Tatiana Plakhova--"infographic abstracts" of mixed media software--accompanies the texts.

Planet DebianPetter Reinholdtsen: Buster based Bokmål edition of Debian Administrator's Handbook

I am happy to report that we finally made it! Norwegian Bokmål became the first translation of the new Buster based edition of "The Debian Administrator's Handbook" to be published on paper. The printed proof copy arrived some days ago, and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from Lulu. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can also be read online.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge on Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianSteve Kemp: Offsite-monitoring, from my desktop.

For the past few years I've had a bunch of virtual machines hosting websites, services, and servers. Of course I want them to be available - especially since I charge people money to access some of them (for example my dns-hosting service) - and that means I want to know when they're not.

The way I've gone about this is to have a bunch of machines running stuff, and then dedicate an entirely separate machine solely for monitoring and alerting. Sure you can run local monitoring, testing that services are available, the root-disk isn't full, and that kind of thing. But only by testing externally can you see if the machine is actually available to end-users, customers, or friends.

A local-agent might decide "I'm fine", but if the hosting-company goes dark due to a fibre cut you're screwed.

I've been hosting my services with Hetzner (cloud) recently, and their service is generally pretty good. Unfortunately I've started to see an increasing number of false-alarms. I'd have a server in Germany, with the monitoring machine in Helsinki (coincidentally where I live!). For the past month I've started to get pinged with a failure every three/four days on average, "service down - dns failed", or "service down - timeout". When the notice would wake me up I'd go check and it would be fine, it was a very transient failure.

To be honest the reason for this is that my monitoring is just too damn aggressive; I like to be alerted immediately in case something is wrong. That means if a single test fails I get an alert, rather than only alerting on something more reasonable like three+ consecutive failures.

I'm experimenting with monitoring in a less aggressive fashion, from my home desktop. Since my monitoring tool is a single self-contained golang binary, and it is already packaged as a docker-based container, deployment was trivial. I did a little work writing an agent to receive failure-notices and ping me via telegram - instead of the previous approach where I had an online status-page which I could view via my mobile, and alerts via pushover.

So far it looks good. I've tweaked the monitoring to use a timeout of 15 seconds, instead of 5, and I've configured it to only alert me if there is an outage which lasts for >= 2 consecutive failures. I guess the TLDR is I now do offsite monitoring .. from my house, rather than from a different region.
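That "only alert after two consecutive failures" gate is simple to express. Here is a minimal Python sketch (hypothetical names; the real monitor is a self-contained golang binary):

```python
class AlertGate:
    """Suppress alerts until a check has failed `threshold` times in a row."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.consecutive_failures = 0

    def record(self, ok: bool) -> bool:
        """Record one test result; return True if an alert should fire now."""
        if ok:
            # Any success resets the streak, so transient blips are swallowed.
            self.consecutive_failures = 0
            return False
        self.consecutive_failures += 1
        return self.consecutive_failures >= self.threshold
```

With threshold=2, a single transient failure (a dropped packet, a brief DNS hiccup) never wakes anyone up; only a sustained outage does.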

The only real reason to write this post was mostly to say that the process of writing a trivial "notify me" gateway to interface with telegram was nice and straightforward, and to remind myself that transient failures are way more common than we expect.

I'll leave things alone for a moment, but it was a fun experiment. I'll keep the two systems in parallel for a while, but I guess I can already predict the outcome:

  • The desktop monitoring will report transient outages now and again, because home broadband isn't 100% available.
  • The Hetzner-based monitoring, in a different region, will report transient problems, because even hosting companies are not 100% available.
    • Especially at the cheap prices I'm paying.
  • The way to avoid being woken up by transient outages/errors is to be less aggressive.
    • I think my paying users will be OK if I find out a service is offline after 5 minutes, rather than after 30 seconds.
    • If they're not we'll have to talk about budgets ..

Worse Than FailureCodeSOD: Extended Time

The C# "extension method" feature lets you implement static methods which "magically" act like they're instance methods. It's a neat feature which the .NET Framework uses extensively. It's also a great way to implement some convenience functions.

Brandt found some "convenience" functions which were exploiting this feature.

public static bool IsLessThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) < 0;
public static bool IsGreaterThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) > 0;
public static bool And(this bool a, bool b) => a && b;
public static bool Or(this bool a, bool b) => a || b;
public static bool IsBetweensOrEqual<T>(this T a, T b, T c) where T : IComparable<T> =>
    a.IsGreaterThen(b).Or(a.Equals(b)).And( a.IsLessThen(c).Or(a.Equals(c)) );

Here, we observe someone who maybe heard the term "functional programming" and decided to wedge it into their programming style however it would fit. We replace common operators and expressions with extension method versions. Instead of the cryptic a || b, we can now write a.Or(b), which is… better?

I almost don't hate the IsLessThan/IsGreaterThan methods, as that is (arguably) more readable. But wait, I have to correct myself: IsLessThen and IsGreaterThen. So they almost got something that was (arguably) more readable, but a minor typo made it all more confusing.

All this, though, to solve their actual problem: the TimeSpan data type doesn't have a "between" comparator, and at one point in their code (one point!) they need to perform that check. It's also worth noting that C# supports operator overloading, and TimeSpan has overloads for all your basic comparison operators, so they could have just used those.
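For illustration (a sketch, not the actual fix), here is what the operator-based version looks like in a language with similar comparison overloads; Python's timedelta stands in for TimeSpan, and the function name is hypothetical:

```python
from datetime import timedelta

def is_between_or_equal(value: timedelta, low: timedelta, high: timedelta) -> bool:
    # timedelta defines <, <=, > and >=, much as C#'s TimeSpan overloads
    # its comparison operators, so a chained comparison is all we need.
    return low <= value <= high
```

One line, no custom And/Or/IsLessThen plumbing required.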

Brandt adds:

While you can argue that there is no out of the box 'Between' functionality, the colleague who programmed this ignored an already existing extension method that was in the same file, that offered exactly this functionality in a better way.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Charles StrossEditorial Entanglements

A young editor once asked me what was the biggest secret to editing a fiction magazine. My answer was "confidence." I have to be confident that the stories I choose will fit together, that people will read them and enjoy them, and most importantly, that each month I'll receive enough publishable material to fill the pages of the magazine.

Asimov's Science Fiction comes out as combined monthly issues six times a year. A typical issue contains ten to twelve stories. That means I buy about 65 stories a year. Roughly speaking, I need to buy five to six stories per month--although I may actually buy two one month and ten the next. That I will receive these stories should seem inevitable. I get to choose them from about eight hundred submissions per month. Yet, since I know that I will have to reject over 99 percent of the stories that wing their way to me, there is always a slight concern that someday 100 percent of the submissions won't be right for the magazine.

Luckily, this anxiety is strongly offset by a lifetime of experience. For sixteen years as the editor-in-chief, and far longer as a staff member, I've seen that each issue of the magazine has been filled with wonderful stories. Asimov's tales are balanced: long and short, amusing and tragic, near- and distant-future explorations of hard SF, far-flung space opera, time travel, surreal tales, and a little fantasy. They're by well-known names and brand new authors. I have confidence these stories will show up and that I'll know them when I see them.

I have edited or co-edited more than two dozen reprint anthologies. These books consisted of stories that previously appeared in genre magazines. Pulling them together mostly required sifting through years and years of published fiction. The tales have been united by a common theme such as Robots or Ghosts or The Solar System.

Editing my first original anthology was not like editing these earlier books or like editing an issue of the magazine. Entanglements: Tomorrow's Lovers, Families, and Friends, which I edited as part of the Twelve Tomorrows series, has just come out from MIT Press. The tales are connected by a theme--the effect of emerging technologies on relationships--but the stories are brand new. Instead of waiting for eight hundred stories to come to me, I asked specific authors for their tales. I approached prominent authors like Nancy Kress (who is also profiled in the book by Lisa Yaszek), Annalee Newitz, James Patrick Kelly, and Mary Robinette Kowal, as well as up-and-coming authors like Sam J. Miller, Cadwell Turnbull, and Rich Larson. I was working with some writers for the first time. Others, like Suzanne Palmer and Nick Wolven, were people I'd published on several occasions.

I deliberately chose authors who I felt were capable of writing the sort of hard science fiction that the Twelve Tomorrows series is famous for. I was also pretty sure that I was contacting people who were good at making deadlines! I knew I enjoyed the work of Chinese author Xia Jia and I was delighted to have an opportunity to work with her translator, Ken Liu. I was also thrilled to get artwork from Tatiana Plakhova.

Once I commissioned the stories, I had to wait with fingers crossed. What if an author went off in the wrong direction? What if an author failed to get inspired? What if they all missed their deadlines? It turned out that I had no need to worry. Each author came through with a story that perfectly fit the anthology's theme. The material was diverse, with stories ranging from tales about lovers and mentors and friends to stories populated with children and grandparents. The book includes charming and amusing tales, heart-rending stories, and exciting thrillers.

I learned so much from editing Entanglements. The next time I edit an original anthology, I expect to approach it with a self-assurance akin to the confidence I feel when I read through a month of submissions to Asimov's.

Planet DebianReproducible Builds (diffoscope): diffoscope 161 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 161. This version includes the following changes:

[ Chris Lamb ]
* Fix failing testsuite: (Closes: #972518)
  - Update testsuite to support OCaml 4.11.1. (Closes: #972518)
  - Reapply Black and bump minimum version to 20.8b1.
* Move the OCaml tests to the assert_diff helper.

[ Jean-Romain Garnier ]
* Add support for radare2 as a disassembler.

[ Paul Spooren ]
* Automatically deploy Docker images in the continuous integration pipeline.

You can find out more by visiting the project homepage.


Planet DebianAntoine Beaupré: SSH 2FA with Google Authenticator and Yubikey

About a lifetime ago (5 years), I wrote a tutorial on how to configure my Yubikey for OpenPGP signing, SSH authentication and SSH 2FA. In there, I used the libpam-oath PAM plugin for authentication, but it turned out that it had too many problems: users couldn't edit their own 2FA tokens, and I had to patch it to avoid forcing 2FA on all users. The latter was merged in the Debian package, but never upstream, and the former was never fixed at all. So I started looking at alternatives and found the Google Authenticator libpam plugin. A priori, it's designed to work with phones and the Google Authenticator app, but there's no reason why it shouldn't work with hardware tokens like the Yubikey. Both use the standard HOTP protocol so it should "just work".

After some fiddling, it turns out I was right and you can authenticate with a Yubikey over SSH. Here's that procedure so you don't have to second-guess it yourself.


On Debian, the PAM module is shipped in the google-authenticator source package:

apt install libpam-google-authenticator

Then you need to add the module in your PAM stack somewhere. Since I only use it for SSH, I added this line on top of /etc/pam.d/sshd:

auth required pam_google_authenticator.so nullok

I also added the no_increment_hotp and debug options while debugging, to avoid having to renew the token all the time and to get more information about failures in the logs.

Then reload ssh (not sure that's actually necessary):

service ssh reload

Creating or replacing tokens

To create a new key, run this command on the server:

google-authenticator -c

This will prompt you for a bunch of questions. To get them all right, I prefer to just pass the right options on the commandline directly:

google-authenticator --counter-based --qr-mode=NONE --rate-limit=1 --rate-time=30 --emergency-codes=1 --window-size=3

Those are actually the defaults, if my memory serves me right, except for the --qr-mode and --emergency-codes (which can't be disabled so I only print one). I disable the QR code display because I won't be using the codes on my phone, but you would obviously keep it if you want to use the app.

Converting to a Yubikey-compatible secret

Unfortunately, the encoding (base32) produced by the google-authenticator command is not compatible with the token expected by the ykpersonalize command used to configure the Yubikey (base16 AKA "hexadecimal", with a fixed 20 bytes length). So you need a way to convert between the two. I wrote a program called oath-convert which basically does this:

read base32
add padding
convert to hex

Or, in Python:

import base64
import binascii

def convert_b32_b16(data_b32):
    # convert a base32 secret (as produced by google-authenticator) to
    # the base16 form expected by ykpersonalize
    remainder = len(data_b32) % 8
    if remainder > 0:
        # XXX: assume 6 chars are missing, the actual padding may vary:
        data_b32 += "======"
    data_b16 = base64.b32decode(data_b32)
    if len(data_b16) < 20:
        # pad to 20 bytes
        data_b16 += b"\x00" * (20 - len(data_b16))
    return binascii.hexlify(data_b16).decode("ascii")
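As an aside, the padding flagged by the XXX comment can be computed exactly rather than assumed; a minimal sketch (hypothetical helper name):

```python
import base64

def b32_pad(data_b32: str) -> str:
    # Base32 encodes in blocks of 8 characters; append exactly the number
    # of "=" characters needed to reach the next block boundary, instead
    # of assuming six are always missing.
    return data_b32 + "=" * (-len(data_b32) % 8)
```

For example, a 26-character secret gets six "=" appended, while a 16-character secret needs none.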

Note that the code assumes a certain token length and will not work correctly for other sizes. To use the program, simply call it with:

head -1 .google_authenticator | oath-convert

Then you paste the output in the prompt:

$ ykpersonalize -1 -o oath-hotp -o append-cr -a
Firmware version 3.4.3 Touch level 1541 Program sequence 2
 HMAC key, 20 bytes (40 characters hex) : [SECRET GOES HERE]

Configuration data to be written to key configuration 1:

fixed: m:
uid: n/a
acc_code: h:000000000000
ticket_flags: APPEND_CR|OATH_HOTP

Commit? (y/n) [n]: y

Note that you must NOT pass the -o oath-hotp8 parameter to the ykpersonalize commandline, which we used to do in the Yubikey howto. That is because Google Authenticator tokens are shorter: it's less secure, but it's an acceptable tradeoff considering the plugin is actually maintained. There's actually a feature request to support 8-digit codes so that limitation might eventually be fixed as well.

Thanks to the Google Authenticator people and Yubikey people for their support in establishing this procedure.

Planet DebianRussell Coker: Video Decoding

I’ve had a saga of getting 4K monitors to work well. My latest issue has been video playback, the dreaded mplayer error about the system being too slow. My previous post about 4K was about using DisplayPort to get more than a 30Hz scan rate at 4K [1]. I now have a nice 60Hz scan rate which makes WW2 documentaries display nicely among other things.

But when running a 4K monitor on a 3.3GHz i5-2500 quad-core CPU I can’t get a FullHD video to display properly. Part of the process of decoding the video and scaling it to 4K resolution is too slow, so action scenes in movies lag. When running a 2560*1440 monitor on a 2.4GHz E5-2440 hex-core CPU with the mplayer option “-lavdopts threads=3” everything is great (but it fails if mplayer is run with no parameters). In doing tests with apparent performance it seemed that the E5-2440 CPU gains more from the threaded mplayer code than the i5-2500, maybe the E5-2440 is more designed for server use (it’s in a Dell PowerEdge T320 while the i5-2500 is in a random white-box system) or maybe it’s just because it’s newer. I haven’t tested whether the i5-2500 system could perform adequately at 2560*1440 resolution.

The E5-2440 system has an ATI HD 6570 video card which is old, slow, and only does PCIe 2.1 which gives 5GT/s or 8GB/s. The i5-2500 system has a newer ATI video card that is capable of PCIe 3.0, but “lspci -vv” as root says “LnkCap: Port #0, Speed 8GT/s, Width x16” and “LnkSta: Speed 5GT/s (downgraded), Width x16 (ok)”. So for reasons unknown to me the system with a faster PCIe 3.0 video card is being downgraded to PCIe 2.1 speed. A quick check of the web site for my local computer store shows that all ATI video cards costing less than $300 have PCIe 3.0 interfaces and the sole ATI card with PCIe 4.0 (which gives double the PCIe speed if the motherboard supports it) costs almost $500. I’m not inclined to spend $500 on a new video card and then a greater amount of money on a motherboard supporting PCIe 4.0 and CPU and RAM to go in it.

According to my calculations 3840*2160 resolution at 24bpp (probably 32bpp data transfers) at 30 frames/sec means 3840*2160*4*30/1024/1024=950MB/s. PCIe 2.1 can do 8GB/s so that probably isn’t a significant problem.
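That estimate is easy to double-check; a quick Python sketch of the same arithmetic:

```python
# Framebuffer traffic for 4K output, as in the estimate above:
# 3840*2160 pixels, 4 bytes per pixel (32bpp transfers), 30 frames/sec.
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 30
required_mb_s = width * height * bytes_per_pixel * fps / 1024 / 1024
pcie21_x16_mb_s = 8 * 1024  # PCIe 2.1 x16: roughly 8GB/s

print(round(required_mb_s))             # prints 949, i.e. the ~950MB/s above
print(required_mb_s < pcie21_x16_mb_s)  # prints True: well within PCIe 2.1
```

So the roughly 950MB/s of display traffic is only about an eighth of what PCIe 2.1 x16 can carry, which supports the conclusion that the bus is not the bottleneck.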

I’d been planning on buying a new video card for the E5-2440 system, but due to some combination of having a better CPU and lower screen resolution it is working well for video playing so I can save my money.

As an aside the T320 is a server class system that had been running for years in a corporate DC. When I replaced the high speed SAS disks with SATA SSDs it became quiet enough for a home workstation. It works very well at that task but the BIOS is quite determined to keep the motherboard video running due to the remote console support. So swapping monitors around was more pain than I felt like going through; I just got it working and left it. I ordered a special GPU power cable but found, before the cable arrived, that the older video card that doesn’t need an extra power cable performs adequately.

Here is a table comparing the systems.

             2560*1440 works well   3840*2160 goes slow
System       Dell PowerEdge T320    White Box PC from rubbish
CPU          2.4GHz E5-2440         3.3GHz i5-2500
Video Card   ATI Radeon HD 6570     ATI Radeon R7 260X
PCIe Speed   PCIe 2.1 – 8GB/s       PCIe 3.0 downgraded to PCIe 2.1 – 8GB/s


The ATI Radeon HD 6570 video card is one that I had previously tested and found inadequate for 4K support, I can’t remember if it didn’t work at that resolution or didn’t support more than 30Hz scan rate. If the 2560*1440 monitor dies then it wouldn’t make sense to buy anything less than a 4K monitor to replace it which means that I’d need to get a new video card to match. But for the moment 2560*1440 is working well enough so I won’t upgrade it any time soon. I’ve already got the special power cable (specified as being for a Dell PowerEdge R610 for anyone else who wants to buy one) so it will be easy to install a powerful video card in a hurry.

Worse Than FailureCodeSOD: Don't Not Be Negative

One of my favorite illusions is the progress bar. Even the worst, most inaccurate progress bar will make an application feel faster. The simple feedback which promises "something is happening" alters the users' sense of time.

So, let's say you're implementing a JavaScript progress bar. You need to decide if you are "in progress" or not. So you need to check: if there is a progress value, and the progress value is less than 100, you're still in progress.

Mehdi's co-worker decided to implement that check as… the opposite.

const isInProgress = progress => !(!progress || (progress && progress > 100))

This is one of those lines of code where you can just see the developer's process, encoded in each choice made. "Okay, we're not in progress if progress doesn't have a value: !(!progress). Or we're not in progress if progress has a value and that value is over 100."

There's nothing explicitly wrong with this code. It's just the most awkward, backwards possible way to express that check. I suspect that part of its tortured logic arises from the fact that the developer wanted to return false if the value was null or undefined, and this was the way they figured out to do that.

Of course, a more straightforward way to write that might be (progress) => (progress && progress <= 100) || false. This will have "unexpected" behavior if the progress value is negative, but then again, so will the original code.
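Applying De Morgan's laws makes the simplification mechanical: !(!p || (p && p > 100)) reduces to p && p <= 100. A Python sketch mirroring the JavaScript truthiness (hypothetical name):

```python
def is_in_progress(progress) -> bool:
    # Mirrors the JavaScript original: None, 0 and other falsy values mean
    # "not in progress", and anything over 100 is out of range.
    return bool(progress) and progress <= 100
```

Same behavior as the double-negative original, including the questionable treatment of 0 and negative values, but readable at a glance.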

In the end, this is just a story of a double negative. I definitely won't say you should never not use a double negative. Don't not avoid them.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityQAnon/8Chan Sites Briefly Knocked Offline

A phone call to an Internet provider in Oregon on Sunday evening was all it took to briefly sideline multiple websites related to 8chan/8kun — a controversial online image board linked to several mass shootings — and QAnon, the far-right conspiracy theory which holds that a cabal of Satanic pedophiles is running a global child sex-trafficking ring and plotting against President Donald Trump. Following a brief disruption, the sites have come back online with the help of an Internet company based in St. Petersburg, Russia.

The IP address range in the upper-right portion of this map of QAnon and 8kun-related sites — — is assigned to VanwaTech and briefly went offline this evening. Source:

A large number of 8kun and QAnon-related sites (see map above) are connected to the Web via a single Internet provider in Vancouver, Wash. called VanwaTech (a.k.a. “OrcaTech“). Previous appeals to VanwaTech to disconnect these sites have fallen on deaf ears, as the company’s owner Nick Lim reportedly has been working with 8kun’s administrators to keep the sites online in the name of protecting free speech.

But VanwaTech also had a single point of failure on its end: The swath of Internet addresses serving the various 8kun/QAnon sites were being protected from otherwise crippling and incessant distributed-denial-of-service (DDoS) attacks by Hillsboro, Ore. based CNServers LLC.

On Sunday evening, security researcher Ron Guilmette placed a phone call to CNServers’ owner, who professed to be shocked by revelations that his company was helping QAnon and 8kun keep the lights on.

Within minutes of that call, CNServers told its customer — Spartan Host Ltd., which is registered in Belfast, Northern Ireland — that it would no longer be providing DDoS protection for the set of 254 Internet addresses that Spartan Host was routing on behalf of VanwaTech.

Contacted by KrebsOnSecurity, the person who answered the phone at CNServers asked not to be named in this story for fear of possible reprisals from the 8kun/QAnon crowd. But they confirmed that CNServers had indeed terminated its service with Spartan Host. That person added they weren’t a fan of either 8kun or QAnon, and said they would not self-describe as a Trump supporter.

CNServers said that shortly after it withdrew its DDoS protection services, Spartan Host changed its settings so that VanwaTech’s Internet addresses were protected from attacks by ddos-guard[.]net, a company based in St. Petersburg, Russia.

Spartan Host’s founder, 25-year-old Ryan McCully, confirmed CNServers’ report. McCully declined to say for how long VanwaTech had been a customer, or whether Spartan Host had experienced any attacks as a result of CNServers’ action.

McCully said while he personally doesn’t subscribe to the beliefs espoused by QAnon or 8kun, he intends to keep VanwaTech as a customer going forward.

“We follow the ‘law of the land’ when deciding what we allow to be hosted with us, with some exceptions to things that may cause resource issues etc.,” McCully said in a conversation over instant message. “Just because we host something, it doesn’t say anything about we do and don’t support, our opinions don’t come into hosted content decisions.”

But according to Guilmette, Spartan Host’s relationship with VanwaTech wasn’t widely known previously because Spartan Host had set up what’s known as a “private peering” agreement with VanwaTech. That is to say, the two companies had a confidential business arrangement by which their mutual connections were not explicitly stated or obvious to other Internet providers on the global Internet.

Guilmette said private peering relationships often play a significant role in a good deal of behind-the-scenes-mischief when the parties involved do not want anyone else to know about their relationship.

“These arrangements are business agreements that are confidential between two parties, and no one knows about them, unless you start asking questions,” Guilmette said. “It certainly appears that a private peering arrangement was used in this instance in order to hide the direct involvement of Spartan Host in providing connectivity to VanwaTech and thus to 8kun. Perhaps Mr. McCully was not eager to have his involvement known.”

8chan, which rebranded last year as 8kun, has been linked to white supremacism, neo-Nazism, antisemitism, multiple mass shootings, and is known for hosting child pornography. After three mass shootings in 2019 revealed the perpetrators had spread their manifestos on 8chan and even streamed their killings live there, 8chan was ostracized by one Internet provider after another.

The FBI last year identified QAnon as a potential domestic terror threat, noting that some of its followers have been linked to violent incidents motivated by fringe beliefs.

Further reading:

What Is QAnon?

QAnon: A Timeline of Violence Linked to the Conspiracy Theory

Planet DebianLouis-Philippe Véronneau: Musings on long-term software support and economic incentives

Although I still read a lot, during my college sophomore years my reading habits shifted from novels to more academic works. Indeed, reading dry textbooks and economic papers for classes often kept me from reading anything else substantial. Nowadays, I tend to binge read novels: I won't touch a book for months on end, and suddenly, I'll read 10 novels back to back1.

At the start of a novel binge, I always follow the same ritual: I take out my e-reader from its storage box, marvel at the fact the battery is still pretty full, turn on the WiFi and check if there are OS updates. And I have to admit, Kobo Inc. (now Rakuten Kobo) has done a stellar job of keeping my e-reader up to date. I've owned this model (a Kobo Aura 1st generation) for 7 years now and I'm still running the latest version of Kobo's Linux-based OS.

Having recently had trouble updating my Nexus 5 (also manufactured 7 years ago) to Android 102, I asked myself:

Why is my e-reader still getting regular OS updates, while Google stopped issuing security patches for my smartphone four years ago?

To try to answer this, let us turn to economic incentives theory.

Although not the be-all and end-all some think it is3, incentives theory is not a bad tool to analyse this particular problem. Executives at Google most likely followed a very business-centric logic when they decided to drop support for the Nexus 5. Likewise, Rakuten Kobo's decision to continue updating older devices certainly had very little to do with ethics or loyalty to their user base.

So, what are the incentives that keep Kobo updating devices and why are they different than smartphone manufacturers'?

A portrait of the current long-term software support offerings for smartphones and e-readers

Before delving deeper into economic theory, let's talk data. I'll be focusing on 2 brands of e-readers, Amazon's Kindle and Rakuten's Kobo. Although the e-reader market is highly segmented and differs a lot based on geography, Amazon was in 2015 the clear leader with 53% of worldwide e-reader sales, followed by Rakuten Kobo at 13%[4].

On the smartphone side, I'll be differentiating between Apple's iPhones and Android devices, taking Google as the barometer for that ecosystem. As mentioned below, Google is sadly the leader in long-term Android software support.

Rakuten Kobo

According to their website and to this Wikipedia table, the only e-readers Kobo has deprecated are the original Kobo eReader and the Kobo WiFi N289, both released in 2010. This makes their oldest still supported device the Kobo Touch, released in 2011. In my book, that's a pretty good track record. Long-term software support does not seem to be advertised or to be a clear selling point in their marketing.


Amazon Kindle

According to their website, Amazon has dropped support for all 8 devices produced before the Kindle Paperwhite 2nd generation, first sold in 2013. To put things in perspective, the first Kindle came out in 2007, 3 years before Kobo started selling devices. Like Rakuten Kobo, Amazon does not make promises of long-term software support as part of their marketing.


Apple

Apple has a very clear software support policy for all their devices:

Owners of iPhone, iPad, iPod or Mac products may obtain service and parts from Apple or Apple service providers for five years after the product is no longer sold – or longer, where required by law.

This means in the worst-case scenario of buying an iPhone model just as it is discontinued, one would get a minimum of 5 years of software support.


Google

Google's policy for their Android devices is to provide software support for 3 years after the launch date. If you buy a Pixel device just before the new one launches, you could theoretically only get 2 years of support. In 2018, Google decided OEMs would have to provide security updates for at least 2 years after launch, threatening not to license Google Apps and the Play Store if they didn't comply.

A question of cost structure

From the previous section, we can conclude that in general, e-readers seem to be supported longer than smartphones, and that Apple does a better job than Android OEMs, providing support for about twice as long.

Even Fairphone, whose entire business is building phones designed to last and to be repaired, was not able to keep the Fairphone 1 (2013) updated for more than a couple of years, and seems to be struggling to keep the Fairphone 2 (2015) running an up-to-date version of Android.

Anyone who has ever worked in IT will tell you: maintaining software over time is hard work and hard work by specialised workers is expensive. Most commercial electronic devices are sold and developed by for-profit enterprises and software support all comes down to a question of cost structure. If companies like Google or Fairphone are to be expected to provide long-term support for the devices they manufacture, they have to be able to fund their work somehow.

In a perfect world, people would be paying for the cost of said long-term support, as it would likely be cheaper than buying new devices every few years and would certainly be better for the planet. Problem is, manufacturers aren't making them pay for it.

Economists call this type of problem externalities: things that should be part of the cost of a good, but aren't for one reason or another. A classic example of an externality is pollution. Clearly pollution is bad and leads to horrendous consequences, like climate change. Sane people agree we should drastically cut our greenhouse gas emissions, and yet, we aren't.

Neo-classical economic theory argues the way to fix externalities like pollution is to internalise these costs, in other words, to make people pay for the "real price" of the goods they buy. In the case of climate change and pollution, neo-classical economic theory is plain wrong (spoiler alert: it often is), but this is where band-aids like the carbon tax come from.

Still, coming back to long-term software support, let's see what would happen if we were to try to internalise software maintenance costs. We can do this multiple ways.

1 - Include the price of software maintenance in the cost of the device

This is the choice Fairphone makes. This might somewhat work out for them since they are a very small company, but it cannot scale for the following reasons:

  1. This strategy relies on you giving your money to an enterprise now, and trusting them to "Do the right thing" years later. As the years go by, they will eventually look at their books, see how much ongoing maintenance is costing them, drop support for the device, apologise and move on. That is to say, enterprises have a clear economic incentive to promise long-term support and not deliver. One could argue a company's reputation would suffer from this kind of behaviour. Maybe sometimes it does, but most often people forget. Political promises are a great example of this.

  2. Enterprises go bankrupt all the time. Even if company X promises 15 years of software support for their devices, if they cease to exist, your device will stop getting updates. The internet is full of stories of IoT devices getting bricked when the parent company goes bankrupt and their servers disappear. This is related to point number 1: to some degree, you have a disincentive to pay for long-term support in advance, as the future is uncertain and there are chances you won't get the support you paid for.

  3. Selling your devices at a higher price to cover maintenance costs does not necessarily mean you will make more money overall — raising more money to fund maintenance costs being the goal here. To a certain point, smartphone models are substitute goods and prices higher than market prices will tend to drive consumers to buy cheaper ones. There is thus a disincentive to include the price of software maintenance in the cost of the device.

  4. People tend to be bad at rationalising the total cost of ownership over a long period of time. Economists call this phenomenon hyperbolic discounting. In our case, it means people are far more likely to buy a $500 phone every 3 years than a $1000 phone every 10 years. Again, this means OEMs have a clear disincentive to include the price of long-term software maintenance in their devices.
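
The arithmetic behind that last point is easy to check; here is a quick back-of-the-envelope sketch using the hypothetical prices above, ignoring discounting and inflation:

```shell
#!/bin/sh
# Total hardware spend over a 30-year horizon, using the hypothetical
# prices from the text: a $500 phone replaced every 3 years versus a
# $1000 phone replaced every 10 years.
HORIZON=30
CHEAP_TOTAL=$((500 * HORIZON / 3))      # 10 purchases at $500
DURABLE_TOTAL=$((1000 * HORIZON / 10))  # 3 purchases at $1000
echo "Cheap phones:   \$$CHEAP_TOTAL"
echo "Durable phones: \$$DURABLE_TOTAL"
```

The durable option comes out at $3000 against $5000, yet hyperbolic discounting keeps the lower sticker price winning at the till.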

Clearly, life is more complex than how I portrayed it: enterprises are not perfect rational agents, altruism exists, not all enterprises aim solely for profit maximisation, etc. Still, in a capitalist economy, enterprises wanting to charge for software maintenance upfront have to overcome these hurdles one way or another if they want to avoid failing.

2 - The subscription model

Another way companies can try to internalise support costs is to rely on a subscription-based revenue model. This has multiple advantages over the previous option, mainly:

  1. It does not affect the initial purchase price of the device, making it easier to sell them at a competitive price.

  2. It provides a stable source of income, something that is very valuable to enterprises, as it reduces overall risks. This in return creates an incentive to continue providing software support as long as people are paying.

If this model is so interesting from an economic incentives point of view, why isn't any smartphone manufacturer offering that kind of program? The answer is, they are, but not explicitly[5].

Apple and Google can fund part of their smartphone software support via the 30% cut they take out of their respective app stores. A report from Sensor Tower shows that in 2019, Apple made an estimated US$ 16 billion from the App Store, while Google raked in US$ 9 billion from the Google Play Store. Although the Fortune 500 ranking tells us this is "only" 5.6% and 6.5% of their respective gross annual revenue for 2019, the profit margins in this category are certainly higher than on any of their other products.

This means Google and Apple have an important incentive to keep your device updated for some time: if your device works well and is updated, you are more likely to keep buying apps from their store. When software support for a device stops, there is a risk paying customers will buy a competitor device and leave their ecosystem.

This also explains why OEMs who don't own app stores tend not to provide software support for very long periods of time. Most of them only make money when you buy a new phone. There is thus a disincentive to provide long-term software support, as it directly reduces their sales revenue.

Same goes for Kindles and Kobos: the longer your device works, the more money they make with their electronic book stores. In my opinion, it's likely Amazon and Rakuten Kobo produce quarterly cost-benefit reports to decide when to drop support for older devices, based on ongoing support costs and the recurring revenues these devices bring in.

Rakuten Kobo is also in a more precarious situation than Amazon is: considering Amazon's dominant market share, if your device stops getting new updates, there is a greater chance people will replace their old Kobo with a Kindle. Again, they have a strong economic incentive to keep devices running as long as they are profitable.

Can Free Software fix this?

Yes and no. Free Software certainly isn't a magic wand one can wave to make everything better, but it does provide major advantages in terms of security, user freedom and sometimes cost. The last piece of the puzzle explaining why Rakuten Kobo's software support is better than Google's is technological choices.

Smartphones are incredibly complex devices and have become the main computing platform of many. Similar to the web, there is a race for features and complexity that tends to create bloat and make older devices slow and painful to use. On the other hand, e-readers are simpler devices built for a single task: display electronic books.

Control over the platform is also a key aspect of the cost structure of providing software updates. Whereas Apple controls both the software and hardware side of iPhones, Android is a sad mess of drivers and SoCs, all providing different levels of support over time[6].

If you take a look at the platforms the Kindle and Kobo are built on, you'll quickly see they both use Freescale i.MX SoCs. These processors are well known for their excellent upstream support in the Linux kernel and their relative longevity, chips being produced for either 10 or 15 years. This in turn makes updates much easier and less expensive to provide.

So clearly, open architectures, free drivers and open hardware help tremendously, but aren't enough on their own. One of the lessons we must learn from the (amazing) LineageOS project is how lack of funding hurts everyone.

If there is no one to do the volunteer work required to maintain a version of LOS for your device, it won't be supported. Worse, when purchasing a new device, users cannot know in advance how many years of LOS support they will get. This makes buying new devices a frustrating hit-and-miss experience. If you are lucky, you will get many years of support. Otherwise, you risk your device becoming an expensive insecure paperweight.

So how do we fix this? Anyone with a brain understands throwing away perfectly good devices every 2 years is not sustainable. Government regulations enforcing a minimum support life would be a step in the right direction, but at the end of the day, Capitalism is to blame. Like the aforementioned carbon tax, band-aid solutions can make things somewhat better, but won't fix our current economic system's underlying problems.

For now though, I'll leave fixing the problem of Capitalism to someone else.

  1. My most recent novel binge has been focused on re-reading the Dune franchise. I first read the 6 novels written by Frank Herbert when I was 13 years old and only had vague and pleasant memories of his work. Great stuff. 

  2. I'm back on LineageOS! Nice folks released an unofficial LOS 17.1 port for the Nexus 5 last January and have kept it updated since then. If you are to use it, I would also recommend updating TWRP to this version specifically patched for the Nexus 5. 

  3. Very few serious economists actually believe neo-classical rational agent theory is a satisfactory explanation of human behavior. In my opinion, it's merely a (mostly flawed) lens to try to interpret certain behaviors, a tool amongst others that needs to be used carefully, preferably as part of a pluralism of approaches. 

  4. Good data on the e-reader market is hard to come by and is mainly produced by specialised market research companies selling their findings at very high prices. Those particular statistics come from a MarketWatch analysis. 

  5. If they were to tell people: You need to pay us $5/month if you want to receive software updates, I'm sure most people would not pay. Would you? 

  6. Coming back to Fairphone: if they had so many problems providing an Android 9 build for the Fairphone 2, it's because Qualcomm never provided Android 7+ support for the Snapdragon 801 SoC it uses. 

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 19)

Here’s part nineteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:

Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the schedule for the Attack Surface lectures

Here’s the list of schools and other institutions in need of donated copies of Attack Surface.

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.



Planet DebianAntoine Beaupré: CDPATH replacements

After reading this post, I figured I might as well bite the bullet and improve on my CDPATH-related setup, especially because it does not work with Emacs. So I looked around for autojump-related alternatives that do.

What I use now

I currently have this in my .shenv (sourced by .bashrc):

export CDPATH=".:~:~/src:~/dist:~/wikis:~/go/src:~/src/tor"

This allows me to quickly jump into projects from my home dir, or the "source code" (~/src), "work" (src/tor), or wiki checkouts (~/wikis) directories. It works well from the shell, but unfortunately it's very static: if I want a new directory, I need to edit my config file, restart shells, etc. It also doesn't work from my text editor.

Shell jumpers

Those are command-line tools that can be used from a shell, generally with built-in shell integration so that a shell alias will find the right directory magically, usually by keeping track of the directories visited with cd.
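
As an illustration of that general mechanism, here is a minimal, hypothetical jumper (not any of the actual tools): log every directory visited with cd, then jump to the most recently visited match.

```shell
# Minimal directory-jumper sketch (hypothetical; the log file location
# is arbitrary). Wrap cd to record each visited directory, then let j
# jump to the last visited directory matching a pattern.
JUMPFILE="${JUMPFILE:-$HOME/.jumpdirs}"

cd() {
    # "command cd" calls the real cd builtin, bypassing this wrapper.
    command cd "$@" || return
    pwd >> "$JUMPFILE"
}

j() {
    # Most recently visited directory whose path matches the pattern.
    target=$(grep -- "$1" "$JUMPFILE" | tail -n 1)
    [ -n "$target" ] && command cd "$target"
}
```

With something like this sourced from ~/.bashrc, `j tor` would land in the last visited directory matching "tor"; real tools such as autojump add frecency ranking and smarter matching on top.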

Some of those may or may not have integration in Emacs.





Emacs plugins not integrated with the shell

Those projects can be used to track files inside a project or find files around directories, but do not offer the equivalent functionality in the shell.



  • home page
  • elpy has a notion of projects, so, by default, will find files in the current "project" with C-c C-f, which is useful


  • built-in
  • home page
  • "Bookmarks record locations so you can return to them later"


  • built-in
  • home page
  • "builds a list of recently opened files. This list is automatically saved across sessions on exiting Emacs - you can then access this list through a command or the menu"


Planet DebianIustin Pop: Serendipity

To start off, let me say it again: I hate light pollution. I really, really hate it. I love the night sky where you look up and see thousands of stars, and constellations besides Ursa Major. As somebody said once, “You haven’t lived until you’ve seen your shadow by the light of the Milky Way”.

But, ahem, I live in a large city, and despite my attempts using star trackers, special filters, etc. you simply can’t escape it. So, whenever we go on vacation in the mountains, I’m trying to think if I can do a bit of astro-photography (not that I’m good at it).

Which brings me to our recent vacation up in the mountains. I was looking forward to it, until the week before, when the weather forecast kept switching between snow, rain and overcast for the entire week. No actual day or night with clear skies, so… I didn't take a tripod, I didn't take a wide lens, and put night photography out of my mind.

Vacation itself was good, especially the quietness of the place, so I usually went to bed early-ish and didn't look outside. The weather was as forecast - no new snow (but there was enough up in the mountains), but heavy clouds all the time, and the sun only showed itself for a few minutes at a time.

One night I was up a bit longer than usual, working on the laptop and being very annoyed by a buzzing sound. At first I thought maybe I was imagining it, but from time to time it was stopping briefly, so it was a real noise; I started hunting for the source. Not my laptop, not the fridge, not the TV… but it was getting stronger near the window. I open the door to the balcony, and… bam! Very loud noise, from the hotel nearby, where — at midnight — the pool was being cleaned. I look at the people doing the work, trying to estimate how long it’ll be until they finish, but it was looking like a long time.

Fortunately with the door closed the noise was not bad enough to impact my sleep, so I debate between getting angry and just being resigned, and since it was late, I just sigh, roll my eyes — not metaphorically, but actually roll my eyes and look up, and I can't believe my eyes. Completely clear sky, no trace of clouds anywhere, and… stars. Lots of stars. I sit there, looking at the sky and enjoying the view, and I think to myself that it won't look that nice on the camera, for sure. Especially without a real tripod, and without a fast lens.

Nevertheless, I grab my camera and — just for kicks — take one handheld picture. To my surprise (and almost disbelief), blurry pixels aside, the photo does look like what I was seeing, so I grab my tiny tripod that I carried along and (with only a 24-70 zoom lens) take a photo. And another, and another, and then I realise that if I can make the composition work, and find a good shutter speed, this can turn out to be a good picture.

I didn’t have a remote release, the tripod was not very stable and it cannot point the camera upwards (it’s basically an emergency tripod), so it was quite sub-optimal; still, I try multiple shots (different compositions, different shutter speeds); they look on the camera screen and on the phone pretty good, so just for safety I take a few more, and, very happy, go to bed.

Coming back from vacation, on the large monitor, it turns out that the first 28 out of the 30 pictures were either blurry or not well focused (as I was focusing manually), and the 29th was almost OK but still not very good. Only the last, the very last picture, was technically good and also composition-wise OK. Luck? Foresight? Don't know, but it was worth deleting 28 pictures to get this one. One of my best night shots, despite being so unprepared.

Stars! Lots of stars! And mountains…

Of course, compared to other people's pictures, this is not special. But for me, it will be a keepsake of what a real night sky should look like.

If you want to zoom in, higher resolution on flickr.

Technically, the challenges for the picture were two-fold:

  • fighting the shutter speed; the light was not the problem, but rather the tripod and lack of remote release: a short shutter speed will magnify tripod issues/movement from the release (although I was using delayed release on the camera), but will prevent star trails, and a long shutter speed will do the exact opposite; in the end, at the focal length I was using, I settled on a 5 second shutter speed.
  • composition: due to the presence of the mountains (which I couldn’t avoid by tilting the camera fully up), this was for me a difficult thing, since it’s more on the artistic side, which is… very subjective; in the end, this turned out fine (I think), but mostly because I took pictures from many different perspectives.

Next time when travelling by car, I’ll surely take a proper tripod ☺

Until next time, clear and dark skies…

Cory DoctorowStop Techno Dystopia with SRSLY WRONG

SRSLY WRONG is a leftist/futuristic podcast incorporating sketches in long-form episodes; I became aware of them last year when Michael Pulsford recommended their series on “library socialism”, an idea I was so stricken by that it made its way into The Lost Cause, a novel I’m writing now. The Wrong Boys invited me on for an episode (Stop Techno Dystopia!) (MP3) as part of the Attack Surface tour and it came out so, so good! Thanks, Wrong Boys!

Cory DoctorowMy appearance on the Judge John Hodgman podcast!

I’ve been a fan of the Judge John Hodgman podcast for so many years, and often threaten my wife with bringing a case before the judge whenever we have a petty disagreement. I was so pleased to appear on the JJHO podcast (MP3) this week as part of the podcast tour for Attack Surface!

Cory DoctorowTalking writing with the Writing Excuses crew

A million years ago, I set sail on the Writing Excuses Cruise, a writing workshop at sea. As part of that workshop, I sat down with the Writing Excuses podcast team (Mary Robinette Kowal, Piper J Drake, and Howard Taylor) and recorded a series of short episodes explaining my approach to writing. I had clean forgotten that they saved one to coincide with the release of Attack Surface, until this week’s episode went live (MP3). Listening to it today, I discovered that it was incredibly entertaining!

Planet DebianFrançois Marier: Using a Let's Encrypt TLS certificate with Asterisk 16.2

In order to fix the following error after setting up SIP TLS in Asterisk 16.2:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

I created a Let's Encrypt certificate using certbot:

apt install certbot
certbot certonly --standalone -d

To enable the asterisk user to load the certificate successfully (it doesn't have permission to access the certificates under /etc/letsencrypt/), I copied it to the right directory:

cp /etc/letsencrypt/live/ /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/ /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key

Then I set the following variables in /etc/asterisk/sip.conf:
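
The ones involved for chan_sip TLS typically look like this (a sketch: option names as found in Asterisk's sip.conf.sample, paths matching the copies made above; the exact values used here may have differed):

```ini
[general]
tlsenable=yes
tlsbindaddr=0.0.0.0
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
```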


Automatic renewal

The machine on which I run asterisk has a tricky Apache setup:

  • a webserver is running on port 80
  • port 80 is restricted to the local network

This meant that the certbot domain ownership checks would get blocked by the firewall, and I couldn't open that port without exposing the private webserver to the Internet.

So I ended up disabling the built-in certbot renewal mechanism:

systemctl disable certbot.timer certbot.service
systemctl stop certbot.timer certbot.service

and then writing my own script in /etc/cron.daily/certbot-francois:


#!/bin/bash

# Temporary file for the firewall backup.
TEMPFILE=$(mktemp)

# Stop Apache and backup firewall.
/bin/systemctl stop apache2.service
/usr/sbin/iptables-save > $TEMPFILE

# Open up port 80 to the whole world.
/usr/sbin/iptables -D INPUT -j LOGDROP
/usr/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables -A INPUT -j LOGDROP

# Renew all certs.
/usr/bin/certbot renew --quiet

# Restore firewall and restart Apache.
/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables-restore < $TEMPFILE
/bin/systemctl start apache2.service

# Copy certificate into asterisk.
cp /etc/letsencrypt/live/ /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/ /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
/bin/systemctl restart asterisk.service

# Commit changes to etckeeper.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt asterisk
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs."
    echo "$DIFFSTAT"
fi
popd > /dev/null


Planet DebianDirk Eddelbuettel: digest 0.6.26: Blake3 and Tuning

And a new version of digest is now on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 896k monthly downloads, 279 direct reverse dependencies and 8057 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release brings two nice contributed updates. Dirk Schumacher added support for blake3 (though we could probably push this a little harder for performance, help welcome). Winston Chang benchmarked and tuned some of the key base R parts of the package. Last but not least I flipped the vignette to the lovely minidown, updated the Travis CI setup using bspm (as previously blogged about in r4 #30), and added a package website using Material for MkDocs.

My CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJunichi Uekawa: Troubleshooting your audio input.

Troubleshooting your audio input. When doing video conferencing, I sometimes hear the remote end not doing very well. Especially when your friend tells you they bought a new mic and it doesn't sound good: they might be using the wrong configuration in the OS and using the other mic, or they might have a constant noise source in the room that affects the video conferencing noise-cancelling algorithms. Yes, noise-cancelling algorithms aren't perfect, because detecting what is noise is heuristic, so it's better to start with a low level of noise. Here is the app. I have a video to demonstrate.


Cryptogram Cybersecurity Visuals

The Hewlett Foundation just announced its top five ideas in its Cybersecurity Visuals Challenge. The problem Hewlett is trying to solve is the dearth of good visuals for cybersecurity. A Google Images Search demonstrates the problem: locks, fingerprints, hands on laptops, scary looking hackers in black hoodies. Hewlett wanted to go beyond those tropes.

I really liked the idea, but find the results underwhelming. It’s a hard problem.

Hewlett press release.

Cryptogram Split-Second Phantom Images Fool Autopilots

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.


In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.

The paper:

Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Telsa Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.

Planet Debian: Yves-Alexis Perez: iOS 14 USB tethering broken on Linux: looking for documentation and contact at Apple

It's a bit of a long shot, but maybe someone on Planet Debian or elsewhere can help us reach the right people at Apple.

Starting with iOS 14, something apparently changed in the way USB tethering (also called Personal Hotspot) is set up, which broke it for people using Linux. The driver in use is ipheth, developed in 2009 and included in the Linux kernel in 2010.

The kernel driver negotiates over USB with the iOS device in order to set up the link. The protocol used by both parties doesn't really seem to be documented publicly; it has evolved over time and across iOS versions, and the Linux driver hasn't been kept up to date. On macOS and Windows the driver apparently comes with iTunes, and Apple engineers obviously know how to communicate with iOS devices, so iOS 14 is supported just fine.

There's an open bug on libimobiledevice (the set of userland tools used to communicate with iOS devices, although the fix should be done in the kernel), with some debugging and communication logs between Windows and an iOS device, but so far no real progress has been made. The link comes up, the host gets an IP from the device, can ping the device's IP and can even resolve names using the device's DNS resolver, but IP forwarding seems disabled: no packet goes farther than the device itself.
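The symptom pattern described above (local connectivity works, forwarding doesn't) is easy to check from the host. A small Python sketch, with the resolver and connector injectable so the classification logic can be exercised offline; the hostname and port are arbitrary examples:

```python
import socket


def diagnose_tethering(resolve=socket.gethostbyname,
                       connect=socket.create_connection):
    """Classify the iOS 14 USB-tethering failure mode: names resolve
    through the device's DNS server, but no packet is forwarded
    past the device itself."""
    try:
        # DNS resolution goes through the resolver handed out by the device.
        addr = resolve("example.org")
    except OSError:
        return "link down or DNS broken"
    try:
        # An actual connection requires the device to forward packets.
        connect((addr, 443), timeout=3).close()
        return "forwarding works"
    except OSError:
        return "DNS works but forwarding is broken (the iOS 14 symptom)"
```

Running this on an affected host should report the last case: resolution succeeds because the device answers DNS itself, while the outbound connection times out.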

That means a lot of people upgrading to iOS 14 will suddenly lose USB tethering. Wi-Fi and Bluetooth connection sharing still work, but they're suboptimal, so it'd be nice to fix the kernel driver to support the latest protocol used in iOS 14.

If someone knows the right contact (or the right way to reach them) at Apple so we can get access to some kind of documentation on the protocol and the state machine to use, please reach out (either on the libimobiledevice bug or at my email address below).


Worse Than Failure: Error'd: Try a Different Address

"Aldi doesn't deliver to a memory address!? Ok, that sounds fair," wrote Oisin.


"When you order pictures from this photography studio, you find that what sets them apart from other studios is group photography and not web site design," Stephen D. wrote.


John H. writes, "To be honest, I'm a little bit surprised. I figured for sure that it would have been a 50/50 split between meat and booze."


"I was just trying to perform a trivial merge using GVim, but then ṕ̵̡̛̃̍̊͊̌̉h̸̢̦̭̟̿̉̔̔͛̋̓̑̉́͌͊̒̕͜'̸̛͎̹̦͔̳͎̹͎̥̻̺̦̳͈̞̃̿̐́́͑̒͛̀̋̑̕͠n̸̨̛͉̥̻̘̣̞̉́͂̌͋́̾͗͠g̵̣̫̯̈̈́̕l̷͍̳͚̻̺̀̊͆̂͒͐̀̔́͘͜͝͝͝ừ̷̯̬̜̱̭̳̞͇̠̄̎͗͝į̴̗̺͓̣̘͚̯̳͕̠̗̮̄͛͌̍̈́͌͊͌̓̈́͌̌͝ ̵̼̼̥͍̙̋̇͂͋͐͐͂m̶̥̭͍͕͍̑̏g̸̡͖̜̜̭͈͖͇̦͍͉͚̎͜ͅl̵̡͉̜̀̂̎́͐̑̑͑̍̚ŵ̴̡͚̝̣̬̭̥̙͎̻̽͝'̸̼̩̑́̄͆̾͆̿́̕ṅ̶̛̯͍̰͒́̂̎ǎ̶͇̬̲͕͍̻̞̫̻͕͈͔̍͌͆͌̈̓͛̀̌̿̚͝͠f̸͎͖̰̖̫̪͎̄̈̓̏́̐̄̈̒́̈̾̔̕ḩ̶̠̺̯̦̪͓̜͙̬͎̳̭͊̀̌͂͆́̾̑̾͝ ̶̣̘̟͊̈́͊͗͒͋̊͊̀͛̉͝Ģ̶̛̙̜̗̖̼̺͓͐̀͐͒V̸̢̙̰̟̙͚̗͖̆̈̇̓͘͠͝͝i̶̛̹̳̍͂́͝͝͝m̶̛͈͈̹̯͕̗͐̂͊̇̃̃͌̌̓̄̔̆͘ ̸̢̛̣͓̪͚̘͚̰͖͐́͜R̸͍͈͚̻͕̗̻͉͙͆͌͐͒̓͒̐̓̑̊͒͝'̴̧͚͔̫̼̺͔͎̖͈̞͙͆͆l̷͍̠̄͂̆̒͊̔ẏ̶̢̮͓̄͆́̉̈́̑͗̉͠ͅè̴̡̧̝͙̖̹͓̤̼̻͓̬͚̰̅̋́̑̾͘͜h̷̳͇̖͔̤͇̦̹̮̐̏͛̊͐͒͐ ̷̧͚̔̉̍͆͌̓͗̓̐̐͘w̸̤̝̪͎͇̩̤̳͒̏̒̎̈́̉͗̈́̚̚͝m̴̼̦̩͉̜̳͓͔̟̭͇̜̰̬̋̎͗͆̒̀̍̍̇̔͋̕͝͝ͅê̸̢̡̢̝͓̟̭̞͍̞͈̖̠̲̬̒̐̎̿̇̍́̂͒̏̉͘͠r̸̬̉̈́͆̌͆̈̉ǧ̴̢̙͚͓͇͖͔̩̣͕̞̚e̸̡̢̩̠͙͖̺̥͉̦̟̩͐͘'̵̡̢̧̟͇̭̲̳͕̻̜͇̘̬̙̈̀̓́̄̒̔̒̕͘͠n̶̢̛͎̹̮̻̼̳̜͖̲̂̇̅̾̄̐̊̇̓̒̒̍͘a̸̟̞̗̻͕̘̳͔̿̌͑̌̎̈̊̓͊̊͋͜g̷̢̮̟̠͖̞̤͖̘̻̞̀̄̓͌̾́̉̏͑͐͜l̵̪̫̝̐̃ ̸̮̤̱͇͂̔̂̀̾̀̚f̵̥͌̂̒̐̚h̷̡̤̮͇̥̖̼̙́̌̕ͅţ̴̛̞̦̩͚̝̦̮͎͕̹̖̰̀̋̋͐̍̅̀͛̕̕̚͘͝͠ȁ̸̖̝͈͎̤͇̽͛́̄͆͒̃̓̏͐͊͒̔̌g̸̜̠͉̝̱̳͔̭̦͇̱̘̺͋ͅͅn̶̦̥͈̻͍̠̂̔̊́̑͆̉̈́͝," wrote Ivan.


Robert M. writes, "Look, considering how 2020 is going, 'Invalid Date' could be really any day coming up. Listen - I just want to know, is this before or after Christmas, because like it or not, I still have to do holiday shopping."




Krebs on Security: Breach at Dickey’s BBQ Smokes 3M Cards

One of the digital underground’s most popular stores for peddling stolen credit card information began selling a batch of more than three million new card records this week. KrebsOnSecurity has learned the data was stolen in a lengthy data breach at more than 100 Dickey’s Barbeque Restaurant locations around the country.

An ad on the popular carding site Joker’s Stash for “BlazingSun,” which fraud experts have traced back to a card breach at Dickey’s BBQ.

On Monday, the carding bazaar Joker’s Stash debuted “BlazingSun,” a new batch of more than three million stolen card records, advertising “valid rates” of between 90 and 100 percent. This is typically an indicator that the breached merchant is either unaware of the compromise or has only just begun responding to it.

Multiple companies that track the sale of stolen payment card data say they have confirmed with card-issuing financial institutions that the accounts for sale in the BlazingSun batch have one common theme: All were used at various Dickey’s BBQ locations over the past 13-15 months.

KrebsOnSecurity first contacted Dallas-based Dickey’s on Oct. 13. Today, the company shared a statement saying it was aware of a possible payment card security incident at some of its eateries:

“We received a report indicating that a payment card security incident may have occurred. We are taking this incident very seriously and immediately initiated our response protocol and an investigation is underway. We are currently focused on determining the locations affected and time frames involved. We are utilizing the experience of third parties who have helped other restaurants address similar issues and also working with the FBI and payment card networks. We understand that payment card network rules generally provide that individuals who timely report unauthorized charges to the bank that issued their card are not responsible for those charges.”

The confirmations came from Miami-based Q6 Cyber and Gemini Advisory in New York City.

Q6Cyber CEO Eli Dominitz said the breach appears to extend from May 2019 through September 2020.

“The financial institutions we’ve been working with have already seen a significant amount of fraud related to these cards,” Dominitz said.

Gemini says its data indicated some 156 Dickey’s locations across 30 states likely had payment systems compromised by card-stealing malware, with the highest exposure in California and Arizona. Gemini puts the exposure window between July 2019 and August 2020.

“Low-and-slow” aptly describes the card breach at Dickey’s, which persisted for at least 13 months.

With the threat from ransomware attacks grabbing all the headlines, it may be tempting to assume plain old credit card thieves have moved on to more lucrative endeavors. Alas, cybercrime bazaars like Joker’s Stash have continued plying their trade, undeterred by a push from the credit card associations to encourage more merchants to install credit card readers that require more secure chip-based payment cards.

That’s because there are countless restaurants — usually franchise locations of an established eatery chain — that are left to decide for themselves whether and how quickly they should make the upgrades necessary to dip the chip versus swipe the stripe.

“Dickey’s operates on a franchise model, which often allows each location to dictate the type of point-of-sale (POS) device and processors that they utilize,” Gemini wrote in a blog post about the incident. “However, given the widespread nature of the breach, the exposure may be linked to a breach of the single central processor, which was leveraged by over a quarter of all Dickey’s locations.”

While there have been sporadic reports about criminals compromising chip-based payment systems used by merchants in the U.S., the vast majority of the payment card data for sale in the cybercrime underground is stolen from merchants who are still swiping chip-based cards.

This isn’t conjecture; relatively recent data from the stolen card shops themselves bear this out. In July, KrebsOnSecurity wrote about an analysis by researchers at New York University, which looked at patterns surrounding more than 19 million stolen payment cards that were exposed after the hacking of BriansClub, a top competitor to the Joker’s Stash carding shop.

The NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

Visa and MasterCard instituted new rules in October 2015 that put retailers on the hook for all of the losses associated with counterfeit card fraud tied to breaches if they haven’t implemented chip-based card readers and enforced the dipping of the chip when a customer presents a chip-based card.

Dominitz said he never imagined back in 2015 when he founded Q6Cyber that we would still be seeing so many merchants dealing with magstripe-based data breaches.

“Five years ago I did not expect we would be in this position today with card fraud,” he said. “You’d think the industry in general would have made a bigger dent in this underground economy a while ago.”

Tired of having your credit card re-issued and updating your payment records at countless e-commerce sites every time some restaurant you frequent has a breach? Here’s a radical idea: Next time you visit an eatery (okay, if that ever happens again post-COVID, etc), ask them if they use chip-based card readers. If not, consider taking your business elsewhere.

Planet Debian: Jelmer Vernooij: Debian Janitor: How to Contribute Lintian-Brush Fixers

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

lintian-brush can currently fix about 150 different issues that lintian can report, but that's still a small fraction of the more than a thousand different types of issue that lintian can detect.

If you're interested in contributing a fixer script to lintian-brush, there is now a guide that describes all steps of the process:

  1. how to identify lintian tags that are good candidates for automated fixing
  2. creating test cases
  3. writing the actual fixer

For more information about the Janitor's lintian-fixes efforts, see the landing page.

Planet Debian: Gunnar Wolf: I am who I am and that's all that I am

Mexico was one of the first countries in the world to set up a national population registry in the late 1850s, as part of the church-state separation that was for many years one of the nation's sources of pride.

Forty four years ago, when I was born, keeping track of the population was still mostly a manual task. When my parents registered me, my data was stored in page 161 of book 22, year 1976, of the 20th Civil Registration office in Mexico City. Faithful to the legal tradition, everything is handwritten and specified in full. Because, why would they write 1976.04.27 (or even 27 de abril de 1976) when they could spell out día veintisiete de abril de mil novecientos setenta y seis? Numbers seem to appear only for addresses.

So, the State had a record of a child being born, and we knew where to look if we ever came to need this information. But, many years later, a very sensible modernization happened: all records (after a certain date, I guess) were digitized. Great news! I can now get my birth certificate without moving from my desk, paying a quite reasonable fee (~US$4). What’s there not to like?

Digitally certified and all! So great! But… But… Oh, there’s a problem.

Of course… Making sense of the handwriting as you can see is somewhat prone to failure. And I cannot blame anybody for failing to understand the details of my record.

So, my mother’s first family name is Iszaevich. It was digitized as Iszaerich. Fortunately, they do acknowledge some errors could have made it into the process, and there is a process to report and correct errors.

What’s there not to like?

Oh, just that they do their best to emulate a public office using online tools. I followed some links from that page to get the contact address, and yesterday night sent them the needed documents. Almost immediately, I got an answer that… I must share with the world:

Yes, the mailing contact is in the domain. I could care about them not using a @… address, but I’ll let it slip. The mail I got says (uppercase and all):


8:00 TO 15:00.



I would only be half-surprised if they were paying the salary of somebody to spend the wee hours of the night receiving and deleting mails from their GMail account.

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying over 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.

Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Worse Than Failure: CodeSOD: A New Generation

Mapping between types can create some interesting challenges. Michal has one of those scenarios. The code comes to us heavily anonymized, but let’s see what we can do to understand the problem and the solution.

There is a type called ItemA. ItemA is a model object on one side of a boundary; the consuming code doesn’t get to touch ItemA objects directly, and instead consumes one of two different types: ItemB or SimpleItemB.

The key difference between ItemB and SimpleItemB is that they have different validation rules. It’s entirely possible for an instance of ItemA to be a valid SimpleItemB and an invalid ItemB. If an ItemA contains exactly the five required importantDataPieces fields, and everything else is null, it should turn into a SimpleItemB. Otherwise, it should turn into an ItemB.

Michal adds: “Also noteworthy is the fact that ItemA is a class generated from XML schemas.”

The Java class was generated, but not the conversion method.

public ItemB doConvert(ItemA itemA) {
  final ItemA emptyItemA = new ItemA();
  // (anonymized: a few ID-related fields are copied from itemA into emptyItemA here)

  final String importantDataPiece1 = itemA.importantDataPiece1();
  final String importantDataPiece2 = itemA.importantDataPiece2();
  final String importantDataPiece3 = itemA.importantDataPiece3();
  final String importantDataPiece4 = itemA.importantDataPiece4();
  final String importantDataPiece5 = itemA.importantDataPiece5();

  // null out the required fields so itemA can be compared against the "empty" template
  itemA.setImportantDataPiece1(null);
  itemA.setImportantDataPiece2(null);
  itemA.setImportantDataPiece3(null);
  itemA.setImportantDataPiece4(null);
  itemA.setImportantDataPiece5(null);

  final boolean isSimpleItem = itemA.equals(emptyItemA)
      && importantDataPiece1 != null && importantDataPiece2 != null
      && importantDataPiece3 != null && importantDataPiece4 != null;

  // put the required fields back before converting
  itemA.setImportantDataPiece1(importantDataPiece1);
  itemA.setImportantDataPiece2(importantDataPiece2);
  itemA.setImportantDataPiece3(importantDataPiece3);
  itemA.setImportantDataPiece4(importantDataPiece4);
  itemA.setImportantDataPiece5(importantDataPiece5);

  if (isSimpleItem) {
      return simpleItemConverter.convert(itemA);
  } else {
      return itemConverter.convert(itemA);
  }
}
We start by making a new instance of ItemA, emptyItemA, and copy a few values over to it. Then we clear out the five required fields (after caching the values in local variables). We rely on .equals, generated off that XML schema, to see if this newly created item is the same as our recently cleared out input item. If they are, and none of the required fields are null, we know this will be a SimpleItemB. We’ll put the required fields back into the input object, and then call the appropriate conversion methods.

Let’s restate the goal of this method, to understand how ugly it is: if an object has five required values and nothing else, it’s SimpleItemB, otherwise it’s a regular ItemB. The way this developer decided to perform this check wasn’t by examining the fields (which, in their defense, are being generated, so you might need reflection to do the inspection), but by this unusual choice of equality test. Create an empty object, copy a few ID related elements into it, and your default constructor should handle nulling out all the things which should be null, right?
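The check can be stated directly as "all required fields set, everything else null," without mutating the input or leaning on equals(). Sketched here in Python for brevity (a Java version would iterate the generated getters or use reflection); the field names mirror the anonymized code and are hypothetical:

```python
REQUIRED = {
    "importantDataPiece1", "importantDataPiece2", "importantDataPiece3",
    "importantDataPiece4", "importantDataPiece5",
}


def is_simple_item(item) -> bool:
    """True iff every required field is set and every other field is null,
    without mutating the item or comparing it against a template object."""
    fields = vars(item)
    return (all(fields.get(name) is not None for name in REQUIRED)
            and all(value is None
                    for name, value in fields.items()
                    if name not in REQUIRED))
```

Because this never consults a default-constructed template, adding a new field with a non-null default (the bug that bit this team) cannot silently flip the classification.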

Or, as Michal sums it up:

The intention of the above snippet appears to be checking whether itemA contains all fields mandatory for SimpleItemB and none of the other ones. Why the original author started by copying some fields to his ‘template item’ but switched to the ‘rip the innards of the original object, check if the ravaged carcass is equal to the barebones template, and then stuff the guts back in and pretend nothing ever happened’ approach halfway through? I hope I never find out what it’s like to be in a mental state when any part of this approach seems like a good idea.

Ugly, yes, but still, this code worked… until it didn’t. Specifically, a new nullable Boolean field was added to ItemA which was used by ItemB, but had no impact on SimpleItemB. This should have continued to work, except that the original developer defaulted the new field to false in the constructor, but didn’t update the doConvert method, so equals started deciding that our input item and our “empty” copy no longer matched. Downstream code started getting invalid ItemB objects when it should have been getting valid SimpleItemB objects, which triggered many hours of debugging to try and understand why this small change had such cascading effects.

Michal refactored this code, but was not the first person to touch it recently:

A cherry on top is the fact that importantDataPiece5 came about a few years after the original implementation. Someone saw this code, contributed to the monstrosity, and happily kept on trucking.


Planet Debian: Dirk Eddelbuettel: dang 0.0.12: Two new functions

A new release of the dang package is now on CRAN, roughly one year after the last release. The dang package regroups a few functions of mine that had no other home: for example, lsos() from a StackOverflow question from 2009 (!!) is one; this overbought/oversold price band plotter from an older blog post is another. More recently added were helpers for data.table to xts conversion and a git repo root finder.

This release adds two functions. One was mentioned just days ago in a tweet by Nathan and is a reworked version of something Colin tweeted about a few weeks ago: a little data wrangling on top of the kewl rtweet package to find maximally spammy accounts per search topic, in other words those who include more than ‘N’ hashtags for a given search term. The other is something I, if memory serves, picked up a while back on one of the lists: a base R function to identify non-ASCII characters in a file. It is a C function that is not directly exported and hence not accessible, so we put it here (with credits, of course). I mentioned it yesterday when announcing tidyCpp, as this C function was the starting point for the new tidyCpp wrapper around some of R’s C API.

The (very short) NEWS entry follows.

Changes in version 0.0.12 (2020-10-14)

  • New functions muteTweets and checkPackageAsciiCode.

  • Updated CI setup.

Courtesy of CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian: Steinar H. Gunderson: RSS test


Planet Debian: Michael Stapelberg: Linux package managers are slow

I measured how long the package managers of the most popular Linux distributions take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix C for details on the measurement method and command outputs.


Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution package manager data wall-clock time rate
Fedora dnf 114 MB 33s 3.4 MB/s
Debian apt 16 MB 10s 1.6 MB/s
NixOS Nix 15 MB 5s 3.0 MB/s
Arch Linux pacman 6.5 MB 3s 2.1 MB/s
Alpine apk 10 MB 1s 10.0 MB/s
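The rate column is just data transferred divided by wall-clock time; the values in the ack table appear to be truncated (not rounded) to one decimal. A quick check of the rows above:

```python
import math


def rate_mb_per_s(data_mb: float, seconds: float) -> float:
    """Data volume over wall-clock time, truncated to one decimal place
    (which is how the rate column above appears to be derived)."""
    return math.floor(data_mb / seconds * 10) / 10


# Rows from the ack table: (data in MB, wall-clock time in seconds)
assert rate_mb_per_s(114, 33) == 3.4   # Fedora / dnf
assert rate_mb_per_s(16, 10) == 1.6    # Debian / apt
assert rate_mb_per_s(15, 5) == 3.0     # NixOS / Nix
assert rate_mb_per_s(6.5, 3) == 2.1    # Arch Linux / pacman
assert rate_mb_per_s(10, 1) == 10.0    # Alpine / apk
```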

qemu (large C program)

distribution package manager data wall-clock time rate
Fedora dnf 226 MB 4m37s 1.2 MB/s
Debian apt 224 MB 1m35s 2.3 MB/s
Arch Linux pacman 142 MB 44s 3.2 MB/s
NixOS Nix 180 MB 34s 5.2 MB/s
Alpine apk 26 MB 2.4s 10.8 MB/s

(Looking for older measurements? See Appendix B (2019).)

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases, otherwise they hopefully would have removed it altogether. Still, the amount of metadata seems excessive for the task of installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence the execution of arbitrary package-maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.
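The constraint is asymmetric: downloading and unpacking archives has no shared side effects and could run concurrently, while hooks must run strictly one after another. A sketch of that split, with placeholder download/hook functions standing in for real package-manager internals:

```python
from concurrent.futures import ThreadPoolExecutor


def install_all(packages, download, run_hook):
    """Fetch and unpack every package concurrently, then run the
    (arbitrary, maintainer-provided) hooks strictly in order.

    `download` and `run_hook` are placeholders for the real fetch/unpack
    and hook-execution steps of a package manager.
    """
    with ThreadPoolExecutor() as pool:
        # Concurrent phase: no hook has run yet, so ordering doesn't matter.
        unpacked = list(pool.map(download, packages))
    # Sequential phase: hooks may read or modify shared system state,
    # so they are serialized in the requested package order.
    for pkg in unpacked:
        run_hook(pkg)
    return unpacked
```

A hook-free design (as argued in the linked post) would let the sequential phase disappear entirely, leaving only the concurrent one.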

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name scope package file format hooks/triggers
AppImage apps image: ISO9660, SquashFS no
snappy apps image: SquashFS yes: hooks
FlatPak apps archive: OSTree no
0install apps archive: tar.bz2 no
nix, guix distro archive: nar.{bz2,xz} activation script
dpkg distro archive: tar.{gz,xz,bz2} in ar(1) yes
rpm distro archive: cpio.{bz2,lz,xz} scriptlets
pacman distro archive: tar.xz install
slackware distro archive: tar.{gz,xz} yes
apk distro archive: tar.gz yes: .post-install
Entropy distro archive: tar.bz2 yes
ipkg, opkg distro archive: tar{,.gz} yes


As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution".

There are a couple of recent developments going in the same direction.

Appendix C: measurement details (2020)



Fedora’s dnf takes almost 33 seconds to fetch and unpack 114 MB.

% docker run -t -i fedora /bin/bash
[root@62d3cae2e2f9 /]# time dnf install -y ack
Fedora 32 openh264 (From Cisco) - x86_64     1.9 kB/s | 2.5 kB     00:01
Fedora Modular 32 - x86_64                   6.8 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         5.6 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 9.9 MB/s |  23 MB     00:02
Fedora 32 - x86_64                            39 MB/s |  70 MB     00:01
real	0m32.898s
user	0m25.121s
sys	0m1.408s

NixOS’s Nix takes a little over 5s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.ack'
unpacking channels...
created 1 symlinks in user environment
installing 'perl5.32.0-ack-3.3.1'
these paths will be fetched (15.55 MiB download, 85.51 MiB unpacked):
copying path '/nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man' from ''...
copying path '/nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10' from ''...
copying path '/nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18' from ''...
copying path '/nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0' from ''...
copying path '/nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31' from ''...
copying path '/nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48' from ''...
copying path '/nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53' from ''...
copying path '/nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31' from ''...
copying path '/nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0' from ''...
copying path '/nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1' from ''...
building '/nix/store/m0rl62grplq7w7k3zqhlcz2hs99y332l-user-environment.drv'...
created 49 symlinks in user environment
real	0m 5.60s
user	0m 3.21s
sys	0m 1.66s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@1996bb94a2d1:/# time (apt update && apt install -y ack-grep)
Get:1 sid InRelease [146 kB]
Get:2 sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (8088 kB/s)
The following NEW packages will be installed:
  ack libfile-next-perl libgdbm-compat4 libgdbm6 libperl5.30 netbase perl perl-modules-5.30
0 upgraded, 8 newly installed, 0 to remove and 23 not upgraded.
Need to get 7341 kB of archives.
After this operation, 46.7 MB of additional disk space will be used.
real	0m9.544s
user	0m2.839s
sys	0m0.775s

Arch Linux’s pacman takes a little under 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9f6672688a64 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            130.8 KiB  1090 KiB/s 00:00
 extra          1655.8 KiB  3.48 MiB/s 00:00
 community         5.2 MiB  6.11 MiB/s 00:01
resolving dependencies...
looking for conflicting packages...

Packages (2) perl-file-next-1.18-2  ack-3.4.0-1

Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
real	0m2.936s
user	0m0.375s
sys	0m0.160s

Alpine’s apk takes a little over 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
(1/4) Installing libbz2 (1.0.8-r1)
(2/4) Installing perl (5.30.3-r0)
(3/4) Installing perl-file-next (1.18-r0)
(4/4) Installing ack (3.3.1-r0)
Executing busybox-1.31.1-r16.trigger
OK: 43 MiB in 18 packages
real	0m 1.24s
user	0m 0.40s
sys	0m 0.15s



Fedora’s dnf takes over 4 minutes to fetch and unpack 226 MB.

% docker run -t -i fedora /bin/bash
[root@6a52ecfc3afa /]# time dnf install -y qemu
Fedora 32 openh264 (From Cisco) - x86_64     3.1 kB/s | 2.5 kB     00:00
Fedora Modular 32 - x86_64                   6.3 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         6.0 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 334 kB/s |  23 MB     01:10
Fedora 32 - x86_64                            33 MB/s |  70 MB     00:02

Total download size: 181 M
Downloading Packages:

real	4m37.652s
user	0m38.239s
sys	0m6.321s

NixOS’s Nix takes almost 34s to fetch and unpack 180 MB.

% docker run -t -i nixos/nix
83971cf79f7e:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.qemu'
unpacking channels...
created 1 symlinks in user environment
installing 'qemu-5.1.0'
these paths will be fetched (180.70 MiB download, 1146.92 MiB unpacked):
real	0m 33.64s
user	0m 16.96s
sys	0m 3.05s

Debian’s apt takes over 95 seconds to fetch and unpack 224 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 sid InRelease [146 kB]
Get:2 sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (5998 kB/s)
Fetched 216 MB in 43s (5006 kB/s)
real	1m25.375s
user	0m29.163s
sys	0m12.835s

Arch Linux’s pacman takes almost 44s to fetch and unpack 142 MB.

% docker run -t -i archlinux/base
[root@58c78bda08e8 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core          130.8 KiB  1055 KiB/s 00:00
 extra        1655.8 KiB  3.70 MiB/s 00:00
 community       5.2 MiB  7.89 MiB/s 00:01
Total Download Size:   135.46 MiB
Total Installed Size:  661.05 MiB
real	0m43.901s
user	0m4.980s
sys	0m2.615s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Appendix B: measurement details (2019)



Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from ''...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from ''...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from ''...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from ''...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from ''...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from ''...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from ''...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from ''...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 sid InRelease [233 kB]
Get:2 sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s


Here are the details for each:

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 sid InRelease [149 kB]
Get:2 sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
Fetched 151 MB in 2s (64.6 MB/s)
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet Debian Michael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research Linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research Linux distribution development, i.e. to try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.


I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

I frequently noticed a large gap between the actual speed of an operation (e.g. doing an update) and the possible speed based on back-of-the-envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit①, through my work on distri I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share sub directory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.
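To make the global flavor concrete, here is a small, hypothetical Python sketch that unions each package’s share subdirectory into one exchange directory. distri actually serves exchange directories via FUSE; the symlink approach and all path/function names below are mine, for illustration only:

```python
import os

def build_exchange_dir(store, subdir, target):
    # Union the `subdir` ("share", "include", ...) of every package in
    # the package store into one exchange directory. First package wins
    # on a name clash here; distri's real conflict resolution is smarter.
    os.makedirs(target, exist_ok=True)
    for pkg in sorted(os.listdir(store)):
        src = os.path.join(store, pkg, "out", subdir)
        if not os.path.isdir(src):
            continue
        for name in os.listdir(src):
            link = os.path.join(target, name)
            if not os.path.lexists(link):
                os.symlink(os.path.join(src, name), link)
```

With a layout like this, man(1), gcc(1) and friends get a single directory to search instead of one directory per package.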

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh on their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.
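Points 1 and 2 above can be sketched in a few lines of Python. This is a hypothetical illustration with invented names, not distri’s actual code (distri itself is implemented differently):

```python
import os
import tempfile

def install_image(store, name, data):
    # Point 1: write the image to a temporary file in the store, then
    # rename it into place. rename(2) is atomic on POSIX, so readers see
    # either the complete image or nothing; no fsync(2) is needed just
    # to keep the store consistent.
    fd, tmp = tempfile.mkstemp(dir=store)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.rename(tmp, os.path.join(store, name))

def resolve_conflict(candidates):
    # Point 2: when several package versions provide the same file in an
    # exchange directory, pick the one with the highest trailing distri
    # revision number (e.g. "zsh-amd64-5.6.2-3" -> 3).
    return max(candidates, key=lambda pkg: int(pkg.rsplit("-", 1)[1]))
```

The write-then-rename pattern is also why interrupted installations leave at worst an orphaned temporary file, never a half-visible package.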

Fast installation also makes other use-cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.
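A rough Python sketch of that idea, with invented helper names (distri provides the filtered view via FUSE rather than symlinks, so this is illustration only):

```python
import os

def setup_build_env(store, build_root, declared_deps):
    # Expose only the declared dependencies of the package being built:
    # the build sees a /ro containing just those packages, nothing else.
    ro = os.path.join(build_root, "ro")
    os.makedirs(ro, exist_ok=True)
    for dep in declared_deps:
        os.symlink(os.path.join(store, dep), os.path.join(ro, dep))
    # No unpacking, no copying: setup cost is a handful of symlinks
    # (or FUSE bookkeeping), which is why it takes a fraction of a second.
```

Undeclared dependencies simply do not exist in the build environment, which also makes missing-dependency bugs surface immediately.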

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at The website is just the README for now, but we can improve that later.

The repository can be found at

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet Debian Sven Hoexter: Nice Helper to Sanitize File Names -

One of the most awesome helpers I’ve carried around in my ~/bin since the early ’00s is the script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live-session music with youtube-dl.

Update: Francois Marier pointed out that Debian contains the detox package, which has a similar functionality.

Planet Debian Thomas Goirand: The Gnocchi package in Debian

This is a follow-up to the blog post by Russell, as seen here: There’s a bunch of things he wrote which I unfortunately must say are inaccurate, and sometimes even completely wrong. It is my point of view that none of the reported bugs are helpful for anyone who understands Gnocchi and how to set it up. It was however a terrible experience that Russell had, and I do understand why (and why it’s not his fault). I’m very much open to suggestions on how to fix this at the packaging level, though some things aren’t IMO fixable. Here are the details.

1/ The daemon startups

First of all, the most surprising thing is that Russell claimed there are no startup scripts for the Gnocchi daemons. In fact, they all come with both systemd and sysv-rc support:

# ls /lib/systemd/system/gnocchi-api.service
# /etc/init.d/gnocchi-api

Russell then tried to start gnocchi-api without the options that are set in the Debian scripts, and not surprisingly, this failed. Russell attempted to do what is in the upstream doc, which isn’t adapted to what we have in Debian (the upstream doc is probably completely outdated, as Gnocchi is unfortunately not very well maintained upstream).

Bug #972087 is therefore, IMO, not valid.

2/ The database setup

By default for all things OpenStack in Debian, there are debconf helpers using dbconfig-common to help users set up databases for their services. This is clearly for beginners, but that doesn’t prevent one from attempting to understand what’s going on. More specifically for Gnocchi, there are 2 databases: one for Gnocchi itself, and one for the indexer, which does not necessarily use the same backend. The Debian package already sets up one database, but one has to do it manually for the indexer one. I’m sorry this isn’t documented well enough.

Now, while some packages support SQLite as a backend (since most things in OpenStack use SQLAlchemy), it looks like Gnocchi doesn’t right now. This is IMO a bug upstream, rather than a bug in the package. However, I don’t think the Debian packages are to blame here, as they simply offer a unified interface, and it’s up to the users to know what they are doing. SQLite is not a production-ready backend anyway. I’m not sure if I should close #971996 without any action, or just try to disable the SQLite backend option of this package because it may be confusing.

3/ The metrics UUID

Russell then thinks the UUID should be set by default. This is probably right in a single-server setup; however, it wouldn’t work when setting up a cluster, which is probably what most Gnocchi users will do. In this type of environment, the metrics UUID must be the same on the 3 servers, and setting up a random (and therefore different) UUID on the 3 servers wouldn’t work. So I’m also tempted to just close #972092 without any action on my side.

4/ The coordination URL

Since Gnocchi is supposed to be set up with more than one server (as in OpenStack, an HA setup is very common), a backend for the coordination (i.e. sharing the workload) must be set. This is done by setting a URL that tooz understands. The best coordinator being Zookeeper, something like this should be set by hand:


Here again, I don’t think the Debian package is to be blamed for not providing the automation. I would however accept contributions to fix this and provide the choice using debconf; users would still need to understand what’s going on, and set up something like Zookeeper (or redis, memcached, or any other backend supported by tooz) to act as coordinator.

5/ The Debconf interface cannot replace a good documentation

… and there’s not so much I can do at my package maintainer level for this.

Russell, I’m really sorry for the bad user experience you had with Gnocchi. Now that you know a little bit more about it, maybe you can have another go? Sure, the OpenStack telemetry system isn’t an easy beast to understand, but it’s IMO worth trying. And the recent versions can scale horizontally…

Planet Debian Junichi Uekawa: I am planning on talking about Rust programming in Debian environment.

I am planning on talking about Rust programming in Debian environment. Tried taking a video of me setting up the environment.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Worse Than Failure CodeSOD: Nothing But Garbage

Janell found herself on a project where most of her team were offshore developers she only interacted with via email, and a tech lead who had not ever programmed on the .NET Framework, but did some VB6 “back in the day”, and thus “knew VB”.

The team dynamic rapidly became a scenario where the tech lead issued an edict, the offshore team blindly applied it, and then Janell was left staring at the code wondering how to move forward with this.

These decrees weren’t all bad. For example: “Avoid repeated code by re-factoring into methods,” isn’t the worst advice. It’s not complete advice- there’s a lot of gotchas in that- but it fits on a bumper-sticker and generally leads to better code.

There were other rules that… well:

To improve the performance of the garbage collector, all variables must be set to nothing (null) at the end of each method.

Any time someone says something like “to improve the performance of the garbage collector,” you know you’re probably in for a bad time. This is no exception. Now, in old versions of VB, there’s a lot of “stuff” about whether or not you need to do this. It was considered a standard practice in a lot of places, though the real WTF was clearly VB in this case.

In .NET, this is absolutely unnecessary, and there are much better approaches if you need to do some sort of cleanup, like implementing the IDisposable interface. But, since this was the rule, the offshore team followed the rule. And, since we have repeated code, the offshore team followed that rule too.


Public Sub SetToNothing(ByVal object As Object)
    object = Nothing
End Sub

If there is a platonic ideal of bad code, this is very close to that. This method attempts to solve a non-problem, regarding garbage collection. It also attempts to be “DRY”, by replacing one repeated line with… a different repeated line. And, most important: it doesn’t do what it claims.

The key here is that the parameter is ByVal. The copy of the reference in this method is set to nothing, but the original in the calling code is left unchanged in this example.
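The same mistake is easy to demonstrate outside VB.NET. Here is a minimal Python sketch of the equivalent behavior, since rebinding a parameter name never affects the caller’s variable:

```python
def set_to_nothing(obj):
    # Rebinds only the local name `obj`; the caller's variable still
    # refers to the original object, just like assigning Nothing to a
    # ByVal parameter in VB.NET.
    obj = None

x = object()
set_to_nothing(x)
assert x is not None  # the caller's reference survives untouched
```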

Oh, but remember when I said, “they were attempting to be DRY”? I lied. I’ll let Janell explain:

I found this gem in one of the developer’s classes. It turned out that he had copy and pasted it into every class he had worked on for good measure.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet Debian François Marier: Making an Apache website available as a Tor Onion Service

As part of the #MoreOnionsPorFavor campaign, I decided to follow's lead and make my homepage available as a Tor onion service.

Tor daemon setup

I started by installing the Tor daemon locally:

apt install tor

and then setting the following in /etc/tor/torrc:

SocksPort 0
SocksPolicy reject *
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 [2600:3c04::f03c:91ff:fe8c:61ac]:80
HiddenServicePort 443 [2600:3c04::f03c:91ff:fe8c:61ac]:443
HiddenServiceVersion 3
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1

in order to create a version 3 onion service without actually running a Tor relay.

Note that since I am making a public website available over Tor, I do not need the location of the website to be hidden and so I used the same settings as Cloudflare in their public Tor proxy.

Also, I explicitly used the external IPv6 address of my server in the configuration in order to prevent localhost bypasses.

After restarting the Tor daemon to reload the configuration file:

systemctl restart tor.service

I looked for the address of my onion service:

$ cat /var/lib/tor/hidden_service/hostname 

Apache configuration

Next, I enabled a few required Apache modules:

a2enmod mpm_event
a2enmod http2
a2enmod headers

and configured my Apache vhosts in /etc/apache2/sites-enabled/www.conf:

<VirtualHost *:443>
    ServerAlias ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion

    Protocols h2, http/1.1
    Header set Onion-Location "http://ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion:443"; ma=315360000; persist=1'
    Header add Strict-Transport-Security "max-age=63072000"

    Include /etc/fmarier-org/www-common.include

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/
</VirtualHost>

<VirtualHost *:80>
    Redirect permanent /
</VirtualHost>

<VirtualHost *:80>
    ServerName ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion
    Include /etc/fmarier-org/www-common.include
</VirtualHost>

Note that /etc/fmarier-org/www-common.include contains all of the configuration options that are common to both the HTTP and the HTTPS sites (e.g. document root, caching headers, aliases, etc.).

Finally, I restarted Apache:

apache2ctl configtest
systemctl restart apache2.service


In order to test that my website is correctly available at its .onion address, I opened the following URLs in a Brave Tor window:

I also checked that the main URL ( exposes a working Onion-Location header which triggers the display of a button in the URL bar (recently merged and available in Brave Nightly):

Testing that the Alt-Svc is working required using the Tor Browser since that's not yet supported in Brave:

  1. Open
  2. Wait 30 seconds.
  3. Reload the page.

On the server side, I saw the following:

2a0b:f4c2:2::1 - - [14/Oct/2020:02:42:20 +0000] "GET / HTTP/2.0" 200 2696 "-" "Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0"
2600:3c04::f03c:91ff:fe8c:61ac - - [14/Oct/2020:02:42:53 +0000] "GET / HTTP/2.0" 200 2696 "-" "Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0"

That first IP address is from a Tor exit node:

$ whois 2a0b:f4c2:2::1
inet6num:       2a0b:f4c2::/40
netname:        MK-TOR-EXIT
remarks:        -----------------------------------
remarks:        This network is used for Tor Exits.
remarks:        We do not have any logs at all.
remarks:        For more information please visit:

which indicates that the first request was not using the .onion address.

The second IP address is the one for my server:

$ dig +short -x 2600:3c04::f03c:91ff:fe8c:61ac

which indicates that the second request to Apache came from the Tor relay running on my server, hence using the .onion address.

Planet Debian Dirk Eddelbuettel: tidyCpp 0.0.1: New package

A new package arrived on CRAN a few days ago. It offers a few header files which wrap (parts of) the C API for R, but in a form that may be a little easier to use for C++ programmers. I have always liked how in Rcpp we offer good parts of the standalone R Math library in a namespace R::. While working recently with a particular C routine (for checking non-ASCII characters, which will be part of the next version of the dang package collecting various goodies in one place), I realized there may be value in collecting a few more such wrappers. So I started with a few simple ones, working from simple examples.

Currently we have five headers: defines.h, globals.h, internals.h, math.h, and shield.h. The first four each correspond to an R header file of the same or similar name, and the last one brings a simple yet effective alternative to PROTECT and UNPROTECT from Rcpp (in a slightly simplified way). None of the headers are “complete”; for internals.h in particular a lot more could be added (as I noticed today when experimenting with another source file that may be converted). All of the headers can be accessed with a simple #include <tidyCpp> (which, following another C++ convention, does not have a .h or .hpp suffix). And as the package ships these headers, packages desiring to use them only need LinkingTo: tidyCpp.

As usage examples, we (right now) have four files in the snippets/ directory of the package. Two of these, convolveExample.cpp and dimnamesExample.cpp, illustrate how one could change example code from Writing R Extensions. Then there are also a very simple defineExample.cpp and a shieldExample.cpp illustrating how much easier Shield() is compared to PROTECT and UNPROTECT.

Finally, there is a nice vignette discussing the package motivation with two detailed side-by-side ‘before’ and ‘after’ examples that are the aforementioned convolution and dimnames examples.

Over time, I expect to add more definitions and wrappers. Feedback would be welcome—it seems to hit a nerve already as it currently has more stars than commits even though (prior to this post) I had yet to tweet or blog about it. Please post comments and suggestions at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Cryptogram Swiss-Swedish Diplomatic Row Over Crypto AG

Previously I have written about the Swedish-owned Swiss-based cryptographic hardware company: Crypto AG. It was a CIA-owned Cold War operation for decades. Today it is called Crypto International, still based in Switzerland but owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a meeting with her Swiss counterpart Ignazio Cassis slated for this month after Switzerland placed an export ban on Crypto International, a Swiss-based and Swedish-owned cybersecurity company.

The ban was imposed while Swiss authorities examine long-running and explosive claims that a previous incarnation of Crypto International, Crypto AG, was little more than a front for U.S. intelligence-gathering during the Cold War.

Linde said the Swiss ban was stopping “goods” — which experts suggest could include cybersecurity upgrades or other IT support needed by Swedish state agencies — from reaching Sweden.

She told public broadcaster SVT that the meeting with Cassis was “not appropriate right now until we have fully understood the Swiss actions.”

EDITED TO ADD (10/13): Lots of information on Crypto AG.

Krebs on Security Microsoft Patch Tuesday, October 2020 Edition

It’s Cybersecurity Awareness Month! In keeping with that theme, if you (ab)use Microsoft Windows computers you should be aware the company shipped a bevy of software updates today to fix at least 87 security problems in Windows and programs that run on top of the operating system. That means it’s once again time to backup and patch up.

Eleven of the vulnerabilities earned Microsoft’s most-dire “critical” rating, which means bad guys or malware could use them to gain complete control over an unpatched system with little or no help from users.

Worst in terms of outright scariness is probably CVE-2020-16898, which is a nasty bug in Windows 10 and Windows Server 2019 that could be abused to install malware just by sending a malformed packet of data at a vulnerable system. CVE-2020-16898 earned a CVSS Score of 9.8 (10 is the most awful).

Security vendor McAfee has dubbed the flaw “Bad Neighbor,” and in a blog post about it said a proof-of-concept exploit shared by Microsoft with its partners appears to be “both extremely simple and perfectly reliable,” noting that this sucker is imminently “wormable” — i.e. capable of being weaponized into a threat that spreads very quickly within networks.

“It results in an immediate BSOD (Blue Screen of Death), but more so, indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations,” McAfee’s Steve Povolny wrote. “The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable.”

Trend Micro’s Zero Day Initiative (ZDI) calls special attention to another critical bug quashed in this month’s patch batch: CVE-2020-16947, which is a problem with Microsoft Outlook that could result in malware being loaded onto a system just by previewing a malicious email in Outlook.

“The Preview Pane is an attack vector here, so you don’t even need to open the mail to be impacted,” said ZDI’s Dustin Childs.

While there don’t appear to be any zero-day flaws in October’s release from Microsoft, Todd Schell from Ivanti points out that a half-dozen of these flaws were publicly disclosed prior to today, meaning bad guys have had a jump start on being able to research and engineer working exploits.

Other patches released today tackle problems in Exchange Server, Visual Studio, .NET Framework, and a whole mess of other core Windows components.

For any of you who’ve been pining for a Flash Player patch from Adobe, your days of waiting are over. After several months of depriving us of Flash fixes, Adobe’s shipped an update that fixes a single — albeit critical — flaw in the program that crooks could use to install bad stuff on your computer just by getting you to visit a hacked or malicious website.

Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Mercifully, Adobe is slated to retire Flash Player later this year, and Microsoft has said it plans to ship updates at the end of the year that will remove Flash from Windows machines.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any chinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Planet DebianJonathan Dowland: The Cure — Pornography

picture of a vinyl record

Last weekend, Tim Burgess’s twitter listening party covered The Cure’s short, dark 1982 album “Pornography”. I realised I’d never actually played the record, which I picked up a couple of years ago from a shop in the Grainger Market which is sadly no longer there. It was quite a wallet-threatening shop so perhaps it’s a good thing it’s gone.

Monday was a dreary, rainy day which seemed the perfect excuse to put it on. It’s been long enough since I last listened to my CD copy of the album that there were a few nice surprises to rediscover. The closing title track sounded quite different to how I remembered it, with Robert Smith’s vocals buried deeper in the mix, but my memory might be mixing up a different session take.

Truly a fitting closing lyric for our current times: I must fight this sickness / Find a cure

Cryptogram US Cyber Command and Microsoft Are Both Disrupting TrickBot

Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had a “localhost” address, one that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.
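Why that single config change neuters the bots can be seen with a quick illustrative check (this is my own sketch, not part of Intel 471’s analysis): loopback addresses are, by definition, never routed over the public Internet, so a bot pointed at one can only ever talk to itself.

```python
import ipaddress

# A "localhost" (loopback) address, of the kind the poisoned config pointed
# the bots at: traffic sent to it never leaves the infected machine.
addr = ipaddress.ip_address("127.0.0.1")
print(addr.is_loopback)  # True: loopback, delivered only to the local host
print(addr.is_global)    # False: not reachable from the public Internet
```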

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.


We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of trademark law.

Cryptogram 2020 Workshop on Economics of Information Security

The Workshop on Economics of Information Security will be online this year. Register here.

Cory DoctorowAttack Surface is out!

Today is the US/Canada release date for Attack Surface, the third Little Brother book. It’s been a long time coming (Homeland, the second book, came out in 2013)!

It’s the fourth book I’ve published in 2020, and it’s my last book of the year.

When the lockdown hit in March, I started thinking about what I’d do if my US/Canada/UK/India events were all canceled, but I still treated it as a distant contingency. But well, here we are!

My US publisher, Tor Books, has put together a series of 8 ticketed events, each with a pair of brilliant, fascinating guests, to break down all the themes in the book. Each is sponsored by a different bookstore and each comes with a copy of the book.

We kick off the series TONIGHT at 5PM Pacific/8PM Eastern with “Politics and Protest,” sponsored by The Strand NYC, with guests Eva Galperin (EFF Threat Lab) and Ron Deibert (U of T Citizen Lab).

There will be video releases of these events eventually, but if you want to attend more than one and don’t need more than one copy of the book, you can donate your copy to a school, prison, library, etc. Here’s a list of institutions seeking copies:

And if you are affiliated with an organization or institution that would like to put your name down for a freebie, here’s the form. I’m checking it several times/day and adding new entries to the list:

I got a fantastic surprise this morning: a review by Paul Di Filippo in the Washington Post:

He starts by calling me “among the best of the current practitioners of near-future sf,” and, incredibly, the review only gets better after that!

Di Filippo says the book is a “political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance,” whose hero, Masha, is “a protagonist worth rooting for, whose inner conflicts and cognitive dissonances propel her to surprising, heroic actions.”

He closes by saying that my work “charts the universal currents of the human heart and soul with precision.”

I mean, wow.

If you’d prefer an audiobook of Attack Surface; you’re in luck! I produced my own audio edition of the book, with Amber Benson narrating, and it’s amazing!

Those of you who backed the audio on Kickstarter will be getting your emails from BackerKit shortly (I’ve got an email in to them and will post an update to the KS as soon as they get back to me).

If you missed the presale, you can still get the audio, everywhere EXCEPT Audible, who refuse to carry my work as it’s DRM-free (that’s why I had to fund the audiobook; publishers aren’t interested in the rights to audio that can’t be sold on the dominant platform).

Here are some of the stores carrying the book today:

Supporting Cast (Audiobooks from Slate):


I expect that both Downpour and Google Play should have copies for sale any minute now (both have the book in their systems but haven’t made it live yet).

And of course, you can get it direct from me, along with all my other ebooks and audiobooks:

Worse Than FailureSlow Load

LED traffic light on red

After years spent supporting an enterprisey desktop application with a huge codebase full of WTFs, Sammy thought he had seen all there was to be seen. He was about to find out how endlessly deep the bottom of the WTF barrel truly was.

During development, Sammy frequently had to restart said application: a surprisingly onerous process, as it took about 30 seconds each and every time to return to a usable state. Eventually, a mix of curiosity and annoyance spurred him into examining just why it took so long to start.

He began by profiling the performance. When the application first initialized, it performed 10 seconds of heavy processing. Then the CPU load dropped to 0% for a full 16 seconds. After that, it increased, pegging out one of the eight cores on Sammy's machine for 4 seconds. Finally, the application was ready to accept user input. Sammy knew that, for at least some of the time, the application was calling out to and waiting for a response from a server. That angle would have to be investigated as well.

Further digging and hair-pulling led Sammy to a buried bit of code, a Very Old Mechanism for configuring the visibility of items on the main menu. While some menu items were hard-coded, others were dynamically added by extension modules. The application administrator had the ability to hide or show any of them by switching checkboxes in a separate window.

When the application first started up, it retrieved the user's latest configuration from the server, applied it to their main menu, then sent the resulting configuration back to the server. The server, in turn, called DELETE * FROM MENU_CFG; INSERT INTO MENU_CFG (…); INSERT INTO MENU_CFG (…); …

Sammy didn't know what was the worst thing about all this. Was it the fact that the call to the server was performed synchronously for no reason? Or, that after multiple DELETE / INSERT cycles, the table of a mere 400 rows weighed more than 16 MB? When the users all came in to work at 9:00 AM and started up the application at roughly the same time, their concurrent transactions caused quite the bottleneck—and none of it was necessary. The Very Old Mechanism had been replaced with role-based configuration years earlier.
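The wipe-and-rewrite pattern the article describes can be sketched in a few lines (a minimal sqlite3 illustration with a hypothetical schema; the real application used a remote server and a much heavier table):

```python
import sqlite3

# Hypothetical menu-visibility table, standing in for MENU_CFG.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE menu_cfg (item TEXT PRIMARY KEY, visible INTEGER)")

def save_config(cfg):
    # One DELETE plus N INSERTs per client, per start-up, even when nothing
    # changed: with every user logging in around 9:00 AM, these writes all
    # contend for the same table.
    with conn:  # single transaction
        conn.execute("DELETE FROM menu_cfg")
        conn.executemany("INSERT INTO menu_cfg VALUES (?, ?)", cfg.items())

save_config({"reports": 1, "admin": 0})
save_config({"reports": 1, "admin": 0})  # identical config, full rewrite anyway
print(conn.execute("SELECT COUNT(*) FROM menu_cfg").fetchone()[0])  # 2
```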

All Sammy could do was write up a change request to put in JIRA. Speaking of bottlenecks ...

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Planet DebianDirk Eddelbuettel: GitHub Streak: Round Seven

Six years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And five years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And four years ago we had a follow-up at 1096 days:

github activity october 2015 to october 2016

And three years ago we had another one marking 1461 days:

github activity october 2016 to october 2017

And two years ago another one for 1826 days:

github activity october 2017 to october 2018

And last year another one bringing it to 2191 days:

github activity october 2018 to october 2019

And as today is October 12, here is the newest one from 2019 to 2020 with a new total of 2557 days:

github activity october 2019 to october 2020
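The 2557-day total is easy to sanity-check (assuming the streak began on the same calendar day, 12 October, in 2013):

```python
from datetime import date

# Seven calendar years from 2013-10-12 to 2020-10-12: five 365-day years plus
# two 366-day ones (the spans containing 29 February 2016 and 2020).
days = (date(2020, 10, 12) - date(2013, 10, 12)).days
print(days)  # 2557
```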

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteinar H. Gunderson: plocate 1.0.0 released

I've released version 1.0.0 of plocate, my faster locate(1)! (Actually, I'm now at 1.0.2, after some minor fixes and improvements.) It has a new build system, portability fixes, man pages, support for case-insensitive searches (still quite fast), basic and extended regex searches (as slow as mlocate) and a few other options. The latter two were mostly to increase mlocate compatibility, not because I think either is very widely used. That, and supporting case-insensitive searches was an interesting problem in its own right :-)

It now also has a small home page with tarballs. And access() checking is also now asynchronous via io_uring via a small trick (assuming Linux 5.6 or newer, it can run an asynchronous statx() to prime the cache, all but guaranteeing that the access() call itself won't lead to I/O), speeding up certain searches on non-SSDs even more.
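The priming trick can be illustrated in miniature (a Python sketch of the idea only, not plocate's actual C++/io_uring code): issue cheap stat() calls for all candidate paths up front, so the kernel has the needed inodes cached by the time the serial permission checks run.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def filter_accessible(paths):
    # Prime the kernel's inode/dentry caches concurrently; the results are
    # discarded on purpose -- we only want the I/O to happen early.
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(lambda p: os.stat(p, follow_symlinks=False), paths))
    # The access() checks now hit warm caches and almost never block on disk.
    return [p for p in paths if os.access(p, os.R_OK)]

with tempfile.NamedTemporaryFile() as f:
    accessible = filter_accessible([f.name])
    print(accessible == [f.name])  # True
```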

There's also a Debian package in NEW.

In short, plocate now has grown up, and it wants to be your default locate. I've considered replacing mlocate's updatedb as well, but it's honestly not a space I want to be in right now; it involves so much munging with special cases caused by filesystem restrictions and the like.

Bug reports, distribution packages and all other feedback welcome!

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 18)

Here’s part eighteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

Content warning for domestic abuse and sexual violence.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:

Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the schedule for the Attack Surface lectures

Here’s the list of schools and other institutions in need of donated copies of Attack Surface.

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Cryptogram Google Responds to Warrants for “About” Searches

One of the things we learned from the Snowden documents is that the NSA conducts “about” searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An “about” search would be something like “show me anyone that has used this particular name in a communication,” or “show me anyone who was at this particular location within this time frame.” These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.
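To see why such a warrant is inherently broad, here is a toy sketch (invented log format, users, and dates): answering “who searched for this address near the time of the arson?” requires scanning every user’s queries, because the filter is on content, not on a known suspect’s identifier.

```python
from datetime import datetime

# A hypothetical slice of a search-engine query log.
search_log = [
    ("user_a", "pizza near me",     datetime(2019, 4, 6, 21, 0)),
    ("user_b", "123 Main St",       datetime(2019, 4, 7, 22, 15)),
    ("user_b", "123 Main St",       datetime(2019, 4, 8, 20, 40)),
    ("user_c", "how to fix a bike", datetime(2019, 4, 8, 20, 41)),
]

def about_search(log, term, start, end):
    # Every row must be examined -- this is the "know what everyone said"
    # property of an "about" search.
    return sorted({user for user, query, ts in log
                   if term in query and start <= ts <= end})

print(about_search(search_log, "123 Main St",
                   datetime(2019, 4, 6, 0, 0), datetime(2019, 4, 9, 0, 0)))
# ['user_b']
```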

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man ­ his stepfather ­ sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.
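The kind of cross-check the attorneys describe is mechanically simple (a hypothetical sketch with invented records, not the actual case data): flag any pair of records that place the same account in two different places at nearly the same time, which suggests the account, rather than the person, was "there."

```python
from datetime import datetime, timedelta

# Invented, simplified location records: (account, place, timestamp).
records = [
    ("molina", "Scottsdale retirement community", datetime(2018, 1, 10, 14, 5)),
    ("molina", "Walmart across town",             datetime(2018, 1, 10, 14, 5)),
    ("molina", "home",                            datetime(2018, 1, 11, 9, 0)),
]

def conflicting(recs, window=timedelta(minutes=10)):
    # A person cannot be in two places at once; an account logged in on
    # several devices can. Report each same-account, same-time conflict.
    hits = []
    for i, (acct_a, place_a, t_a) in enumerate(recs):
        for acct_b, place_b, t_b in recs[i + 1:]:
            if acct_a == acct_b and place_a != place_b and abs(t_a - t_b) <= window:
                hits.append((acct_a, place_a, place_b))
    return hits

print(conflicting(records))  # one conflict: Scottsdale vs. the Walmart
```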

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Planet DebianRussell Coker: First Attempt at Gnocchi-Statsd

I’ve been investigating the options for tracking system statistics to diagnose performance problems. The idea is to track all sorts of data about the system (network use, disk IO, CPU, etc) and look for correlations at times of performance problems. DataDog is pretty good for this but expensive; it’s apparently based on or inspired by the Etsy Statsd. It’s claimed that gnocchi-statsd is the best implementation of the protocol used by the Etsy Statsd, so I decided to install that.

I use Debian/Buster for this as that’s what I’m using for the hardware that runs KVM VMs. Here is what I did:

# it depends on a local MySQL database
apt -y install mariadb-server mariadb-client
# install the basic packages for gnocchi
apt -y install gnocchi-common python3-gnocchiclient gnocchi-statsd uuid

In the Debconf prompts I told it to “setup a database” and not to manage keystone_authtoken with debconf (because I’m not doing a full OpenStack installation).

This gave a non-working configuration as it didn’t configure the MySQL database for the [indexer] section and the sqlite database that was configured didn’t work for unknown reasons. I filed Debian bug #971996 about this [1]. To get this working you need to edit /etc/gnocchi/gnocchi.conf and change the url line in the [indexer] section to something like the following (where the password is taken from the [database] section).

url = mysql+pymysql://gnocchi-common:PASS@localhost:3306/gnocchidb

To get the statsd interface going you have to install the gnocchi-statsd package and edit /etc/gnocchi/gnocchi.conf to put a UUID in the resource_id field (the Debian package uuid is good for this). I filed Debian bug #972092 requesting that the UUID be set by default on install [2].
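For reference, the statsd wire protocol that gnocchi-statsd implements is tiny: plain-text "name:value|type" datagrams over UDP ("c" for counters, "g" for gauges, "ms" for timers). A minimal sketch (the host and port here are my assumptions for a local daemon on the conventional statsd port):

```python
import socket

def send_metric(name, value, metric_type="c", host="127.0.0.1", port=8125):
    # Build the classic statsd line format, e.g. "vm.disk_io:42|c".
    payload = f"{name}:{value}|{metric_type}".encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # UDP is fire-and-forget: as the Etsy design intended, dropped
    # datagrams are acceptable for a stats service.
    sock.sendto(payload, (host, port))
    sock.close()
    return payload

print(send_metric("vm.disk_io", 42))  # b'vm.disk_io:42|c'
```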

Here’s an official page about how to operate Gnocchi [3]. The main thing I got from this was that the following commands need to be run from the command-line (I ran them as root in a VM for test purposes but would do so with minimum privs for a real deployment).


To communicate with Gnocchi you need the gnocchi-api program running, which uses the uwsgi program to provide the web interface by default. It seems that this was written for a version of uwsgi different than the one in Buster. I filed Debian bug #972087 with a patch to make it work with uwsgi [4]. Note that I didn’t get to the stage of an end to end test, I just got it to basically run without error.

After getting “gnocchi-api” running (in a terminal not as a daemon as Debian doesn’t seem to have a service file for it), I ran the client program “gnocchi” and then gave it the “status” command which failed (presumably due to the metrics daemon not running), but at least indicated that the client and the API could communicate.

Then I ran the “gnocchi-metricd” and got the following error:

2020-10-12 14:59:30,491 [9037] ERROR    gnocchi.cli.metricd: Unexpected error during processing job
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gnocchi/cli/", line 87, in run
  File "/usr/lib/python3/dist-packages/gnocchi/cli/", line 248, in _run_job
  File "/usr/lib/python3/dist-packages/tooz/", line 592, in update_capabilities
    raise tooz.NotImplemented

At this stage I’ve had enough of gnocchi. I’ll give the Etsy Statsd a go next.


Thomas has responded to this post [5]. At this stage I’m not really interested in giving Gnocchi another go. There’s still the issue of the indexer database which should be different from the main database somehow and sqlite (the config file default) doesn’t work.

I expect that if I was to persist with Gnocchi I would encounter more poorly described error messages from the code which either don’t have Google hits when I search for them or have Google hits to unanswered questions from 5+ years ago.

The Gnocchi systemd config files are in different packages to the programs, this confused me and I thought that there weren’t any systemd service files. I had expected that installing a package with a daemon binary would also get the systemd unit file to match.

The cluster features of Gnocchi are probably really good if you need that sort of thing. But if you have a small instance (EG a single VM server) then it’s not needed. Also one of the original design ideas of the Etsy Statsd was that UDP was used because data could just be dropped if there was a problem. I think for many situations the same concept could apply to the entire stats service.

If the other statsd programs don’t do what I need then I may give Gnocchi another go.

Sociological ImagesSociology IRL

One of the goals of this blog is to help get sociology to the public by offering short, interesting comments on what our discipline looks like out in the world.

A sociologist can unpack this!
Photo Credit: Mario A. P., Flickr CC

We live sociology every day, because it is the science of relationships among people and groups. But because the name of our discipline is kind of a buzzword itself, I often find excellent examples of books in the nonfiction world that are deeply sociological, even if that isn’t how their authors or publishers would describe them.

Last year, I had the good fortune to help a friend as he was working on one of these books. Now that the release date is coming up, I want to tell our readers about the project because I think it is an excellent example of what happens when ideas from our discipline make it out into the “real” world beyond academia. In fact, the book is about breaking down that idea of the “real world” itself. It is called IRL: Finding realness, meaning, and belonging in our digital lives, by Chris Stedman.

In IRL, Chris tackles big questions about what it means to be authentic in a world where so much of our social interaction is now taking place online. The book goes to deep places, but it doesn’t burden the reader with an overly-serious tone. Instead, Chris brings a lightness by blending memoir, interviews, and social science, all arranged in vignettes so that reading feels like scrolling through a carefully curated Instagram feed.

What makes this book stand out to me is that Chris really brings the sociology here. In the pages of IRL I spotted Zeynep Tufekci’s Twitter and Tear Gas, Mario Small’s Someone to Talk To, Nathan Jurgenson’s work on digital dualism, Jacqui Frost’s work on navigating uncertainty, Paul McClure on technology and religion, and a nod to some work with yours truly about nonreligious folks. To see Chris citing so many sociologists, among the other essayists and philosophers that inform his work, really gives you a sense of the intellectual grounding here and what it looks like to put our field’s ideas into practice.

Above all, I think the book is worth your time because it is a glowing example of what it means to think relationally about our own lives and the lives of others. That makes Chris’ writing a model for the kind of reflections many of us have in mind when we assign personal essays to our students in Sociology 101—not because it is basic, but because it is willing to deeply consider how we navigate our relationships today and how those relationships shape us, in turn.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.


Planet DebianMarkus Koschany: My Free Software Activities in September 2020

Here is my monthly report (plus the first week of October) covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games


Debian Java

  • The focus was on two major packages this month, PDFsam, a tool to manipulate PDF files and Netbeans, one of the three well known Java IDEs. I basically updated every PDFsam related sejda dependency and packaged a new library libsejda-common-java, which is currently waiting in the NEW queue. As soon as this one has been approved, we should be able to see the latest release in Debian soon.
  • Unfortunately I came to the conclusion that maintaining Netbeans in Debian is no longer a viable solution. I have been the sole maintainer for the past five years and managed to package the basic Java IDE in Stretch. I also had a 98% ready package for Buster but there were some bugs that made it unfit for a stable release in my opinion. The truth is, it takes a lot of time to patch Netbeans, just to make the build system DFSG compliant and to build the IDE from source. We have never managed to provide more functionality than the basic Java IDE features too. Still, we had to maintain dozens of build-dependencies and there was a constant struggle to make everything work with just a single version of a library. While the Debian way works great for most common projects, it doesn’t scale very well for very complex ones like Java IDEs. Neither Eclipse nor Netbeans are really fully maintainable in Debian since they consist of hundreds of different jar files, even if the toolchain was perfect, it would require too much time to maintain all those Debian packages.
  • I voiced that sentiment on our debian-java mailing list while also discussing the situation of complex server packages like Apache Solr. Similar to Netbeans, it requires hundreds of jar files to get running. I believe our users are better served in those cases by using tools like flatpak for desktop packages or jdeb for server packages. The idea is to provide a Debian toolchain which would download a source package from upstream and then use jdeb to create a Debian package. Thus we could provide packages for very complex Java software again, although only via the Debian contrib distribution. The pros: software is available as Debian packages, integrates well with your system, and takes considerably less time to maintain. The cons: not available in Debian main, no security support, not checked for DFSG compliance.
  • Should we do that for all of our packages? No. This should really be limited to packages that otherwise would not be in Debian at all and are so complex to maintain that even a whole team of contributors would struggle.
  • Finally, the consequences: the Netbeans IDE has been removed from Debian main, but the Netbeans platform package, libnb-platform18-java, is up-to-date again, as is visualvm, which depends on it.
  • New upstream releases were packaged for jboss-xnio, activemq, httpcomponents-client, jasypt and undertow to address several security vulnerabilities.
  • I also packaged a new version of sweethome3d, a 2D interior design application.


  • The usual suspects: I updated binaryen and ublock-origin.
  • I eventually filed an RFA for privacybadger. As I mentioned in my last post, the upstream maintainer would like to see regular updates in Debian stable, but I don't want to commit time to this task on a regular basis. If someone is ready for the job, let me know.
  • I did an NMU for xjig to fix Debian bug #932742.

Debian LTS

This was my 55th month as a paid contributor and I have been paid to work 31.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • Investigated and fixed a regression in squid3 when using the ICAP server. (#965012)
  • DLA-2394-1. Issued a security update for squid3 fixing 4 CVEs.
  • DLA-2400-1. Issued a security update for activemq fixing 1 CVE.
  • DLA-2403-1. Issued a security update for rails fixing 1 CVE.
  • DLA-2404-1. Issued a security update for eclipse-wtp fixing 1 CVE.
  • DLA-2405-1. Issued a security update for httpcomponents-client fixing 1 CVE.
  • Triaged open CVEs for guacamole-server and guacamole-client and prepared patches for CVE-2020-9498 and CVE-2020-9497.
  • Prepared patches for 7 CVEs in libonig.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it at no cost. The current ELTS release is Debian 8 "Jessie". This was my 28th month and I have been paid to work 15 hours on ELTS.

  • ELA-291-1. Issued a security update for libproxy fixing 1 CVE.
  • ELA-294-1. Issued a security update for squid3 fixing 4 CVEs.
  • ELA-295-1. Issued a security update for rails fixing 2 CVEs.
  • ELA-296-1. Issued a security update for httpcomponents-client fixing 1 CVE.

Thanks for reading and see you next time.

Krebs on SecurityMicrosoft Uses Trademark Law to Disrupt Trickbot Botnet

Microsoft Corp. has executed a coordinated legal sneak attack in a bid to disrupt the malware-as-a-service botnet Trickbot, a global menace that has infected millions of computers and is used to spread ransomware. A court in Virginia granted Microsoft control over many Internet servers Trickbot uses to plunder infected systems, based on novel claims that the crime machine abused the software giant’s trademarks. However, it appears the operation has not completely disabled the botnet.

A spam email containing a Trickbot-infected attachment that was sent earlier this year. Image: Microsoft.

“We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world,” wrote Tom Burt, corporate vice president of customer security and trust at Microsoft, in a blog post this morning about the legal maneuver. “We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.”

Microsoft’s action comes just days after the U.S. military’s Cyber Command carried out its own attack that sent all infected Trickbot systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control them. The roughly 10-day operation by Cyber Command also stuffed millions of bogus records about new victims into the Trickbot database in a bid to confuse the botnet’s operators.

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

From the civil complaint Microsoft filed on October 6 with the U.S. District Court for the Eastern District of Virginia:

“However, they still bear the Microsoft and Windows trademarks. This is obviously meant to and does mislead Microsoft’s customers, and it causes extreme damage to Microsoft’s brands and trademarks.”

“Users subject to the negative effects of these malicious applications incorrectly believe that Microsoft and Windows are the source of their computing device problems. There is great risk that users may attribute this problem to Microsoft and associate these problems with Microsoft’s Windows products, thereby diluting and tarnishing the value of the Microsoft and Windows trademarks and brands.”

Microsoft said it will leverage the seized Trickbot servers to identify and assist Windows users impacted by the Trickbot malware in cleaning the malware off of their systems.

Trickbot has been used to steal passwords from millions of infected computers, and reportedly to hijack access to well more than 250 million email accounts from which new copies of the malware are sent to the victim’s contacts.

Trickbot’s malware-as-a-service feature has made it a reliable vehicle for deploying various strains of ransomware, locking up infected systems on a corporate network unless and until the company agrees to make an extortion payment.

A particularly destructive ransomware strain that is closely associated with Trickbot — known as “Ryuk” or “Conti” — has been responsible for costly attacks on countless organizations over the past year, including healthcare providers, medical research centers and hospitals.

One recent Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption caused some of the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Microsoft said it did not expect its action to permanently disrupt Trickbot, noting that the crooks behind the botnet will likely make efforts to revive their operations. But so far it’s not clear whether Microsoft succeeded in commandeering all of Trickbot’s control servers, or when exactly the coordinated seizure of those servers occurred.

As the company noted in its legal filings, the set of Internet addresses used as Trickbot controllers is dynamic, making attempts to disable the botnet more challenging.

Indeed, according to real-time information posted by Feodo Tracker, a Swiss security site that tracks Internet servers used as controllers for Trickbot and other botnets, nearly two dozen Trickbot control servers — some of which first went active at the beginning of this month — are still live and responding to requests at the time of this publication.

Trickbot control servers that are currently online. Source:

Cyber intelligence firm Intel 471 says fully taking down Trickbot would require an unprecedented level of collaboration among parties and countries that most likely would not cooperate anyway. That’s partly because Trickbot’s primary command and control mechanism supports communication over The Onion Router (TOR) — a distributed anonymity service that is wholly separate from the regular Internet.

“As a result, it is highly likely a takedown of the Trickbot infrastructure would have little medium- to long-term impact on the operation of Trickbot,” Intel 471 wrote in an analysis of Microsoft’s action.

What’s more, Trickbot has a fallback communications method that uses a decentralized domain name system called EmerDNS, which allows people to create and use domains that cannot be altered, revoked or suspended by any authority. The highly popular cybercrime store Joker’s Stash — which sells millions of stolen credit cards — also uses this setup.

From the Intel 471 report [malicious links and IP address defanged with brackets]:

“In the event all Trickbot infrastructure is taken down, the cybercriminals behind Trickbot will need to rebuild their servers and change their EmerDNS domain to point at their new servers. Compromised systems then should be able to connect to the new Trickbot infrastructure. Trickbot’s EmerDNS fall-back domain safetrust[.]bazar recently resolved to the IP address 195.123.237[.]156. Not coincidentally, this network neighborhood also hosts Bazar malware control servers.”

“Researchers previously attributed the development of the Bazar malware family to the same group behind Trickbot, due to code similarities with the Anchor malware family and its methods of operation, such as shared infrastructure between Anchor and Bazar. On Oct. 12, 2020 the fall-back domain resolved to the IP address 23.92.93[.]233, which was confirmed by Intel 471 Malware Intelligence systems to be a Trickbot controller URL in May 2019. This suggests the fall-back domain is still controlled by the Trickbot operators at the time of this report.”

Intel 471 concluded that the Microsoft action has so far done little to disrupt the botnet's activity.

“At the time of this report, Intel 471 has not seen any significant impact on Trickbot’s infrastructure and ability to communicate with Trickbot-infected systems,” the company wrote.

The legal filings from Microsoft are available here.

Update, 9:51 a.m. ET: Feodo Tracker now lists just six Trickbot controllers as responding. All six were first seen online in the past 48 hours. Also added perspective from Intel 471.

Planet DebianAxel Beckert: Git related shell aliases I commonly use

  • ga="git annex"
  • gap="git add -p"
  • amend="git commit --amend"

I hope this might be an inspiration to use these or similar aliases as well.

Kevin RuddABC TV: Murdoch Royal Commission

12 OCTOBER 2020

The post ABC TV: Murdoch Royal Commission appeared first on Kevin Rudd.

Kevin RuddABC RN: Murdoch Royal Commission


12 OCTOBER 2020

Topics: #MurdochRoyalCommission petition

Fran Kelly
Former prime minister Kevin Rudd has escalated his war with the Murdoch media empire, which includes the Australian newspaper and a number of major selling daily tabloids. Kevin Rudd has set up a parliamentary petition, which calls for a royal commission into News Corp’s domination of the Australian media landscape, which he has branded, quote, a cancer on democracy. More than 73,000 signatures have been gathered in the 48 hours since the petition went live on the parliamentary website on Saturday morning. In the meantime, Rupert Murdoch’s son James Murdoch, has told The New York Times that he left his family’s company over concerns it was disguising facts and spreading disinformation. Kevin Rudd joins us from Brisbane. Kevin Rudd, welcome back to Breakfast.

Kevin Rudd
Thanks for having me on the programme, Fran.

Fran Kelly
You want a royal commission into what you generalise as, quote, the threats to media diversity, and that includes Nine's takeover of the Melbourne Age and the Sydney Morning Herald. But when you drill down, you are going after News Corp, aren't you? You are out to get Rupert Murdoch. Everyone else would just be collateral damage.

Kevin Rudd
The bottom line here is, as a former prime minister of the country, Fran, I’m passionate about the future of our democracy and a free independent and balanced media is the lifeblood of democracy. And I think most people would agree, I think most of your listeners, in the last decade or so this has just eroded. It’s the further concentration of Murdoch’s control over print media. 70% of the print readership in this country is owned by Murdoch. In my home state of Queensland virtually every single newspaper up and down the coast from Cairns to the Gold Coast is owned by Murdoch. And the ABC’s funding is under assault. So for those reasons, I think it’s high time that we had an independent, dispassionate examination of the impact of media monopoly on Australian democratic life and alternative models aimed at diversifying media ownership into the future.

Fran Kelly
And in the petition, you refer to, quote, how Australia's print media is overwhelmingly controlled by News Corp, as you just said, with two thirds of daily newspaper readership, and this power is routinely used to attack opponents in business and politics by blending editorial opinion with news reporting. What's the example that you want to give us to show how the blending of editorial opinion with news reporting has actually become a cancer and harmed Australia?

Kevin Rudd
Well I could give you dozens of examples of this across the nation’s tabloid newspapers and with the national daily, which is the Australian. If you were simply to go back to, for example, the 2013 election, the front page of I think the Sunday Telegraph was ‘we need Tony’, that was it. ‘We need Tony’ and a huge photograph of Tony Abbott. Now, I would suggest to you, Fran, that by any person’s definition, that’s a blending of an editorial view with news coverage. If on the front page of the same paper, it has me and Anthony Albanese, or The Daily Telegraph, dressed in Nazi uniforms, I would say that’s a blending of editorial opinion with news.

Fran Kelly
It's also transparent, isn't it, Kevin Rudd? I mean, it's transparent what they're doing. The readers presumably know what they're doing, and buy those papers knowing what they're doing. And it's not as if newspaper circulation, you know, indicates that they're necessarily the leader of, you know, opinion in the media anymore. I mean, things are changing. We've got news websites, people access news from all over the world.

Kevin Rudd
So why does Murdoch choose to continue to invest in essentially loss-making newspapers, Fran?

Fran Kelly
For influence.

Kevin Rudd
Yeah, the reason is political influence and power. So if you're in my home state of Queensland, as I said before, you can't go anywhere else in terms of print. Your daily paper in southeast Queensland is the Courier-Mail. And then it's the Gold Coast Bulletin or the Sunshine Coast Daily, the Bundaberg News-Mail, the Mackay Mercury, the Rockhampton Morning Bulletin, the Townsville Bulletin, the Cairns Post and others. And the reason is –

Fran Kelly
But it’s not like people only get their news from reading the Townsville Bulletin.

Kevin Rudd
No, but the reason they do it, and I’ve been in many many regional radio bureaux right around the country where, what flops onto the desk of the regional newsroom? It’s that day’s Murdoch print media because they know what happens is it helps set the agenda and the tone and the framing and the parameters of the national political debate. That’s why they do it. And so therefore, my question simply to a royal commission would be: is this poisoning our democracy? I believe over time, it is. And, for example, today we have this stunning statement by James Murdoch that the reason he left the board of News Corporation was because it was participating in the legitimisation of disinformation. Yet in today’s Australian Murdoch media, not a single reference to what James Murdoch has said; James, having helped run the organisation for decades. So this is monumental news for those of us concerned about whether in fact, we’ve got balanced news reporting or the slanting of editorial opinion.

Fran Kelly
Indeed, the comments from James Murdoch are startling and do dovetail pretty closely with some of the things you say in your petition. Have you had any discussions with him as you’ve been formulating your moves against News Corp?

Kevin Rudd
No. Not at all. I think I've met James Murdoch twice in my life. I just observe what he has said as someone who has been at the absolute heart of the operation. You see, people have often asked me: why now? Why are you calling for this now? The trend has got worse over the last decade. In Queensland, the ACCC's, in my view, wrong decision to allow Murdoch to purchase Australian Provincial Newspapers delivered Queensland, the state which swings so many federal elections, and its print media almost completely into Murdoch's hands. So that's a big change. Then there's the assault on Australian Associated Press, an institution in this country since the 1930s, which Murdoch is de-funding while seeking to set up a rival institution. And on top of that, look, in 18 out of the last 18 federal and state elections, the Murdoch print media, in its daily newspapers, campaigned viciously for one side of politics and viciously against the other in the news coverage. Look, you get to a point where enough is enough. And what I find stunning, Fran, is that in less than 48 hours, despite an inhospitable Parliament House website, 75,000 people have already gone online to register their names on this petition.

Fran Kelly
Sure, a lot of people agree with you, but you didn't agree with this necessarily. You didn't have a problem when News Corp backed you in the Kevin07 election, did you?

Kevin Rudd
If you were leader of the Labor Party in 2007, and in elections prior to that, and to some extent subsequent to that, you would do everything you could to reduce the negative coverage from the Murdoch media against the Australian Labor Party. That's the responsibility of the parliamentary leader. What I'm saying is, prior to 2010, News Corp ran a somewhat more balanced approach to their overall news coverage.

Fran Kelly
Did they or is balance in the eye of the beholder?

Kevin Rudd
No, no. Balance means everyone getting a fair go in the news coverage. That's what it means. And since then, I think any objective observer would say that in 18 out of 18 federal and state elections on the run, Tasmania, South Australia, Victoria, New South Wales, Queensland, federally, what you have seen is News Corporation running a vicious editorial line, reflected in their news coverage, for one side of politics and against the other, when they control 70% of the show. All I'm saying to you, Fran, is that enough is enough. And then the same people are campaigning to kill your budget at the ABC, the other arm of a balanced media environment in Australia. I think our democracy is under threat. And that's why I've launched this petition.

Fran Kelly
I know but you’re calling for a royal commission, so it’s a serious call. And it would, you would also need to take note of the fact that in that 10 years Labor’s had a leadership merry-go-round, it’s had clear policy failings, a split over climate policy that can’t seem to be resolved. I mean, Labor’s woes are not all about News Corp, are they?

Kevin Rudd
Of course not. And if you've seen everything I've written on this subject, we own full responsibility for problems of our own making. Since I changed the rules of the Labor Party, we've had two leaders in the last seven or eight years. Contrast that with the other side of politics, where we've had a revolving door of serving prime ministers. So let's not be distracted by the question of whether one side of politics commits political errors or not. We have, and the Coalition have. What I'm talking about is simply whether or not we have a balanced media in this country, which is capable of giving what I would describe as objective coverage to the news, facts-based news. And if you want to look at the way in which this is trending in Australia, look no further than Fox News in the United States, again a Murdoch operation, which has been the lifeblood of the Trump Administration and Trump campaign for the last four years. That's where we're headed unless we choose to stop the rot here in Australia.

Fran Kelly
So then, are you disappointed that Anthony Albanese is not joining you in this call for a royal commission? Or do you accept that a current political leader would be crazy to endorse a royal commission into a major media organisation?

Kevin Rudd
Well, I didn’t speak to Albo before I launched this petition, nor have I spoken to Mr Morrison before launching this petition for a royal commission. This is a time for the Australian people including all your listeners to decide whether it’s worth putting their name to it or not. And it’s not just targeted at News Corporation. But as you rightly say, they’re the dominant players so they come in for the primary attention. Fairfax Media has been taken over by Nine, whose chairman is Peter Costello. Your budget, the ABC, is under challenge and under attack. And then we have the question of the emerging monopolies in terms of Google and Facebook. These are all legitimate questions to be examined. My interest as a former prime minister, is having the lifeblood of the democracy kept alive and flowing with diverse media platforms. We are no longer getting that. Okay.

Fran Kelly
Kevin Rudd, thank you very much for joining us.

Kevin Rudd
Thanks very much, Fran.


Image: World Economic Forum

The post ABC RN: Murdoch Royal Commission appeared first on Kevin Rudd.

Planet DebianJonathan Dowland: Type design

I wanted to share a type design issue I hit recently with StrIoT.

Within StrIoT you define a stream-processing program, which is a series of inter-connected operators, in terms of a trio of graph types:

  • The outer-most type is a higher-order type provided by the Graph library we use: Graph a. This layer deals with all the topology concerns: what is connected to what.

  • The next type we define in StrIoT: StreamVertex, which is used to replace the a in the above and make the concrete type Graph StreamVertex. Here we define all the properties of the operators. For example: the parameters supplied to the operator, and a unique vertexId integer that is unfortunately necessary. We also define which operator type each node represents, with an instance of the third type,

  • StreamOperator, a simple enumeration-style type: StreamOperator = Map | Filter | Scan…

For some recent work I needed to define some additional properties for the operators: properties that would be used in an M/M/1 model (Jackson network) representation of the program, to do some cost modelling. Initially we supplied this additional information in completely separate instances of types: e.g. lists of tuples, the first element of a pair representing a vertexId, etc. This was mostly fine for totally novel code, but where I had existing code paths that operated in terms of Graph StreamVertex and now needed access to these parameters, it would have meant refactoring a lot of code. So instead, I added these properties directly to the types above.
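As an aside, the quantities these parameters feed into are the standard M/M/1 ones. A minimal sketch of the arithmetic (my own illustration, not StrIoT code; function names are mine):

```haskell
-- Standard M/M/1 queue results (illustrative only, not part of StrIoT).
-- lambda is the mean arrival rate; s is the mean service time, so the
-- service rate is mu = 1/s.
utilisation :: Double -> Double -> Double
utilisation lambda s = lambda * s                 -- rho = lambda / mu

meanResponseTime :: Double -> Double -> Double
meanResponseTime lambda s = s / (1 - lambda * s)  -- 1 / (mu - lambda)
```

Both are only meaningful while lambda * s < 1, i.e. while an operator keeps up with its input.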

Some properties are appropriate for all node types, e.g. mean average service time. In that case, I added the parameter to the StreamVertex type:

data StreamVertex = StreamVertex
    { vertexId   :: Int
    , serviceTime :: Double

Other parameters were only applicable to certain node types. Mean average arrival rate, for example, is only valid for Source node types; selectivity is appropriate only for filter types. So, I added these to the StreamOperator type:

data StreamOperator = Map
                    | Filter Double -- selectivity
                    | Source Double -- arrival rate

This works pretty well, and most of the code paths that already exist did not need to be updated in order for the model parameters to pass through to where they are needed. But it was not a perfect solution, because I now had to modify some other, unrelated code to account for the type changes.
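Getting a model parameter back out of StreamOperator then takes a pattern match per constructor. A small sketch (the helper name opParam is mine, not StrIoT's):

```haskell
-- Sketch: recovering the model parameter carried by a constructor.
-- Helper name assumed; follows the StreamOperator definition above.
opParam :: StreamOperator -> Maybe Double
opParam (Filter selectivity) = Just selectivity
opParam (Source rate)        = Just rate
opParam _                    = Nothing
```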

Mostly this was test code: where I'd defined instances of Graph StreamVertex to test something unrelated to the modelling work, I now had to add filter selectivities and source arrival rates. This was tedious, but mostly solved automatically with some editor macros.

One area that was a problem, though, was equality checks and pattern matching. Before this change, I had a few areas of code like this:

if Source == operator (head (vertexList sg))
if a /= b then… -- where a and b are instances of StreamOperator

I had to replace them with little helper routines like

cmpOps :: StreamOperator -> StreamOperator -> Bool
cmpOps (Filter _) (Filter _) = True
cmpOps (FilterAcc _) (FilterAcc _) = True
cmpOps x y = x == y

A similar problem arose where I needed to synthesize a Filter and didn't care about the selectivity; indeed, it was meaningless for the way I was using the type. I have a higher-level function that handles "hoisting" an operator through a Merge: before, you have some operator occurring after a merge operation; afterwards, you have an instance of the operator on each of the input streams prior to the Merge. Invoking it now looks like this:

filterMerge = pushOp (Filter 0)

It works, and the "0" is completely ignored, but the fact that I have to provide it, that it's unneeded, and that there is no sensible value for it, is a bit annoying.

I think there are some interesting things to consider here about type design, especially when you have some aspects of a "thing" which are relevant only in some contexts and not others.
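One alternative factoring worth weighing (a sketch of a different design, not what StrIoT does; all names here are mine): keep the operator tag as a plain payload-free enumeration and carry the model parameters separately, so derived equality and pattern matching work unchanged and no dummy selectivity is ever needed:

```haskell
-- Sketch of an alternative design (hypothetical names, not StrIoT's):
-- a payload-free tag plus a separate record of model parameters.
data OpTag = MapT | FilterT | ScanT | SourceT
  deriving (Eq, Show)

data ModelParams = ModelParams
  { selectivity :: Maybe Double  -- Just _ only meaningful for filters
  , arrivalRate :: Maybe Double  -- Just _ only meaningful for sources
  } deriving (Eq, Show)

-- Default for operators with no model parameters.
noParams :: ModelParams
noParams = ModelParams Nothing Nothing
```

The trade-off is that "selectivity only makes sense for filters" is no longer enforced by the type system, which is exactly the tension described above.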

Planet DebianReproducible Builds: Restarting Reproducible Builds IRC meetings

The Reproducible Builds project intends to resume meeting regularly on IRC, starting today, Monday October 12th, at 18:00 UTC.

Sadly, due to the unprecedented events in 2020, there will be no in-person Reproducible Builds event this year, but please join us in the #reproducible-builds IRC channel. An editable agenda is available. The cadence of these meetings will probably be every two weeks, although this will be discussed and decided on at the first meeting.

Charles StrossBooks I Will Not Write #8: The Year of the Conspiracy

Global viral pandemics, insane right-wing dictator-wannabes trying to set fire to the planet, and climate change aside, I'm officially declaring 2020 to be the Year of the Conspiracy Theory.

This was the year when QAnon, a frankly puerile rehashing of antisemitic conspiracy theories going back to the infamous Tsarist secret police fabrication The Protocols of the Elders of Zion, went viral: its true number of followers is unclear but in the tens of thousands, and they've begun showing up in US politics as Republican candidates capable of displacing the merely crazy, such as Tea Partiers, who at least were identifiably a political movement (backed by Koch brothers lobbying money).

Nothing about the toxic farrago of memes stewing in the QAnon midden should come as a surprise to anyone who read the Illuminatus! trilogy back in the 1970s, except possibly the fact that this craziness has leached into mainstream politics. But I think it's worryingly indicative of the way our post-1995, internet-enabled media environment is messing with the collective subconscious: conspiratorial thinking is now mainstream.

Anyway. When life hands you lemons, it's time to make lemonade. How could I (if I had more energy and fewer plans) monetize this trend, without sacrificing my dignity, sanity, and sense of integrity along the way?

I'm calling it time for the revival of the big fat 1960s-1980s cold war spy/conspiracy thriller. A doozy of a plot downloaded itself into my head yesterday, and I have neither the time nor the marketing stance to write it, so here it is. (Marketing: I'm positioned as an SF author, not a thriller/men's adventure author, so I'd be selling to a different editorial and marketing department, and the book advances for starting out again wouldn't be great.)

So, some background for a Richard Condon style comedy spy/conspiracy thriller:

The USA is an Imperial hegemonic power, and is structured as such internally (even though its foundational myth--plucky colonials rebelling against an empire--is problematically at odds with the reality of what it has become nearly 250 years later). In particular, it has an imperial-scale bureaucracy with an annual budget measured in the trillions of dollars, and baroque excrescences everywhere. Nowhere is this more evident than in the intelligence sector.

The USA has spies and analysts and cryptographers and spooks coming out of its metaphorical ears. For example, the CIA, the best-known US espionage agency, is a sprawling bureaucracy with an estimated 21,000 employees and a budget of $15Bn/year. But it's by no means the largest or most expensive agency: the NRO (the folks who run spy satellites) used to have a bigger budget than NASA. And the mere existence and name of the National Reconnaissance Office were classified secrets until 1992.

It has come to light that about 80% of the people who work in the intelligence sector in the US are not actual government officials or civil servants, but private sector contractors, mostly employed by service corporations who are cleared to handle state secrets. By some estimates there are two million security-cleared civilians working in the United States--more than the number of uniformed service personnel.

Keeping track of this baroque empire of espionage is such a headache that there's an entire government agency devoted to it: the United States Intelligence Community, established in 1981 by an executive order issued by Ronald Reagan. Per wikipedia:

The Washington Post reported in 2010 that there were 1,271 government organizations and 1,931 private companies in 10,000 locations in the United States that were working on counterterrorism, homeland security, and intelligence, and that the intelligence community as a whole includes 854,000 people holding top-secret clearances. According to a 2008 study by the ODNI, private contractors make up 29% of the workforce in the U.S. intelligence community and account for 49% of their personnel budgets.

USIC has 17 declared member agencies and a budget that hit $53.9Bn in 2012, up from $40.9Bn in 2006. Obviously this is a growth industry.

Furthermore, I find it hard to believe--even bearing in mind that this decade's normalization of conspiratorial thinking predisposes one towards such an attitude--that there aren't even more agencies out there which, like the NRO prior to 1992, remain under cover of a top secret classification. I'd expect some such agencies to focus on obvious tasks such as deniable electronic espionage (like the Russian government's APT29/Cozy Bear hacking group), weaponization of memes in pursuit of national strategic objectives (the British Army's 77th Brigade media influencers), forensic analysis of offshore money laundering paper trails--the importance of which should be glaringly obvious to anyone with even a passing interest in world politics over the past two decades, or trying to identify foreign assets by analyzing the cornucopia of social data available from the likes of Facebook and Twitter's social graphs. (For example: it used to be the case that applicants for security clearance jobs with federal agencies were required not to have Facebook, Twitter, or similar social media accounts. They were also required not to have travelled overseas, with very limited exceptions, not to have criminal records, and so on. Bear in mind that Facebook maintains "ghost" accounts for everyone who doesn't already have a Facebook account, populated with data derived from their contacts who do. If you have access to FB's social graph you can in principle filter out all ghost accounts of the correct age and demographic (educational background, etc), cross-reference against Twitter and other social media, and with a bit more effort find out if they've ever travelled abroad or had a criminal conviction. The result doesn't confirm that they're a security-cleared government employee but it's highly suggestive.)

But I digress.

The 1,271 government organizations and 1,931 private companies of 2010 have almost certainly mushroomed since then, during the global war on terror. Per Snowden, the proportion who are private contractors rather than civil servants has also exploded. And, due to regulatory capture, it has become the norm for outsourcing contracts to be administered by former employees of the industries to which the contracts are awarded. There's a revolving door between civil service management and senior management in the contractor companies, simply because you need to understand the workload in order to allocate it to contractors, and because if you're a contractor, knowing how the bidding process works from the inside gives you a huge advantage.

Let us posit a small group of golfing buddies in DC in the 2000s who are deeply disillusioned and cynical about the way things are developing, and conspire to milk the system. (They don't think of themselves as a conspiracy: they're golfing buddies, what could be more natural than helping your mates out?) They've all got a background in intelligence, either working in middle-to-senior management as government agency officers, or in senior management in a corporate contractor. They know the ropes: they know how the game is played.

Two of them take early retirement and invest their pensions in a pair of new startups: one of them--call them "A"--remains in place in, say, whatever department of the Defense Clandestine Service is the successor to the Counterintelligence Field Activity. They raise a slightly sketchy proposal for a domestic operation targeting proxies for Advanced Persistent Threats operating on US soil. For example: imagine QAnon is a fabrication of APT29. We know QAnon followers have carried out domestic terrorism attacks; if Q is actually a foreign intelligence service, is it plausible that they might also be using it to radicalize and recruit agents within other US intelligence services?

One of our retirees, "B", has established a small corporation that just happens to specialize in searching for signs of radicalization at home and by some magical coincidence fits the exact bill of requirements that our insider is looking for in a contractor.

Our other retiree, "C", has established a small corporation that produces Artificial Reality Games. As has been noted elsewhere, QAnon bears a striking resemblance to a huge Artificial Reality Game. One of their products is not unlike "Spooks", from my 2007 novel Halting State; it's a game that encourages the players to carry out real world tasks on behalf of a shadowy national counter-espionage agency. In the novel, the players are unaware that they're working for a real national counter-espionage agency. In this scenario, the game is just a game ... but it's designed to make the players look plausibly similar to actual HUMINT assets working in a climate of surveillance capitalism and so reverting to classic tradecraft techniques in order to avoid being located by their dead letter drop's Bluetooth pairing ID. But because they're actually gamers, on close examination they prove not to be actual spies. In other words, C generates lots of interesting false leads for B to explore and A to report on, but they never quite pan out.

So far, so plausible. But where's the story?

The clockwork powering the novel is simple: A runs his own pet counter-espionage project within the bigger agency and arranges to outsource the leg work to B's contractors on a cost-plus basis. Meanwhile, C's ARG designers create a perfectly balanced honeypot for B's agents. (The boots on the ground are all ignorant of the true relationship.) B is a major investor in C via a couple of offshore trusts and cut-outs: B also funnels money into A's offshore retirement fund. It's a nice little earner for everybody, bilking the federal government out of a few million bucks a year on an activity nobody expects to succeed but which might bear fruit one day and which meanwhile burnishes the status of the parent organization because it's clearly conducting innovative and proactive counter-espionage activity.

Then the wheels fall off.

C's team are running a handful of ARGs (because to run only one would be kind of suspicious). They are approached by the FBI to set up a honeypot for whichever radical group is the target of the day, be it Boogaloo Boys, Antifa, Al Qaida, Y'all Qaida, or whoever. And it turns out the FBI expect them to do something with the half-ton of ANFO they've conveniently provided (as an arrest pretext).

B's team meanwhile discover a scarily real-looking conspiracy who are planning to start some sort of war over the purity of their precious bodily fluids. A countdown is running and A's expected to actually make progress and arrest a ring of radicals before they blow anything up.

And A gradually comes to the realization that he and his golfing buddies are not the first people to have had this idea: they're not even the biggest. In fact, it begins to come to light that an entire top level division of the Department of Homeland Security (founded 2003! 240,000 employees! budget $51.67Bn!) is running plan B and funneling money to prop up various adversarial sock-puppets, one of which appears to have accidentally stolen half a dozen W76 physics packages (which, just like the good ole' days with the Minuteman III stockpile, have a permissive action lock code of "00000000"). The nukes are now in the possession of a bunch of nutters led by a preacher man who insists Jesus is coming, and his return will be heralded by a nuclear attack on the USA. But the folks behind the DHS grift can't do anything about it for obvious reasons (involving orange jumpsuits and long jail terms, or maybe actual risk of execution).

A can't expose this grift without attracting unwelcome attention, and a likely lifetime vacation in Club Fed along with his buddies B and C. But he doesn't want to be anywhere close to DC when the nukes go off. So what's a grifter to do?

C gets to give his ARG designers a new task: to set up a game targeting a very specific set of customers--conspiratorially-minded millenarian believers who are already up to their eyes in one plot and who need to be gently weaned onto a more potent brew of lies, right under the nose of the rival APT agents who have radicalized them ...

Climax: the nukes are defused, the idiot conspiracy believers are arrested, then the FBI turn up and arrest A, B, and C. On their way out the door it becomes apparent that they've been set up: they, themselves, were suckered into setting up their scam via another conspiracy ring to generate arrests for the FBI ...

(Fade to black.)

Worse Than FailureCodeSOD: Intergral to Dating

Date representations are one of those long-term problems for programmers, which all ties into the problem that "dates are hard". Nowadays, we have all these fancy date-time types that worry about things like time zones and leap years, and all of that stuff for us. Pretty much everything, at some level, relies on the Unix Epoch. But there was a time before that was the standard.

In the mainframe era, not all the numeric representations worked the same way that we're used to, and it was common to simply define the number of digits you wanted to store. So, if you wanted to store a date, you could define an 8-digit field, and store the date as 20201012: October 12th, 2020.

This is a pretty great date format, for that era. Relatively compact (and yes, the whole Y2K thing means that you might have defined that as a six digit field), inherently sortable, and it's not too bad to slice it back up into date parts, when you need it. And like anything else which was a good idea a long time ago, you still see it lurking around today. Which does become a problem when you inherit code written by people who didn't understand why things worked that way.
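To make the format concrete, here's the same idea sketched in Python rather than the article's C# (the `pack_date`/`unpack_date` names are mine, not from the original code): the whole date fits in one integer, integer comparison matches chronological order, and a strict parse validates the digits on the way back out.

```python
from datetime import date, datetime

def pack_date(d: date) -> int:
    # YYYYMMDD as a single integer, e.g. date(2020, 10, 12) -> 20201012
    return d.year * 10000 + d.month * 100 + d.day

def unpack_date(n: int) -> date:
    # strptime validates the digits for us (rejects month 13, day 32, etc.)
    return datetime.strptime(str(n), "%Y%m%d").date()

assert pack_date(date(2020, 10, 12)) == 20201012
assert unpack_date(20201012) == date(2020, 10, 12)
# Integer order matches chronological order, which is why the format sorts well:
assert pack_date(date(2020, 9, 30)) < pack_date(date(2020, 10, 1))
```

Note that unpacking is one strict-format parse, not a cascade of substring-and-convert calls.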

Virginia N inherited some C# code which meets those criteria. And the awkward date handling isn't even the WTF. There's a lot to unpack in this particular sample, so let's start with… the unpack function.

public static DateTime UnpackDateC(DateTime dateI, long sDate)
{
    if (sDate.ToString().Length != 8)
        return dateI;
    try
    {
        return new DateTime(Convert.ToInt16(sDate.ToString().Substring(0, 4)),
                            Convert.ToInt16(sDate.ToString().Substring(4, 2)),
                            Convert.ToInt16(sDate.ToString().Substring(6, 2)));
    }
    catch
    {
        return dateI;
    }
}

sDate is our integer date: 20201012. Instead of converting it to a string once, and then validating and slicing it, we call ToString four times. We also reconvert each date part back into an integer so we can pass them to DateTime, and of course, DateTime.ParseExact is just sitting there in the documentation, shaking its head at all of this.

The really weird choice to me, though, is that we pass in dateI, which appears to be the "fallback" date value. That… worries me.

Well, let's take a peek deep in the body of a method called GetMC, because that's where this unpack function is called.

while (oDataReader.Read())
{
    //...
    DataRow oRow = oDataTable.NewRow();
    if (oDataReader["DEB"] != DBNull.Value)
    {
        DateTime dt = DateTime.Today;
        dt = UnpackDateC(dt, Convert.ToInt64(oDataReader["DEB"]));
    }
    else
    {
        oRow["DEB"] = DBNull.Value;
    }
    //...
}

It's hard to know for absolute certain, based on the code provided, but I don't think UnpackDateC is actually doing anything. We can see that the default dateI value is DateTime.Today. So perhaps the desired behavior is that every invalid/unknown date is today? Seems problematic, but maybe that jibes with the requirements.

But note the logic. If the database value is null, we store a null in oRow["DEB"], our output data. If it isn't null, we unpack the date and store it in… dt. Also, if you trace the type conversions, we convert an integer in the database into an integer in our program (which it already would have been) so that we can convert that integer into a string so that we can split the string and convert each portion into integers so we can convert it into a date.

How do I know that the field is an integer in the database? Well, I don't know for sure, but let's look at the query which drives that loop.

public static void GetMC(string sConnectionString, ref DataTable dtToReturn, string sOrgafi, string valid, string exe, int iOperateur, string sColTri, bool Asc, bool DBLink, string alias) // iOperateur 0, 1, 2
{
    sSql = " select * from (select ENTLCOD as ENTAFI, MARKYEAR as EXE, nvl(to_char(MARKNUM),'')||'-'||nvl(MARKNUMLOT,'') as COD, to_char(MARKNUM) as NUM, MARKOBJ1 as COM, MARKSTARTDATE as DEB, MARKENDDATE as FIN, MARKNUMLOT as NUMLOT, MARKVALIDDATE, TIERNUM as FOUR, MARTBASTIT as TYP " +
           " from SOMEEXTERNALVIEW" + (DBLink ? alias + " " : " ") + " WHERE 1=1";
    if (valid != null && valid.Length > 0)
        sSql += " and (MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)";
    if (exe != null && exe.Length > 0)
        sSql += " and TRIM( MARKYEAR ) ='" + exe.Trim() + "' ";
    sSql += " ) where 1=1";
    //...

We can see that MARKSTARTDATE is the database field we call DEB. We can also see some conditional string concatenation to build our query, so hello possible SQL injection attacks. Now, I don't know that MARKSTARTDATE is an integer, but I can see that a similar field, MARKVALIDDATE is. Note the lack of quotes in the query string: "…(MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)"

So MARKVALIDDATE is numeric in the database, which is great because the variable valid is passed in as a string, so we're just all over the place with types.
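For contrast, here's what the same filtering looks like with parameterized queries, sketched in Python with an in-memory SQLite table (the table and data are invented for illustration; the column names are borrowed from the query above). Placeholders keep user input out of the SQL text entirely, which removes both the injection risk and the string/number type juggling.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marks (MARKVALIDDATE INTEGER, MARKYEAR TEXT)")
conn.execute("INSERT INTO marks VALUES (20201001, '2020')")
conn.execute("INSERT INTO marks VALUES (0, '2019')")

# The user-supplied values never touch the SQL string itself.
valid, exe = 20200101, "2020"
rows = conn.execute(
    "SELECT * FROM marks"
    " WHERE (MARKVALIDDATE >= ? OR MARKVALIDDATE = 0 OR MARKVALIDDATE IS NULL)"
    "   AND TRIM(MARKYEAR) = ?",
    (valid, exe.strip()),
).fetchall()
print(rows)  # -> [(20201001, '2020')]
```

The same pattern exists in ADO.NET as SqlParameter/OracleParameter objects; concatenating `valid` and `exe` into the string, as GetMC does, is never necessary.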

The structure of this query also adds on an extra layer of unnecessary complexity, as for some reason, we wrap the actual query up as a subquery, but the outer query is just SELECT * FROM (subquery) WHERE 1=1, so there is literally no reason to do that.

To finish this off, let's look at where GetMC is actually invoked, a method called CallWSM.

private void CallWSM(ref DataTable oDataTable, string sCode, string sNom, string sFourn, int iOperateur) // iOperateur 0, 1, 2
{
    try
    {
        m_bError = false;
        string sColTri = m_Grid_SortRequest.FieldName;
        SortOperator oDirection = m_Grid_SortRequest.SortDirection;
        m_sAnnee = ctlRecherche.GetValueFilterItem(1);
        string svalid = "";
        string sdt = ctlRecherche.GetValueDateItem(0);
        if (sdt.Length > 0)
        {
            DateTime dtvalid = DateTime.Parse(sdt);
            long ldt = dtvalid.Year * 10000 + dtvalid.Month * 100 + dtvalid.Day;
            svalid = ldt.ToString();
        }
        m_AppLogic.GetMC(sColTri, oDirection, m_sAdresseWS, ref oDataTable, svalid, m_sAnnee, iOperateur, PageCurrent, m_PageSize);
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.Timeout)
        {
            frmMessageBox.Show(ML.GetLibelle(4137), CONST.AppTITLE, MessageBoxButtons.OK, MessageBoxIcon.Error);
            m_bError = true;
        }
        else
        {
            frmMessageBox.Show(ex.Message);
            m_bError = true;
        }
    }
    catch (Exception ex)
    {
        frmMessageBox.Show(ex.Message);
        m_bError = true;
    }
}

Now, I'm reading between the lines a bit, and maybe making some assumptions that I shouldn't be, but this method is called CallWSM, and one of the parameters we pass to GetMC is stored in a variable called m_sAdresseWS, and GetMC can apparently throw a WebException.

Are… are we building a query and then passing it off to a web service to execute? And then wrapping the response in a data reader? Because that would be terrible. But if we're not, does that mean that we're also calling a web service in some of the code Virginia didn't supply? Query the DB and call the web service in the same method? Or are we catching an exception that just could never happen, and all the WS stuff has nothing to do with web services?

Any one of those options would be a WTF.

Virginia adds, "I had the job to make a small change in the call. ...I'm used to a good amount of Daily WTF-erry in our code." After reading through the code though, Virginia had some second thoughts about changing it. "At this point I decided not to change anything, because it hurts my head."

You and me both.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianRuss Allbery: Review: Hand to Mouth

Review: Hand to Mouth, by Linda Tirado

Publisher: G.P. Putnam's Sons
Copyright: October 2014
ISBN: 0-698-17528-X
Format: Kindle
Pages: 194

The first time Linda Tirado came to the viral attention of the Internet was in 2013 when she responded to a forum question: "Why do poor people do things that seem so self-destructive?" Here are some excerpts from her virally popular five-page response, which is included in the first chapter:

I know how to cook. I had to take Home Ec. to graduate high school. Most people on my level didn't. Broccoli is intimidating. You have to have a working stove, and pots, and spices, and you'll have to do the dishes no matter how tired you are or they'll attract bugs. It is a huge new skill for a lot of people. That's not great, but it's true. And if you fuck it up, you could make your family sick. We have learned not to try too hard to be middle class. It never works out well and always makes you feel worse for having tried and failed yet again. Better not to try. It makes more sense to get food that you know will be palatable and cheap and that keeps well. Junk food is a pleasure that we are allowed to have; why would we give that up? We have very few of them.


I smoke. It's expensive. It's also the best option. You see, I am always, always exhausted. It's a stimulant. When I am too tired to walk one more step, I can smoke and go for another hour. When I am enraged and beaten down and incapable of accomplishing one more thing, I can smoke and I feel a little better, just for a minute. It is the only relaxation I am allowed. It is not a good decision, but it is the only one that I have access to. It is the only thing I have found that keeps me from collapsing or exploding.

This book is an expansion on that essay. It's an entry in a growing genre of examinations of what it means to be poor in the United States in the 21st century. Unlike most of those examinations, it isn't written by an outsider performing essentially anthropological field work. It's one of the rare books written by someone who is herself poor and had the combination of skill and viral fame required to get an opportunity to talk about it in her own words.

I haven't had it worse than anyone else, and actually, that's kind of the point. This is just what life is for roughly a third of the country. We all handle it in our own ways, but we all work in the same jobs, live in the same places, feel the same sense of never quite catching up. We're not any happier about the exploding welfare rolls than anyone else is, believe me. It's not like everyone grows up and dreams of working two essentially meaningless part-time jobs while collecting food stamps. It's just that there aren't many other options for a lot of people.

I didn't find this book back in 2014 when it was published. I found it in 2020 during Tirado's second round of Internet fame: when the police shot out her eye with "non-lethal" rounds while she was covering the George Floyd protests as a photojournalist. In characteristic fashion, she subsequently reached out to the other people who had been blinded by the police, used her temporary fame to organize crowdfunded support for others, and is planning on having "try again" tattooed over the scar.

That will give you a feel for the style of this book. Tirado is blunt, opinionated, honest, and full speed ahead. It feels weird to call this book delightful since it's fundamentally about the degree to which the United States is failing a huge group of its citizens and making their lives miserable, but there is something so refreshing and clear-headed about Tirado's willingness to tell you the straight truth about her life. It's empathy delivered with the subtlety of a brick, but also with about as much self-pity as a brick. Tirado is not interested in making you feel sorry for her; she's interested in you paying attention.

I don't get much of my own time, and I am vicious about protecting it. For the most part, I am paid to pretend that I am inhuman, paid to cater to both the reasonable and unreasonable demands of the general public. So when I'm off work, feel free to go fuck yourself. The times that I am off work, awake, and not taking care of life's details are few and far between. It's the only time I have any autonomy. I do not choose to waste that precious time worrying about how you feel. Worrying about you is something they pay me for; I don't work for free.

If you've read other books on this topic (Emily Guendelsberger's On the Clock is still the best of those I've read), you probably won't get many new facts from Hand to Mouth. I think this book is less important for the policy specifics than it is for who is writing it (someone who is living that life and can be honest about it) and the depth of emotional specifics that Tirado brings to the description. If you have never been poor, you will learn the details of what life is like, but more significantly you'll get a feel for how Tirado feels about it, and while this is one individual perspective (as Tirado stresses, including the fact that, as a white person, there are other aspects of poverty she's not experienced), I think that perspective is incredibly valuable.

That said, Hand to Mouth provides even more reinforcement of the importance of universal medical care, the absurdity of not including dental care in even some of the more progressive policy proposals, and the difficulties in the way of universal medical care even if we solve the basic coverage problem. Tirado has significant dental problems due to unrepaired damage from a car accident, and her account reinforces my belief that we woefully underestimate how important good dental care is to quality of life. But providing universal insurance or access is only the start of the problem.

There is a price point for good health in America, and I have rarely been able to meet it. I choose not to pursue treatment if it will cost me more than it will gain me, and my cost-benefit is done in more than dollars. I have to think of whether I can afford any potential treatment emotionally, financially, and timewise. I have to sort out whether I can afford to change my life enough to make any treatment worth it — I've been told by more than one therapist that I'd be fine if I simply reduced the amount of stress in my life. It's true, albeit unhelpful. Doctors are fans of telling you to sleep and eat properly, as though that were a thing one can simply do.

That excerpt also illustrates one of the best qualities of this book. So much writing about "the poor" treats them as an abstract problem that the implicitly not-poor audience needs to solve, and this leads rather directly to the endless moralizing as "we" attempt to solve that problem by telling poor people what they need to do. Tirado is unremitting in fighting for her own agency. She has a shitty set of options, but within those options she makes her own decisions. She wants better options and more space in which to choose them, which I think is a much more productive way to frame the moral argument than the endless hand-wringing over how to help "those poor people."

This is so much of why I support universal basic income. Just give people money. It's not all of the solution — UBI doesn't solve the problem of universal medical care, and we desperately need to find a way to make work less awful — but it's the most effective thing we can do immediately. Poor people are, if anything, much better at making consequential financial decisions than rich people because they have so much more practice. Bad decisions are less often due to bad decision-making than bad options and the balancing of objectives that those of us who are not poor don't understand.

Hand to Mouth is short, clear, refreshing, bracing, and, as you might have noticed, very quotable. I think there are other books in this genre that offer more breadth or policy insight, but none that have the same feel of someone cutting through the bullshit of lazy beliefs and laying down some truth. If any of the above excerpts sound like the sort of book you would enjoy reading, pick this one up.

Rating: 8 out of 10

Planet DebianNorbert Preining: KDE/Plasma Status Update 2020-10-12

Update 2020-10-19: All packages are now available in Debian/experimental!

More than a month has passed since my last KDE/Plasma for Debian update, but things are progressing nicely.

OBS packages

On the OBS side, I have updated the KDE Apps to 20.08.2, and the KDE Frameworks to 5.75. The apps update in particular brings in at least one critical security fix.

Concerning the soon to be released Plasma 5.20, packages are more or less ready, but as reported here we have to wait for Qt 5.15 to be uploaded to unstable, which is also planned in the near future.

Debian main packages

Uploads of Plasma 5.19.4 to Debian/experimental are progressing nicely; more than half the packages are already done, and the rest are ready to go. What holds us back is the NEW queue, as usual.

We (Scarlett, Patrick, me) hope to have everything through NEW and in experimental as soon as possible, followed by an upload of probably Plasma 5.19.5 to Debian/unstable.

Thanks also to Lisandro for accepting me into the Salsa Qt/KDE team.


Planet DebianMark Brown: Book club: JSON Web Tokens

This month for our book club Daniel, Lars, Vince and I read Hardcoded secrets, unverified tokens, and other common JWT mistakes, which wasn't quite what we'd thought when it was picked. We had been expecting an analysis of JSON web tokens themselves, as several of us had been working in the area and had noticed various talk about problems with the standard, but instead the article is more a discussion of the use of semgrep to find and fix common issues, using issues with JWT as examples.

We therefore started off with a bit of a discussion of JWT, concluding that the underlying specification was basically fine given the problem to be solved but that, as with any security related technology, there were plenty of potential pitfalls in implementation, and that sadly many of the libraries implementing the specification make it far too easy to make mistakes such as those covered by the article through their interface design and defaults. For example, interfaces that allow interchangeable use of public keys and shared keys are error prone, as is making it easy to access unauthenticated data from tokens without clearly flagging that it is unauthenticated. We agreed that the wide range of JWT implementations available and successfully interoperating with each other is a sign that JWT is getting something right in providing a specification that is clear and implementable.
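To illustrate the "pin the algorithm" point, here is a minimal stdlib-only Python sketch of HS256 verification (deliberately simplified: no expiry checks, no key management; the `sign_hs256`/`verify_hs256` names are mine). The key detail is that the verifier decides the algorithm up front instead of trusting the token's own header, which is exactly the downgrade mistake many library defaults invite.

```python
import base64, hashlib, hmac, json

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def _b64url_encode(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    header_b64 = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = _b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(key, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{_b64url_encode(sig)}"

def verify_hs256(token: str, key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(_b64url_decode(header_b64))
    # Pin the algorithm: never let the token's header choose it, or an
    # attacker can downgrade to "none" or swap an RSA key in as an HMAC key.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    # Only now is the payload authenticated data.
    return json.loads(_b64url_decode(payload_b64))

token = sign_hs256({"sub": "alice"}, b"secret")
assert verify_hs256(token, b"secret") == {"sub": "alice"}
```

A good library interface forces the caller to make that `algorithms=[...]` choice explicitly, rather than returning unauthenticated payload data by default.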

Moving on to semgrep we were all very enthusiastic about the technology: language independent semantic matching with a good set of rules for a range of languages available. Those of us who work on the Linux kernel were familiar with semantic matching and patching as implemented by Coccinelle, which has been used quite successfully for years both to avoid bad patterns in code and to make tree-wide changes; as demonstrated by the article, it is a powerful technique. We were impressed by the multi-language support and approachability of semgrep, with tools like their web editor seeming particularly helpful for people getting started with the tool, especially in conjunction with the wide range of examples available.
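To give a flavour of what such a rule looks like, here is an illustrative semgrep rule (my own sketch, not one of the rules from the article) flagging the classic unverified-decode mistake in Python code using a JWT library:

```yaml
rules:
  - id: jwt-unverified-decode
    # The "..." ellipses match any other arguments in any position.
    pattern: jwt.decode(..., verify=False, ...)
    message: JWT decoded without signature verification; the payload is untrusted.
    languages: [python]
    severity: ERROR
```

The pattern is written as code, not as a regex, which is what makes this kind of rule both readable and robust against formatting differences.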

This was a good discussion (including the tangential discussions of quality problems we had all faced dealing with software over the years, depressing though those can be) and semgrep was a great tool to learn about, I know I’m going to be using it for some of my projects.


Planet DebianWilliam (Bill) Blough: sudo reboot

Benjy: The best-laid plans of mice...

Arthur: And men.

Frankie: What?

Arthur: Best-laid plans of mice and men.

Benjy: What have men got to do with it?

-- The Hitchhiker's Guide to the Galaxy TV series

Last year, my intent had been to post monthly updates with details of the F/LOSS contributions I had made during the previous month. I wanted to do this as a way to summarize and reflect on what I had done, and also to hopefully motivate me to do more.

Fast forward, and it's been over a year since my last blog post. So much for those plans.

I won't go into specific detail about the F/LOSS contributions I've made in the past year. This isn't meant to be a "catch-up" post, per se. It's more of an acknowledgement that I didn't do what I set out to do, as well as something of a reset to enable me to continue blogging (or not) as I see fit.

So, to summarize those contributions:

  • As expected, most of my contributions were to projects that I regularly contribute to, like Debian, Apache Axis2/C, or PasswordSafe.

  • There were also some one-off contributions to projects that I use but am not actively involved in, such as log4cxx or PyKAN.

  • There was also a third category of contributions that are a bit of a special case. I made some pseudonymous contributions to a F/LOSS project that I did not want to tie to my public identity. I hope to write more about that situation in a future post.

All in all, I'm pretty happy with the contributions I've made in the past year. Historically, my F/LOSS activity had been somewhat sporadic, sometimes with months passing in between contributions. But looking through my notes from the past year, it appears that I made contributions every single month, with no skipped months. Of course, I would have liked to have done more, but I consider the improvement in consistency to be a solid win.

As for the blog, well... Judging by the most recent year-long gap (as well as the gaps before that), I'm not likely to start regularly writing posts anytime soon. But then again, if sporadic F/LOSS contributions can turn into regular F/LOSS contributions, then maybe sporadic blog posts can turn into regular blog posts, too. Time will tell.

Cory DoctorowDanish Little Brother, now a free, CC-licensed download

Science Fiction Cirklen is a member-funded co-op of Danish science fiction fans; they raise money to produce print translations of sf novels that Danes would otherwise have to read in English. They work together to translate the work, commission art, and pay to have the book printed and distributed to bookstores in order to get it into Danish hands.

The SFC folks just released their Danish edition of Little Brother — translated by Lea Thume — as a Creative Commons licensed epub file, including the cover art they produced for their edition.

I’m so delighted by this! My sincere thanks to the SFC people for bringing my work to their country, and I hope someday we can toast each other in Copenhagen.

Charles StrossIntroducing a new guest blogger: Sheila Williams

It's been ages since I last hosted a guest blogger here, but today I'd like to introduce you to Sheila Williams, who will be talking about her work next week.

Normally my guest bloggers are other SF/F authors, but Sheila is something different: she's the multiple Hugo-Award winning editor of Asimov's Science Fiction magazine. She is also the winner of the 2017 Kate Wilhelm Solstice Award for distinguished contributions to the science fiction and fantasy community.

Sheila started at Asimov's in June 1982 as the editorial assistant. Over the years, she was promoted to a number of different editorial positions at the magazine and she also served as the executive editor of Analog from 1998 until 2004. With Rick Wilber, she is also the co-founder of the Dell Magazines Award for Undergraduate Excellence in Science Fiction and Fantasy. This annual award has been bestowed on the best short story by an undergraduate student at the International Conference on the Fantastic since 1994.

In addition, Sheila is the editor or co-editor of twenty-six anthologies. Her newest anthology, Entanglements: Tomorrow's Lovers, Families, and Friends, is the 2020 volume of the Twelve Tomorrows series. The book is just out from MIT Press.

Krebs on SecurityReport: U.S. Cyber Command Behind Trickbot Tricks

A week ago, KrebsOnSecurity broke the news that someone was attempting to disrupt the Trickbot botnet, a malware crime machine that has infected millions of computers and is often used to spread ransomware. A new report Friday says the coordinated attack was part of an operation carried out by the U.S. military’s Cyber Command.

Image: Shutterstock.

On October 2, KrebsOnSecurity reported that twice in the preceding ten days, an unknown entity that had inside access to the Trickbot botnet sent all infected systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control compromised Microsoft Windows computers.

On top of that, someone had stuffed millions of bogus records about new victims into the Trickbot database — apparently to confuse or stymie the botnet’s operators.

In a story published Oct. 9, The Washington Post reported that four U.S. officials who spoke on condition of anonymity said the Trickbot disruption was the work of U.S. Cyber Command, a branch of the Department of Defense headed by the director of the National Security Agency (NSA).

The Post report suggested the action was a bid to prevent Trickbot from being used to somehow interfere with the upcoming presidential election, noting that Cyber Command was instrumental in disrupting the Internet access of Russian online troll farms during the 2018 midterm elections.

The Post said U.S. officials recognized their operation would not permanently dismantle Trickbot, describing it rather as “one way to distract them for at least a while as they seek to restore their operations.”

Alex Holden, chief information security officer and president of Milwaukee-based Hold Security, has been monitoring Trickbot activity before and after the 10-day operation. Holden said while the attack on Trickbot appears to have cut its operators off from a large number of victim computers, the bad guys still have passwords, financial data and reams of other sensitive information stolen from more than 2.7 million systems around the world.

Holden said the Trickbot operators have begun rebuilding their botnet, and continue to engage in deploying ransomware at new targets.

“They are running normally and their ransomware operations are pretty much back in full swing,” Holden said. “They are not slowing down because they still have a great deal of stolen data.”

Holden added that since news of the disruption first broke a week ago, the Russian-speaking cybercriminals behind Trickbot have been discussing how to recoup their losses, and have been toying with the idea of massively increasing the amount of money demanded from future ransomware victims.

“There is a conversation happening in the back channels,” Holden said. “Normally, they will ask for [a ransom amount] that is something like 10 percent of the victim company’s annual revenues. Now, some of the guys involved are talking about increasing that to 100 percent or 150 percent.”


Cryptogram: Hacking Apple for Profit

Five researchers hacked Apple Computer’s networks — not their products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Worse Than Failure: Error'd: Math is HARD!

"Boy oh boy! You can't beat that free shipping for the low, low price of $4!" Zenith wrote.


Brendan wrote, "So, as of September 24th, my rent is both changing and not changing, in the future, but also in the past?"


"Does waiting for null people take forever or no time at all?" Pascal writes.


"Whoever at McDonalds is responsible for calculating their cheeseburger deals needs to lay off of the special sauce," Valts S. wrote.


"So, assuming I were to buy 0.9 of a figure, what's the missing 10%?" writes Zenith.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Cryptogram: Friday Squid Blogging: Squid-like Nebula

Pretty astronomical photo.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram: Friday Squid Blogging: Saving the Humboldt Squid

Genetic research finds the Humboldt squid is vulnerable to overfishing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram: Friday Squid Blogging: Chinese Squid Fishing Near the Galapagos

The Chinese have been illegally squid fishing near the Galapagos Islands.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on Security: Amid an Embarrassment of Riches, Ransom Gangs Increasingly Outsource Their Work

There’s an old adage in information security: “Every company gets penetration tested, whether or not they pay someone for the pleasure.” Many organizations that do hire professionals to test their network security posture unfortunately tend to focus on fixing vulnerabilities hackers could use to break in. But judging from the proliferation of help-wanted ads for offensive pentesters in the cybercrime underground, today’s attackers have exactly zero trouble gaining that initial intrusion: The real challenge seems to be hiring enough people to help everyone profit from the access already gained.

One of the most common ways such access is monetized these days is through ransomware, which holds a victim’s data and/or computers hostage unless and until an extortion payment is made. But in most cases, there is a yawning gap of days, weeks or months between the initial intrusion and the deployment of ransomware within a victim organization.

That’s because it usually takes time and a good deal of effort for intruders to get from a single infected PC to seizing control over enough resources within the victim organization that it makes sense to launch the ransomware.

This includes pivoting from or converting a single compromised Microsoft Windows user account to an administrator account with greater privileges on the target network; the ability to sidestep and/or disable any security software; and gaining the access needed to disrupt or corrupt any data backup systems the victim firm may have.

Each day, millions of malware-laced emails are blasted out containing booby-trapped attachments. If the attachment is opened, the malicious document proceeds to quietly download additional malware and hacking tools to the victim machine (here’s one video example of a malicious Microsoft Office attachment, captured by a malware sandbox service). From there, the infected system will report home to a malware control server operated by the spammers who sent the missive.

At that point, control over the victim machine may be transferred or sold multiple times between different cybercriminals who specialize in exploiting such access. These folks are very often contractors who work with established ransomware groups, and who are paid a set percentage of any eventual ransom payments made by a victim company.


Enter subcontractors like “Dr. Samuil,” a cybercriminal who has maintained a presence on more than a dozen top Russian-language cybercrime forums over the past 15 years. In a series of recent advertisements, Dr. Samuil says he’s eagerly hiring experienced people who are familiar with tools used by legitimate pentesters for exploiting access once inside of a target company — specifically, post-exploit frameworks like the closely-guarded Cobalt Strike.

“You will be regularly provided select accesses which were audited (these are about 10-15 accesses out of 100) and are worth a try,” Dr. Samuil wrote in one such help-wanted ad. “This helps everyone involved to save time. We also have private software that bypasses protection and provides for smooth performance.”

From other classified ads he posted in August and September 2020, it seems clear Dr. Samuil’s team has some kind of privileged access to financial data on targeted companies that gives them a better idea of how much cash the victim firm may have on hand to pay a ransom demand. To wit:

“There is huge insider information on the companies which we target, including information if there are tape drives and clouds (for example, Datto that is built to last, etc.), which significantly affects the scale of the conversion rate.

– experience with cloud storage, ESXi.
– experience with Active Directory.
– privilege escalation on accounts with limited rights.

* Serious level of insider information on the companies with which we work. There are proofs of large payments, but only for verified LEADs.
* There is also a private MEGA INSIDE, which I will not write about here in public, and it is only for experienced LEADs with their teams.
* We do not look at REVENUE / NET INCOME / Accountant reports, this is our MEGA INSIDE, in which we know exactly how much to confidently squeeze to the maximum in total.

According to cybersecurity firm Intel 471, Dr. Samuil’s ad is hardly unique, and there are several other seasoned cybercriminals who are customers of popular ransomware-as-a-service offerings that are hiring sub-contractors to farm out some of the grunt work.

“Within the cybercriminal underground, compromised accesses to organizations are readily bought, sold and traded,” Intel 471 CEO Mark Arena said. “A number of security professionals have previously sought to downplay the business impact cybercriminals can have to their organizations.”

“But because of the rapidly growing market for compromised accesses and the fact that these could be sold to anyone, organizations need to focus more on efforts to understand, detect and quickly respond to network compromises,” Arena continued. “That covers faster patching of the vulnerabilities that matter, ongoing detection and monitoring for criminal malware, and understanding the malware you are seeing in your environment, how it got there, and what it has or could have dropped subsequently.”


In conducting research for this story, KrebsOnSecurity learned that Dr. Samuil is the handle used by the proprietor of multi-vpn[.]biz, a long-running virtual private networking (VPN) service marketed to cybercriminals who are looking to anonymize and encrypt their online traffic by bouncing it through multiple servers around the globe.

Have a Coke and a Molotov cocktail. Image:

MultiVPN is the product of a company called Ruskod Networks Solutions (a.k.a. ruskod[.]net), which variously claims to be based in the offshore company havens of Belize and the Seychelles, but which appears to be run by a guy living in Russia.

The domain registration records for ruskod[.]net were long ago hidden by WHOIS privacy services. But according to [an advertiser on this site], the original WHOIS records for the site from the mid-2000s indicate the domain was registered by a Sergey Rakityansky.

This is not an uncommon name in Russia or in many surrounding Eastern European nations. But a former business partner of MultiVPN who had a rather public falling out with Dr. Samuil in the cybercrime underground told KrebsOnSecurity that Rakityansky is indeed Dr. Samuil’s real surname, and that he is a 32- or 33-year-old currently living in Bryansk, a city located approximately 200 miles southwest of Moscow.

Neither Dr. Samuil nor MultiVPN have responded to requests for comment.

Worse Than Failure: Translation by Column

Content management systems are… an interesting domain in software development. On one hand, they’re one of the most basic types of CRUD applications, at least at their core. On the other, it’s the sort of problem domain where you can get 90% of what you need really easily, but the remaining 10% are the cases that are going to make you pull your hair out.

Which is why pretty much every large CMS project supports some kind of plugin architecture, and usually some sort of marketplace to curate those plugins. That kind of curation is hard: writing plugins tends to be more about “getting the feature I need” and less about “releasing a reliable product”, and thus the available plugins for most CMSes tend to be of wildly varying quality.

Paul recently installed a plugin for Joomla, and started receiving this error:

Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs

Paul, like a good developer, checked with the plugin documentation to see if he could fix that error. The documentation had this to say:

you may have too many languages installed

This left Paul scratching his head. Sure, his CMS had eight different language packs installed, but how, exactly, was that “too many”? It wasn’t until he checked into the database schema that he understood what was going on:

A screencap of the database schema, which shows that EVERY text field has a column created for EVERY language, e.g. 'alias_ru', 'alias_fr', 'alias_en'

This plugin’s approach to handling multiple languages was simply to take every text field it needed to track and add a column to the table for every language. The result was a table with 231 columns, most of them language-specific duplicates.

Now, there are a few possible reasons why it is this way. It may simply be that whoever wrote the plugin didn’t know or care about setting up a meaningful lookup table. Maybe they were thinking about performance, and thought, “well, if I denormalize I can get more data with a single query”. Maybe they weren’t thinking at all. Or maybe there’s some quirk of creating your own tables in a Joomla plugin that made the developer not want to create more tables than absolutely necessary.

Regardless, it’s definitely one of the worst schemas you could come up with to handle localization.

Paul adds: “I vote the developer for Best Database Designer of 2020.”
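For contrast, the conventional fix is a translations table keyed by item, language, and field, so that a new language is a handful of new rows rather than dozens of new columns. A minimal sketch of that idea (using SQLite for brevity; Joomla itself runs on MySQL, and the table and column names here are illustrative, not the plugin's):

```python
import sqlite3

# One row per (item, language, field) instead of one column per language.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (
    id INTEGER PRIMARY KEY
);
CREATE TABLE item_translation (
    item_id INTEGER NOT NULL REFERENCES item(id),
    lang    TEXT    NOT NULL,   -- 'en', 'ru', 'fr', ...
    field   TEXT    NOT NULL,   -- 'alias', 'title', ...
    value   TEXT    NOT NULL,
    PRIMARY KEY (item_id, lang, field)
);
""")
conn.execute("INSERT INTO item (id) VALUES (1)")
conn.executemany(
    "INSERT INTO item_translation VALUES (1, ?, 'alias', ?)",
    [("en", "hello"), ("ru", "privet"), ("fr", "bonjour")],
)

# Adding a ninth (or ninetieth) language adds rows, not columns, so the
# row size never creeps toward the storage engine's per-row limit.
row = conn.execute(
    "SELECT value FROM item_translation"
    " WHERE item_id = 1 AND lang = 'fr' AND field = 'alias'"
).fetchone()
print(row[0])  # bonjour
```

With this shape, Paul's eight language packs would cost eight rows per translated field, and the "Row size too large" error could never arise from localization alone.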

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on Security: Promising Infusions of Cash, Fake Investor John Bernard Walked Away With $30M

September featured two stories on a phony tech investor named John Bernard, a pseudonym used by a convicted thief named John Clifton Davies who’s fleeced dozens of technology companies out of an estimated $30 million with the promise of lucrative investments. Those stories prompted a flood of tips from Davies’ victims that paints a much clearer picture of this serial con man and his cohorts, including allegations of hacking, smuggling, bank fraud and murder.

KrebsOnSecurity interviewed more than a dozen of Davies’ victims over the past five years, none of whom wished to be quoted here out of fear of reprisals from a man they say runs with mercenaries and has connections to organized crime.

As described in Part II of this series, John Bernard is in fact John Clifton Davies, a 59-year-old U.K. citizen who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his third wife on their honeymoon in India.

The scam artist John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

After eluding justice in the U.K., Davies reinvented himself as The Private Office of John Bernard, pretending to be a billionaire Swiss investor who made his fortunes in the dot-com boom 20 years ago and who was seeking investment opportunities.

In case after case, Bernard would promise to invest millions in tech startups, and then insist that companies pay tens of thousands of dollars worth of due diligence fees up front. However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Bernard found a constant stream of new marks by offering extraordinarily generous finder’s fees to investment brokers who could introduce him to companies seeking an infusion of cash. When it came time for companies to sign legal documents, Bernard’s victims interacted with a 40-something Inside Knowledge employee named “Katherine Miller,” who claimed to be his lawyer.

It turns out that Katherine Miller is a onetime Moldovan attorney who was previously known as Ecaterina “Katya” Dudorenko. She is listed as a Romanian lawyer in the U.K. Companies House records for several companies tied to John Bernard, including Inside Knowledge Solutions Ltd., Docklands Enterprise Ltd., and Secure Swiss Data Ltd (more on Secure Swiss data in a moment).

Another of Bernard’s associates listed as a director at Docklands Enterprise Ltd. is Sergey Valentinov Pankov. This is notable because in 2018, Pankov and Dudorenko were convicted of cigarette smuggling in the United Kingdom.

Sergey Pankov and Ecaterina Dudorenco, in undated photos. Source:

According to the Organized Crime and Corruption Reporting Project, “illicit trafficking of tobacco is a multibillion-dollar business today, fueling organized crime and corruption [and] robbing governments of needed tax money. So profitable is the trade that tobacco is the world’s most widely smuggled legal substance. This booming business now stretches from counterfeiters in China and renegade factories in Russia to Indian reservations in New York and warlords in Pakistan and North Africa.”

Like their erstwhile boss Mr. Davies, both Pankov and Dudorenko disappeared before their convictions in the U.K. They were sentenced in absentia to two and a half years in prison.

Incidentally, Davies was detained by Ukrainian authorities in 2018, although he is not mentioned by name in this story from the Ukrainian daily Pravda. The story notes that the suspect moved to Kiev in 2014 and lived in a rented apartment with his Ukrainian wife.

John’s fourth wife, Iryna Davies, is listed as a director of one of the insolvency consulting businesses in the U.K. that was part of John Davies’ 2015 fraud conviction. Pravda reported that in order to confuse the Ukrainian police and hide from them, Mr. Davies and his wife constantly changed their place of residence.

John Clifton Davies, a.k.a. John Bernard. Image: Ukrainian National Police.

The Pravda story says Ukrainian authorities were working with the U.K. government to secure Davies’ extradition, but he appears to have slipped away once again. That’s according to one investment broker who’s been tracking Davies’ trail of fraud since 2015.

According to that source — whom we’ll call “Ben” — Inside Knowledge and The Private Office of John Bernard have fleeced dozens of companies out of nearly $30 million in due diligence fees over the years, with one company reportedly paying over $1 million.

Ben said he figured out that Bernard was Davies through a random occurrence. Ben said he’d been told by a reliable source that Bernard traveled everywhere in Kiev with several armed guards, and that his entourage rode in a convoy that escorted Davies’ high-end Bentley. Ben said Davies’ crew was even able to stop traffic in the downtown area in what was described as a quasi military maneuver so that Davies’ vehicle could proceed unobstructed (and presumably without someone following his car).

Ben said he’s spoken to several victims of Bernard who, shortly after cutting off contact with Bernard and his team, saw phony invoices for payments to banks in Eastern Europe that appeared to come from people within their own organizations.

While Ben allowed that these invoices could have come from another source, it’s worth noting that by virtue of participating in the due diligence process, the companies targeted by these schemes would have already given Bernard’s office detailed information about their finances, bank accounts and security processes.

In some cases, the victims had agreed to use Bernard’s Secure Swiss Data software and services to store documents for the due diligence process. Secure Swiss Data is one of several firms founded by Davies/Inside Knowledge and run by Dudorenko, and it advertised itself as a Swiss company that provides encrypted email and data storage services. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

Shortly after the first story on John Bernard was published here, virtually all of the employee profiles tied to Bernard’s office removed him from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether. Also, John Bernard’s main website — — replaced the content on its homepage with a note saying it was closing up shop.

Incredibly, even after the first two stories ran, Bernard/Davies and his crew continued to ply their scam with companies that had already agreed to make due diligence payments, or that had made one or all of several installment payments.

One of those firms actually issued a press release in August saying it had been promised an infusion of millions in cash from John Bernard’s Private Office. They declined to be quoted here, and continue to hold onto hope that Mr. Bernard is not the crook that he plainly is.

Worse Than Failure: CodeSOD: A Long Time to Master Lambdas

At an old job, I did a significant amount of VB.Net work. I didn’t hate VB.Net. Sure, the syntax was clunky, but autocomplete mostly solved that, and it was more or less feature-matched to C# (and, as someone who needed to handle XML, the fact that VB.Net had XML literals was handy).

Every major feature in C# had a VB.Net equivalent, including lambdas. And hey, lambdas are great! What a wonderful way to express a filter condition.

Well, Eric O sends us this filter lambda. Originally sent to us as a single line, I’m adding line breaks for readability, because I care about this more than the original developer did.

Function(row, index) index <> 0 AND
(row(0).ToString().Equals("DIV10106") OR
row(0).ToString().Equals("326570") OR
row(0).ToString().Equals("301100") OR
row(0).ToString().Equals("305622") OR
row(0).ToString().Equals("305623") OR
row(0).ToString().Equals("317017") OR
row(0).ToString().Equals("323487") OR
row(0).ToString().Equals("323488") OR
row(0).ToString().Equals("324044") OR
row(0).ToString().Equals("317016") OR
row(0).ToString().Equals("316875") OR
row(0).ToString().Equals("323976") OR
row(0).ToString().Equals("324813") OR
row(0).ToString().Equals("147000") OR
row(0).ToString().Equals("326984") OR
row(0).ToString().Equals("326634") OR
row(0).ToString().Equals("306039") OR
row(0).ToString().Equals("307021") OR
row(0).ToString().Equals("307050") OR
row(0).ToString().Equals("307603") OR
row(0).ToString().Equals("307604") OR
row(0).ToString().Equals("307632") OR
row(0).ToString().Equals("307704") OR
row(0).ToString().Equals("308184") OR
row(0).ToString().Equals("308531") OR
row(0).ToString().Equals("309930") OR
row(0).ToString().Equals("104253") OR
row(0).ToString().Equals("104532") OR
row(0).ToString().Equals("104794") OR
row(0).ToString().Equals("104943") OR
row(0).ToString().Equals("105123") OR
row(0).ToString().Equals("105755") OR
row(0).ToString().Equals("106075") OR
row(0).ToString().Equals("108062") OR
row(0).ToString().Equals("108417") OR
row(0).ToString().Equals("108616") OR
row(0).ToString().Equals("108625") OR
row(0).ToString().Equals("108689") OR
row(0).ToString().Equals("108851") OR
row(0).ToString().Equals("108997") OR
row(0).ToString().Equals("109358") OR
row(0).ToString().Equals("109551") OR
row(0).ToString().Equals("110081") OR
row(0).ToString().Equals("111501") OR
row(0).ToString().Equals("111987") OR
row(0).ToString().Equals("112136") OR
row(0).ToString().Equals("11229") OR
row(0).ToString().Equals("112261") OR
row(0).ToString().Equals("113127") OR
row(0).ToString().Equals("113266") OR
row(0).ToString().Equals("114981") OR
row(0).ToString().Equals("116527") OR
row(0).ToString().Equals("121139") OR
row(0).ToString().Equals("121469") OR
row(0).ToString().Equals("142449") OR
row(0).ToString().Equals("144034") OR
row(0).ToString().Equals("144693") OR
row(0).ToString().Equals("144900") OR
row(0).ToString().Equals("150089") OR
row(0).ToString().Equals("194340") OR
row(0).ToString().Equals("214950") OR
row(0).ToString().Equals("215321") OR
row(0).ToString().Equals("215908") OR
row(0).ToString().Equals("216531") OR
row(0).ToString().Equals("217151") OR
row(0).ToString().Equals("220710") OR
row(0).ToString().Equals("221265") OR
row(0).ToString().Equals("221387") OR
row(0).ToString().Equals("300011") OR
row(0).ToString().Equals("300013") OR
row(0).ToString().Equals("300020") OR
row(0).ToString().Equals("300022") OR
row(0).ToString().Equals("300024") OR
row(0).ToString().Equals("300026") OR
row(0).ToString().Equals("300027") OR
row(0).ToString().Equals("300050") OR
row(0).ToString().Equals("300059") OR
row(0).ToString().Equals("300060") OR
row(0).ToString().Equals("300059") OR
row(0).ToString().Equals("300125") OR
row(0).ToString().Equals("300139") OR
row(0).ToString().Equals("300275") OR
row(0).ToString().Equals("300330") OR
row(0).ToString().Equals("300342") OR
row(0).ToString().Equals("300349") OR
row(0).ToString().Equals("300355") OR
row(0).ToString().Equals("300363") OR
row(0).ToString().Equals("300413") OR
row(0).ToString().Equals("301359") OR
row(0).ToString().Equals("302131") OR
row(0).ToString().Equals("302595") OR
row(0).ToString().Equals("302621") OR
row(0).ToString().Equals("302649") OR
row(0).ToString().Equals("302909") OR
row(0).ToString().Equals("302955") OR
row(0).ToString().Equals("302986") OR
row(0).ToString().Equals("303096") OR
row(0).ToString().Equals("303249") OR
row(0).ToString().Equals("303753") OR
row(0).ToString().Equals("304010") OR
row(0).ToString().Equals("304016") OR
row(0).ToString().Equals("304047") OR
row(0).ToString().Equals("304566") OR
row(0).ToString().Equals("305347") OR
row(0).ToString().Equals("305486") OR
row(0).ToString().Equals("305487") OR
row(0).ToString().Equals("305489") OR
row(0).ToString().Equals("305526") OR
row(0).ToString().Equals("305568") OR
row(0).ToString().Equals("305769") OR
row(0).ToString().Equals("305773") OR
row(0).ToString().Equals("305824") OR
row(0).ToString().Equals("305998") OR
row(0).ToString().Equals("306039") OR
row(0).ToString().Equals("307021") OR
row(0).ToString().Equals("307050") OR
row(0).ToString().Equals("307603") OR
row(0).ToString().Equals("307604") OR
row(0).ToString().Equals("307632") OR
row(0).ToString().Equals("307704") OR
row(0).ToString().Equals("308184") OR
row(0).ToString().Equals("308531") OR
row(0).ToString().Equals("309930") OR
row(0).ToString().Equals("322228") OR
row(0).ToString().Equals("121081") OR
row(0).ToString().Equals("321879") OR
row(0).ToString().Equals("327391") OR
row(0).ToString().Equals("328933") OR
row(0).ToString().Equals("325038")) AND DateTime.ParseExact(row(2).ToString(), "dd.MM.yyyy", System.Globalization.CultureInfo.InvariantCulture).CompareTo(DateTime.Today.AddDays(-14)) <= 0

That is 5,090 characters of lambda right there, clearly copy/pasted with modifications on each line. Did the original developer at no point pause to wonder whether this was really the way to achieve their goal?

If you’re wondering about those numeric values, I’ll let Eric explain:

The magic numbers are all customer object references, except the first one, which I have no idea what is.
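The whole 5,090 characters collapse to a set-membership test plus one date comparison. Here's a rough sketch of that shape in Python for readability (in VB.Net you'd reach for a `HashSet(Of String)`); the IDs shown are just a hypothetical handful of the original hundred-plus values:

```python
from datetime import datetime, timedelta

# The magic customer references live in one set, loaded once, instead of
# being repeated in 130-odd copy/pasted .Equals() calls.
INTERESTING_IDS = {"DIV10106", "326570", "301100", "305622"}
CUTOFF_DAYS = 14

def keep_row(row, index):
    """Mirror of the lambda: skip the header row, keep rows whose first
    column is a known ID and whose third column holds a dd.MM.yyyy date
    at least CUTOFF_DAYS in the past."""
    if index == 0 or str(row[0]) not in INTERESTING_IDS:
        return False
    row_date = datetime.strptime(str(row[2]), "%d.%m.%Y")
    return row_date <= datetime.today() - timedelta(days=CUTOFF_DAYS)

print(keep_row(["326570", "x", "01.01.2019"], 1))  # True
print(keep_row(["999999", "x", "01.01.2019"], 1))  # False
```

Beyond readability, a set gives O(1) lookup instead of a linear chain of string comparisons, and the duplicated "300059" entry in the original would have been deduplicated for free.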

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Kevin Rudd: The Guardian: Under the cover of Covid, Morrison wants to scrap my government’s protections against predatory lending

Published in The Guardian on 5 October 2020.

Pardon me for being just a little suspicious, but when I see an avalanche of enthusiasm from such reputable institutions as the Morrison government, the Murdoch media and the Australian Banking Association (anyone remember the Hayne royal commission?) about the proposed “reform” of the National Consumer Credit Protection Act, I smell a very large rodent. “Reform” here is effectively code language for repeal. And it means the repeal of major legislation introduced by my government to bring about uniform national laws to protect Australian consumers from unregulated and often predatory lending practices.

The banks of course have been ecstatic at Morrison’s announcement, chiming in with the government’s political rhetoric that allowing the nation’s lenders once again to just let it rip was now essential for national economic recovery. Westpac, whose reputation was shredded during the royal commission, was out of the blocks first in welcoming the changes: CEO Peter King said they would “reduce red tape”, “speed up the process for customers to obtain approval”, and help small businesses access credit to invest and grow.

And right on cue, Westpac shares were catapulted 7.2% to $17.54 just before midday on the day of announcement. National Australia Bank was up more than 6% at $18.26, ANZ up more than 5% at $17.76, and Commonwealth Bank was trading almost 3.5% higher at $66.49. The popping of champagne corks could be heard right around the country as the banks, once again, saw the balance of market power swing in their direction and away from consumers. And that means more profit and less consumer protection.

A little bit of history first. Back in the middle of the global financial crisis, when the banks came on bended knee to our government seeking sovereign guarantees to preserve their financial lifelines to international lines of credit, we acted decisively to protect them, and their depositors, and to underpin the stability of the Australian financial system. And despite a hail of abuse from both the Liberal party and the Murdoch media at the time, we succeeded. Not only did we keep Australia out of the global recession then wreaking havoc across all developed economies, we also prevented any single financial institution from falling over and protected every single Australian’s savings deposits. Not bad, given the circumstances.

But we were also crystal-clear with the banks and other lenders at the time that we would be acting to protect working families from future predatory lending practices. And we did so. The national consumer credit protection bill 2009 (credit bill) and the national consumer credit protection (fees) bill 2009 (fees bill) collectively made up the consumer credit protection reform package. It included:

  • A uniform and comprehensive national licensing regime for the first time across the country for those engaging in credit activities via a new Australian credit licence administered by the Australian Securities and Investments Commission as the sole regulator;
  • industry-wide responsible lending conduct requirements for licensees;
  • improved sanctions and enhanced enforcement powers for the regulator; and
  • enhanced consumer protection through dispute resolution mechanisms, court arrangements and remedies.

This reform was not dreamed up overnight. It gave effect to the Council of Australian Governments’ agreements of 26 March and 3 July 2008 to transfer responsibility for regulation of consumer credit, and a related cluster of additional financial services, to the commonwealth. It also implemented the first phase of a two-phase implementation plan to transfer credit regulation to the commonwealth endorsed by Coag on 2 October 2008. It was the product of much detailed work over more than 12 months.

Scott Morrison’s formal argument to turn all this on its head is that “as Australia continues to recover from the Covid-19 pandemic, it is more important than ever that there are no unnecessary barriers to the flow of credit to households and small businesses”.

But hang on. Where is Morrison’s evidence that there is any problem with the flow of credit at the moment? And if there were a problem, where is Morrison’s evidence that the proposed emasculation of our consumer credit protection law is the only means to improve credit flow? Neither he nor the treasurer, Josh Frydenberg, have provided us with so much as a skerrick. Indeed, the Commonwealth Bank said recently that the flow of new lending is now a little above pre-Covid levels and that, in annual terms, lending is growing at a strong pace.

More importantly, we should turn to volume VI of royal commissioner Kenneth Hayne’s final report into the Australian banking industry. Hayne, citing the commonwealth Treasury as his authority, explicitly concluded that the National Consumer Credit Protection Act has not in fact hindered the flow of credit but instead had provided system stability. As Hayne states: “I think it important to refer to a number of aspects of Treasury’s submissions in response to the commission’s interim report. Treasury indicated that ‘there is little evidence to suggest that the recent tightening in credit standards, including through Apra’s prudential measures or the actions taken by Asic in respect of [responsible lending obligations], has materially affected the overall availability of credit’.”

So once again, we find the emperor has no clothes. The truth is this attack on yet another of my government’s reforms has nothing to do with the macro-economy. It has everything to do with a Morrison government bereft of intellectual talent and policy ideas in dealing with the real challenge of national economic recovery. Just as it has everything to do with Frydenberg’s spineless yielding to the narrow interests of the banking lobby, using the Covid-19 crisis as political cover in order to lift the profit levels of the banks while throwing borrowers’ interests to the wind.

This latest flailing in the wind by Morrison et al is part of a broader pattern of failed policy responses by the government to the economic impact of the crisis. Morrison had to be dragged kicking and screaming into accepting the reality of stimulus, having rejected RBA advice last year to do precisely the same – and that was before Covid hit. And despite a decade of baseless statements about my government’s “unsustainable” levels of public debt and budget deficit, Morrison is now on track to deliver five times the level of debt and six times the level of our budget deficit. But it doesn’t stop there. Having destroyed a giant swathe of the Australian manufacturing industry by destroying our motor vehicle industry out of pure ideology, Morrison now has the temerity to make yet another announcement about the urgent need now for a new national industry policy for Australia. Hypocrisy thy name is Liberal.

Notwithstanding these stellar examples of policy negligence, incompetence and hypocrisy, there is a further pattern to Morrison’s response to the Covid crisis: to use it as political cover to justify the introduction of a number of regressive measures that will hurt working families. They’ve used Covid cover to begin dismantling successive Labor government reforms for our national superannuation scheme, the net effect of which will be to destroy the retirement income of millions of wage and salary earners. They are also on the cusp of introducing tax changes, the bulk of which will be delivered to those who do not need them, while further eroding the national revenue base at a time when all fiscal discipline appears to have been thrown out the window altogether. And that leaves to one side what they are also threatening to do – again under the cover of Covid – to undermine wages and conditions for working families under proposed changes to our industrial relations laws.

The bottom line is that Morrison’s “reform” of the National Consumer Credit Law forms part of a wider pattern of behaviour: this is a government that is slow to act, flailing around in desperate search for a substantive economic agenda to lead the nation out of recession, while using Covid to further load the dice against the interests of working families for the future.

The post The Guardian: Under the cover of Covid, Morrison wants to scrap my government’s protections against predatory lending appeared first on Kevin Rudd.

Worse Than FailureNews Roundup: Excellent Data Gathering

In a global health crisis, like say, a pandemic, accurate and complete data about its spread is a "must have". Which is why, in the UK, there's a great deal of head-scratching over how the government "lost" thousands of cases.


Normally, we don't dig too deeply into current events on this site, but the circumstances here are too "WTF" to allow them to pass without comment.

From the BBC, we know that this system was used to notify individuals if they have tested positive for COVID-19, and notify their close contacts that they have been exposed. That last bit is important. Disease spread can quickly turn exponential, and even though COVID-19 has a low probability of causing fatalities, the law of large numbers means that a lot of people will die anyway on that exponential curve. If you can track exposure, get exposed individuals tested and isolated before they spread the disease, you can significantly cut down its spread.

People are rightfully pretty upset about this mistake. Fortunately, the BBC has a followup article discussing the investigation, where an analyst explores what actually happened, and as it turns out, we're looking at an abuse of everyone's favorite data analytics tool: Microsoft Excel.

The companies administering the tests compile their data into plain-text files, which appear to be CSVs. No real surprise there. Each test created multiple rows within the CSV file. Then the people working for Public Health England imported that data into Excel… as .xls files.

.xls is the old Excel format, dating back into far-off years, and retained for backwards compatibility. While modern .xlsx files can support a little over a million rows, the much older format caps out at 65,536.

So: these clerks imported the CSV file, hit "save as…" and made a .xls, and ended up truncating the results. Because these input datasets had multiple rows per test, "in practice it meant that each template was limited to about 1,400 cases."
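The arithmetic of the failure is easy to sketch. Assuming the legacy worksheet silently drops everything past its row limit (the rows-per-case figure below is an illustrative guess, not a number from the BBC report):

```python
# Row limits of the two Excel worksheet formats.
XLS_MAX_ROWS = 65_536       # legacy .xls (BIFF8) format
XLSX_MAX_ROWS = 1_048_576   # modern .xlsx format

def cases_that_fit(rows_per_case: int, max_rows: int = XLS_MAX_ROWS) -> int:
    """How many cases survive when a CSV with several rows per case is
    saved into a worksheet that silently truncates at max_rows."""
    return max_rows // rows_per_case

# One row per case would have coped with tens of thousands of tests...
print(cases_that_fit(1))    # 65536
# ...but with many rows per case the cap collapses. If each test produced
# around 46 rows (a guess), capacity lands near the reported figure:
print(cases_that_fit(46))   # 1424 -- "about 1,400 cases"
```

Everything past the cap is simply absent from the saved file; the CSV itself was never too big, only the container it was poured into.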

Again, "oops".

I've discussed how much users really want to do everything in Excel, and this is clearly what happened here. The users had one tool, Excel, and it looked like a hammer to them. Arcane technical details like how many rows different versions of Excel may or may not support aren't things it's fair to expect your average data entry clerk to know.

On another level, this is a clear failing of the IT services. Excel was not the right tool for this job, but in the middle of a pandemic, no one was entirely sure what they needed. Excel becomes a tempting tool, because pretty much any end user can look at complicated data and slice/shape/chart/analyze it however they like. There's a good reason why they want to use Excel for everything: it's empowering to the users. When they have an idea for a new report, they don't need to go through six levels of IT management, file requests in triplicate, and have a testing and approval cycle to ensure the report meets the specifications. They just… make it.

There are packaged tools that offer similar, purpose built functionality but still give users all the flexibility they could want for slicing data. But they're expensive, and many organizations (especially government offices) will be stingy about who gets a license. They may or may not be easy to use. And of course, the time to procure such a thing was in the years before a massive virus outbreak. Excel is there, on everyone's computer already, and does what they need.

Still, they made the mistake and they saw the consequences, so now, surely, they'll start taking steps to correct the problem, right? They know that Excel isn't fit for purpose, so they're switching tools, right?

From the BBC:

To handle the problem, PHE is now breaking down the data into smaller batches to create a larger number of Excel templates in order to make sure none hit their cap.


[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.


Cory DoctorowFree copies of Attack Surface for institutions (schools, libraries, classrooms, etc)

Figuring out how to tour a book in the lockdown age is hard. Many authors have opted to do a handful of essentially identical events with a couple of stores as a way of spreading out the times so that readers with different work-schedules, etc can make it.

But not me. My next novel, Attack Surface (the third Little Brother book) comes out in the US/Canada on Oct 13 and it touches on so many burning contemporary issues that I rounded up 16 guests for 8 different themed “Attack Surface Lectures.”

This has many advantages: it allows me to really explore a wide variety of subjects without trying to cram them all into a single event and it allows me to spread out the love to eight fantastic booksellers.

Half of those bookstores (the ones WITH asterisks in the listing) have opted to have ME fulfil their orders, meaning that Tor is shipping me all their copies, and I’m going to sign, personalize and mail them from home the day after each event!

(The other half will be sending out books with adhesive-backed bookplates I’ve signed for them)

All of that is obviously really cool, but there is a huge fly in the ointment: given that all these events are different, what if you want to attend more than one?

This is where things get broken. Each of these booksellers is under severe strain from the pandemic (and the whole sector was under severe strain even before the pandemic), and they’re allocating resources – payrolled staff – to after-hours events that cost them real money.

So each of them has a “you have to buy a book to attend” policy. These were pretty common in pre-pandemic times, too, because so many attendees showed up at indie stores that were being destroyed by Amazon, having bought the books on Amazon.

What’s more, booksellers with in-person events at least got the possibility that attendees would buy another book while in the store, and/or that people would discover their store through the event and come back – stuff that’s not gonna happen with virtual events.

There is, frankly, no good answer to this: no one in the chain has the resources to create and deploy a season’s pass system (let alone agree on how the money from it should be divided among booksellers) – and no reader has any use for 8 (or even 2!) copies of the book.

It was my stupid mistake, as I explain here.

After I posted, several readers suggested one small way I could make this better: let readers who want to attend more than one event donate their extra copies to schools, libraries and other institutions.

Which I am now doing. If you WANT to attend more than one event and you are seeking to gift a copy to an institution, I have a list for you! It’s beneath this explanatory text.

And if you are affiliated with an institution and you want to put yourself on this list, please complete this form.

If you want to attend more than one event and you want to donate your copy of the book to one of these organizations, choose one from the list and fill its name in on the ticket-purchase page, then email me so I can cross it off the list:

I know this isn’t great, and I apologize. We’re all figuring out this book-launch-in-a-pandemic business here, and it was really my mistake. I can’t promise I won’t make different mistakes if I have to do another virtual tour, but I can promise I won’t make this one again.

Covenant House – Oakland
200 Harrison St

Madison Public Library Central Branch
201 W Mifflin St,

Monona Public Library
1000 Nichols Rd

Madison Public Library Pinney Branch
516 Cottage Grove Rd

La Cueva High School
Michael Sanchez
7801 Wilshire NE
New Mexico

2500 NE 54th St

Nuestro Mundo Community School
Hollis Rudiger
902 Nichols Rd, Monona

New Horizons Young Adult Shelter
2709 3rd Avenue

Salem Community College Library
Jennifer L. Pierce
460 Hollywood Avenue
Carneys Point

Sumas Branch of the Whatcom County Library System
Laurie Dawson – Youth Focus Librarian
461 2nd Street

Riverview Learning Center
Kris Rodger
32302 NE 50th Street

Cranston Public Library
Ed Garcia
140 Sockanosset Cross Rd

Paideia School Library
Anna Watkins
1509 Ponce de Leon Ave

Westfield Community School
Stephen Fodor
2100 sleepy hollow

Worldbuilders Nonprofit
Gray Miller, Executive Director
1200 3rd St
Stevens Point

Northampton Community College
Marshal Miller
3835 Green Pond Road

Metropolitan Business Academy Magnet High School
Steve Staysniak
115 Water Street
New Haven

New Haven Free Public Library
Meghan Currey
133 Elm Street
New Haven

New Haven Free Public Library – Mitchell Branch
Marian Huggins
37 Harrison Street
New Haven

New Haven Free Public Library – Wilson Branch
Luis Chavez-Brumell
303 Washington Ave.
New Haven

New Haven Free Public Library – Stetson Branch
Diane Brown
200 Dixwell Ave.
New Haven

New Haven Free Public Library – Fair Haven Branch
Kirk Morrison
182 Grand Ave.
New Haven

University of Chicago
Acquisitions Room 170
1100 E. 57th St

Greenburgh Public Library
John Sexton, Director
300 Tarrytown Rd

Red Rocks Community College Library
Karen Neville
13300 W Sixth Avenue

Biola University Library
Chuck Koontz
13800 Biola Ave.
La Mirada

Otto Bruyns Public Library
Aubrey Hiers
241 W Mill Rd

California Rehabilitation Center Library
William Swafford
5th Street and Western Ave.

Hastings High School
Rachel Haider
200 General Sieben Drive

Ballard High School Library
TuesD Chambers
1418 NW 65th St.

Southwest Georgia Regional Library System
Catherine Vanstone
301 South Monroe Street

Los Angeles Center For Enriched Studies (LACES) Library
Rustum Jacob
5931 W. 18th St
Los Angeles

Dmitriy Vysotskiy or Shawn Coyle
1048 W. 37th St. Suite 105

Rising Tide Charter Public School
Katie Klein
59 Armstrong

Doolen Middle School
Carmen Coulter
2400 N Country Club Rd
Tucson AZ

Fruchthendler Elementary School
Jessica Carter
7470 E Cloud Rd

Brandon Branch Library
Mary Erickson
305 S. Splitrock Blvd.
Brandon South Dakota

Alternative Family Education of the Santa Cruz City Schools
Dorothee Ledbetter
185 Benito Av
Santa Cruz

Helios School
Elizabeth Wallace
597 Central Ave.

Eden Prairie High School
Jenn Nelson
17185 Valley View Rd
Eden Prairie

Flatbush Commons mutual aid library
Joshua Wilkerson
101 Kenilworth Pl

Cryptogram On Risk-Based Authentication

Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication“:

Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well.

We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance.
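The core mechanism the abstract describes (score the current login's features against the user's history, and demand an extra factor only when they deviate) can be sketched as follows; the feature names, scoring rule, and threshold here are illustrative assumptions, not the paper's model:

```python
class RiskEngine:
    """Toy risk-based authentication: password first, extra factor on anomaly."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.history = {}  # user -> {feature name -> set of previously seen values}

    def assess(self, user: str, features: dict) -> float:
        """Risk = fraction of this login's features never seen for this user."""
        seen = self.history.setdefault(user, {})
        unseen = sum(1 for k, v in features.items() if v not in seen.get(k, set()))
        risk = unseen / len(features)
        for k, v in features.items():          # remember for future logins
            seen.setdefault(k, set()).add(v)
        return risk

    def needs_second_factor(self, user: str, features: dict) -> bool:
        # Password alone suffices while the login looks familiar; otherwise
        # the user must supply e.g. an emailed verification code.
        return self.assess(user, features) > self.threshold

engine = RiskEngine()
login = {"ip": "", "user_agent": "Firefox/81.0", "country": "DE"}
print(engine.needs_second_factor("alice", login))  # True: no history yet
print(engine.needs_second_factor("alice", login))  # False: everything familiar
```

A real deployment would weight features, age out history, and harden against replayed feature values; the paper's interest is in how users perceive this occasional extra step compared with always-on 2FA.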

Paper’s website. I’ve blogged about risk-based authentication before.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 17)

Here’s part seventeen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:

Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the Kickstarter for the Attack Surface audiobook, where every backer gets Force Multiplier.

Here’s the schedule for the Attack Surface lectures

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Sam VargheseSomething fishy about Trump’s taxes? Did nobody suspect it all along?

Recently, the New York Times ran an article about Donald Trump having paid no federal income taxes for 10 of the last 15 years; many claimed it was a blockbuster story and that it would have far-reaching effects on the forthcoming presidential election.

If this was the first time Trump’s tax evasion was being mentioned, then, sure, it would have been a big deal.

But right from the time he first refused to make his tax returns public — before he was elected — this question has been hanging over Trump’s head.
And the only logical conclusion has been that if he was making excuses, then there must be something fishy about it. Five years later, we find that he was highly adept at claiming exemptions from paying income tax for various reasons.

Surprised? Some people assume that one would have to be. But this was only to be expected, given all the excuses that Trump has trotted out over all these years to avoid releasing his tax returns.

When I posted a tweet pointing out that Trump had been doing exactly what the big tech companies — Apple, Google, Microsoft, Amazon and Facebook, among others — do, others were prone to try and portray Trump’s actions as somehow different. But nobody specified the difference.

The question that struck me was: why is the New York Times publishing this story at this time? The answer is obvious: it would like to influence the election in favour of the Democrat candidate, Joe Biden.

The newspaper is not the only institution or individual trying to carry water for Biden: in recent days, we have seen the release of a two-part TV series The Comey Rule, based on a book written by James Comey, the former FBI director, about the run-up to the 2016 election. It is extremely one-sided.

Also out is a two-part documentary made by Alex Gibney, titled Agents of Chaos, which depends mostly on government sources to spread the myth that the Russians were responsible for Trump’s election.

Another documentary, Totally Under Control, about the Trump administration’s response to the coronavirus pandemic, will be coming out on October 13. Again, Gibney is the director and producer.

On the Republican side, there has been nothing of this kind. Whether that says something about the level of confidence in the Trump camp is open to question.

With less than a month to go for the election, it should only be expected that the propaganda war will intensify as both sides try to push their man into the White House.

Worse Than FailureCodeSOD: Switched Requirements

Code changes over time. Sometimes, it feels like gremlins sweep through the codebase and change it for us. Usually, though, we have changes to requirements, which drives changes to the code.

Thibaut was looking at some third party code to implement tooling to integrate with it, and found this C# switch statement:

if (effectID != 10)
{
    switch (effectID)
    {
        case 21:
        case 24:
        case 27:
        case 28:
        case 29:
            return true;
        case 22:
        case 23:
        case 25:
        case 26:
            break;
        default:
            switch (effectID)
            {
                case 49:
                case 50:
                    return true;
            }
            break;
    }
    return false;
}
return true;

I'm sure this statement didn't start this way. And I'm sure that, to each of the many developers who swung through to add their own little case to the switch, their change made some sort of sense. Maybe they knew they should refactor, but they needed to get this functionality in, and they needed it now; code cleanliness could wait. And wait. And wait.

Until Thibaut comes through, and replaces it with this:

switch (effectID)
{
    case 10:
    case 21:
    case 24:
    case 27:
    case 28:
    case 29:
    case 49:
    case 50:
        return true;
}
return false;
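A quick way to convince yourself such a refactor is safe is to transliterate both versions and compare them exhaustively over the input range. A Python sketch (a direct port of the two C# snippets, not code from the article):

```python
def original(effect_id: int) -> bool:
    # Port of the nested-switch version.
    if effect_id != 10:
        if effect_id in (21, 24, 27, 28, 29):
            return True
        if effect_id in (22, 23, 25, 26):
            return False
        if effect_id in (49, 50):    # the inner "default" switch
            return True
        return False
    return True

def refactored(effect_id: int) -> bool:
    # Port of the cleaned-up version: one flat membership test.
    return effect_id in {10, 21, 24, 27, 28, 29, 49, 50}

# Every ID that reaches either branch yields the same answer.
assert all(original(i) == refactored(i) for i in range(256))
print("versions agree")
```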
[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Cryptogram COVID-19 and Acedia

Note: This isn’t my usual essay topic. Still, I want to put it on my blog.

Six months into the pandemic with no end in sight, many of us have been feeling a sense of unease that goes beyond anxiety or distress. It’s a nameless feeling that somehow makes it hard to go on with even the nice things we regularly do.

What’s blocking our everyday routines is not the anxiety of lockdown adjustments, or the worries about ourselves and our loved ones — real though those worries are. It isn’t even the sense that, if we’re really honest with ourselves, much of what we do is pretty self-indulgent when held up against the urgency of a global pandemic.

It is something more troubling and harder to name: an uncertainty about why we would go on doing much of what for years we’d taken for granted as inherently valuable.

What we are confronting is something many writers in the pandemic have approached from varying angles: a restless distraction that stems not just from not knowing when it will all end, but also from not knowing what that end will look like. Perhaps the sharpest insight into this feeling has come from Jonathan Zecher, a historian of religion, who linked it to the forgotten Christian term: acedia.

Acedia was a malady that apparently plagued many medieval monks. It’s a sense of no longer caring about caring, not because one had become apathetic, but because somehow the whole structure of care had become jammed up.

What could this particular form of melancholy mean in an urgent global crisis? On the face of it, all of us care very much about the health risks to those we know and don’t know. Yet lurking alongside such immediate cares is a sense of dislocation that somehow interferes with how we care.

The answer can be found in an extreme thought experiment about death. In 2013, philosopher Samuel Scheffler explored a core assumption about death. We all assume that there will be a future world that survives our particular life, a world populated by people roughly like us, including some who are related to us or known to us. Though we rarely acknowledge it, this presumed future world is the horizon towards which everything we do in the present is oriented.

But what, Scheffler asked, if we lose that assumed future world — because, say, we are told that human life will end on a fixed date not far after our own death? Then the things we value would start to lose their value. Our sense of why things matter today is built on the presumption that they will continue to matter in the future, even when we ourselves are no longer around to value them.

Our present relations to people and things are, in this deep way, future-oriented. Symphonies are written, buildings built, children conceived in the present, but always with a future in mind. What happens to our ethical bearings when we start to lose our grip on that future?

It’s here, moving back to the particular features of the global pandemic, that we see more clearly what drives the restlessness and dislocation so many have been feeling. The source of our current acedia is not the literal loss of a future; even the most pessimistic scenarios surrounding COVID-19 have our species surviving. The dislocation is more subtle: a disruption in pretty much every future frame of reference on which just going on in the present relies.

Moving around is what we do as creatures, and for that we need horizons. COVID-19 has erased many of the spatial and temporal horizons we rely on, even if we don’t notice them very often. We don’t know how the economy will look, how social life will go on, how our home routines will be changed, how work will be organized, how universities or the arts or local commerce will survive.

What unsettles us is not only fear of change. It’s that, if we can no longer trust in the future, many things become irrelevant, retrospectively pointless. And by that we mean from the perspective of a future whose basic shape we can no longer take for granted. This fundamentally disrupts how we weigh the value of what we are doing right now. It becomes especially hard under these conditions to hold on to the value in activities that, by their very nature, are future-directed, such as education or institution-building.

That’s what many of us are feeling. That’s today’s acedia.

Naming this malaise may seem more trouble than it’s worth, but the opposite is true. Perhaps the worst thing about medieval acedia was that monks struggled with its dislocation in isolation. But today’s disruption of our sense of a future must be a shared challenge. Because what’s disrupted is the structure of care that sustains why we go on doing things together, and this can only be repaired through renewed solidarity.

Such solidarity, however, has one precondition: that we openly discuss the problem of acedia, and how it prevents us from facing our deepest future uncertainties. Once we have done that, we can recognize it as a problem we choose to face together — across political and cultural lines — as families, communities, nations and a global humanity. Which means doing so in acceptance of our shared vulnerability, rather than suffering each on our own.

This essay was written with Nick Couldry, and previously appeared on

Krebs on SecurityAttacks Aimed at Disrupting the Trickbot Botnet

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

A text snippet from one of the bogus Trickbot configuration updates. Source: Intel 471

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

It’s not known how many Trickbot-infected systems received the phony update, but it seems clear this wasn’t just a mistake by Trickbot’s overlords. Intel 471 found that it happened yet again on Oct. 1, suggesting someone with access to the inner workings of the botnet was trying to disrupt its operations.

“Shortly after the bogus configs were pushed out, all Trickbot controllers stopped responding correctly to bot requests,” Intel 471 wrote in a note to its customers. “This possibly means central Trickbot controller infrastructure was disrupted. The close timing of both events suggested an intentional disruption of Trickbot botnet operations.”

Intel 471 CEO Mark Arena said it’s anyone’s guess at this point who is responsible.

“Obviously, someone is trying to attack Trickbot,” Arena said. “It could be someone in the security research community, a government, a disgruntled insider, or a rival cybercrime group. We just don’t know at this point.”

Arena said it’s unclear how successful these bogus configuration file updates will be given that the Trickbot authors built a fail-safe recovery system into their malware. Specifically, Trickbot has a backup control mechanism: A domain name registered on EmerDNS, a decentralized domain name system.

“This domain should still be in control of the Trickbot operators and could potentially be used to recover bots,” Intel 471 wrote.

But whoever is screwing with the Trickbot purveyors appears to have adopted a multi-pronged approach: Around the same time as the second bogus configuration file update was pushed on Oct. 1, someone stuffed the control networks that the Trickbot operators use to keep track of data on infected systems with millions of new records.

Alex Holden is chief technology officer and founder of Hold Security, a Milwaukee-based cyber intelligence firm that helps recover stolen data. Holden said at the end of September Trickbot held passwords and financial data stolen from more than 2.7 million Windows PCs.

By October 1, Holden said, that number had magically grown to more than seven million.

“Someone is flooding the Trickbot system with fake data,” Holden said. “Whoever is doing this is generating records that include machine names indicating these are infected systems in a broad range of organizations, including the Department of Defense, U.S. Bank, JP Morgan Chase, PNC and Citigroup, to name a few.”

Holden said the flood of new, apparently bogus, records appears to be an attempt by someone to dilute the Trickbot database and confuse or stymie the Trickbot operators. But so far, Holden said, the impact has been mainly to annoy and aggravate the criminals in charge of Trickbot.

“Our monitoring found at least one statement from one of the ransomware groups that relies on Trickbot saying this pisses them off, and they’re going to double the ransom they’re asking for from a victim,” Holden said. “We haven’t been able to confirm whether they actually followed through with that, but these attacks are definitely interfering with their business.”

Intel 471’s Arena said this could be part of an ongoing campaign to dismantle or wrest control over the Trickbot botnet. Such an effort would hardly be unprecedented. In 2014, for example, U.S. and international law enforcement agencies teamed up with multiple security firms and private researchers to commandeer the Gameover Zeus Botnet, a particularly aggressive and sophisticated malware strain that had enslaved up to 1 million Windows PCs globally.

Trickbot would be an attractive target for such a takeover effort because it is widely viewed as a platform used to find potential ransomware victims. Intel 471 describes Trickbot as “a malware-as-a-service platform that caters to a relatively small number of top-tier cybercriminals.”

One of the top ransomware gangs in operation today, which deploys ransomware strains known variously as “Ryuk” and “Conti,” is known to be closely associated with Trickbot infections. Both ransomware families have been used in some of the most damaging and costly malware incidents to date.

The latest Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption has reportedly caused the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Cryptogram Friday Squid Blogging: After Squidnight

Review of a squid-related children’s book.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: COVID-19 Found on Chinese Squid Packaging

I thought the virus doesn’t survive well on food packaging:

Authorities in China’s northeastern Jilin province have found the novel coronavirus on the packaging of imported squid, health authorities in the city of Fuyu said on Sunday, urging anyone who may have bought it to get themselves tested.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: No Escape

"Last night's dinner was delicious!" Drew W. writes.


"Gee, thanks Microsoft! It's great to finally see progress on this long-awaited enhancement. That bot is certainly earning its keep," writes Tim R.


Derek wrote, "Well, I guess the Verizon app isn't totally useless when it comes to telling me how many notifications I have. I mean, one out of three isn't THAT bad."


"Perhaps Ubisoft UK are trying to reach bilingual gamers?" writes Craig.


Jeremy H. writes, "Whether this is a listing for a pair of pet training underwear or a partial human torso, I'm not sure if I'm comfortable with that 'metal clicker'."


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Krebs on SecurityRansomware Victims That Pay Up Could Incur Steep Fines from Uncle Sam

Companies victimized by ransomware and firms that facilitate negotiations with ransomware extortionists could face steep fines from the U.S. federal government if the crooks who profit from the attack are already under economic sanctions, the Treasury Department warned today.

Image: Shutterstock

In its advisory (PDF), the Treasury’s Office of Foreign Assets Control (OFAC) said “companies that facilitate ransomware payments to cyber actors on behalf of victims, including financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response, not only encourage future ransomware payment demands but also may risk violating OFAC regulations.”

As financial losses from cybercrime activity and ransomware attacks in particular have skyrocketed in recent years, the Treasury Department has imposed economic sanctions on several cybercriminals and cybercrime groups, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

A number of those sanctioned have been closely tied with ransomware and malware attacks, including the North Korean Lazarus Group; two Iranians thought to be tied to the SamSam ransomware attacks; Evgeniy Bogachev, the developer of Cryptolocker; and Evil Corp, a Russian cybercriminal syndicate that has used malware to extract more than $100 million from victim businesses.

Those that run afoul of OFAC sanctions without a special dispensation or “license” from Treasury can face several legal repercussions, including fines of up to $20 million.

The Federal Bureau of Investigation (FBI) and other law enforcement agencies have tried to discourage businesses hit by ransomware from paying their extortionists, noting that doing so only helps bankroll further attacks.

But in practice, a fair number of victims find paying up is the fastest way to resume business as usual. In addition, insurance providers often help facilitate the payments because the amount demanded ends up being less than what the insurer might have to pay to cover the cost of the affected business being sidelined for days or weeks at a time.

While it may seem unlikely that companies victimized by ransomware might somehow be able to know whether their extortionists are currently being sanctioned by the U.S. government, they still can be fined either way, said Ginger Faulk, a partner in the Washington, D.C. office of the law firm Eversheds Sutherland.

Faulk said OFAC may impose civil penalties for sanctions violations based on “strict liability,” meaning that a person subject to U.S. jurisdiction may be held civilly liable even if it did not know or have reason to know it was engaging in a transaction with a person that is prohibited under sanctions laws and regulations administered by OFAC.

“In other words, in order to be held liable as a civil (administrative) matter (as opposed to criminal), no mens rea or even ‘reason to know’ that the person is sanctioned is necessary under OFAC regulations,” Faulk said.

But Fabian Wosar, chief technology officer at computer security firm Emsisoft, said Treasury’s policies here are nothing new, and that they mainly constitute a warning for individual victim firms who may not already be working with law enforcement and/or third-party security firms.

Wosar said companies that help ransomware victims negotiate lower payments and facilitate the financial exchange are already aware of the legal risks from OFAC violations, and will generally refuse clients who get hit by certain ransomware strains.

“In my experience, OFAC and cyber insurance with their contracted negotiators are in constant communication,” he said. “There are often even clearing processes in place to ascertain the risk of certain payments violating OFAC.”

Along those lines, OFAC said the degree of a person/company’s awareness of the conduct at issue is a factor the agency may consider in assessing civil penalties. OFAC said it would consider “a company’s self-initiated, timely, and complete report of a ransomware attack to law enforcement to be a significant mitigating factor in determining an appropriate enforcement outcome if the situation is later determined to have a sanctions nexus.”

Kevin RuddCNBC: On US-China tensions, growing export restrictions




The post CNBC: On US-China tensions, growing export restrictions appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Imploded Code

Cassi’s co-worker (previously) was writing some more PHP. This code needed to take a handful of arguments, like id and label and value, and generate HTML text inputs.

Well, that seems like a perfect use case for PHP. I can’t possibly see how this could go wrong.

echo $sep."<SCRIPT>CreateField(" . implode(', ', array_map('json_encode', $args)) . ");</SCRIPT>\n";

Now, PHP’s array_map is a beast of a function, and its documentation has some pretty atrocious examples. It’s not just a map, but also potentially a n-ary zip, but if you use it that way, then the function you’re passing needs to be able to handle the right number of arguments, and– sorry. Lost my train of thought there when checking the PHP docs. Back to the code.

In this case, we use array_map to make all our fields JavaScript safe by json_encodeing them. Then we use implode to mash them together into a comma separated string. Then we concatenate all that together into a CreateField call.
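To see what that pipeline actually produces, here is a rough JavaScript equivalent, with hypothetical argument values standing in for the real $args:

```javascript
// Hypothetical arguments, mirroring the PHP $args array.
const args = ["name", "Full name", "default"];

// array_map('json_encode', $args)  ->  args.map(v => JSON.stringify(v))
// implode(', ', ...)               ->  .join(', ')
const call = "CreateField(" + args.map(v => JSON.stringify(v)).join(", ") + ");";

console.log(call); // CreateField("name", "Full name", "default");
```

So the server renders a string of JavaScript source, just so the browser can call a function that writes the HTML the server could have emitted directly.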

CreateField is a JavaScript function. Cassi didn’t supply the implementation, but lets us know that it uses document.write to output the input tag into the document body.

So, this is server side code which generates calls to client side code, where the client side code runs at page load to document.write HTML elements into the document… elements which the server side code could have easily supplied.

I’ll let Cassi summarize:

I spent 5 minutes staring at this trying to figure out what to say about it. I just… don’t know. What’s the worst part? json_encoding individual arguments and then imploding them? Capital HTML tags? A $sep variable that doesn’t need to exist? (Trust me on this one.) Maybe it’s that it’s a line of PHP outputting a Javascript function call that in turn uses document.write to output a text HTML input field? Yeah, it’s probably that one.


Cryptogram Detecting Deep Fakes with a Heartbeat

Researchers can detect deep fakes because they don’t convincingly mimic human blood circulation in the face:

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.
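As a toy illustration of the general idea (this is emphatically not FakeCatcher’s actual algorithm, and the numbers below are made up), one can compare the frame-to-frame stability of a pulse-like color signal against an incoherent one:

```javascript
// Toy illustration only -- not the researchers' method.
// Mean absolute frame-to-frame change, as a crude (in)stability measure.
function variability(samples) {
  let total = 0;
  for (let i = 1; i < samples.length; i++) {
    total += Math.abs(samples[i] - samples[i - 1]);
  }
  return total / (samples.length - 1);
}

// Hypothetical per-frame mean green-channel values for a face region.
// A real pulse yields a slow, roughly periodic drift...
const realFace = Array.from({ length: 60 }, (_, i) => 100 + 2 * Math.sin(i / 3));
// ...while incoherently synthesized frames yield jittery frame-to-frame noise.
const fakeFace = Array.from({ length: 60 }, (_, i) => (i % 2 ? 104 : 100));

console.log(variability(realFace) < variability(fakeFace)); // true
```

The real system works on PPG signals extracted from many facial regions and feeds them to a deep network, but the underlying intuition is the same: real pulses vary smoothly, fakes don’t.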

Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months.


Cory DoctorowAttack Surface on the MMT Podcast

I was incredibly happy to appear on the MMT Podcast again this week, talking about economics, science fiction, interoperability, tech workers and tech ethics, and my new novel ATTACK SURFACE, which comes out in the UK tomorrow (Oct 13 US/Canada):

We also delved into my new nonfiction book, HOW TO DESTROY SURVEILLANCE CAPITALISM, and how it looks at the problem of misinformation, and how that same point of view is weaponized by Masha, the protagonist and antihero of Attack Surface.

It’s no coincidence that both of these books came out at the same time, as they both pursue the same questions from different angles:

  • What does technology do?

  • Who does it do it TO and who does it do it FOR?

  • How can we change how technology works?


  • Why do people make tools of oppression, and what will it take to make them stop?

Here’s the episode’s MP3:

and here’s their feed:

Sam VargheseOne feels sorry for Emma Alberici, but that does not mask the fact that she was incompetent

Last month, the Australian Broadcasting Corporation, a taxpayer-funded entity, made several people redundant, due to a cut in funding by the Federal Government. Among them was Emma Alberici, a presenter who has been lionised a great deal as someone with great talents, but is actually a mediocre hack who lacks ability.

What marked Alberici out is the fact that she had the glorious title of chief economics correspondent at the ABC, but was never seen on any TV show giving her opinion about anything to do with economics. Over the last two years, China and the US have been engaged in an almighty stoush; Australia, as a country that considers the US as its main ally and has China as its major trading partner, has naturally been of interest too.

But the ABC always put forward people like Peter Ryan, a senior business correspondent, or Ian Verrender, the business editor, when there was a need for someone to appear during a news bulletin and provide a little insight into these matters. Alberici, it seemed, was persona non grata.

The reason why she was kept behind the curtains, it turns out, was because she did not really know much about economics. This was made plain in April 2018 when she wrote what columnist Joe Aston of the Australian Financial Review described as “an undergraduate yarn touting ‘exclusive analysis’ of one publicly available document from which she derived that ‘one in five of Australia’s top companies has paid zero tax for the past three years’ and that ‘Australia’s largest companies haven’t paid corporate tax in 10 years’.”

This article drew much criticism from many people, including politicians, who complained to the ABC. Her magnum opus has now disappeared from the web – an archived version is here. The fourth paragraph read: “Exclusive analysis released by ABC today reveals one in five of Australia’s top companies has paid zero tax for the past three years.” A Twitter teaser said: “Australia’s largest companies haven’t paid corporate tax in 10 years.”

As Aston pointed out in withering tones: “Both premises fatally expose their author’s innumeracy. The first is demonstrably false. Freely available data produced by the Australian Taxation Office show that 32 of Australia’s 50 largest companies paid $19.33 billion in company tax in FY16 (FY17 figures are not yet available). The other 18 paid nothing. Why? They lost money, or were carrying over previous losses.

“Company tax is paid on profits, so when companies make losses instead of profits, they don’t pay it. Amazing, huh? And since 1989, the tax system has allowed losses in previous years to be carried forward – thus companies pay tax on the rolling average of their profits and losses. This is stuff you learn in high school. Except, obviously, if your dream by then was to join the socialist collective at Ultimo, to be a superstar in the cafes of Haberfield.”

As expected, Alberici protested and even called in her lawyer when the ABC took down the article. It was put up again with several corrections. But the damage had been done and the great economics wizard had been unmasked. She did not take it well, protesting that 17 years earlier she had been nominated for a Walkley Award for reporting on tax minimisation.

Underlining her lack of knowledge of economics, Alberici, who was paid a handsome $189,000 per annum by the ABC, exposed herself again in May the same year. In an interview with Labor shadow treasurer Jim Chalmers, she kept pushing him as to what he would do with the higher surpluses that he proposed to run were a Labor government to be elected.

Aston has his own inimitable style, so let me use his words: “This time, Alberici’s utter non-comprehension of public sector accounting is laid bare in three unwitting confessions in her studio interview with Labor’s finance spokesman Jim Chalmers after Tuesday’s Budget.

“[Shadow Treasurer] Chris Bowen on the weekend told my colleague Barrie Cassidy that you want to run higher surpluses than the Government. How much higher than the Government and what would you do with that money?”

“Wearing that unmistakable WTF expression on his dial, Chalmers was careful to evade her illogical query.

“Undeterred, Alberici pressed again. ‘And what will you do with those surpluses?’ A second time, Chalmers dissembled.

“A third time, the cock crowed (this really was as inevitable as the betrayal of Jesus by St Peter). ‘Sorry, no, you said you would run higher surpluses, so what happens to that money?’

“Hanged [sic] by her own persistence. Chalmers, at this point, put her out of her blissful ignorance – or at least tried! ‘Surpluses go towards paying down debt’.

“Bingo. C’est magnifique! Hey, we majored in French. After 25 years in business and finance reporting, Alberici apparently thinks a budget surplus is an account with actual money in it, not a figure reached by deducting the Commonwealth’s expenditure from its revenue.”

Alberici has continued to complain that her knowledge of economics is adequate. She has been extremely annoyed when such criticism comes from any Murdoch publication. But the fact is that she is largely ignorant of the basics of economics.

Her incompetence is not limited to this one field alone. As I have pointed out, there have been occasions in the past, when she has shown that her knowledge of foreign affairs is as good as her knowledge of economics.

After she was made redundant, Alberici published a series of tweets which she later removed. An archive of that is here.

Alberici was the host of a program called Business Breakfast some years ago. It failed and had to be taken off the air. Then she was made the main host of the ABC’s flagship late news program, Lateline. That program was taken off the air last year due to budget cuts. However, Alberici’s performance did not set the Yarra on fire, to put it mildly.

Now she has joined an insurance comparison website, Compare The Market. That company has a bit of a dodgy reputation, as the Australian news website The New Daily pointed out in 2014. As reporter George Lekakis wrote: “The website is owned by a leading global insurer that markets many of its own products on the site. While it offers a broad range of products for life insurance and travel cover, most of its general insurance offerings are underwritten by an insurer known as Auto & General.

“Auto & General is a subsidiary of global financial services group Budget Holdings Limited and is the ultimate owner of the site. An investigation by The New Daily of the brands marketed by the website for auto and home insurance reveals a disproportionate weighting to A&G products.”

Once again, the AFR, this time through columnist Myriam Robyn, was not exactly complimentary about Alberici’s new billet. But perhaps it is a better fit for Alberici than the ABC where she was really a square peg in a round hole. She is writing a book in which, no doubt, she will again try to put her case forward and prove she was a brilliant journalist who lost her job due to other people politicking against her. But the facts say otherwise.

Worse Than FailureCodeSOD: A Poor Facsimile

For every leftpad debacle, there are a thousand “utility belt” libraries for JavaScript. Whether it’s the “venerable” JQuery, or lodash, or maybe Google’s Closure library, there’s a pile of things that usually end up in a 50,000 line util.js file available for use, and packaged up a little more cleanly.

Dan B had a co-worker who really wanted to start using Closure. But they also wanted to be “abstract”, and “loosely coupled”, so that they could swap out implementations of these utility functions in the future.

This meant that they wrote a lot of code like:

initech.util.clamp = function(value, min, max) {
  return goog.math.clamp(value, min, max);
};
But it wasn’t enough to just write loads of functions like that. This co-worker “knew” they’d need to provide their own implementations for some methods, reflecting how the utility library couldn’t always meet their specific business needs.

Unfortunately, Dan noticed some of those “reimplementations” didn’t always behave correctly, for example:

initech.util.isDefAndNotNull(null); //returns true

Let’s look at the Google implementations of some methods, annotated by Dan:

goog.isDef = function(val) {
  return val !== void 0; // tests for undefined
};

goog.isDefAndNotNull = function(val) {
  return val != null; // Also catches undefined since ==/!= instead of ===/!== allows for implicit type conversion to satisfy the condition.
};

Setting aside the problems of coercing vs. non-coercing comparison operators even being a thing, these both do the job they’re supposed to do. But Dan’s co-worker needed to reinvent this particular wheel.

initech.util.isDef = function (val) {
  return val !== void 0;
};

This code is useless, since we already have it in our library, but whatever. It’s fine and correct.

initech.util.isDefAndNotNull = initech.util.isDef; 

This developer is the kid who copied someone else’s homework and still somehow managed to get a failing grade. They had a working implementation they could have referenced to see what they needed to do. The names of the methods themselves should have probably provided a hint that they needed to be mildly different. There was no reason to write any of this code, and they still couldn’t get that right.
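The whole bug comes down to one character of coercion; a minimal sketch of the two behaviors:

```javascript
// Standalone equivalents of the two Closure checks:
const isDef = (val) => val !== void 0;        // mirrors goog.isDef
const isDefAndNotNull = (val) => val != null; // mirrors goog.isDefAndNotNull

// The coercing != treats null and undefined as equal,
// so a single != null check covers both cases:
//   null == undefined   -> true
//   null === undefined  -> false

// The reimplementation from the article is just an alias for isDef:
const brokenIsDefAndNotNull = isDef;

console.log(isDefAndNotNull(null));       // false -- correct
console.log(brokenIsDefAndNotNull(null)); // true  -- the bug Dan observed
```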

Dan adds:

Thankfully a lot of this [functionality] was recreated in C# for another project…. I’m looking to bring all that stuff back over… and just destroy this monstrosity we created. Goodbye JavaScript!



Krebs on SecurityWho’s Behind Monday’s 14-State 911 Outage?

Emergency 911 systems were down for more than an hour on Monday in towns and cities across 14 U.S. states. The outages led many news outlets to speculate the problem was related to Microsoft‘s Azure web services platform, which also was struggling with a widespread outage at the time. However, multiple sources tell KrebsOnSecurity the 911 issues stemmed from some kind of technical snafu involving Intrado and Lumen, two companies that together handle 911 calls for a broad swath of the United States.


On the afternoon of Monday, Sept. 28, several states including Arizona, California, Colorado, Delaware, Florida, Illinois, Indiana, Minnesota, Nevada, North Carolina, North Dakota, Ohio, Pennsylvania and Washington reported 911 outages in various cities and localities.

Multiple news reports suggested the outages might have been related to an ongoing service disruption at Microsoft. But a spokesperson for the software giant told KrebsOnSecurity, “we’ve seen no indication that the multi-state 911 outage was a result of yesterday’s Azure service disruption.”

Inquiries made with emergency dispatch centers at several of the towns and cities hit by the 911 outage pointed to a different source: Omaha, Neb.-based Intrado — until last year known as West Safety Communications — a provider of 911 and emergency communications infrastructure, systems and services to telecommunications companies and public safety agencies throughout the country.

Intrado did not respond to multiple requests for comment. But according to officials in Henderson County, NC, which experienced its own 911 failures yesterday, Intrado said the outage was the result of a problem with an unspecified service provider.

“On September 28, 2020, at 4:30pm MT, our 911 Service Provider observed conditions internal to their network that resulted in impacts to 911 call delivery,” reads a statement Intrado provided to county officials. “The impact was mitigated, and service was restored and confirmed to be functional by 5:47PM MT.  Our service provider is currently working to determine root cause.”

The service provider referenced in Intrado’s statement appears to be Lumen, a communications firm and 911 provider that until very recently was known as CenturyLink Inc. A look at the company’s status page indicates multiple Lumen systems experienced total or partial service disruptions on Monday, including its private and internal cloud networks and its control systems network.

Lumen’s status page indicates the company’s private and internal cloud and control system networks had outages or service disruptions on Monday.

In a statement provided to KrebsOnSecurity, Lumen blamed the issue on Intrado.

“At approximately 4:30 p.m. MT, some Lumen customers were affected by a vendor partner event that impacted 911 services in AZ, CO, NC, ND, MN, SD, and UT,” the statement reads. “Service was restored in less than an hour and all 911 traffic is routing properly at this time. The vendor partner is in the process of investigating the event.”

It may be no accident that both of these companies are now operating under new names, as this would hardly be the first time a problem between the two of them has disrupted 911 access for a large number of Americans.

In 2019, Intrado/West and CenturyLink agreed to pay $575,000 to settle an investigation by the Federal Communications Commission (FCC) into an Aug. 2018 outage that lasted 65 minutes. The FCC found that incident was the result of a West Safety technician bungling a configuration change to the company’s 911 routing network.

On April 6, 2014, some 11 million people across the United States were disconnected from 911 services for eight hours thanks to an “entirely preventable” software error tied to Intrado’s systems. The incident affected 81 call dispatch centers, rendering emergency services inoperable in all of Washington and parts of North Carolina, South Carolina, Pennsylvania, California, Minnesota and Florida.

According to a 2014 Washington Post story about a subsequent investigation and report released by the FCC, that issue involved a problem with the way Intrado’s automated system assigns a unique identifying code to each incoming call before passing it on to the appropriate “public safety answering point,” or PSAP.

“On April 9, the software responsible for assigning the codes maxed out at a pre-set limit,” The Post explained. “The counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure.”
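The failure mode the Post describes can be sketched in a few lines (details assumed; the real system was far more complex than this toy):

```javascript
// Toy sketch: a call-routing counter with a hard pre-set cap.
class CallRouter {
  constructor(limit) {
    this.limit = limit;
    this.count = 0;
  }
  assignCode() {
    if (this.count >= this.limit) {
      return null; // counter "stops counting": new calls get no code
    }
    return ++this.count; // unique identifying code for this call
  }
}

const router = new CallRouter(3); // stand-in for the real 40-million cap
console.log(router.assignCode()); // 1
console.log(router.assignCode()); // 2
console.log(router.assignCode()); // 3
console.log(router.assignCode()); // null -- the bottleneck begins
```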

Compounding the length of the 2014 outage, the FCC found, was that the Intrado server responsible for categorizing and keeping track of service interruptions classified them as “low level” incidents that were never flagged for manual review by human beings.

The FCC ultimately fined Intrado and CenturyLink $17.4 million for the multi-state 2014 outage. An FCC spokesperson declined to comment on Monday’s outage, but said the agency was investigating the incident.

Kevin RuddThe Market: China’s fears about America


Few Western observers know China better than The Honorable Kevin Rudd. As a young diplomat, the Australian, who speaks fluent Mandarin, was stationed in Beijing in the 1980s. As Australia’s Prime Minister and then Foreign Minister from 2007 to 2012, he led his country through the delicate tension between its most important alliance partner (the USA) and its largest trading partner (China).

Today Mr. Rudd is President of the Asia Society Policy Institute in New York. In an in-depth conversation via Zoom, he explains why a fundamental competition has begun between the two great powers. He would not rule out a hot war: «We know from history that it is easy to start a conflict, but it is bloody hard to end it», he warns.

Mr. Rudd, the conflict between the U.S. and China has escalated significantly over the past three years. To what extent has that escalation been driven by the presence of two strongmen, i.e. Donald Trump and Xi Jinping?

The strategic competition between the two countries is the product of both structural and leadership factors. The structural factors are pretty plain, and that is the continuing change in the balance of power in military, economic, and technological terms. This has an impact on China’s perception of its ability to be more assertive in the region and the world against America. The second dynamic is Xi Jinping’s leadership style, which is more assertive and aggressive than any of his post-Mao predecessors, Deng Xiaoping, Jiang Zemin and Hu Jintao. The third factor is Donald Trump, who obsesses about particular parts of the economy, namely trade and to a lesser degree technology.

Would America’s position be different if someone else than Trump was President?

No. The structural factors about the changing balance of power, as well as Xi Jinping’s leadership style, have caused China to rub up against American interests and values very sharply. Indeed, China is rubbing up against the interests and values of most other Western countries and some Asian democracies as well. Had Hillary Clinton won in 2016, her response would have been very robust. Trump has for the most part been superficially robust, principally on trade and technology. He was only triggered into more comprehensive robustness by the Covid-19 crisis threatening his reelection. If the next President of the U.S. is a Democrat, my judgement would be that the new Administration will be equally but more systematically hard-line in their reaction to China.

Has a new Cold War started?

I don’t like to embrace the language of a Cold War 2.0, because we should not forget that the Cold War of the 20th century had three big defining characteristics: One, the Soviets and the Americans threatened each other with nuclear Armageddon for forty years; two, they fought more than twenty proxy wars around the world; and three, they had zero economic engagement with each other. The current conflict between the U.S. and China on the other hand is characterised by two things: One, an economic decoupling in areas such as trade, supply chains, foreign direct investment, capital markets, technology, and talent. At the same time, it is also an increasingly sharp ideological war on values. The Chinese authoritarian capitalist model has asserted itself beyond China and challenges America.

How do you see that economic decoupling playing out?

The three formal instruments of power in the U.S. to enforce decoupling are entity listing, the new export control regime, and thirdly, the new powers given to the Committee on Foreign Investment in the United States, CFIUS. Those are powerful instruments which potentially affect third countries as well, through sanctions imposed under the entity list. You can take the example of semiconductors, where the recent changes to the entity list virtually limit the export of semiconductors to a defined list of Chinese enterprises from anywhere in the world, as long as they are based on American intellectual property.

These measures have cut off Chinese companies like Huawei or SMIC from acquiring high-end semiconductor technology anywhere in the world. The reaction in Beijing has been muted so far, with no direct retaliation. Why?

In China there is a division of opinion on the question of how to respond. The hawks have an «eye for an eye» posture, that’s driven both by a perception of strategy, but also with an eye on domestic sentiment. The America doves within the leadership – and they do exist – argue a different proposition. They think China is not yet ready for a complete decoupling. If it’s going to happen, they at least try to slow it down. Plus, they want to keep their powder dry until they see the outcome of the election and what the next Administration will do. That’s the reason why we have seen only muted responses so far.

Isn’t it the case that both sides would lose if they drive decoupling too far? And given that, could it be that there won’t be any further decoupling?

We are past that point. Whoever wins the election, America is resolved to decouple in a number of defined areas. First and foremost in those global supply chains where the products are too important for the U.S. to depend on Chinese supply. Think medical supplies or pharmaceuticals. The second area is in defined critical technologies. The Splinternet is not just a term, it’s becoming a reality. Thirdly, you will see a partial decoupling on the global supply of semiconductors to China. Not just those relevant to 5G and Artificial Intelligence, but semiconductors in general. The centrality of microchips to computing power for all purposes, and the spectrum of application in the civilian and military economy is huge. Fourth, I think foreign direct investment in both directions will shrink to zero. The fifth area of decoupling is happening in talent markets. The hostility towards Chinese students in the U.S. is reaching ridiculous proportions.

Do you see a world divided into two technology spheres, one with American standards and one with Chinese standards?

This is the logical consequence. Assume you have Huawei 5G systems rolled out across the 75 countries that take part in the Belt and Road Initiative, then what follows from that is a series of industry standards that become accepted and compatible within those countries, as opposed to those that rely on American systems. But then another set of questions arises: Let’s say China is effectively banned from purchasing semiconductors based on American technology and is dependent on domestic supply. Chinese semiconductors are slower than their American counterparts, and likely to remain so for the decade ahead. Do the BRI countries accept a slower microprocessor product standard for being part of the Chinese technological ecosystem? I don’t know the answer to that, but I think your prognosis of two technology spheres is correct.

China throws huge amounts of money into the project of building up its semiconductor capabilities. Are they still so far behind?

Despite their efforts to buy, borrow or steal intellectual property, they constantly remain three to seven years behind the U.S., Taiwan and South Korea, i.e. behind the likes of Intel, TSMC and Samsung. It’s obviously hard to reverse engineer semiconductors, as opposed to a Tupolev which, as the Soviets found out, can be reverse engineered over a weekend.

Wouldn’t America hurt itself too much if it cut off China from its semiconductor industry altogether?

There is an argument that 50% of the profits of the US semiconductor industry come from their clients in China. That money funds their R&D in order to keep three to seven years ahead of China. The U.S. Department of Defense knows that, which is why the Pentagon doesn’t necessarily side with the anti-China hawks on this issue. So the debate between the US semiconductor industry and the perceived national security interest has yet to be resolved. It has been resolved in terms of AI chips, and Huawei is the first big casualty of that. But for semiconductors in general the question is not solved yet.

Will countries in Europe or Southeast Asia be forced to decide which technology sphere they want to belong to?

Until the 5G revolution, they have managed to straddle both worlds. But now China has banked on the strategy of being the technology leader in certain categories, and the one they are in at the moment is in 5G technology and systems. If you look at the next five years, and if you look at the success of China in the other technology categories in its Made in China 2025 Strategy, then it becomes clear that we increasingly are going to end up in a binary technology world. Policy makers in various nations will have to answer questions around the relative sophistication of the technology, industry standards, concerns on privacy and data governance, and of course a very big question: What are our points of exposure to the U.S. or China? What will we lose in our China relationship by joining the American sphere in certain fields, and vice versa? Those variables will impact the decision making processes everywhere from Bern to Berlin to Bangkok.

But in the end, they will have to choose?

Yes. India for example has done it in the field of 5G. For India, that was a big call, given the size of its market and China’s desire to bring India slowly but surely towards its side of the Splinternet.

The third field of conflict after trade and technology lies in financial markets. We know that Washington can weaponise the Dollar if it wants to. So far, this front has been rather quiet. What are your expectations?

Two measures have been taken so far by the Trump Administration. One, the direction to U.S. government pension funds not to invest in Chinese listed companies; and two, the direction to the New York Stock Exchange not to sustain the listing of Chinese firms unless they conform to global accounting standards. I regard these as two simple shots across the bow.

With more to follow?

Like on most other things, including technology, the U.S. is divided in terms of where its interests lie. Just like Silicon Valley, Wall Street has a big interest in not having overly harsh restrictions on financial markets. Just look at the volume of business. Financial market collaboration between the Chinese and American financial systems in terms of investment flows for Treasuries, bonds and equities is a $5 trillion per year business. This is not small. I presume the reason we have only seen two rather small warning shots so far is that the Administration is deeply sensitive to Wall Street interests, led by Secretary of the Treasury Steven Mnuchin. Make no mistake: Given the serious Dollars at stake in financial markets, an escalation there will make the trade war look like child’s play.

Which way will it resolve?

The risk I see is the Chinese cracking down further in Hong Kong. If there is an eruption of protests resulting in violence, we should not be surprised by the possibility of Washington deciding to de-link the U.S. financial system from the Hong Kong Dollar and the Hong Kong financial market. That would be a huge step.

How probable is it that Washington would choose to weaponise the Dollar?

We don’t know. The Democrats in my judgement do not have a policy on that at present. Perhaps the best way to respond to your question is to say this: There are three things that China still fears about America. The US military, its semiconductor industry, and the Dollar. If you are attentive to the internalities of the Chinese national economic self-sufficiency debate at present, it often is expressed in terms of «Let’s not allow what is happening in technology markets to happen to us in financial markets». But if the U.S. goes into hardline mode against China for general strategy purposes, then the only thing that would deter Washington is the amount of self-harm it would inflict on Wall Street if it is forced to decouple from the Chinese market. If that were to happen, it would place Frankfurt, Zurich, Paris or the City of London in a pretty interesting position.

Would you say that there is a form of mutually assured destruction, MAD, in financial markets, which would prevent the U.S. from going into full hardline mode?

If the Americans wanted to send a huge warning shot to the Chinese, they are probably more disposed towards using sectoral measures, like the one I outlined for Hong Kong, and not comprehensive measures. But never forget: American political elites, Republicans and Democrats, have concluded that Xi Jinping’s China is not a status quo power, but that it wishes to replace the U.S. in its position of global leadership. Therefore, the inherent rationality or irrationality of individual measures is no longer necessarily self-evident against the general strategic question. The voices in America to prevent a financial decoupling from China are strong at present, but that does not necessarily mean they will prevail.

China’s strategy, meanwhile, is to welcome U.S. banks with open arms. Is it working?

The general strategy of China is that the more economic enmeshment occurs – not just with the U.S., but also with Europe, Japan and the like – the less likely countries are to take a hard-line policy against Beijing. China is a master at using its economic leverage to secure foreign policy and national security policy ends. They know this tool very well. The more friends you have, be it JPMorgan, Morgan Stanley or the big technology firms of Silicon Valley, the more it complicates the decision making process in the West. China knows that. I’m sure you’ve heard it a thousand times from Swiss companies as well: How can we grow a global business and ignore the Chinese market? Every company in the world is asking that question.

You wrote an article in Foreign Affairs in August, warning of a potential military conflict triggered by events in the South China Sea or Taiwan. Do you really see the danger of a hot war?

I don’t mean to be alarmist, not at all. But I was talking to too many people, both in Washington and Beijing, who were engaged in scenario planning to believe that this was any longer just theoretical. My simple thesis is this: These things can happen pretty easily once you have whipped up nationalist narratives on both sides and then have a major incident that goes wrong. A conflict is easy to start, but history tells us they are bloody hard to stop.

Of course the main argument against that is that there is too much at stake on both sides, which will prevent an escalation into a hot war.

You see, within that argument lies the perceived triumph of European rationality over East Asian reality. All that European rationality worked really well in 1914, when nobody thought that war was inevitable. The title of my article, Beware the Guns of August, referred to the period between the assassination in Sarajevo at the end of June and the failure of diplomacy in July, when miscommunication, poor signalling and the dynamics of mobilisation led to a situation that neither the Kaiser nor the Czar could stop. Nationalism is as poisonous today as it was in Europe for centuries. It’s just that you’ve all killed each other twice before you found out it was a bad idea. Remember, in East Asia we have the rolling problem of our own version of Alsace-Lorraine: it’s called Taiwan.

Influential voices in Washington say that the time of ambiguity is over. The U.S. should make its support for Taiwan explicit. Do you agree?

I don’t. If you change your declaratory policy on Taiwan, then there is a real danger that you accidentally cross a red line in Chinese official perception, and you bring on the very crisis you are seeking to avoid. It’s far better to simply have an operational strategy which aims to maximally enhance Taiwan’s ability to deter a Chinese attack.

Over the past years, the Chinese Communist Party has morphed into the Party of Xi. How do you see the internal dynamics within the CCP playing out over the coming years?

Xi Jinping’s position as Paramount Leader makes him objectively the most powerful Chinese leader since Mao. During the days of Deng Xiaoping, there were counterweighting voices to Deng, represented at the most senior levels, and there was a debate of economic and strategic policy between them. The dynamics of collective leadership applied then, they applied under Jiang Zemin, they certainly applied under Hu Jintao. They now no longer apply.

What will that mean for the future?

In the seven years he’s been in power so far, China has moved to the left on domestic politics, giving a greater role to the Party. In economic policy, we’ve seen it giving less headroom to the private sector. China has become more nationalist, and more internationally assertive as a consequence. There are, however, opposing voices among the top leadership, and the open question is whether these voices can coalesce in the lead-up to the 20th Party Congress in 2022, which will decide on whether Xi Jinping’s term is extended. If it is extended, you can say he then becomes leader for life. That will be a seminal decision for the Party.

What’s your prediction?

For a range of internal political reasons, which have to do with power politics, plus disagreements on economic policy and some disagreements on foreign policy, the internal political debate in China will become sharper than we have seen so far. If I was a betting man, at this stage, I would say it is likely that Xi will prevail.

«It's obviously hard to reverse engineer semiconductors.»

The post The Market: China’s fears about America appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Taking Your Chances

A few months ago, Sean had some tasks around building a front-end for some dashboards. Someone on the team picked a UI library for managing their widgets. It had lovely features like handling flexible grid layouts, collapsing/expanding components, making components full screen and even drag-and-drop widget rearrangement.

It was great, until one day it wasn't. As more and more people started using the front end, they kept getting more and more reports about broken dashboards, misrendering, duplicated widgets, and all sorts of other difficult-to-explain bugs. These were bugs which Sean and the other developers had a hard time replicating, and even on the rare occasions that they did replicate one, they couldn't do it twice in a row.

Stumped, Sean took a peek at the code. Each on-screen widget was assigned a unique ID. Except at those times when the screen broke, the IDs weren't unique. So clearly something went wrong with the ID generation, which seemed unlikely. All the IDs were random junk, like 7902699be4, so there shouldn't be a collision, right?

generateID() {
    return sha256(Math.floor(Math.random() * 100)).substring(1, 10);
}

Good idea: feeding random values into a hash function to generate unique IDs. Bad idea: constraining your range of random values to integers between 0 and 99.

SHA256 will take inputs like 0, and output nice long, complex strings like "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9". And a mildly different input will generate a wildly different output. This would be good for IDs, if you had a reasonable expectation that the values you're seeding are themselves unique. For this use case, Math.random() is probably good enough. Even after trimming to the first ten characters of the hash, I only hit two collisions for every million IDs in some quick tests. But Math.floor(Math.random() * 100) is going to give you a collision a lot more frequently. Even if you have a small number of widgets on the screen, a collision is probable. Just rare enough to be hard to replicate, but common enough to be a serious problem.
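The arithmetic here is just the birthday problem: with only 100 possible seeds, collisions arrive fast. A quick sketch, independent of the library, computes the chance that at least two of n widgets draw the same seed, and therefore the same ID:

```javascript
// Chance that at least two of n IDs collide when each is drawn
// uniformly from `space` possible values (the birthday problem).
function collisionProbability(n, space) {
  let pAllUnique = 1;
  for (let i = 0; i < n; i++) {
    pAllUnique *= (space - i) / space;
  }
  return 1 - pAllUnique;
}

console.log(collisionProbability(6, 100).toFixed(2));  // 0.14
console.log(collisionProbability(12, 100).toFixed(2)); // 0.50
```

Half a dozen widgets break a dashboard roughly one time in seven; a dozen make it a coin flip. That is exactly the "rare enough to be hard to replicate, common enough to be a serious problem" zone.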

Sean did not share what they did with this library. Based on my experience, it was probably "nothing, we've already adopted it" and someone ginned up an ugly hack that patches the library at runtime to replace the method. Despite the library being open source, nobody in the organization wants to maintain a fork, and legal needs to get involved if you want to contribute back upstream, so just runtime patch the object to replace the method with one that works.

At least, that's a thing I've had to do in the past. No, I'm not bitter.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 16)

Here’s part sixteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Cryptogram Negotiating with Ransomware Gangs

Really interesting conversation with someone who negotiates with ransomware gangs:

For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others), and even if payment is arguably unlawful, it seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand seems less of a legal, and more of a practical, determination, almost like a cost-benefit analysis.

The arguments for rendering a ransomware payment include:

  • Payment is the least costly option;
  • Payment is in the best interest of stakeholders (e.g. a hospital patient in desperate need of an immediate operation whose records are locked up);
  • Payment can avoid being fined for losing important data;
  • Payment means not losing highly confidential information; and
  • Payment may mean not going public with the data breach.

The arguments against rendering a ransomware payment include:

  • Payment does not guarantee that the right encryption keys with the proper decryption algorithms will be provided;
  • Payment further funds additional criminal pursuits of the attacker, enabling a cycle of ransomware crime;
  • Payment can do damage to a corporate brand;
  • Payment may not stop the ransomware attacker from returning;
  • If victims stopped making ransomware payments, the ransomware revenue stream would stop and ransomware attackers would have to move on to perpetrating another scheme; and
  • Using Bitcoin to pay a ransomware attacker can put organizations at risk. Most victims must buy Bitcoin on entirely unregulated and free-wheeling exchanges that can also be hacked, leaving buyers’ bank account information stored on these exchanges vulnerable.

When confronted with a ransomware attack, the options all seem bleak. Pay the hackers, and the victim may not only prompt future attacks, but there is also no guarantee that the hackers will restore a victim’s dataset. Ignore the hackers, and the victim may incur significant financial damage or even find themselves out of business. The only guarantees during a ransomware attack are the fear, uncertainty and dread inevitably experienced by the victim.

Cryptogram Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.


In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker, and possibly other appliances made by Smarter, to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

Kevin RuddBBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge

BBC World


Mike Embley
Let’s speak to a man who has a lot of experience with China: Kevin Rudd, former Prime Minister of Australia, now president of the Asia Society Policy Institute. Mr. Rudd, that’s a headline-grabbing thing to say at the United Nations General Assembly, isn’t it? Do you think, in a very directed economy with a population that’s given very little individual choice, it is possible? And if so, how?

Kevin Rudd
I think it is possible. And the reason I say so is because China does have a state planning system. And on top of that, China has concluded, I think, that it’s in its own national interest to bring about a radical reduction in carbon over time. So the two big announcements coming out of Xi Jinping at the UN General Assembly have been, A, the one you’ve just referred to, which is to achieve carbon neutrality before 2060; and also for China to reach what’s called peak greenhouse gas emissions before 2030. The international community has been pressuring them to bring those dates forward as much as possible, we hope to close to 2050 and close to 2025 respectively, but we’ll have to wait and see until China produces its 14th five year plan.

Mike Embley
I guess we can probably assume that President Trump wasn’t impressed, he’s made his position pretty clear on these matters. How do you think other countries will respond? Or will they just say, well, that’s China, China can do that we really can’t?

Kevin Rudd
Well, I think you’re right to characterize President Trump’s reaction as dismissive, because ultimately he does not seem to be persuaded at all by the climate change science. And the United States under his presidency has been completely missing in action in terms of global work on climate change mitigation activity. But look at the other big player on global climate change action, the European Union: working as they did through their virtual discussions with the Chinese leadership last week, they have been directly encouraging the Chinese to take these sorts of actions. I think the Europeans will be pleased by what they have heard. But there’s a caveat to this as well. One of the other things the Europeans asked is for China not just to make concrete its commitments in terms of peak emissions and carbon neutrality, but also not to offshore China’s responsibilities by building a lot of carbon based or fossil fuel based power generation plants in the Belt and Road Initiative countries beyond China’s borders. That’s where we have to see further action by China as well.

Mike Embley
Just briefly, if you can, you know, there will be people yelling at the screen, given how fast climate change is becoming a climate emergency, that 2050-2060 is nowhere near soon enough that we have to inflict really massive changes on ourselves.

Kevin Rudd
Well, the bottom line is that if you look at the science, it requires us to be carbon neutral by 2050 as a planet. Given that China is the world’s largest emitter by a country mile, I’d say to those throwing shoes at the screen: getting China to that position, given where they were frankly 10 years ago when I first began negotiating with the Chinese leadership on climate change matters, is a big step forward. And secondly, what I find most encouraging, though, is that China’s national political interests, I think, have been engaged. They don’t want to suffocate their own national economic future any more than they want to suffocate the world’s.

The post BBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge appeared first on Kevin Rudd.

Kevin RuddCNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites


Michael Holmes
Former Australian Prime Minister and president of the Asia Society Policy Institute, Kevin Rudd joins me now. Prime Minister, I remember this well, you know, it was back in 2008 when you did something historic: you presented an apology to Indigenous Australians for past wrongs. 12 years later, this is happening. What has changed in Australia in relation to how Aboriginals are treated?

Kevin Rudd
Well, the Apology in 2008, helped turn a page in the recognition by white Australia, about the level of mistreatment of black Australia over the previous couple of hundred years. And some progress has been made in closing the gap in some areas of health and education between indigenous and non Indigenous Australians. But when you see actions, like we’ve just seen with Rio Tinto, in this willful destruction of a 46,000 year old archaeological site, which even their own internal archaeological advisors said, was a major site of archaeological significance in Australia, you scratch your head, and can only conclude that Rio Tinto really do see themselves as above and beyond government, and above and beyond the public interests shared by all Australians.

Michael Holmes
And we heard in Angus’ report, you know, that some of these sites the notion of destroying some of these sites for coal mine or whatever, you know, that it’s like destroying a war cemetery to indigenous people? Why are indigenous voices not being heard the way they should be? And, you know, let’s be honest coal, that’s not going to be a thing in 20-30 years, and meanwhile, you destroy something that’s been there for 30,000 years. Why aren’t they being heard?

Kevin Rudd
Well, you’re right to pose the question, because when I was Prime Minister, I began by acknowledging the indigenous people of this country as the oldest continuing culture on Earth. That is not a small statement. It’s a very large statement, but it’s also an historical and pre-historical fact. And therefore, when we look at the architectural and the archaeological legacy of indigenous settlement in this country, we have a unique global responsibility to act responsibly. I can only put down this buccaneer, cavalier attitude to the corporate arrogance of Rio Tinto and one or two of the other major mining companies (I have a very skeptical view of BHP Billiton in this respect, and their attitude to Indigenous Australians), but also to a very comfortable, far too comfortable relationship with the conservative federal government of Australia, which replaced my government. Which, as you’ve just demonstrated through one of the decisions made by their environment minister, is quite happy to allow indigenous interests and archaeological significance to take a back place.

Michael Holmes
Yeah, as I said, I remember 2008 well, when that apology was made. It was such a significant moment for Australia and Australians. I mean, I haven’t lived in Australia for 25 years, but have attitudes changed in recent years in Australia, or changed enough, when it comes to these sorts of national treasures and appreciation of them? Has white Australia altered its view of Aboriginals and Aboriginal heritage enough?

Kevin Rudd
I think the National Apology and the broader processes of reconciliation in this country, which my government took forward from 2008 onwards, have a continuing effect on the whole country, including white Australians. And what is my evidence of that? The evidence is that when Rio Tinto, in their monumental arrogance, thought they could just slide through by blowing these 46,000 year old, archaeologically significant ancient caves to bits, even, let me call it, white Australia turned around and said, this is just a bridge too far. Even conservative members of parliament in a relevant parliamentary inquiry scratched their heads and said this is just beyond the pale. And so if you want evidence of the fact that underlying community attitudes have changed, and political attitudes have changed, that’s it. But we still have a buccaneer approach on the part of Rio Tinto, who I think not just in Australia but globally have demonstrated a level of arrogance to elected democratic governments around the world, whereby Rio Tinto thinks it’s up here, and the elected governments and the norms of the societies in which they operate are down here. Rio Tinto has a major challenge, as do other big miners like BHP Billiton, which have to understand that they are dealing with the people’s resource. And they are working within a framework of laws which places an absolute priority on the centrality of the indigenous heritage of all these countries, including Australia.


The post CNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites appeared first on Kevin Rudd.

LongNowCharting Earth’s (Many) Mass Extinctions

How many mass extinctions has the Earth had, really? Most people talk today as if it’s five, but where one draws the line determines everything, and some say over twenty.

However many it might be, new mass extinctions seem to reveal themselves with shocking frequency. Just last year researchers argued for another giant die-off just preceding the Earth’s worst, the brutal end-Permian.

Now, one more has come into focus as stratigraphy improves. With that, four of the most awful things that life has ever suffered all came down (in many cases, literally, as volcanic ash) within 60 million years. Not a great span of time with which to spin the wheel, in case your time machine is imprecise…

Worse Than FailureThe Watchdog Hydra

Ammar A uses Node to consume an HTTP API. The API uses cookies to track session information, and ensures those cookies expire, so clients need to periodically re-authenticate. How frequently?

That's an excellent question, and one that neither Ammar nor the docs for this API can answer. The documentation provides all the clarity and consistency of a religious document, which is to say you need to take it on faith that these seemingly random expirations happen for a good reason.

Ammar, not feeling particularly faithful, wrote a little "watchdog" function. Once you log in, every thirty seconds it tries to fetch a URL, hopefully keeping the session alive, or, if it times out, starts a new session.

That's what it was supposed to do, but Ammar made a mistake.

function login(cb) {{url: '/login', form: {username: config.username, password: config.password}}, function(err, response) {
        if (err) return cb(err)
        if (response.statusCode != 200) return cb(response.statusCode);
        console.log("[+] Login succeeded");
        setInterval(watchdog, 30000);
        return cb();
    })
}

function watchdog() {
    // Random request to keep session alive
    request.get({url: '/getObj', form: {id: 1}}, (err, response) => {
        if (!err && response.statusCode == 200) return console.log("[+] Watchdog ping succeeded");
        console.error("[-] Watchdog encountered error, session may be dead");
        login((err) => {
            if (err) console.error("[-] Failed restarting session:", err);
        })
    })
}

login sends the credentials, and if we get a 200 status back, we setInterval to schedule a call to the watchdog every 30,000 milliseconds.

In the watchdog, we fetch a URL, and if it fails, we call login. Which attempts to login, and if it succeeds, schedules the watchdog every 30,000 milliseconds.

Late one night, Ammar got a call from the API vendor's security team. "You are attacking our servers, and not only have you already cut off access to most of our customers, you've caused database corruption! There will be consequences for this!"

Ammar checked, and sure enough, his code was sending hundreds of thousands of requests per second. It didn't take him long to figure out why: requests from the watchdog were failing with a 500 error, so it called the login method. The login method had been succeeding, so another watchdog got scheduled. Thirty seconds later, that failed, as did all the previously scheduled watchdogs, which all called login again. Which, on success, scheduled a fresh round of watchdogs. Every thirty seconds, the number of scheduled calls doubled. Before long, Ammar's code was DoSing the API.
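The growth is easy to model: every live interval that fails triggers a login, and every successful login schedules one more interval, so the count of concurrent timers doubles every thirty seconds. A quick sketch of that arithmetic:

```javascript
// Each 30-second tick, every live watchdog interval fails, calls login(),
// and the successful login schedules one MORE interval. The number of
// concurrent intervals therefore doubles each period: 1, 2, 4, 8, ...
function watchdogsAfter(periods) {
  let intervals = 1;
  for (let i = 0; i < periods; i++) intervals *= 2;
  return intervals;
}

console.log(watchdogsAfter(10)); // 1024 timers after five minutes
console.log(watchdogsAfter(20)); // 1048576 timers after ten minutes
```

The usual defense is to hold on to the handle returned by setInterval and clearInterval it (or never schedule a second one) before re-logging in, so a recovered session never adds another timer.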

In the aftermath, it turned out that Ammar hadn't caused any database corruption, to the contrary, it was an error in the database which caused all the watchdog calls to fail. The vendor corrupted their own database, without Ammar's help. Coupled with Ammar's careless setInterval management, that error became even more punishing.

Which raises the question: why wasn't the vendor better prepared for this? Ammar's code was bad, sure, a lurking timebomb, just waiting to go off. But any customer could have produced something like that. A hostile entity could easily have done something like that. And somehow, this vendor had nothing in place to deal with what appeared to be a denial-of-service attack, short of calling the customer responsible?

I don't know what was going on with the vendor's operations team, but I suspect that the real WTF is somewhere in there.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


LongNowThe Language Keepers Podcast

A six-part podcast from Emergence Magazine explores the plight of four Indigenous languages spoken in California—Tolowa Dee-ni’, Karuk, Wukchumni, and Kawaiisu—among the most vulnerable in the world:

“Two centuries ago, as many as ninety languages and three hundred dialects were spoken in California; today, only half of these languages remain. In Episode One, we are introduced to the language revitalization efforts of these four Indigenous communities. Through their experiences, we examine the colonizing histories that brought Indigenous languages to the brink of disappearance and the struggle for Indigenous cultural survival in America today.”


MEBandwidth for Video Conferencing

For the Linux Users of Victoria (LUV) I’ve run video conferences on Jitsi and BBB (see my previous post about BBB vs Jitsi [1]). One issue with video conferences is the bandwidth requirements.

The place I’m hosting my video conference server has a NBN link with allegedly 40Mb/s transmission speed and 100Mb/s reception speed. My tests show that it can transmit at about 37Mb/s and receive at speeds significantly higher than that but also quite a bit lower than 100Mb/s (around 60 or 70Mb/s). For a video conference server you have a small number of sources of video and audio and a larger number of targets as usually most people will have their microphones muted and video cameras turned off. This means that the transmission speed is the bottleneck. In every test the reception speed was well below half the transmission speed, so the tests confirmed my expectation that transmission was the only bottleneck, but the reception speed was higher than I had expected.

When we tested bandwidth use the maximum upload speed we saw was about 4MB/s (32Mb/s) with 8+ video cameras and maybe 20 people seeing some of the video (with a bit of lag). We used 3.5MB/s (28Mb/s) when we only had 6 cameras which seemed to be the maximum for good performance.

In another test run we had 4 people all sending video and the transmission speed was about 260KB/s.

I don’t know how BBB manages the small versions of video streams. It might reduce the bandwidth when the display window is smaller.

I don’t know the resolutions of the cameras. When you start sending video in BBB you are prompted for the “quality” with “medium” being default. I don’t know how different camera hardware and different choices about “quality” affect bandwidth.

These tests showed that, with the cameras we had available and a 100/40 NBN link (the fastest Internet link in Australia that’s not really expensive), a small group of people can all send video, or a medium sized group of people can watch video streams from a small group.

For meetings of the typical size of LUV meetings we won’t have a bandwidth problem.

There is one common case that I haven’t yet tested: a single video stream that many people are watching. If 4 people all sending video uses about 260KB/s of transmission bandwidth, then 1 person could probably send video to 4 viewers for about 65KB/s. Doing some simple calculations on those numbers implies that we could have 1 person sending video to 240 people without running out of bandwidth. I really doubt that would work, but further testing is needed.
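The extrapolation above can be written out as a quick back-of-the-envelope check. The figures are rough observations from these tests, not benchmarks, so treat the result as an upper bound at best:

```python
# Back-of-the-envelope extrapolation from the measured numbers above.
# These figures are rough test observations, not precise benchmarks.

measured_tx_kbps = 260            # KB/s of server transmission with 4 senders
senders = 4
per_group_kbps = measured_tx_kbps / senders  # ~65 KB/s: one stream to 4 viewers
per_viewer_kbps = per_group_kbps / senders   # ~16 KB/s per delivered stream copy

upload_budget_kbps = 4 * 1000     # ~4 MB/s: maximum upload seen in testing

max_viewers = int(upload_budget_kbps / per_viewer_kbps)
print(max_viewers)  # 246, roughly the "240 people" estimate above
```

As noted above, real-world behavior would almost certainly fall short of this linear extrapolation.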

Krebs on SecurityWho is Tech Investor John Bernard?

John Bernard, the subject of a story here last week about a self-proclaimed millionaire investor who has bilked countless tech startups, appears to be a pseudonym for John Clifton Davies, a U.K. man who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his wife on their honeymoon in India.

The Private Office of John Bernard, which advertises itself as a capital investment firm based in Switzerland, has for years been listed on multiple investment sites as the home of a millionaire who made his fortunes in the dot-com boom 20 years ago and who has oodles of cash to invest in tech startups.

But as last week’s story noted, Bernard’s investment company is a bit like a bad slot machine that never pays out. KrebsOnSecurity interviewed multiple investment brokers who all told the same story: After promising to invest millions after one or two phone calls and with little or no pushback, Bernard would insist that companies pay tens of thousands of dollars worth of due diligence fees up front.

However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Neither Mr. Bernard nor anyone from his various companies responded to multiple requests for comment over the past few weeks. What’s more, virtually all of the employee profiles tied to Bernard’s office have since last week removed those firms from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether.

Sometime on Thursday John Bernard’s main website — — replaced the content on its homepage with a note saying it was closing up shop.

“We are pleased to announce that we are currently closing The Private Office fund as we have reached our intended investment level and that we now plan to focus on helping those companies we have invested into to grow and succeed,” the message reads.

As noted in last week’s story, the beauty of a scam like the one multiple investment brokers said was being run by Mr. Bernard is that companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

Also, John Bernard’s office typically did not reach out to investment brokers directly. Rather, he had his firm included on a list of angel investors focused on technology companies, so those seeking investments usually came to him.

Finally, multiple sources interviewed for this story said Bernard’s office offered a finders fee for any investment leads that brokers brought his way. While such commissions are not unusual, the amount promised — five percent of the total investment in a given firm that signed an agreement — is extremely generous. However, none of the investment brokers who spoke to KrebsOnSecurity were able to collect those fees, because Bernard’s office never actually consummated any of the deals they referred to him.


After last week’s story ran, KrebsOnSecurity heard from a number of other investment brokers who had near identical experiences with Bernard. Several said they at one point spoke with him via phone or Zoom conference calls, and that he had a distinctive British accent.

When questioned about why his staff was virtually all based in Ukraine when his companies were supposedly in Switzerland, Bernard replied that his wife was Ukrainian and that they were living there to be closer to her family.

One investment broker who recently got into a deal with Bernard shared a screen shot from a recent Zoom call with him. That screen shot shows Bernard bears a striking resemblance to one John Clifton Davies, a 59-year-old from Milton Keynes, a large town in Buckinghamshire, England about 50 miles (80 km) northwest of London.

John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

In 2015, Mr. Davies was convicted of stealing more than GBP 750,000 from struggling companies looking to restructure their debt. For at least seven years, Davies ran multiple scam businesses that claimed to provide insolvency consulting to distressed companies, even though he was not licensed to do so.

“After gaining the firm’s trust, he took control of their assets and would later pocket the cash intended for creditors,” according to a U.K. news report from 2015. “After snatching the cash, Davies proceeded to spend the stolen money on a life of luxury, purchasing a new upmarket home fitted with a high-tech cinema system and new kitchen.”

Davies disappeared before he was convicted of fraud in 2015. Two years before that, Davies was released from prison after being held in custody for 16 months on suspicion of murdering his new bride in 2004 on their honeymoon in India.

Davies’ former wife Colette Davies, 39, died after falling 80 feet from a viewing point at a steep gorge in the Himachal Pradesh region of India. Mr. Davies was charged with murder and fraud after he attempted to collect GBP 132,000 in her life insurance payout, but British prosecutors ultimately conceded they did not have enough evidence to convict him.


While the photos above are similar, there are other clues that suggest the two identities may be the same person. A review of business records tied to Davies’ phony insolvency consulting businesses between 2007 and 2013 provides some additional pointers.

John Clifton Davies’ former listing at the official U.K. business registrar Companies House shows his company was registered at the address 26 Dean Forest Way, Broughton, Milton Keynes.

A search on that street address at turns up several interesting results, including a listing for registered to a John Davies at the email address

A Companies House official record for Seneca Equities puts it at John Davies’ old U.K. address at 26 Dean Forest Way and lists 46-year-old Iryna Davies as a director. “Iryna” is a uniquely Ukrainian spelling of the name Irene (the Russian equivalent is typically “Irina”).

A search on John Clifton Davies and Iryna turned up this 2013 story from The Daily Mirror which says Iryna is John C. Davies’ fourth wife, and that the two were married in 2010.

A review of the Swiss company registrar for The Inside Knowledge GmbH shows an Ihor Hubskyi was named as president of the company. This name is phonetically the same as Igor Gubskyi, a Ukrainian man who was listed in the U.K.’s Companies House records as one of five officers for Seneca Equities along with Iryna Davies.

KrebsOnSecurity sought comment from both the U.K. police district that prosecuted Davies’ case and the U.K.’s National Crime Agency (NCA). Neither wished to comment on the findings. “We can neither confirm nor deny the existence of an investigation or subjects of interest,” a spokesperson for the NCA said.

Worse Than FailureError'd: Where to go, Next?

"In this screenshot, 'Lyckades' means 'Succeeded' and the buttons say 'Try again' and 'Cancel'. There is no 'Next' button," wrote Martin W.


"I have been meaning to send a card, but I just wasn't planning on using PHP's eval() to send it," Andrew wrote.


Martyn R. writes, "I was trying to connect to PA VPN and it seems they think that downgrading my client software will help."


"What the heck, Doordash? I was expecting a little more variety from you guys...but, to be honest, I gotta wonder what 'null' flavored cheesecake is like," Joshua M. writes.


Nicolas wrote, "NaN exclusive posts? I'm already on the inside loop. After all, I love my grandma!"




Krebs on SecurityMicrosoft: Attackers Exploiting ‘ZeroLogon’ Windows Flaw

Microsoft warned on Wednesday that malicious hackers are exploiting a particularly dangerous flaw in Windows Server systems that could be used to give attackers the keys to the kingdom inside a vulnerable corporate network. Microsoft’s warning comes just days after the U.S. Department of Homeland Security issued an emergency directive instructing all federal agencies to patch the vulnerability by Sept. 21 at the latest.

DHS’s Cybersecurity and Infrastructure Security Agency (CISA) said in the directive that it expected imminent exploitation of the flaw — CVE-2020-1472, dubbed “ZeroLogon” — because exploit code that can be used to take advantage of it was circulating online.

Last night, Microsoft’s Security Intelligence unit tweeted that the company is “tracking threat actor activity using exploits for the CVE-2020-1472 Netlogon vulnerability.”

“We have observed attacks where public exploits have been incorporated into attacker playbooks,” Microsoft said. “We strongly recommend customers to immediately apply security updates.”

Microsoft released a patch for the vulnerability in August, but it is not uncommon for businesses to delay deploying updates for days or weeks while testing to ensure the fixes do not interfere with or disrupt specific applications and software.

CVE-2020-1472 earned Microsoft’s most-dire “critical” severity rating, meaning attackers can exploit it with little or no help from users. The flaw is present in most supported versions of Windows Server, from Server 2008 through Server 2019.

The vulnerability could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

Scott Caveza, research engineering manager at security firm Tenable, said several samples of malicious .NET executables with the filename ‘SharpZeroLogon.exe’ have been uploaded to VirusTotal, a service owned by Google that scans suspicious files against dozens of antivirus products.

“Given the flaw is easily exploitable and would allow an attacker to completely take over a Windows domain, it should come as no surprise that we’re seeing attacks in the wild,” Caveza said. “Administrators should prioritize patching this flaw as soon as possible. Based on the rapid speed of exploitation already, we anticipate this flaw will be a popular choice amongst attackers and integrated into malicious campaigns.”

Kevin RuddWall Street Journal: China Backslides on Economic Reform

Published in the Wall Street Journal on 24 September 2020


China is the only major world economy reporting any economic growth today. It went first into Covid-19 and was first out, grinding out 3.2% growth in the most recent quarter while the U.S. shrank 9.5% and other advanced economies endured double-digit declines. High-tech monitoring, comprehensive testing and aggressive top-down containment measures enabled China to get the virus under control while others struggled. The Middle Kingdom may even deliver a modest year-over-year economic expansion in 2020.

This rebound is real, but behind the short-term numbers the economic restart is dubious. China’s growth spurt isn’t the beginning of a robust recovery but an uneven bounce fueled by infrastructure construction. Second-quarter data showed the same imbalance other nations are wrestling with: Investment contributed 5 percentage points to growth, while consumption fell, subtracting 2.3 points.

Since 2017, the China Dashboard, a joint project of Rhodium Group and the Asia Society Policy Institute, has tracked economic policy in China closely for signs of progress. Despite repeated commitments from Chinese authorities to open up and address the country’s overreliance on debt, the China Dashboard has observed delayed attempts and even backtracking on reforms. The Covid-19 outbreak offered an opportunity for Beijing to shift course and deliver on market reforms. Signals from leaders this spring hinted at fixing defunct market mechanisms. But notably, the long list of reforms promised in May was almost the same as previous lists—such as separating capital management from business management at state firms and opening up to foreign investment while increasing the “quality” of outbound investment—which were adopted by the Third Plenum in 2013. In other words, promised recent reforms didn’t happen, and nothing in the new pronouncements explains why or how this time will be different.

An honest look at the forces behind China’s growth this year shows a doubling down on state-managed solutions, not real reform. State-owned entities, or SOEs, drove China’s investment-led recovery. In the first half of 2020, according to China’s National Bureau of Statistics, fixed-asset investment grew by 2.1% among SOEs and decreased by 7.3% in the private sector. Finished product inventory for domestic private firms rose sharply in the same period—a sign of sales difficulty—while SOE inventory decreased slightly, showing the uneven nature of China’s growth.

Perhaps the most significant demonstration of mistrust in markets is the “internal circulation” program first floated by President Xi Jinping in May. On the surface, this new initiative is supposed to expand domestic demand to complement, but not replace, external demand. But Beijing is trying to boost home consumption by making it a political priority. With household demand still shrinking, expect more subsidies to producers and other government interventions, rather than measures that empower buyers. Dictating to markets and decreeing that consumption will rise aren’t the hallmarks of an advanced economy.

The pandemic has forced every country to put short-term stability above future concerns, but no other country has looming burdens like China’s. In June the State Council ordered banks to “give up” 1.5 trillion yuan (around $220 billion) in profit, the only lever of control authorities have to reduce debt costs and support growth. In August China’s economic planning agency ordered six banks to set aside a quota to fund infrastructure construction with long-term loans at below-market rates. These temporary solutions threaten the financial stability of a system that is already overleveraged, pointing to more bank defaults and restructurings ahead.

China also faces mounting costs abroad. By pumping up production over the past six months, as domestic demand stagnated, Beijing has ballooned its trade surplus, fueling an international backlash against state-driven capitalism that goes far beyond Washington. The U.S. has pulled the plug on some channels for immigration, financial flows and technology engagement. Without naming China, a European Commission policy paper in June took aim at “subsidies granted by non-EU governments to companies in the EU.” Governments and industry groups in Germany, the Netherlands, France and Italy are also pushing China to change.

Misgivings about Beijing’s direction are evident even among foreign firms that have invested trillions of dollars in China in recent decades. A July 2020 UBS Evidence Lab survey of more than 1,000 chief financial officers across sectors in the U.S., China and North Asia found that 75% of respondents are considering moving some production out of China or have already done so. Nearly half the American executives with plans to leave said they would relocate more than 60% of their firms’ China-based production. Earlier this year Beijing floated economic reforms intended to forestall an exodus of foreign companies, but nothing has come of it.

For years, the world has watched and waited for China to become more like a free-market economy, thereby reducing American security concerns. At a time of profound stress world-wide, the multiple gauges of reform we have been monitoring through the China Dashboard point in the opposite direction. China’s economic norms are diverging from, rather than converging with, the West’s. Long-promised changes detailed at the beginning of the Xi era haven’t materialized.

Though Beijing talks about “market allocation” efficiency, it isn’t guided by what mainstream economists would call market principles. The Chinese economy is instead a system of state capitalism in which the arbiter is an uncontestable political authority. That may or may not work for China, but it isn’t what liberal democracies thought they would get when they invited China to take a leading role in the world economy.

The post Wall Street Journal: China Backslides on Economic Reform appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Generic Comment

To my mind, code comments are important to explain why the code does what it does, not so much what it does. Ideally, the what is clear enough from the code that you don’t have to explain it. Today, we have no code, but we have some comments.

Chris recently was reviewing some C# code from 2016, and found a little conversation in the comments, which may or may not explain whats or whys. Line numbers included for, ahem, context.

4550: //A bit funky, but entirely understandable: Something that is a C# generic on the storage side gets
4551: //represented on the client side as an array. Why? A C# generic is rather specific, i.e., Java
4552: //doesn't have, for example, a Generic List class. So we're messing with arrays. No biggie.

Now, honestly, I’m more confused than I probably would have been just from the code. Presumably as we’re sending things to a client, we’re going to serialize it to an intermediate representation, so like, sure, arrays. The comment probably tells me why, but it’s hardly a necessary explanation here. And what does Java have to do with anything? And also, Java absolutely does support generics, so even if the Java trivia were relevant, it’s not accurate.
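For what it’s worth, the behavior the comment seems to be describing is ordinary serialization: a typed generic collection on the server side reaches the client as a plain array, because the element type lives in the declaration, not in the payload. A minimal sketch of that idea in Python (hypothetical names, not the code under review):

```python
import json
from dataclasses import dataclass, field

@dataclass
class Order:
    # On the "storage side" this is a typed, generic collection.
    item_ids: list[int] = field(default_factory=list)

order = Order(item_ids=[101, 102, 103])

# On the wire, the client just sees a plain JSON array; the element-type
# information exists only in the server-side declaration, not the payload.
wire = json.dumps(order.__dict__)
print(wire)  # {"item_ids": [101, 102, 103]}
```

Which is to say: unremarkable behavior, and hardly worth a comment, let alone a flamewar.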

I’m not the only one who had some… questions. The comment continues:

4553: //
4554: //WTF does that have to do with anything? Who wrote this inane, stupid comment, 
4555: //but decided not to put comments on anything useful?

Not to sound judgmental, but if you’re having flamewars in your code comments, you may not be on a healthy, well-functioning team.

Then again, if this is someplace in the middle of your file, and you’re on line 4550, you probably have some other problems going on.



Krebs on SecurityGovt. Services Firm Tyler Technologies Hit in Apparent Ransomware Attack

Tyler Technologies, a Texas-based company that bills itself as the largest provider of software and technology services to the United States public sector, is battling a network intrusion that has disrupted its operations. The company declined to discuss the exact cause of the disruption, but their response so far is straight out of the playbook for responding to ransomware incidents.

Plano, Texas-based Tyler Technologies [NYSE:TYL] has some 5,300 employees and brought in revenues of more than $1 billion in 2019. It sells a broad range of services to state and local governments, including appraisal and tax software, integrated software for courts and justice agencies, enterprise financial software systems, public safety software, records/document management software solutions and transportation software solutions for schools.

Earlier today, the normal content on the company’s website was replaced with a notice saying the site was offline. In a statement provided to KrebsOnSecurity after the markets closed central time, Tyler Tech said early this morning the company became aware that an unauthorized intruder had gained access to its phone and information technology systems.

“Upon discovery and out of an abundance of caution, we shut down points of access to external systems and immediately began investigating and remediating the problem,” Tyler’s Chief Information Officer Matt Bieri said. “We have since engaged outside IT security and forensics experts to conduct a detailed review and help us securely restore affected equipment. We are implementing enhanced monitoring systems, and we have notified law enforcement.”

“At this time and based on the evidence available to us to-date, all indications are that the impact of this incident is limited to our internal network and phone systems,” their statement continues. “We currently have no reason to believe that any client data, client servers, or hosted systems were affected.”

While it may be comforting to hear that last bit, the reality is that it is still early in the company’s investigation. Also, ransomware has moved well past just holding a victim firm’s IT systems hostage in exchange for an extortion payment: These days, ransomware purveyors will offload as much personal and financial data that they can before unleashing their malware, and then often demand a second ransom payment in exchange for a promise to delete the stolen information or to refrain from publishing it online.

Tyler Technologies declined to say how the intrusion is affecting its customers. But several readers who work in IT roles at local government systems that rely on Tyler Tech said the outage had disrupted the ability of people to pay their water bills or court payments.

“Tyler has access to a lot of these servers in cities and counties for remote support, so it was very thoughtful of them to keep everyone in the dark and possibly exposed if the attackers made off with remote support credentials while waiting for the stock market to close,” said one reader who asked to remain anonymous.

Depending on how long it takes for Tyler to recover from this incident, it could have a broad impact on the ability of many states and localities to process payments for services or provide various government resources online.

Tyler Tech has pivoted on the threat of ransomware as a selling point for many of its services, using its presence on social media to promote ransomware survival guides and incident response checklists. With any luck, the company was following some of its own advice and will weather this storm quickly.

Update, Sept. 24, 6:00 p.m. ET: Tyler said in an updated statement on its website that a review of its logs, monitoring, traffic reports and cases related to utility and court payments revealed no outages with those systems. However, several sources interviewed for this story who work in tech roles at local governments which rely on Tyler Tech said they proactively severed their connections to Tyler Tech systems after learning about the intrusion. This is a fairly typical response for companies that outsource payment transactions to a third party when that third party ends up experiencing a ransomware attack, but the end result (citizens unable to make payments) is the same.

Update, 11:49 p.m. ET: Tyler is now encouraging all customers to change any passwords for any remote network access for Tyler staff, after receiving reports from some customers about suspicious logins. From a statement Tyler sent to customers:

“We apologize for the late-night communications, but we wanted to pass along important information as soon as possible. We recently learned that two clients have report suspicious logins to their systems using Tyler credentials. Although we are not aware of any malicious activity on client systems and we have not been able to investigate or determine the details regarding these logins, we wanted to let you know immediately so that you can take action to protect your systems.”

Cory DoctorowAnnouncing the Attack Surface tour

It’s been 12 years since I went on my first book tour and in the years since, I’ve met and spoken with tens of thousands of readers in hundreds of cities on five continents in support of more than a dozen books.

Now I’ve got another major book coming out: ATTACK SURFACE.

How do you tour a book during a pandemic? I think we’re still figuring that out. I’ll tell you one thing, I won’t be leaving Los Angeles this time around. Instead, my US publisher, Tor Books, has set up eight remote “Attack Surface Lectures.”

Each event has a different theme and different guest-hosts/co-discussants, chosen both for their expertise and their ability to discuss their subjects in ways that are fascinating and engaging.

ATTACK SURFACE is the third Little Brother book, a standalone book for adults.

It stars Masha, a young woman who is finally reckoning with the moral character of the work she’s done, developing surveillance tools to fight Iraqi insurgents and ex-Soviet democracy activists.

Masha has struggled with her work for years, compartmentalizing her qualms, rationalizing her way into worse situations.

She goes home to San Francisco and discovers her best friend, a BLM activist, is being targeted by the surveillance weapons Masha herself invented.

What follows is a Little Brother-style technothriller, full of rigorous description and extrapolation on cybersecurity, surveillance and resistance, that illuminates the tale of a tech worker grappling with their own life’s work.

Obviously, this covers a lot of ground, as is reflected in the eight nights of talks we’re announcing today:

I. Politics & Protest, Oct 13, with Eva Galperin and Ron Deibert, hosted by The Strand Bookstore

II. Cross-Medium SciFi, Oct 14, with Amber Benson and John Rogers, hosted by Brookline Booksmith

III. ​​Intersectionality: Race, Surveillance, and Tech and Its History, Oct 15, with Malkia Cyril and Meredith Whittaker, hosted by Booksmith

IV. SciFi Genre, Oct 16, with Sarah Gailey and Chuck Wendig, hosted by Fountain Books

V. Cyberpunk and Post-Cyberpunk, Oct 19, with Bruce Sterling and Christopher Brown, hosted by Anderson’s Bookshop

VI. Tech in SciFi, Oct 20, with Ken Liu and Annalee Newitz, hosted by Interabang

VII. Little Revolutions, Oct 21, with Tochi Onyebuchi and Bethany C Morrow, hosted by Skylight Books

VIII. OpSec & Personal Cyber-Security: How Can You Be Safe?, Oct 22, with Runa Sandvik and Window Snyder, hosted by Third Place Books

Some of the events come with either a hardcover and a signed bookplate, or, with some stores, actual signed books.

(those stores’ stock is being shipped to my house, and I’m signing after each event and mailing out from here)

(yes, really)

I’ve never done anything like this and I’d be lying if I said I wasn’t nervous about it. Book tours are crazy marathons – I did 35 cities in 3 countries in 45 days for Walkaway a couple years ago – and this is an entirely different kind of thing.

But I’m also (very) excited. Revisiting Little Brother after seven years is quite an experience. ATTACK SURFACE – a book about uprisings, police-state tactics, and the digital tools as oppressors and liberators – is (unfortunately) very timely.

Having an excuse to talk about this book and its themes with you all – and with so many distinguished and brilliant guests – is going to keep me sane next month. I really hope you can make it.

Cory DoctorowPoesy the Monster Slayer

POESY THE MONSTER SLAYER is my first-ever picture book, illustrated by Matt Rockefeller and published by Firstsecond. It’s an epic tale of toy-hacking, bedtime-avoidance and monster-slaying.

The book’s publication was attended by a superb and glowing review from Kirkus:

“The lights are out, and the battle begins. She knows the monsters are coming, and she has a plan. First, the werewolf appears. No problem. Poesy knows the tools to get rid of him: silver (her tiara) and light. She wins, of course, but the ruckus causes a ‘cross’ Daddy to appear at her door, telling her to stop playing with toys and go back to bed. She dutifully lets him tuck her back in. But on the next page, her eyes are open. ‘Daddy was scared of monsters. Let DADDY stay in bed.’ Poesy keeps fighting the monsters that keep appearing out of the shadows, fearlessly and with all the right tools, to the growing consternation of her parents, a Black-appearing woman and a White-appearing man, who are oblivious to the monsters and clearly fed up and exhausted but used to this routine. Poesy is irresistible with her brave, two-sided personality. Her foes don’t stand a chance (and neither do her parents). Rockefeller’s gently colored cartoon art enhances her bravery with creepily drawn night creatures and lively, expressive faces.

“This nighttime mischief is not for the faint of heart. (Picture book. 5-8)”

Kirkus joins Publishers Weekly in its praise: “Strikes a gently edgy tone, and his blow-by-blow account races to its closing spread: of two tired parents who resemble yet another monster.”

Sociological ImagesIs Knowing Half the Battle?

We seem to have been struggling with science for the past few…well…decades. The CDC just updated what we know about COVID-19 in the air, misinformation about trendy “wellness products” abounds, and then there’s the whole climate crisis.

This is an interesting pattern because many public science advocates put a lot of work into convincing us that knowing more science is the route to a more fulfilling life. Icons like Carl Sagan and Neil deGrasse Tyson, as well as modern secular movements, talk about the sense of profound wonder that comes along with learning about the world. Even GI Joe PSAs told us that knowing was half the battle.

The problem is that we can be too quick to think that knowing more will automatically make us better at addressing social problems. That claim is based on two assumptions: one, that learning things feels good and motivates us to action, and two, that knowing more about a topic makes people more likely to appreciate and respect that topic. Both can be true, but they are not always true.

The first is a little hasty. Sure, learning can feel good, but research on teaching and learning shows that it doesn’t always feel good, and I think we often risk losing students’ interest because they assume that if a topic is a struggle, they are not meant to be studying it.

The second is often wrong, because having more information does not always empower us to make better choices. Research shows us that knowing more about a topic can fuel all kinds of other biases, and partisan identification is increasingly linked with attitudes toward science.

To see this in action, I took a look at some survey data collected by the Pew Research Center in 2014. The survey had seven questions checking attitudes about science – like whether people kept up with science or felt positively about it – and six questions checking basic knowledge about things like lasers and red blood cells. I totaled up these items into separate scales so that each person has a score for how much they knew and how positively or negatively they thought about science in general. These scales are standardized, so people with average scores are closer to zero. Plotting out these scores shows us a really interesting null finding documented by other research – there isn’t a strong relationship between knowing more and feeling better about science.

The purple lines mark average scores in each scale, and the relationship between science knowledge and science attitudes is fairly flat.

Here, both people who are a full standard deviation above the mean and multiple standard deviations below the mean on their knowledge score still hold pretty average attitudes about science. We might expect an upward sloping line, where more knowledge associates with more positive attitudes, but we don’t see that. Instead, attitudes about science, whether positive or negative, get more diffuse among people who get fewer answers correct. The higher the knowledge, the more tightly attitudes cluster around average.
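The scale construction described above can be sketched in a few lines. The numbers below are made-up survey responses standing in for the Pew items, so the resulting correlation is illustrative only:

```python
import statistics

def standardize(scores):
    """Convert raw scores to z-scores: mean near 0, spread in SD units."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mean) / sd for s in scores]

# Hypothetical respondents: correct answers (out of 6 knowledge items)
# and summed attitude scores (seven items). Not the actual Pew data.
knowledge_raw = [6, 5, 2, 4, 1, 3, 5, 2]
attitude_raw = [28, 22, 25, 30, 24, 21, 27, 23]

knowledge_z = standardize(knowledge_raw)
attitude_z = standardize(attitude_raw)

# Pearson correlation of the two standardized scales.
n = len(knowledge_z)
r = sum(k * a for k, a in zip(knowledge_z, attitude_z)) / (n - 1)
print(round(r, 2))
```

A flat relationship like the one in the figure would show up here as an r near zero.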

This is an important point that bears repeating for people who want to change public policy or national debate on any science-based issue. It is helpful to inform people about these serious issues, but shifting their attitudes is not simply a matter of adding more information. To really change minds, we have to do the work to put that information into conversation with other meaning systems, emotions, and moral assumptions.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.


Kevin Rudd: ABC Radio Melbourne: On the Coalition’s changes to the NBN


Topics: NBN, Energy, Gas, Dan Andrews and Victoria

Rafael Epstein
Welcome, if you’re watching us on Facebook Live. Today the Coalition government came forward and said they will fund NBN to your home and not just to your street: 6 million homes at least, some businesses as well, at a cost of somewhere north of $5 billion. Kevin Rudd is claiming vindication. He is, of course, the former prime minister whose government came up with the NBN idea. And he’s president of the Asia Society Policy Institute in New York. At the moment, he is in Queensland. Kevin Rudd, thanks for joining us.

Kevin Rudd
Good to be with you.

Rafael Epstein
They’re doing the same thing, but better and cheaper. They say.

Kevin Rudd
Well, I think in your intro, you said ‘claiming vindication.’ All I’m interested in is the facts here. We laid out a plan for a National Broadband Network back in 2009 to take fiber optic cable to every single premises, house, small business and school in the country, other than the most remote areas where we rely on a combination of satellite and wireless services; so that was 93% of the country. And it was to be 100 megabits per second, capable of further upgrade. So that’s what we were rolling out. Whistle on to the 2013 election. What did they then do? Partly in response to pressure from Murdoch, who didn’t want new competition for his failing Fox Entertainment Network on cable by …

Rafael Epstein
Disputed by Malcolm Turnbull, but continue? Yes.

Kevin Rudd
Well, that might be disputed; I’m just referring to basically unassailable facts. Murdoch did not want Netflix streamed live to people’s homes, because that would provide direct competition far earlier than he wanted for his Fox entertainment cable network, at that stage his only really profitable business in Australia. Anyway, we get to 2013. What did they do? They killed the original model: instead of fiber optic to the premises, we’d have fiber optic to this thing called the node, wherever that was; we still haven’t found ours in our suburb up in Sunshine Beach. And that’s where it’s languished ever since. And as a result, when COVID hit, people across the country have been screaming out because of the inadequacy of their broadband services. That’s the history, however they may choose to rewrite it.

Rafael Epstein
Well, they do. The essential argument from the government, without getting into the technical detail, is that you went too hard, too fast. You hadn’t connected nearly enough people by the time of that election, and they needed to go back and do it more slowly, with the best technology available at the time. How do you respond to that?

Kevin Rudd
What a load of codswallop. We launched the thing in 2009; I was actually participating in the initial cable rollout ceremonies. I remember doing one in northwestern Tasmania, late ’09, early 2010. And we had rolled out cable in a number of suburbs in a number of cities and regional towns right across Australia, and we deliberately started in regional areas. We did not start in the inner city areas, because we wanted to make sure that every Australian household and small business had a go.

Rafael Epstein
Is that what led to the higher cost, Kevin Rudd? Because the criticism from them is that what you were doing at the time was too expensive.

Kevin Rudd
Well, I make no apology for what it costs to lay out high speed broadband in Darwin, or in Burnie in northwestern Tasmania, or in these parts of the world where, yes, residences are further apart. But otherwise you had a digital divide in Australia, which separated rich from poor, inner city from outer suburbs, capital cities from the country and regions. I grew up in the regions; I happen to believe that people out there are deserving of high speed broadband just as much as anybody else. And they killed this plan in 2013, spent tens of billions of dollars on a botch job, which resulted in broadband speeds for Australia among the slowest in the OECD. How they defend that as a policy achievement beggars belief.

Rafael Epstein
Doesn’t it make sense though, to have a demand driven program? The minister was saying, look, what we’re gonna do is roll the fiber down the street, and then the fiber will come into your home only when the customer wants it. Isn’t that a sensible way to build a network?

Kevin Rudd
So why have I and others around the country, and members of parliament, government and opposition, received screaming petitions from people over the last, you know, four to five years, demanding their broadband services? In fact, it’s been virtually impossible for many people to get connected. The truth is this has been an absolute botch-up in the conceptual design, that is, substituting copper for fiber optic cable on that last link. Which means that however fast the data has traveled along the fiber optic to the node, once it’s constricted and strangled at the copper intersection point it then slows down to a snail’s pace, which is why everyone listening to your program has been pulling their hair out if they haven’t been able to be connected to fiber optic themselves.

Rafael Epstein
We don’t know for sure which program would have been more expensive though, do we? I mean, they’re massive projects; you only really know the costs when you reach that point in the project. How do we know yours would have been any cheaper or any better?

Kevin Rudd
We planned ours thoroughly. The first conceptual design, contrary to Liberal Party propaganda, was put together over a six month period involving technical committees, presided over by the Secretary of the Commonwealth Treasury. The Liberal Party propaganda was that the NBN was invented on the back of an envelope or the back of a restaurant menu. That’s utter bullshit. The bottom line is it was well planned, well developed by a specialist committee of experts, both technical and financial, presided over by the Secretary of the Treasury. It was then rolled out, and we intended to have it rolled out across the nation by 2018-2019. That was how it was going; we had a battle plan with which to do that. But when it gets cut off amidships by the conservatives with an entirely different delivery model, the amount of waste of taxpayers’ dollars involved in going for plan Zed rather than Plan A…

Rafael Epstein
So can I ask you to expand on that, Kevin Rudd, because you’re accusing them of waste. Why is it a waste? What have they done that you say is a waste?

Kevin Rudd
Let me just give you one example of waste. One example of waste is when you’ve got a massive rollout program underway, with multiple contractors already engaged across the country preparing trenches and laying cable, and then a central fiat from the then Abbott-Turnbull-Morrison government saying stop. What happens is all these projects and all these contractual arrangements get thrown up into the air, many of which I imagine involve penalty payments for loss of work. Then you’ve got a whole problem of reconvening a massive rollout program, building up capacity and signing new contracts, again at a loss to the taxpayer. I would invite the Auditor General to investigate what has actually been lost in quantitative dollar terms through the mismanagement of this project.

Rafael Epstein
You’re listening to Kevin Rudd, of course, former Prime Minister of Australia and original conceiver of the NBN with his then Communications Minister Stephen Conroy. The phone number is 1300 222 774. Kevin Rudd, the government had a few announcements over the last week or two around energy, especially the announcement yesterday. There’s quite a lot of sense in there, in spending on every low emissions technology that is available; that might include something like gas as a transition fuel. That’s a decent step forward, isn’t it, even if they don’t have the emissions target you’d like them to have?

Kevin Rudd
Let me put it to you in these terms. The strategy we had was to give renewable energy an initial boost. We did that through the Mandatory Renewable Energy Target, which we legislated. And it worked. Remember, when we were elected to office, 4% of Australia’s electricity and energy supply came from renewables. We legislated to reach 20% by 2020. And guess what it is now? It’s 21%. The whole idea was to give that whole slew of renewable energy industries a starting chance, you know, a critical mass to participate in the national energy market. What they should be doing now, if they’re going to throw taxpayer dollars at a next tranche of energy, is throw it open to competitive renewables and see who comes up with the best bid.

Rafael Epstein
They can do that. I mean, they’ve announced funding for ARENA, which the Labor government came up with, and renewable people can bid for that money. There’s nothing stopping them bidding for that money.

Kevin Rudd
Yeah, but the whole point of investing taxpayer dollars in projects of this nature is to advance the renewables cause, or to advance, shall we say, zero emissions. You see, the reason the Renewable Energy Agency was established, again by the Labor government, was to provide a financing mechanism for renewables, that is, 100% clean energy. Gas projects exist on the basis of their own financial economics out there in the marketplace, and together with coal will ultimately feel the impact of international investment houses incrementally withdrawing financial support from any carbon based fuel; that’s happening over time. Coal’s the leader in terms of people exiting those investments; others will follow.

Rafael Epstein
What do you make of the way the media has handled the premier here in Victoria, Dan Andrews?

Kevin Rudd
I think the media has comprehensively failed to understand what it’s like to be in the hot seat. You see, I began my career in public life as a state official; I was Director General of the Queensland Cabinet Office for five years. I’ve got some familiarity with how state governments go, and how they work and how they operate. And against all the measures of what’s possible and not possible, I think Dan Andrews has done a really good job. And what people who criticize him fail to answer…

Rafael Epstein
Even with the hotels problem?

Kevin Rudd
Yeah, well, look, you know, it reminds me very much of when I was Prime Minister, and the Liberals then attacked me and held me personally accountable and responsible for the tragic deaths of four young men who were killed in the rollout of the home insulation program, as a consequence of a failure of contractors to adhere to state based occupational health and safety standards. So all I would say is, when people start waving the finger at a head of government and saying you, Dan Andrews, are responsible for the second shift at two o’clock in the morning at hotel facility F, frankly, that doesn’t pass the common sense test, I think…

Rafael Epstein
Shouldn’t his health department be in charge of the…

Kevin Rudd
The voters of Victoria and the voters of the nation, I think, are common sense people, and they would ask themselves: what would I do in the same situation? Tough call. But I think in overall terms, Andrews has handled it well.

Rafael Epstein
I know you’re a big critic of the Murdoch media. I just wonder: is there more criticism that comes from a newspaper like the Herald Sun or the Australian towards Dan Andrews, do you think?

Kevin Rudd
Well, I look at the Murdoch rags every day. And they are rags. If you don’t believe they are, go to the ABC and look on iView at the BBC series on the Murdoch dynasty and the manipulation of truth for editorial purposes. Something which I notice the ABC in Australia is always coy to probe, for fear there’ll be too much Murdoch retaliation against the ABC.

Rafael Epstein
I’m happy to probe it, but there are some good journalists at the Australian and the Herald Sun, aren’t there?

Kevin Rudd
Well, actually very few of your colleagues are. That’s the truth of it. Tell me this: when was the last time Four Corners did an exposé into the abuse of media power in Australia by the Murdochs? Do you know what the answer to that one is?

Rafael Epstein
I don’t answer for Four Corners.

Kevin Rudd
No, hang on. You’re the ABC…

Rafael Epstein
Yes, but you can’t ask me about Four Corners, Kevin Rudd. I have nothing to do with it.

Kevin Rudd
The answer is 25 years. So what I’m saying is that your principal investigative program has treated it as too hot to handle for a quarter of a century. So, to answer your question about Murdoch’s handling of state Labor premiers: if I look each day at the coverage in the Queensland Courier Mail and the Melbourne Herald Sun, versus the Daily Telegraph in Sydney, where there’s a Liberal Premier, and the Adelaide Advertiser, with a Liberal Premier, against any measure it is chalk and cheese. It is 95% attack in Victoria and Queensland, and it’s 95% defense in NSW.

Rafael Epstein
Don’t you think there are some significant criticisms produced by the journalists at those newspapers that are worthy of coverage in Victoria?

Kevin Rudd
I speak to journalists who have left, for example, the Australian newspaper, and who tell me that the reason they have left is because they are sick and tired of the number of times not only has their copy been rewritten, but in fact rebalanced, with often a contrary headline put across the top of it. Frankly, we all know the joke in the Murdoch newspaper empire, and that is the publisher has a view. It’s a deeply conservative worldview. He doesn’t like Labor governments, federal or state, that don’t doff their caps in honor of his political and financial interests. And he seeks therefore to delegitimize and remove them from office. There’s a pattern of this over 10 years.

Rafael Epstein
I’m going to ask you to give a short answer to this one if I can, because it is a new topic that I know you could talk about for half an hour. Gut feel: who’s gonna win the American election?

Kevin Rudd
I think, having lived in the US for the last five years, that there is sufficient turn-off factor in relation to Trump and his handling of coronavirus that will cause, of itself, Biden to win in what will still be a contested election, and end up probably under appeal to the United States Senate.

Rafael Epstein
Thanks for your time.

Kevin Rudd
Good to be with you.

The post ABC Radio Melbourne: On the Coalition’s changes to the NBN appeared first on Kevin Rudd.

Worse Than Failure: CodeSOD: A Random While

A bit ago, Aurelia shared with us a backwards for loop. Code which wasn’t wrong, but was just… weird. Well, now we’ve got some code which is just plain wrong, in a number of ways.

The goal of the following Java code is to generate some number of random numbers between 1 and 9, and pass them off to a space-separated file.

StringBuffer buffer = new StringBuffer();
long count = 0;
long numResults = GetNumResults();

while (count < numResults) {
	ArrayList<BigDecimal> numbers = new ArrayList<BigDecimal>();
	while (numbers.size() < 1) {
		int randInt = random.nextInt(10);
		long randLong = randInt & 0xffffffffL;
		if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0)) {
			buffer.append(" ");
			numbers.add(new BigDecimal(randLong));
		}
		System.out.println("Random Integer: " + randInt + ", Long Integer: " + randLong);
	}
	buffer = new StringBuffer();
}

Pretty quickly, we get a sense that something is up with the while (count < numResults): this begs to be a for loop. It’s not wrong to use a while here, but it’s suspicious.

Then, right away, we create an ArrayList<BigDecimal>. There is no reasonable purpose to using a BigDecimal to hold a value between 1 and 9. But the rails don’t really start to come off until we get into the inner loop.

while (numbers.size() < 1) {
	int randInt = random.nextInt(10);
	long randLong = randInt & 0xffffffffL;
	if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0)) {

This loop condition guarantees that we’ll only ever have one element in the list, which means our numbers.contains check doesn’t mean much, does it?

But honestly, that doesn’t hold a candle to the promotion of randInt to randLong, complete with an & 0xffffffffL, which guarantees… well, nothing. It’s completely unnecessary here. We might do that sort of thing when we’re bitshifting and need to mask out for certain bytes, but here it does nothing.

Also note the (randLong != 0) check. Because they use random.nextInt(10), they generate a number in the range 0–9, but we want 1 through 9, so if we draw a zero, we need to re-roll. A simple and common solution would be random.nextInt(9) + 1, but at least we now understand the purpose of the while (numbers.size() < 1) loop: we keep trying until we get a non-zero value.

And honestly, I should probably point out that they include a println to make sure that both the int and the long versions match, but how could they not?

Nothing here is necessary. None of this code has to be this way. You don’t need the StringBuffer. You don’t need nested while loops. You don’t need the ArrayList<BigDecimal>, you don’t need the conversion between integer types. You don’t need the debugging println.
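For contrast, here is roughly what the nextInt(9) + 1 fix looks like in practice. The class and method names are mine, and GetNumResults() is replaced by a plain parameter; this is a sketch of the simpler approach, not the original author’s code:

```java
import java.util.Random;

public class RandomDigits {
    // Generate numResults random digits in 1..9, space-separated.
    // nextInt(9) returns 0..8, so adding 1 gives 1..9 with no re-roll needed.
    static String randomDigits(long numResults, Random random) {
        StringBuilder buffer = new StringBuilder();
        for (long count = 0; count < numResults; count++) {
            if (count > 0) {
                buffer.append(' ');
            }
            buffer.append(random.nextInt(9) + 1);
        }
        return buffer.toString();
    }
}
```

No BigDecimal, no nested while loop, no bit mask, no zero check.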


ME: Qemu (KVM) and 9P (Virtfs) Mounts

I’ve tried setting up the Qemu (in this case KVM as it uses the Qemu code in question) 9P/Virtfs filesystem for sharing files to a VM. Here is the Qemu documentation for it [1].

VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=mapped-xattr,id=zz,writeout=immediate,fmode=0600,dmode=0700,mount_tag=zz"
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=passthrough,id=zz,writeout=immediate,mount_tag=zz"

Above are the 2 configuration snippets I tried on the server side. The first uses mapped XATTRs (which means that all files will have the same UID/GID, and on the host XATTRs will be used for storing the Unix permissions) and the second uses passthrough, which requires KVM to run as root and gives the same permissions on the host as in the VM. The advantages of passthrough are better performance through writing less metadata and having the same permissions in host and VM. The advantages of mapped XATTRs are running KVM/Qemu as non-root and not having a SUID file in the VM imply a SUID file on the host.
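For reference, the guest side mounts the share by its mount_tag. A minimal /etc/fstab entry in the VM might look something like the following; only the tag zz comes from the snippets above, the mount point and msize value are illustrative choices:

```
zz  /mnt/virtfs  9p  trans=virtio,version=9p2000.L,msize=524288  0  0
```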

Here is the link to Bonnie++ output comparing Ext3 on a KVM block device (stored on a regular file in a BTRFS RAID-1 filesystem on 2 SSDs on the host), a NFS share from the host from the same BTRFS filesystem, and Virtfs shares of the same filesystem. The only tests that Ext3 doesn’t win are some of the latency tests; latency is based on the worst case, not the average. I expected Ext3 to win most tests, but didn’t expect it to lose any latency tests.

Here is a link to Bonnie++ output comparing just NFS and Virtfs. It’s obvious that Virtfs compares poorly, giving about half the performance on many tests. Surprisingly, the only tests where Virtfs compared well to NFS were the file creation tests, which I expected Virtfs with mapped XATTRs to do poorly on due to the extra metadata.

Here is a link to Bonnie++ output comparing only Virtfs. The options are mapped XATTRs with the default msize, mapped XATTRs with 512k msize (I don’t know if this made a difference; the results are within the range of random variation), and passthrough. There’s an obvious performance benefit in passthrough for the small file tests due to the lower metadata overhead, but as creating small files isn’t a bottleneck on most systems, a 20% to 30% improvement in that area probably doesn’t matter much. The result from the random seeks test in passthrough is unusual; I’ll have to do more testing on that.

SE Linux

On Virtfs the XATTR used for SE Linux labels is passed through to the host. So every label used in a VM has to be valid on the host and accessible to the context of the KVM/Qemu process. That’s not really an option so you have to use the context mount option. Having the mapped XATTR mode work for SE Linux labels is a necessary feature.
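A context mount just means adding a context= option so the whole share gets one SELinux label instead of per-file labels passed through to the host. An illustrative fstab entry in the VM (the tag, mount point, and label are examples, not from my testing) might be:

```
zz  /mnt/virtfs  9p  trans=virtio,version=9p2000.L,context=system_u:object_r:var_t:s0  0  0
```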


The msize mount option in the VM doesn’t appear to do anything, and it doesn’t appear in /proc/mounts; I don’t know if it’s even supported in the kernel I’m using.

The passthrough and mapped XATTR modes give near enough performance that there doesn’t seem to be a benefit of one over the other.

NFS gives significant performance benefits over Virtfs while also using less CPU time in the VM. It has the issue of files named .nfs* hanging around if the VM crashes while programs were using deleted files. It’s also better known: ask for help with an NFS problem and you are more likely to get advice than when asking for help with a Virtfs problem.

Virtfs might be a better option for accessing databases than NFS, due to its internal operation probably being a better map to Unix filesystem semantics, but running database servers on the host is probably a better choice anyway.

Virtfs generally doesn’t seem to be worth using. I had hoped for performance that was better than NFS but the only benefit I seemed to get was avoiding the .nfs* file issue.

The best options for storage for a KVM/Qemu VM seem to be Ext3 for files that are only used on one VM and for which the size won’t change suddenly or unexpectedly (particularly the root filesystem) and NFS for everything else.


Cryptogram: Documented Death from a Ransomware Attack

A Düsseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

Cryptogram: On Executive Order 12333

Mark Jaycox has written a long article on the US Executive Order 12333: “No Oversight, No Limits, No Worries: A Primer on Presidential Spying and Executive Order 12,333“:

Abstract: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order signed by President Ronald Reagan that, among other things, establishes an overarching policy framework for the Executive Branch’s spying powers. Although electronic surveillance programs authorized by EO 12333 generally target foreign intelligence from foreign targets, its permissive targeting standards allow for the substantial collection of Americans’ communications containing little to no foreign intelligence value. This fact alone necessitates closer inspection.

This working draft conducts such an inspection by collecting and coalescing the various declassifications, disclosures, legislative investigations, and news reports concerning EO 12333 electronic surveillance programs in order to provide a better understanding of how the Executive Branch implements the order and the surveillance programs it authorizes. The Article pays particular attention to EO 12333’s designation of the National Security Agency as primarily responsible for conducting signals intelligence, which includes the installation of malware, the analysis of internet traffic traversing the telecommunications backbone, the hacking of U.S.-based companies like Yahoo and Google, and the analysis of Americans’ communications, contact lists, text messages, geolocation data, and other information.

After exploring the electronic surveillance programs authorized by EO 12333, this Article proposes reforms to the existing policy framework, including narrowing the aperture of authorized surveillance, increasing privacy standards for the retention of data, and requiring greater transparency and accountability.