Planet Russell


Planet Debian - Abhijith PA: Transition from Thunderbird to Mutt

I was doing OK with Thunderbird and Enigmail (though it had many problems). Normally I go through changelogs before updating packages and rarely do a complete upgrade of my machine. A couple of days ago I did a complete upgrade of the system, which updated my Thunderbird to the latest version and threw out the Enigmail plugin in favor of Thunderbird's native OpenPGP support. There is a blog post from Mozilla which I should've read earlier. Thunderbird's built-in OpenPGP functionality is still experimental, at least not ready for my workflow. I could've downgraded to version 68, but I chose to move to my secondary MUA, mutt. I was using mutt for emails and newsletters that I check twice a year or so.

So I started configuring mutt to handle my big mailboxes. It took three evenings to configure mutt to my workflow. Though the basic setup can be done in less than an hour, it is the small nitpicks that consumed much of my time. Currently I have isync to pull and keep mail offline, mutt to read, msmtp to send, abook as the email address book, and urlview to see the links in mail. I am still learning notmuch and the virtual mailbox way of filtering.
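For anyone wiring up the same stack, here is a rough sketch of how the pieces can fit together. Every host, account name, and path below is a placeholder rather than my actual setup, and directive names vary between isync versions (older releases use Master/Slave instead of Far/Near):

```conf
# ~/.mbsyncrc -- isync keeps a local Maildir in sync with the server
IMAPAccount example
Host imap.example.com
User user@example.com
PassCmd "pass mail/example"
SSLType IMAPS

IMAPStore example-remote
Account example

MaildirStore example-local
Path ~/Mail/
Inbox ~/Mail/INBOX

Channel example
Far :example-remote:
Near :example-local:
Patterns *
SyncState *

# ~/.muttrc -- read the Maildir, send with msmtp, abook and urlview helpers
set mbox_type = Maildir
set folder = ~/Mail
set spoolfile = +INBOX
set sendmail = /usr/bin/msmtp
macro index,pager A "<pipe-message>abook --add-email-quiet<return>" "add sender to abook"
macro index,pager \cb "<pipe-message>urlview<return>" "extract URLs from message"
```

With this in place, `mbsync example` pulls mail and mutt works entirely against the local Maildir.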


There are a ton of articles out there on configuring mutt and everything related to it. But I found certain configs very hard to get right, so I will write those down.

So far, everything is going okay.


  • sometimes mbsync throws EOF and "secret key not found" errors.
  • searching is still a pain in mutt
  • nano's spell checker also checks the quoted text I am replying to.

More to come

Well, for now I have moved the mail part out of Thunderbird. But Thunderbird was more than a MUA to me: it was my RSS reader, calendar, and to-do list manager. I will write more about those once I make a complete transition.

Planet Debian - Junichi Uekawa: It's been 20 years since I became a Debian Developer.

It's been 20 years since I became a Debian Developer. Lots of fun things happened, and I think fondly of the team. I have been inactive for the past 10 years due to family reasons, and it's surprising that I have been away for that long. I still use Debian, and I still participate in the local Debian meetings.


Planet Debian - Dirk Eddelbuettel: Rcpp 1.0.6: Some Updates


The Rcpp team is proud to announce release 1.0.6 of Rcpp which arrived at CRAN earlier today, and has been uploaded to Debian too. Windows and macOS builds should appear at CRAN in the next few days. This marks the first release on the new six-month cycle announced with release 1.0.5 in July. As a reminder, interim ‘dev’ or ‘rc’ releases will often be available in the Rcpp drat repo; this cycle there were four.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2174 packages on CRAN depend on Rcpp for making analytical code go faster and further (which is an 8.5% increase just since the last release), along with 207 in BioConductor.

This release features six different pull requests from five different contributors, mostly fixing fairly small corner cases, plus some minor polish on documentation and continuous integration. Before releasing we once again ran numerous reverse dependency checks, none of which revealed any issues. So the passage at CRAN was pretty quick despite the large dependency footprint, and we are once again grateful for all the work the CRAN maintainers do.

Changes in Rcpp patch release version 1.0.6 (2021-01-14)

  • Changes in Rcpp API:

    • Replace remaining few uses of EXTPTR_PTR with R_ExternalPtrAddr (Kevin in #1098 fixing #1097).

    • Add push_back and push_front for DataFrame (Walter Somerville in #1099 fixing #1094).

    • Remove a misleading-to-wrong comment (Mattias Ellert in #1109 cleaning up after #1049).

    • Address a sanitizer report by initializing two private bool variables (Benjamin Christoffersen in #1113).

    • External pointer finalizer toggle default values were corrected to true (Dirk in #1115).

  • Changes in Rcpp Documentation:

    • Several URLs were updated to https and/or new addresses (Dirk).
  • Changes in Rcpp Deployment:

    • Added GitHub Actions CI using the same container-based setup used previously, and also carried code coverage over (Dirk in #1128).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() avoids warning from R. (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2616 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub. My sincere thanks to my current sponsors for keeping me caffeinated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian - Michael Prokop: Revisiting 2020


Mainly to recall what happened last year and to give thoughts and plan for the upcoming year(s) I’m once again revisiting my previous year (previous editions: 2019, 2018, 2017, 2016, 2015, 2014, 2013 + 2012).

Due to the Coronavirus disease (COVID-19) pandemic, 2020 was special™ for several reasons, but overall I consider myself and my family privileged and am very grateful for that.

In terms of IT events, I planned to attend Grazer Linuxdays and DebConf in Haifa/Israel. Sadly Grazer Linuxdays didn’t take place at all, and DebConf took place online instead (which I didn’t really participate in for several reasons). I took part in the well organized DENOG12 + ATNOG 2020/1 online meetings. I still organize our monthly Security Treff Graz (STG) meetups, and for half of the year, those meetings took place online (which worked OK-ish overall IMO).

Only at the beginning of 2020 did I manage to play Badminton (still playing in the highest available training class (in German: “Kader”) at the University of Graz / Universitäts-Sportinstitut, USI). For the rest of the year – except for ~2 weeks in October or so – the sessions couldn't take place.

Plenty of concerts I planned to attend were cancelled for obvious reasons, including the ones I would have played myself. But I managed to attend Jazz Redoute 2020 – Dom im Berg, Martin Grubinger in Musikverein Graz and Emiliano Sampaio’s Mega Mereneu Project at WIST Moserhofgasse (all before the corona situation kicked in). The concert from Tonč Feinig & RTV Slovenia Big Band occurred under strict regulations in Summer. At the beginning of 2020, I also visited Literaturshow “Roboter mit Senf” at Literaturhaus Graz.

The lack of concerts and rehearsals also severely impacted my playing the drums (including at HTU BigBand Graz), which pretty much didn’t take place. :(

Grml-wise we managed to publish release 2020.06, codename Ausgehfuahangl. Regarding jenkins-debian-glue I tried to clarify its state and received some really lovely feedback.

I consider 2020 as the year where I dropped regular usage of Jabber (so far my accounts still exist, but I’m no longer regularly online and am not sure for how much longer I’ll keep my accounts alive as such).

Business-wise it was our seventh year of business with SynPro Solutions GmbH. No big news but steady and ongoing work with my other business duties Grml Solutions and Grml-Forensic.

As usual, I shared childcare with my wife. Due to the corona situation, my wife got a new working schedule, which shuffled around our schedule a bit on Mondays + Tuesdays. Still, we managed to handle the homeschooling/distance learning quite well. Currently we’re sitting in the third lockdown, and yet another round of homeschooling/distance learning is going on those days (let’s see how long…). I counted 112 actual school days in all of 2020 for our older daughter with only 68 school days since our first lockdown on 16th of March, whereas we had 213(!) press conferences by our Austrian government in 2020. (Further rants about the situation in Austria snipped.)

Book reading-wise I managed to complete 60 books (see “Mein Lesejahr 2020“). Once again, I noticed that what felt like good days for me always included reading books, so I’ll try to keep my reading pace for 2021. I’ll also continue with my hobbies “Buying Books” and “Reading Books”, to get worse at Tsundoku.

Hoping for vaccination and a more normal 2021, Schwuppdiwupp!

Cryptogram - Click Here to Kill Everybody Sale

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

Note that I have had occasional problems with international shipping. The book just disappears somewhere in the process. At this price, international orders are at the buyer’s risk. Also, the USPS keeps reminding us that shipping — both US and international — may be delayed during the pandemic.

I have 500 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

EDITED TO ADD: I was able to get another 500 from the publisher, since the first 500 sold out so quickly.

Please be patient on delivery. There are already 550 orders, and that’s a lot of work to sign and mail. I’m going to be doing them a few at a time over the next several weeks. So all of you people reading this paragraph before ordering, understand that there are a lot of people ahead of you in line.

Cryptogram - Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking (online) as part of Western Washington University’s Internet Studies Lecture Series on January 20, 2021.
  • I’m speaking (online) at ITU Denmark on February 2, 2021. Details to come.
  • I’m being interviewed by Keith Cronin as part of The Center for Innovation, Security, and New Technology’s CSINT Conversations series, February 10, 2021 from 11:00 AM – 11:30 AM CST.
  • I’ll be speaking at an Informa event on February 28, 2021. Details to come.

The list is maintained on this page.

Long Now - Imagining 02030

Bases on the moon and colonies on Mars. The eradication of poverty. Catastrophic climate change.

WIRED shares six visions of what the world of 02030 could look like.

Worse Than Failure - Error'd: Something or Nothing at All

"I didn't know that I could buy an empty shopping cart from, but here I am," Tom writes.


Calvin K. writes, "Samsung is really confused here..."


"I think I'll get a big raise if I can get the DO NOT ISSUE certification." wrote Thomas J.


David B. wrote, "After my payment info, they seem to think I still owe them, just not very much."


"Just did an induction questionnaire for a venue. Usually these things are annual, looks like I hit the jackpot on this one!" Justin R. wrote.


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram - Cell Phone Location Privacy

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication (using Chaum blind signatures) tokens for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.
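The anonymous tokens in step 2 rest on Chaum blind signatures: the MVNO signs a value it cannot see, and the unblinded signature later verifies against the original token. A minimal RSA-based sketch of the idea (textbook-sized parameters and a made-up token string, purely for illustration; PGPP's actual construction may differ):

```python
import hashlib

# Textbook RSA parameters -- far too small for real use, chosen so the
# arithmetic is easy to follow. A real deployment uses full-size keys.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # signer's public exponent
d = 2753                     # signer's private exponent (e*d == 1 mod phi)

def digest(msg: bytes) -> int:
    """Map a token request into the RSA message space."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The user blinds a token for one time slice before sending it.
token = digest(b"time-slice:2021-02-01T10:00")
r = 7                        # blinding factor, coprime with n
blinded = (token * pow(r, e, n)) % n

# 2. The MVNO signs the blinded value without learning the token.
blinded_sig = pow(blinded, d, n)

# 3. The user unblinds, obtaining a valid signature on the token itself.
sig = (blinded_sig * pow(r, -1, n)) % n

# 4. Later, the signature verifies against the token, yet the MVNO cannot
#    link it back to the signing request it served.
assert pow(sig, e, n) == token
```

The unblinding works because the blinding factor r cancels out: the signature on the blinded value equals token^d * r mod n, and multiplying by r's modular inverse leaves a plain RSA signature on the token.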

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.


Planet Debian - Steinar H. Gunderson: Bullseye freeze

Bullseye is freezing! Yay! (And Trondheim is now below -10.)

It's too late for that kind of change now, but it would have been nice if plocate could have been default for bullseye:

plocate popcon graph

Surprisingly enough, mlocate has gone straight downhill:

mlocate popcon graph

It seems that since buster, there's been an override in place to change its priority away from standard, and I haven't been able to find anyone who could tell me why. (It was known that it was requested to be moved away from standard for cloud images, which makes a lot of sense, but not for desktop/server images.)

Perhaps for bookworm, we can get a locate back in the default install? plocate really is a much better user experience, in my (admittedly biased) opinion. :-)

Cryptogram - Extracting Personal Information from Large Language Models Like GPT-2

Researchers have been able to find all sorts of personal information within GPT-2. This information was part of the training data, and can be extracted with the right sorts of queries.

Paper: “Extracting Training Data from Large Language Models.”

Abstract: It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model’s training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data.

We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.

From a blog post:

We generated a total of 600,000 samples by querying GPT-2 with three different sampling strategies. Each sample contains 256 tokens, or roughly 200 words on average. Among these samples, we selected 1,800 samples with abnormally high likelihood for manual inspection. Out of the 1,800 samples, we found 604 that contain text which is reproduced verbatim from the training set.
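The selection step described above, flagging generated samples whose likelihood under the model is abnormally high, can be illustrated with a toy stand-in. Here a unigram distribution over a tiny made-up "training corpus" plays the role of the language model; the real work scores samples with GPT-2 itself:

```python
import math
from collections import Counter

# Made-up training corpus; a memorized "secret" phone number is in it.
training_corpus = (
    "the quick brown fox jumps over the lazy dog "
    "call me at 555 0100 for details"
).split()

counts = Counter(training_corpus)
total = sum(counts.values())

def avg_log_likelihood(sample: str) -> float:
    """Mean per-token log-probability, lightly smoothed for unseen tokens."""
    tokens = sample.split()
    return sum(
        math.log((counts[t] + 0.01) / (total + 0.01 * len(counts)))
        for t in tokens
    ) / len(tokens)

samples = [
    "call me at 555 0100 for details",                # verbatim from "training"
    "purple elephants compile quantum spreadsheets",  # novel text
]
# Rank samples by likelihood; the verbatim one floats to the top,
# mimicking the paper's manual-inspection shortlist.
scores = sorted(samples, key=avg_log_likelihood, reverse=True)
assert scores[0] == samples[0]
```

The same intuition drives the attack: text copied verbatim from training data tends to be assigned unusually high likelihood relative to genuinely novel continuations.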

The rest of the blog post discusses the types of data they found.

Worse Than Failure - CodeSOD: A Match Made In…

Andy C writes:

One of our colleagues dug this code up from an outsourced project. It took a few of us to try to find out what it actually does, and we're still not completely sure.

This is the associated Java code:

if (productList != null && !productList.isEmpty()) {
    for (int i = 0; i < productList.size(); i++) {
        String currentProductID = String.valueOf(productList.get(i).getProductId());
        String toMatchProductID = String.valueOf(currentProductID);
        if (currentProductID.equals(toMatchProductID)) {
            productName = productList.get(i).getProductName();
            break;
        }
    }
}

If you just skim over the code, something you might do if you were just going through a large codebase, it looks like a reasonable "search" method. Find the object with the matching ID. But that's only on a skim. If you actually read the code, well…

First, we start with a check: make sure we actually have a productList. The null check is reasonable (I'll assume this predates Java's Optional type), but the isEmpty check is arguably superfluous, since we enter a for-loop based on size(); an empty list would just bypass the for loop. Still, that's all harmless.

In the loop, we grab the current item (item 0, on the first iteration), and that's our currentProductID. We choose to cast it into a string, which may or may not be a reasonable choice, depending on how we represent IDs. Since this is imitating a search method, we also need a toMatchProductID… which we make by cloning the currentProductID.

If the currentProductID equals the toMatchProductID, which it definitely will, we'll fetch the product name and then exit the loop.

So, what this method actually does is pretty simple: it gets the productName of the first item in the productList, if there are any items in that productList. The real question is: how did this happen? Was this a case of copy/paste coding gone wrong? Purposeful obfuscation by the outsourcing team? Just a complete misunderstanding of the requirements corrected through quick hacking without actually fixing the code? Some combination of all three?
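Transcribing the logic into Python (with invented field names) makes the collapse easy to demonstrate: the "match" value is computed from the value it is compared against, so the condition is always true.

```python
def product_name(product_list):
    """Faithful port of the Java snippet: a 'search' that cannot miss."""
    for product in product_list:
        current_id = str(product["id"])
        to_match_id = str(current_id)   # a copy of the value it is compared to
        if current_id == to_match_id:   # therefore always true
            return product["name"]
    return None                         # only reached for an empty list

products = [{"id": 7, "name": "widget"}, {"id": 9, "name": "gadget"}]
assert product_name(products) == "widget"   # always the first item
assert product_name([]) is None
```

However long the list, only the first iteration ever runs, which is exactly the behavior described above.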

We know what the code does. What the people writing it do, that we're definitely not sure about.



Planet Debian - Antoine Beaupré: New phone: Pixel 4a

I'm sorry to announce that I gave up on the Fairphone series and switched to a Google Phone (Pixel 4a) running CalyxOS.

Problems in fairy land

My fairphone2, even if it is less than two years old, is having major problems:

  • from time to time, the screen flickers and loses "touch" until I squeeze it back together
  • the camera similarly disconnects regularly
  • even when it works, the camera is... pretty bad: low light is basically unusable, it's slow and grainy
  • the battery can barely keep up for one day
  • the cellular coverage is very poor, in Canada: I lose signal at the grocery store and in the middle of my house...

Some of those problems are known: the Fairphone 2 is old now. It was probably old even when I got it. But I can't help but feel a little sad to let it go: the entire point of that device was to make it easy to fix. But alas, because it's sold only in Europe, local stores don't carry replacement parts. To be fair, Fairphone did offer to fix the device, but with a two-week turnaround, I had to get another phone anyway.

I did actually try to buy a fairphone3, from Clove. But they ran some crazy validation routine. By email, they asked me to provide a photocopy of a driver's license and the credit card, arguing they need to do this to combat fraud. I found that totally unacceptable and asked them to cancel my order. And because I'm not sure the FP3 will fix the coverage issues, I decided to just give up on Fairphone until they officially ship to the Americas.

Do no evil, do not pass go, do not collect 200$

So I got a Google phone, specifically a Pixel 4a. It's a nice device, all small and shiny, but it's "plasticky" - I would have preferred metal, but it seems you need to pay much, much more to get that (in the Pixel 5).

In any case, it's certainly a better form factor than the Fairphone 2: even though the screen is bigger, the device itself is actually smaller and thinner, which feels great. The OLED screen is beautiful, awesome contrast and everything, and preliminary tests show that the camera is much better than the one on the Fairphone 2. (To be fair, again, that is another thing the FP3 improved significantly. And that is with the stock Camera app from CalyxOS/AOSP, so not as good as the Google Camera app, which does AI stuff.)

CalyxOS: success

The Pixel 4a is not supported by LineageOS: it seems every time I pick a device in that list, I manage to miss the right device by one (I bought a Samsung S9 before, which is also unsupported, even though the S8 is). But thankfully, it is supported by CalyxOS.

That install was a breeze: I was hesitant to play again with installing a custom Android firmware on a phone after fighting with this quite a bit in the past (e.g. htc-one-s, lg-g3-d852). But it turns out their install instructions, mostly using an AOSP alliance device-flasher, work absolutely great. It assumes you know about the command line, and it basically requires you to curl | sudo (because you need to download their binary and run it as root), but it Just Works. It reminded me of how great it was to get the Fairphone with TWRP preinstalled...

Oh, and kudos to the people in #calyxos on Freenode: awesome tech support, super nice folks. An amazing improvement over the ambiance in #lineageos! :)

Migrating data

Unfortunately, migrating the data was the usual pain in the back. This should improve the next time I do this: CalyxOS ships with seedvault, a secure backup system for Android 10 (or 9?) and later which backs up everything (including settings!) with encryption. Apparently it works great, and CalyxOS is also working on a migration system to switch phones.

But, obviously, I couldn't use that on the Fairphone 2 running Android 7... So I had to, again, improvise. The first step was to install Syncthing, to have an easy way to copy data around. That's easily done through F-Droid, already bundled with CalyxOS (including the privileged extension!). Pair the devices and boom, a magic portal to copy stuff over.

The other early step I took was to copy apps over using the F-Droid "find nearby" functionality. It's a bit quirky, but really helps in copying a bunch of APKs over.

Then I setup a temporary keepassxc password vault on the Syncthing share so that I could easily copy-paste passwords into apps. I used to do this in a text file in Syncthing, but copy-pasting in the text file is much harder than in KeePassDX. (I just picked one, maybe KeePassDroid is better? I don't know.) Do keep a copy of the URL of the service to reduce typing as well.

Then the following apps required special tweaks:

  • AntennaPod has an import/export feature: export on one end into the Syncthing share, then import on the other. Then go to the queue, select all episodes, and download.
  • the Signal "chat backup" does copy the secret key around, so you don't get the "security number change" warning (even if it prompts you to re-register) - external devices need to be relinked though
  • AnkiDroid, DSub, Nextcloud, and Wallabag required copy-pasting passwords

I tried to sync contacts with DAVx5 but that didn't work so well: the account was set up correctly, but contacts didn't show up. There's probably just one thing I need to do to fix this, but since I don't really need synced contacts, it was easier to export a VCF file to Syncthing and import it again.

Known problems

One problem I found with CalyxOS is that the fragile little microg tweaks didn't seem to work well enough for Signal. That was unexpected, so they encouraged me to file it as a bug.

The other "issue" is that the bootloader is locked, which makes it impossible to have "root" on the device. That's rather unfortunate: I often need root to debug things on Android. In particular, it made it difficult to restore data from OSMand (see below). But I guess that most things just work out of the box now, so I don't really need it and appreciate the extra security. Locking the bootloader means full cryptographic verification of the phone, so that's a good feature to have!

OSMand still doesn't have a good import/export story. I ended up sharing the Android/data/ directory and importing waypoints, favorites and tracks by hand. Even though maps are actually in there, it's not possible for Syncthing to write directly to the same directory on the new phone, "thanks" to the new permission system in Android which forbids this kind of inter-app messing around.

Tracks are particularly a problem: my older OSMand setup had all those folders neatly sorting those tracks by month. This makes it really annoying to track every file manually and copy it over. I have mostly given up on that for now, unfortunately. And I'll still need to reconfigure profiles and maps and everything by hand. Sigh. I guess that's a good clearinghouse for my old tracks I never use...

Update: turns out setting storage to "shared" fixed the issue, see comments below!


Overall, CalyxOS seems like a good Android firmware. The install is smooth and the resulting install seems solid. The above problems are mostly annoyances and I'm very happy with the experience so far, although I've only been using it for a few hours so this is very preliminary.

Rondam Ramblings - PSA: I'm debating Matt Slick tonight

FYI, I'm doing a YouTube debate this evening at 5:30 PST with Matt Slick on the topic of "Atheism, Christianity, and morality".  It will also be recorded so you don't have to watch it live.  Here is the link.

Planet Debian - Vincent Fourmond: Taking advantage of Ruby in QSoas

First of all, let me wish you all a happy new year, with all my wishes of health and success. I sincerely hope this year will be simpler for most people than last year!

For the first post of the year, I wanted to show you how to take advantage of Ruby, the programming language embedded in QSoas, to make various things, like:

  • creating a column with the sum of Y values;
  • extending values that are present only in a few lines;
  • renaming datasets using a pattern.

Summing the values in a column

When using commands that take formulas (Ruby code), like apply-formula, the code is run for every single point, for which all the values are updated. In particular, the state of the previous point is not known. However, it is possible to store values in what are called global variables, whose names start with a $ sign. Using this, we can keep track of the previous values. For instance, to create a new column with the sum of the y values, one can use the following approach:
QSoas> eval $sum=0
QSoas> apply-formula /extra-columns=1 $sum+=y;y2=$sum
The first line initializes the variable to 0, before we start summing, and the code in the second line is run for each dataset row, in order. For the first row, for instance, $sum is initially 0 (from the eval line); after the execution of the code, it is now the first value of y. After the second row, the second value of y is added, and so on. The image below shows the resulting y2 when used on:
QSoas> generate-dataset -1 1 x

Extending values in a column

Another use of the global variables is to add "missing" data. For instance, let's imagine that a file gives the variation of current over time as the potential is changed, but the potential is only changed stepwise and only indicated when it changes:
## time	current	potential
0	0.1	0.5
1	0.2
2	0.3
3	0.2
4	1.2	0.6
5	1.3
If you need to have the values everywhere, for instance if you need to split on their values, you could also use a global variable, taking advantage of the fact that missing values are represented by QSoas using "Not A Number" values, which can be detected using the Ruby function nan?:
QSoas> apply-formula "if y2.nan?; then y2=$value; else $value=y2;end"
Note the need for quotes because there are spaces in the Ruby code. If the value of y2 is NaN, that is, it is missing, then it is taken from the global variable $value; otherwise $value is set to the current value of y2. Hence, the values are propagated down:
## time	current	potential
0	0.1	0.5
1	0.2	0.5
2	0.3	0.5
3	0.2	0.5
4	1.2	0.6
5	1.3	0.6
Of course, this doesn't work if the first value of y2 is missing.

Renaming using a pattern

The command save-datasets can be used to save a whole series of datasets to the disk. It can also rename them on the fly, and, using the /mode=rename option, does only the renaming part, without saving. You can make full use of meta-data (see also a first post here) for renaming. The full power is unlocked using the /expression= option. For instance, for renaming the last 5 datasets (so numbers 0 to 4) using a scheme based on the value of their pH meta-data, you can use the following code:
QSoas> save-datasets /mode=rename /expression='"dataset-#{$meta.pH}"' 0..4
The double quotes are cumbersome but necessary, since the outer quotes (') prevent the inner ones (") from being removed, and the inner quotes are there to indicate to Ruby that we are dealing with text. The bit inside #{...} is interpreted by Ruby as Ruby code; here it is $meta.pH, the value of the "pH" meta-data. Finally, the 0..4 specifies the datasets to work with. So these datasets will be renamed to dataset-7 for pH 7, and so on.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

Cryptogram - Changes in WhatsApp’s Privacy Policy

If you’re a WhatsApp user, pay attention to the changes in the privacy policy that you’re being forced to agree with.

In 2016, WhatsApp gave users a one-time ability to opt out of having account data turned over to Facebook. Now, an updated privacy policy is changing that. Come next month, users will no longer have that choice. Some of the data that WhatsApp collects includes:

  • User phone numbers
  • Other people’s phone numbers stored in address books
  • Profile names
  • Profile pictures
  • Status messages, including when a user was last online
  • Diagnostic data collected from app logs

Under the new terms, Facebook reserves the right to share collected data with its family of companies.

EDITED TO ADD (1/13): WhatsApp tries to explain.

Worse Than Failure - CodeSOD: Callback Bondage

"Garbage collected languages can't have memory leaks," is well established as a myth, but we still have plenty of code which refuses to clean up after itself properly.

An anonymous submitter was working with a single-page-app front-end which wraps a stream abstraction around a websocket. Messages arrive on the stream, and callbacks get invoked. When certain parameters change, new callbacks need to be registered to handle the new behavior. The old callbacks need to be unbound- and it's that step this code doesn't do.

const channelName = this.channelName;
this.channel.bind('updated', (data) => {
  if (data.name === channelName) {
    this.updateData(data)
  }
});

The bind method attaches a new callback to a given channel. Without a matching unbind to remove the old callback, the old callback will sit in memory, and keep getting invoked even as it does nothing useful. Over time, this leads to performance issues.
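The usual remedy is for bind to come paired with an unbind, often by having bind hand back an unsubscribe handle. A language-agnostic sketch of the pattern in Python (all names invented for illustration, not the submitter's actual framework):

```python
class Stream:
    """Tiny event stream: bind() returns an unbind handle for cleanup."""

    def __init__(self):
        self._callbacks = {}   # channel name -> list of callbacks

    def bind(self, channel, callback):
        self._callbacks.setdefault(channel, []).append(callback)
        # Hand back a closure so the caller can release this exact handler.
        return lambda: self._callbacks[channel].remove(callback)

    def emit(self, channel, data):
        for cb in list(self._callbacks.get(channel, [])):
            cb(data)

stream = Stream()
seen = []
unbind = stream.bind("updated", seen.append)
stream.emit("updated", 1)
unbind()                       # without this, the callback lives forever
stream.emit("updated", 2)
assert seen == [1]             # the stale handler no longer fires
```

Storing the handle and calling it when parameters change is what the original code never does, which is why old callbacks pile up.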

Or, at least, it could lead to performance issues. The original developer had a… special solution to handling garbage collection. I'll let our submitter explain:

On the plus side, their cleanup logic for the component that uses this data unsubscribes all open websocket channels across the entire app on unmount, including channels owned by unrelated components, so we can rest assured that eventually they'll definitely be gone. Along with everything else.


Krebs on SecurityMicrosoft Patch Tuesday, January 2021 Edition

Microsoft today released updates to plug more than 80 security holes in its Windows operating systems and other software, including one that is actively being exploited and another which was disclosed prior to today. Ten of the flaws earned Microsoft’s most-dire “critical” rating, meaning they could be exploited by malware or miscreants to seize remote control over unpatched systems with little or no interaction from Windows users.

Most concerning of this month’s batch is probably a critical bug (CVE-2021-1647) in Microsoft’s default anti-malware suite — Windows Defender — that is seeing active exploitation. Microsoft recently stopped providing a great deal of detail in their vulnerability advisories, so it’s not entirely clear how this is being exploited.

But Kevin Breen, director of research at Immersive Labs, says depending on the vector the flaw could be trivial to exploit.

“It could be as simple as sending a file,” he said. “The user doesn’t need to interact with anything, as Defender will access it as soon as it is placed on the system.”

Fortunately, this bug is probably already patched by Microsoft on end-user systems, as the company continuously updates Defender outside of the normal monthly patch cycle.

Breen called attention to another critical vulnerability this month — CVE-2021-1660 — which is a remote code execution flaw in nearly every version of Windows that earned a CVSS score of 8.8 (10 is the most dangerous).

“They classify this vulnerability as ‘low’ in complexity, meaning an attack could be easy to reproduce,” Breen said. “However, they also note that it’s ‘less likely’ to be exploited, which seems counterintuitive. Without full context of this vulnerability, we have to rely on Microsoft to make the decision for us.”

CVE-2021-1660 is actually just one of five bugs in a core Microsoft service called Remote Procedure Call (RPC), which is responsible for a lot of heavy lifting in Windows. Some of the more memorable computer worms of the last decade spread automatically by exploiting RPC vulnerabilities.

Allan Liska, senior security architect at Recorded Future, said while it is concerning that so many vulnerabilities around the same component were released simultaneously, two previous vulnerabilities in RPC — CVE-2019-1409 and CVE-2018-8514 — were not widely exploited.

The remaining 70 or so flaws patched this month earned Microsoft’s less-dire “important” ratings, which is not to say they’re much less of a security concern. Case in point: CVE-2021-1709, which is an “elevation of privilege” flaw in Windows 8 through 10 and Windows Server 2008 through 2019.

“Unfortunately, this type of vulnerability is often quickly exploited by attackers,” Liska said. “For example, CVE-2019-1458 was announced on December 10th of 2019, and by December 19th an attacker was seen selling an exploit for the vulnerability on underground markets. So, while CVE-2021-1709 is only rated as [an information exposure flaw] by Microsoft it should be prioritized for patching.”

Trend Micro’s Zero Day Initiative (ZDI) pointed out another flaw marked “important” — CVE-2021-1648, an elevation of privilege bug in Windows 8, 10 and some Windows Server 2012 and 2019 versions that was publicly disclosed by ZDI prior to today.

“It was also discovered by Google likely because this patch corrects a bug introduced by a previous patch,” ZDI’s Dustin Childs said. “The previous CVE was being exploited in the wild, so it’s within reason to think this CVE will be actively exploited as well.”

Separately, Adobe released security updates to tackle at least eight vulnerabilities across a range of products, including Adobe Photoshop and Illustrator. There are no Flash Player updates because Adobe retired the browser plugin in December (hallelujah!), and Microsoft’s update cycle from last month removed the program from Microsoft’s browsers.

Windows 10 users should be aware that the operating system will download updates and install them all at once on its own schedule, closing out active programs and rebooting the system. If you wish to ensure Windows has been set to pause updating so you have ample opportunity to back up your files and/or system, see this guide.

Please back up your system before applying any of these updates. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once. You never know when a patch roll-up will bork your system or possibly damage important files. For those seeking more flexible and full-featured backup options (including incremental backups), Acronis and Macrium are two that I’ve used previously and are worth a look.

That said, there don’t appear to be any major issues cropping up yet with this month’s update batch. But before you apply updates consider paying a visit to AskWoody.com, which usually has the skinny on any reports about problematic patches.

As always, if you experience glitches or issues installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.


Krebs on SecuritySolarWinds: What Hit Us Could Hit Others

New research into the malware that set the stage for the megabreach at IT vendor SolarWinds shows the perpetrators spent months inside the company’s software development labs honing their attack before inserting malicious code into updates that SolarWinds then shipped to thousands of customers. More worrisome, the research suggests the insidious methods used by the intruders to subvert the company’s software development pipeline could be repurposed against many other major software providers.

In a blog post published Jan. 11, SolarWinds said the attackers first compromised its development environment on Sept. 4, 2019. Soon after, the attackers began testing code designed to surreptitiously inject backdoors into Orion, a suite of tools used by many Fortune 500 firms and a broad swath of the federal government to manage their internal networks.

Image: SolarWinds.

According to SolarWinds and a technical analysis from CrowdStrike, the intruders were trying to work out whether their “Sunspot” malware — designed specifically for use in undermining SolarWinds’ software development process — could successfully insert their malicious “Sunburst” backdoor into Orion products without tripping any alarms or alerting Orion developers.

In October 2019, SolarWinds pushed an update to their Orion customers that contained the modified test code. By February 2020, the intruders had used Sunspot to inject the Sunburst backdoor into the Orion source code, which was then digitally signed by the company and propagated to customers via SolarWinds’ software update process.

CrowdStrike said Sunspot was written to be able to detect when it was installed on a SolarWinds developer system, and to lie in wait until specific Orion source code files were accessed by developers. This allowed the intruders to “replace source code files during the build process, before compilation,” CrowdStrike wrote.

The attackers also included safeguards to prevent the backdoor code lines from appearing in Orion software build logs, and checks to ensure that such tampering wouldn’t cause build errors.

“The design of SUNSPOT suggests [the malware] developers invested a lot of effort to ensure the code was properly inserted and remained undetected, and prioritized operational security to avoid revealing their presence in the build environment to SolarWinds developers,” CrowdStrike wrote.

A third malware strain — dubbed “Teardrop” by FireEye, the company that first disclosed the SolarWinds attack in December — was installed via the backdoored Orion updates on networks that the SolarWinds attackers wanted to plunder more deeply.

So far, the Teardrop malware has been found on several government networks, including the Commerce, Energy and Treasury departments, the Department of Justice and the Administrative Office of the U.S. Courts.

SolarWinds emphasized that while the Sunspot code was specifically designed to compromise the integrity of its software development process, that same process is likely common across the software industry.

“Our concern is that right now similar processes may exist in software development environments at other companies throughout the world,” said SolarWinds CEO Sudhakar Ramakrishna. “The severity and complexity of this attack has taught us that more effectively combatting similar attacks in the future will require an industry-wide approach as well as public-private partnerships that leverage the skills, insight, knowledge, and resources of all constituents.”

Cryptogram Military Cryptanalytics, Part III

The NSA has just declassified and released a redacted version of Military Cryptanalytics, Part III, by Lambros D. Callimahos, October 1977.

Parts I and II, by Lambros D. Callimahos and William F. Friedman, were released decades ago — I believe repeatedly, in increasingly unredacted form — and published by the late Wayne Griswold Barker’s Aegean Park Press. I own them in hardcover.

Like Parts I and II, Part III is primarily concerned with pre-computer ciphers. At this point, the document only has historical interest. If there is any lesson for today, it’s that modern cryptanalysis is possible primarily because people make mistakes.

The monograph took a while to become public. The cover page says that the initial FOIA request was made in July 2012: eight and a half years ago.

And there are more books to come. Page 1 starts off:

This text constitutes the third of six basic texts on the science of cryptanalytics. The first two texts together have covered most of the necessary fundamentals of cryptanalytics; this and the remaining three texts will be devoted to more specialized and more advanced aspects of the science.

Presumably, volumes IV, V, and VI are still hidden inside the classified libraries of the NSA.

And from page ii:

Chapters IV-XI are revisions of seven of my monographs in the NSA Technical Literature Series, viz: Monograph No. 19, “The Cryptanalysis of Ciphertext and Plaintext Autokey Systems”; Monograph No. 20, “The Analysis of Systems Employing Long or Continuous Keys”; Monograph No. 21, “The Analysis of Cylindrical Cipher Devices and Strip Cipher Systems”; Monograph No. 22, “The Analysis of Systems Employing Geared Disk Cryptomechanisms”; Monograph No. 23, “Fundamentals of Key Analysis”; Monograph No. 15, “An Introduction to Teleprinter Key Analysis”; and Monograph No. 18, “Ars Conjectandi: The Fundamentals of Cryptodiagnosis.”

This points to a whole series of still-classified monographs whose titles we do not even know.

EDITED TO ADD: I have been informed by a reliable source that Parts 4 through 6 were never completed. There may be fragments and notes, but no finished works.

Planet DebianJohn Goerzen: Remote Directory Tree Comparison, Optionally Asynchronous and Airgapped

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In the previous installment on store-and-forward backups, I mentioned how easy it is to do with ZFS, and some of the tools that can be used to do it without ZFS. A lot of those tools are a bit less robust, so we need some sort of store-and-forward mechanism to verify backups. To be sure, verifying backups is good with ANY scheme, and this could be used with ZFS backups also.

So let’s say you have a shiny new backup scheme in place, and you’d like to verify that it’s working correctly. To do that, you need to compare the source directory tree on machine A with the backed-up directory tree on machine B.

Assuming a conventional setup, here are some ways you might consider to do that:

  • Just copy everything from machine A to machine B and compare locally
  • Or copy everything from machine A to a USB drive, plug that into machine B, and compare locally
  • Use rsync in dry-run mode and see if it complains about anything

The first two options are not particularly practical for large datasets, though I note that the second is compatible with airgapping. Using rsync requires both systems to be online at the same time to perform the comparison.
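To make the rsync option concrete, here is a self-contained sketch of a dry-run comparison. The directory names are made up for illustration, and the first line is just a guard so the sketch degrades gracefully if rsync is absent; against a real backup you would point the second argument at something like user@backuphost:/path/, which is exactly why both machines must be online at once.

```shell
# Guard: skip gracefully if rsync isn't installed on this system.
command -v rsync >/dev/null || { echo "rsync not installed"; exit 0; }

# Build two copies of a tiny tree, change one file in the "backup",
# then let rsync report differences without transferring anything.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file"
cp -a "$src/." "$dst/"
echo changed > "$dst/file"

# -r recurse, -n dry run, -c compare by checksum, -i itemize differences.
# No output means the trees match; here it flags "file".
rsync -rnci "$src/" "$dst/"

rm -rf "$src" "$dst"
```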

What would be really nice here is a tool that would write out lots of information about the files on a system: their names, sizes, last modified dates, maybe even sha256sum and other data. This file would be far smaller than the directory tree itself, would compress nicely, and could be easily shipped to an airgapped system via NNCP, UUCP, a USB drive, or something similar.
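Before reaching for the dedicated tools below, the idea can be sketched with nothing but findutils and coreutils (GNU find's -printf is assumed; all paths here are illustrative temporaries):

```shell
# Record name, size and mtime for every file, plus checksums, into small
# manifest files.  These compress well and could be shipped to the
# airgapped side via NNCP, UUCP or a USB drive, then compared with diff(1).
tree=$(mktemp -d)   # stand-in for the real directory tree
out=$(mktemp -d)    # where the manifest files land
echo hi > "$tree/hi"
( cd "$tree" && find . -type f -printf '%p %s %T@\n' | sort ) > "$out/manifest.txt"
( cd "$tree" && find . -type f -exec sha256sum {} + | sort ) > "$out/manifest.sha256"
cat "$out/manifest.txt"    # one line per file: path, size, mtime
rm -rf "$tree" "$out"
```

Note this loses exactly the metadata the mtree family captures (ownership, modes, symlinks, link structure), which is why the dedicated tools below are worth the trouble.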

Tool choices

It turns out there are already quite a few tools in Debian (and other Free operating systems) to do this, and half of them are named mtree (though, of course, not all mtrees are compatible with each other.) We’ll look at some of the options here.

I’ve made a simple test directory for illustration purposes with these commands:

mkdir test
cd test
echo hi > hi
ln -s hi there
ln hi foo
touch empty
mkdir emptydir
mkdir somethingdir
cd somethingdir
ln -s ../there

I then also used touch to set all files to a consistent timestamp for illustration purposes.

Tool option: getfacl (Debian package: acl)

This comes with the acl package, but can be used for purposes other than ACLs. Unfortunately, it doesn’t come with a tool to directly compare its output with a filesystem (setfacl, for instance, can apply the permissions listed but won’t compare). It ignores symlinks and doesn’t show sizes or dates, so it is ineffective for our purposes.

Example output:

$ getfacl --numeric -R test
# file: test/hi
# owner: 1000
# group: 1000

Tool option: fmtree, the FreeBSD mtree (Debian package: freebsd-buildutils)

fmtree can prepare a “specification” based on a directory tree, and compare a directory tree to that specification. The comparison also is aware of files that exist in a directory tree but not in the specification. The specification format is a bit on the odd side, but works well enough with fmtree. Here’s a sample output with defaults:

$ fmtree -c -p test
# .
/set type=file uid=1000 gid=1000 mode=0644 nlink=1
.               type=dir mode=0755 nlink=4 time=1610421833.000000000
    empty       size=0 time=1610421833.000000000
    foo         nlink=2 size=3 time=1610421833.000000000
    hi          nlink=2 size=3 time=1610421833.000000000
    there       type=link mode=0777 time=1610421833.000000000 link=hi

... skipping ...

# ./somethingdir
/set type=file uid=1000 gid=1000 mode=0777 nlink=1
somethingdir    type=dir mode=0755 nlink=2 time=1610421833.000000000
    there       type=link time=1610421833.000000000 link=../there
# ./somethingdir


You might be wondering here what it does about special characters, and the answer is that it has octal escapes, so it is 8-bit clean.

To compare, you can save the output of fmtree to a file, then run like this:

cd test
fmtree < ../test.fmtree

If there is no output, then the trees are identical. Change something and you get a line of output explaining each difference. You can also use fmtree -U to change things like modification dates to match the specification.

fmtree also supports quite a few optional keywords you can add with -K. They include things like file flags, user/group names, various types of hashes, and so forth. I'll note that none of the options will let you determine which files are hardlinked together.

Here's an excerpt with -K sha256digest added:

    empty       size=0 time=1610421833.000000000 \
                sha256digest=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    foo         nlink=2 size=3 time=1610421833.000000000 \
                sha256digest=98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4

If you include a sha256digest in the spec, then when you verify it with fmtree, the verification will also include the sha256digest. Obviously fmtree -U can't correct a mismatch there, but of course it will detect and report it.

Tool option: mtree, the NetBSD mtree (Debian package: mtree-netbsd)

mtree produces (by default) output very similar to fmtree. With minor differences (such as the name of the sha256digest in the output), the discussion above about fmtree also applies to mtree.

There are some differences, and the most notable is that mtree adds a -C option which reads a spec and converts it to a "format that's easier to parse with various tools." Here's an example:

$ mtree -c -K sha256digest -p test | mtree -C
. type=dir uid=1000 gid=1000 mode=0755 nlink=4 time=1610421833.0 flags=none 
./empty type=file uid=1000 gid=1000 mode=0644 nlink=1 size=0 time=1610421833.0 flags=none 
./foo type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./hi type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=hi time=1610421833.0 flags=none 
./emptydir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir/there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=../there time=1610421833.0 flags=none 

Most definitely an improvement in both space and convenience, while still retaining the relevant information. Note that if you want the sha256digest in the formatted output, you need to pass the -K to both mtree invocations. I could have done that here, but it is easier to read without it.

mtree can verify a specification in either format. Given what I'm about to show you about bsdtar, this should illustrate why I bothered to package mtree-netbsd for Debian.

Unlike fmtree, the mtree -U command will not adjust modification times based on the spec, but it will report on differences.

Tool option: bsdtar (Debian package: libarchive-tools)

bsdtar is a fascinating program that can work with many formats other than just tar files. Among the formats it supports is the NetBSD mtree "pleasant" format (mtree -C compatible).

bsdtar can also convert between the formats it supports. So, put this together: bsdtar can convert a tar file to an mtree specification without extracting the tar file. bsdtar can also use an mtree specification to override the permissions on files going into tar -c, so it is a way to prepare a tar file with things owned by root without resorting to tools like fakeroot.

Let's look at how this can work:

$ cd test
$ bsdtar --numeric -cf - --format=mtree .

. time=1610472086.318593729 mode=755 gid=1000 uid=1000 type=dir
./empty time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=0
./foo nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./hi nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./ormat\075mtree time=1610472086.318593729 mode=644 gid=1000 uid=1000 type=file size=5632
./there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=hi
./emptydir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir/there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=../there

You can use mtree -U to verify that as before. With the --options mtree: set, you can also add hashes and similar to the bsdtar output. Since bsdtar can use input from tar, pax, cpio, zip, iso9660, 7z, etc., this capability can be used to create verification of the files inside quite a few different formats. You can convert with bsdtar -cf output.mtree --format=mtree @input.tar. There are some foibles with directly using these converted files with mtree -U, but usually minor changes will get it there.
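As a concrete sketch of that tar-to-mtree conversion (the file names are made up for illustration, and the first line guards against libarchive-tools not being installed):

```shell
# Guard: skip gracefully if bsdtar (libarchive-tools) isn't installed.
command -v bsdtar >/dev/null || { echo "bsdtar not installed"; exit 0; }

tmp=$(mktemp -d)
echo hi > "$tmp/hi"
tar -cf "$tmp/backup.tar" -C "$tmp" hi

# Convert the tar file to an mtree specification without extracting it;
# @archive tells bsdtar to read entries from an existing archive.
bsdtar -cf "$tmp/backup.mtree" --format=mtree "@$tmp/backup.tar"

cat "$tmp/backup.mtree"    # lists hi with its type, size, mode and mtime
rm -rf "$tmp"
```

The resulting specification can then be checked against a live tree with mtree, as described above.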

Side mention: stat(1) (Debian package: coreutils)

This tool isn't included because it won't operate recursively, but it belongs in the same toolbox.

Putting It Together

I will still be developing a complete non-ZFS backup system for NNCP (or UUCP) in a future post. But in the meantime, here are some ideas you can reflect on:

  • Let's say your backup scheme involves sending a full backup every night. On the source system, you could pipe the generated tar file through something like tee >(bsdtar -cf backup.mtree --format=mtree @-) to generate an mtree file in-band while generating the tar file. This mtree file could be shipped over for verification.
  • Perhaps your backup scheme involves sending incremental backup data via rdup or even ZFS, but you would like to periodically verify that everything is good -- that an incremental didn't miss something. Something like mtree -K sha256 -c -x -p / | mtree -C -K sha256 would let you accomplish that.

I will further develop at least one of these ideas in a future post.

Bonus: cross-tool comparisons

In my mtree-netbsd packaging, I added tests like this to compare between tools:

fmtree -c -K $(MTREE_KEYWORDS) | mtree
mtree -c -K $(MTREE_KEYWORDS) | sed -e 's/\(md5\|sha1\|sha256\|sha384\|sha512\)=/\1digest=/' -e 's/rmd160=/ripemd160digest=/' | fmtree
bsdtar -cf - --options 'mtree:uname,gname,md5,sha1,sha256,sha384,sha512,device,flags,gid,link,mode,nlink,size,time,uid,type,uname' --format mtree . | mtree

Planet DebianMolly de Blanc: 1028 Words on Free Software

The promise of free software is a near-future utopia, built on democratized technology. This future is just and it is beautiful, full of opportunity and fulfillment for everyone everywhere. We can create the things we dream about when we let our minds wander into the places they want to. We can be with the people we want and need to be, when we want and need to.

This is currently possible with the technology we have today, but its availability is limited by the reality of the world we live in – the injustice, the inequity, the inequality. Technology runs the world, but it does not serve the interests of most of us. In order to create a better world, our technology must be transparent, accountable, trustworthy. It must be just. It must be free.

The job of the free software movement is to demonstrate that this world is possible by living its values now: justice, equity, equality. We build them into our technology, and we build technology that make it possible for these values to exist in the world.

At the Free Software Foundation, we liked to say that we used all free software because it was important to show that we could. You can do anything with free software, so we did everything with it. We demonstrated the importance of unions for tech workers and non-profit workers by having one. We organized collectively and protected our rights for the sake of ourselves and one another. We had non-negotiable salaries, based on responsibility level and position. That didn’t mean we worked in an office free from the systemic problems that plague workplaces everywhere, but we were able to think about them differently.

Things were this way because of Richard Stallman – but I view his influence on these things as negative rather than positive. He was a cause that forced these outcomes, rather than being supportive of the desires and needs of others. Rather than indulge in gossip or stories, I would like to jump to the idea that he was supposed to have been deplatformed in October 2019. In resigning from his position as president of the FSF, he certainly lost some of his ability to reach audiences. However, Richard still gives talks. The FSF continues to use his image and rhetoric in their own messaging and materials. They gave him time to speak at their annual conference in 2020. He maintains leadership in the GNU project and otherwise within the FSF sphere. The people who empowered him for so many years are still in charge.

Richard, and the continued respect and space he is given, is not the only problem. It represents a bigger problem. Sexism and racism (among others) run rampant in the community. This happens because of bad actors and, more significantly, by the complacency of organizations, projects, and individuals afraid of losing contributors, respect, or funding. In a sector that has so much money and so many resources, women are still being paid less than men; we deny people opportunities to learn and grow in the name of immediate results; people who aren’t men, who aren’t white, are abused and harassed; people are mentally and emotionally taken advantage of, and we are coerced into burn out and giving up our lives for these companies and projects and we are paid for tolerating all of this by being told we’re doing a good job or making a difference.

But we’re not making a difference. We’re perpetuating the worst of the status quo that we should be fighting against. We must not continue. We cannot. We need to live our ideals as they are, and take the natural next steps in their evolution. We cannot have a world of just technology when we live in a world of exclusion; we cannot have free software if we continue to allow, tolerate, and laud the worst of us. I’ve been in and around free software for seventeen years. Nearly every part of it I’ve participated in has members and leadership that benefit from allowing and encouraging the continuation of maleficence and systemic oppression.

We must purge ourselves of these things – of sexism, racism, injustice, and the people who continue and enable it. There is no space to argue over whether a comment was transphobic – if it hurt a trans person then it is transphobic and it is unacceptable. Racism is a global problem and we must be anti-racist or we are complicit. Sexism is present and all men benefit from it, even if they don’t want to. These are free software issues. These are things that plague software, and these are things software reinforces within our societies.

If a technology is exclusionary, it does not work. If a community is exclusionary, it must be fixed or thrown away. There is no middle ground here. There is no compromise. Without doing this, without taking the hard, painful steps to actually live the promise of user freedom and everything it requires and entails, our work is pointless and free software will fail.

I don’t think it’s too late for there to be a radical change – the radical change – that allows us to create the utopia we want to see in the world. We must do that by acknowledging that just technology leads to a just society, and that a just society allows us to make just technology. We must do that by living within the principles that guide this future now.

I don’t know what will happen if things don’t change soon. I recently saw someone comment that change doesn’t happen unless one person is willing to sacrifice everything to make that change, to lead and inspire others to play small parts. This is unreasonable to ask of or expect from someone. I’ve been burning myself out to meet other people’s expectations for seventeen years, and I can’t keep doing it. Of course I am not alone, and I am not the only one working on and occupied by these problems. More people must step up, not just for my sake, but for the sake of all of us, the work free software needs to do, and the future I dream about.

Planet DebianPetter Reinholdtsen: Latest Jami back in Debian Testing, and scriptable using dbus

After a lot of hard work by its maintainer Alexandre Viau and others, the decentralized communication platform Jami (earlier known as Ring) managed to get its latest version into Debian Testing. Several of its dependencies have caused build and propagation problems, which all seem to be solved now.

In addition to the fact that Jami is decentralized, similar to how bittorrent is decentralized, I first of all like how it is not connected to external IDs like phone numbers. This allows me to set up computers to send me notifications using Jami without having to get a phone number for each computer. Automatic notification via Jami is also made trivial thanks to the provided client side API (as a DBus service). Here is my bourne shell script demonstrating how to let any system send a message to any Jami address. It will create a new identity before sending the message, if no Jami identity exists already:

#!/bin/sh
#
# Usage: $0 <jami-address> <message>
#
# Send <message> to <jami-address>, create local jami account if
# missing.
#
# License: GPL v2 or later at your choice
# Author: Petter Reinholdtsen

if [ -z "$HOME" ] ; then
    echo "error: missing \$HOME, required for dbus to work"
    exit 1
fi

# (The pidfile location, socket path, account-creation arguments and
# message payload below were lost in the HTML version of this script and
# are reconstructed; adjust to taste.)
PIDFILE="$HOME/.cache/jami-dbus-session.pid"
DBUSLAUNCH=/usr/bin/dbus-launch

# First, get dbus running if not already running
if [ -e $PIDFILE ] ; then
    . $PIDFILE
    if ! kill -0 $DBUS_SESSION_BUS_PID 2>/dev/null ; then
        unset DBUS_SESSION_BUS_ADDRESS
    fi
fi
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "$DBUSLAUNCH" ]; then
    DBUS_SESSION_BUS_ADDRESS="unix:path=$HOME/.cache/jami-dbus-session.socket"
    dbus-daemon --session --address="$DBUS_SESSION_BUS_ADDRESS" --nofork --nopidfile --syslog-only < /dev/null > /dev/null 2>&1 3>&1 &
    (
        echo DBUS_SESSION_BUS_PID=$!
        echo DBUS_SESSION_BUS_ADDRESS=\""$DBUS_SESSION_BUS_ADDRESS"\"
        echo export DBUS_SESSION_BUS_ADDRESS
    ) > $PIDFILE
    . $PIDFILE
fi

dringop() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

dringopreply() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session --print-reply \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

firstaccount() {
    dringopreply ConfigurationManager getAccountList | \
      grep string | awk -F'"' '{print $2}' | head -n 1
}

account=$(firstaccount)

if [ -z "$account" ] ; then
    echo "Missing local account, trying to create it"
    dringop ConfigurationManager addAccount \
        dict:string:string:"Account.type","RING"    # arguments reconstructed
    account=$(firstaccount)
    if [ -z "$account" ] ; then
        echo "unable to create local account"
        exit 1
    fi
fi

# Not using dringopreply to ensure $2 can contain spaces
dbus-send --print-reply --session \
  --dest=cx.ring.Ring \
  /cx/ring/Ring/ConfigurationManager \
  cx.ring.Ring.ConfigurationManager.sendTextMessage \
  string:"$account" string:"$1" \
  dict:string:string:"text/plain","$2"              # payload map reconstructed

If you want to check it out yourself, visit the Jami system project page to learn more, and install the latest Jami client from Debian Unstable or Testing.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Kevin RuddCNBC: Washington, Beijing and Taiwan



11 JANUARY 2021


Topics: China, Taiwan, Mike Pompeo, Donald Trump, Joe Biden

The post CNBC: Washington, Beijing and Taiwan appeared first on Kevin Rudd.

Kevin RuddCrikey: Murdoch cannot whitewash his role in the most destructive presidency in US history

This article first appeared on Crikey on 12 January 2021.

Donald Trump may have lit the match that caused his country’s turmoil, but it was Rupert Murdoch who crammed the joint full of explosives.

His systematic manipulation and radicalisation of the American right-wing polity at large, and the Republican Party in particular, should ring alarm bells throughout our nation, including in the office of the prime minister.

Over the past 25 years, Murdoch has used his Fox News network to unite American conservatives under his banner and shift them from the centre right to the far right with an intoxicating diet of grievance-driven, race-fuelled identity politics.

By the time Trump announced his presidential campaign, these voters had been indoctrinated into a universe of “fake news”, “alternative facts” and elaborate conspiracy theories. The operational definition of fake news, in the eyes of the Trump presidency, became anything other than Fox News.

After some initial disagreements, Murdoch backed Trump all the way to the White House. And they kept in lockstep throughout the Trump presidency.

Trump would often repeat publicly the talking points he’d picked up from Fox. Newt Gingrich, the former Republican speaker, recommended booking interviews on Trump’s favourite shows as among the most effective ways of communicating directly with the Oval Office. And nothing delighted Murdoch’s swaggering ego, hard-right ideology and business tax interests more.

Fox covered up for Trump’s mistakes, trying desperately to keep track with his shifting claims about the mildness or severity of the coronavirus.

When Fox’s news reporters found nothing newsworthy in documents relating to Joe Biden’s son Hunter, Murdoch’s New York Post (under the watchful eye of his leading Australian henchman Col Allan) swooped in by pressuring junior reporters to put their names to its dubious front page story.

Like Trump, Murdoch’s news outlets also gave succour to the dangerous QAnon cult, with the devastating consequences witnessed in Washington last week.

It is now beyond time for Scott Morrison to stand up and denounce QAnon before it can fully take root here in Australia. Even if it strains the prime minister’s personal friendships with members of the far right, he should send the sort of crystal-clear signal that Trump proved himself unable to before it was too late.

Fox News was also buoyed by its reputation as the president’s favourite network. In 2020, six of the seven top American cable programs were on Fox News.

Murdoch’s gamble also paid off personally with Trump’s tax cuts delivering him a US$2 billion gift courtesy of American taxpayers.

Make no mistake: Trump may have been inaugurated as president, but Murdoch was never far off — always seeking to influence and ventilate Trump’s increasingly deranged worldview. Nothing can erase that fact, no matter how much Murdoch tries to dissociate himself from the outgoing president. Murdoch cannot whitewash his central role in the single most destructive presidency in US history, including to America’s critical alliance relationships.

Meanwhile, Murdoch is working on taking Australia down the same path. Sky News Australia, once dismissed as a niche outlet with a tiny viewership of right-wing nut jobs, is spreading its wings online. Its YouTube channel has 1.1 million subscribers, and its content is also broadcast free-to-air across 30 regional markets in every state and territory.

If you watched Sky News’ coverage of last week’s siege of the Capitol, it hit the same overall themes as Fox News: namely, that although violence is, of course, to be condemned, let’s be honest, it’s the fault of the meddling elites who refused to hear the truth about Trump’s fraudulent electoral defeat.

In Murdoch’s hands, Sky News represents a dangerous tool able to amplify the power of his print monopoly. He will use it to further radicalise the Liberal and National Party base and increase his capacity to guide future preselections and leadership contests. The Coalition is at risk of becoming a fully captured subsidiary of the Murdoch organisation as he pushes them further and further to the far right.

The Liberals would do well to learn the lessons of the Republicans, who were so intoxicated by Fox News’ short-term political usefulness that they didn’t care if it radicalised their base. Over time these voters became detached from long-standing Republican values and slid into a culture of grievance, “all government is evil”, ethnic tribalism, identity politics, and the conspiratorial world of QAnon.

The core problem in this country is that the political class, and most journalists, are too frightened to engage in a full and frank debate about the issue. In public life, Murdoch is he who shall not be named.

This is partly why more than 500,000 Australians signed a national petition last year calling on the parliament to establish a Murdoch royal commission, which would gather evidence and make recommendations at arm’s length from politicians who are too vulnerable to Murdoch’s wrath.

Off the back of that petition, the Senate will soon begin conducting hearings — an inquiry that, as of Tuesday, is still accepting submissions.

I am separately urging Australians to join me in taking direct action against Murdoch’s cash cow in Australia by pledging to say no to its services until News Corp ceases its climate change vandalism.

More than 5000 Australians have already joined me in that pledge. The numbers are growing fast.

This year will be crucial in the campaign to establish a Murdoch royal commission to preserve our democracy by tackling monopolies wherever they exist in our news media. Australians observing what has happened in America have detected the whiff of gunpowder, and they haven’t a moment to lose.

The post Crikey: Murdoch cannot whitewash his role in the most destructive presidency in US history appeared first on Kevin Rudd.

Planet DebianRussell Coker: PSI and Cgroup2

In the comments on my post about Load Average Monitoring [1] an anonymous person recommended that I investigate PSI. As an aside, why do I get so many great comments anonymously? Don’t people want to get credit for having good ideas and learning about new technology before others?

PSI is the Pressure Stall Information subsystem for Linux that is included in kernels 4.20 and above; if you want to use it in Debian then you need a kernel from Testing or Unstable (Buster has kernel 4.19). PSI was originally developed at Facebook, and the main Facebook page about it is the place to start reading [2].

I am a little confused by the actual numbers I get out of PSI. While for the load average I can often see where they come from (e.g. have 2 processes each taking 100% of a core and the load average will be about 2), it’s difficult to work out where the PSI numbers come from. For my own use I decided to treat them as unscaled numbers that just indicate problems (a higher number is worse) and not worry too much about what the numbers really mean.

With the cgroup2 interface which is supported by the version of systemd in Testing (and which has been included in Debian backports for Buster) you get PSI files for each cgroup. I’ve just uploaded version 1.3.5-2 of etbemon (package mon) to Debian/Unstable which displays the cgroups with PSI numbers greater than 0.5% when the load average test fails.

System CPU Pressure: avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510
/system.slice avg10=0.86 avg60=0.92 avg300=0.97 total=18238772699
/system.slice/system-tor.slice avg10=0.85 avg60=0.69 avg300=0.60 total=11996599996
/system.slice/system-tor.slice/tor@default.service avg10=0.83 avg60=0.69 avg300=0.59 total=5358485146

System IO Pressure: avg10=18.30 avg60=35.85 avg300=42.85 total=310383148314
 full avg10=13.95 avg60=27.72 avg300=33.60 total=216001337513
/system.slice avg10=2.78 avg60=3.86 avg300=5.74 total=51574347007
/system.slice full avg10=1.87 avg60=2.87 avg300=4.36 total=35513103577
/system.slice/mariadb.service avg10=1.33 avg60=3.07 avg300=3.68 total=2559016514
/system.slice/mariadb.service full avg10=1.29 avg60=3.01 avg300=3.61 total=2508485595
/system.slice/matrix-synapse.service avg10=2.74 avg60=3.92 avg300=4.95 total=20466738903
/system.slice/matrix-synapse.service full avg10=2.74 avg60=3.92 avg300=4.95 total=20435187166

Above is an extract from the output of the loadaverage check. It shows that Tor is a major user of CPU time (the VM runs a Tor relay node and has close to 100% of one core devoted to that task). It also shows that MariaDB and Matrix are the main users of disk IO. When I installed Matrix the Debian package told me that using SQLite would give lower performance than MySQL, but that didn’t seem like a big deal as the server only has a few users. Maybe I should move Matrix to the MariaDB instance to improve overall system performance.
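The PSI line format shown above is simple enough to parse in a few lines of code. As a sketch, here is how the avg fields could be extracted in Rust (my own illustration for this post, not how etbemon does it; the same format appears in /proc/pressure/cpu and in each cgroup’s cpu.pressure file on a PSI-enabled kernel):

```rust
// Parse the avg10/avg60/avg300 fields from a PSI line such as
// "some avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510".
fn parse_psi_avgs(line: &str) -> Option<(f64, f64, f64)> {
    let mut avg10 = None;
    let mut avg60 = None;
    let mut avg300 = None;
    for field in line.split_whitespace() {
        // Each field of interest looks like "key=value".
        if let Some((key, value)) = field.split_once('=') {
            let v = value.parse::<f64>().ok();
            match key {
                "avg10" => avg10 = v,
                "avg60" => avg60 = v,
                "avg300" => avg300 = v,
                _ => {} // ignore "total" and anything else
            }
        }
    }
    Some((avg10?, avg60?, avg300?))
}

fn main() {
    let line = "some avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510";
    let (a10, a60, a300) = parse_psi_avgs(line).unwrap();
    println!("avg10={} avg60={} avg300={}", a10, a60, a300);
}
```

A monitoring tool would read /proc/pressure/cpu (or a cgroup’s cpu.pressure) and feed each line through a parser like this before applying thresholds.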

So far I have not written any code to display the memory PSI files. The systems I run at the moment don’t lack RAM, so I don’t have a good test case for this. I welcome patches from people who have the ability to test this and get some benefit from it.

We are probably about 6 months away from a new release of Debian and this is probably the last thing I need to do to make etbemon ready for that.

Worse Than FailureCodeSOD: Put in Order

Rust is one of the "cool" languages these days. It promises all the low-level power of C with memory safety and "modern" programming conventions like iterables and maps. High performance, expressive language, low-level power seems like a great combination for certain domains.

Now, Jenna Winchester needed to do some Morton coding, or Z-indexing: an algorithm that takes multidimensional points and turns them into 1-dimensional indices in a way that maintains their spatial relationships; essentially a fast way of traversing a quadtree. It’s a fairly simple and fast algorithm, especially if you implement it using bitwise operations. A naive implementation, without optimizations, can do its job in very few CPU cycles, relatively speaking.
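To make "naive" concrete, here is roughly what such a bitwise implementation looks like for the 2-D case (a sketch of my own for this article, not Jenna’s code or the library’s):

```rust
// Naive Morton encoding: interleave the bits of two u32 coordinates
// into one u64, so that bit i of x lands at position 2*i and bit i
// of y lands at position 2*i + 1.
fn morton_encode_2d(x: u32, y: u32) -> u64 {
    let mut result = 0u64;
    for i in 0..32 {
        result |= ((x as u64 >> i) & 1) << (2 * i);
        result |= ((y as u64 >> i) & 1) << (2 * i + 1);
    }
    result
}

fn main() {
    // x = 0b011, y = 0b101 interleave to 0b100111 = 39.
    println!("{}", morton_encode_2d(3, 5)); // prints 39
}
```

No heap allocation, no intermediate data structures; real implementations replace the loop with a handful of shift-and-mask steps, but even this version is just a few dozen instructions.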

And while Jenna could have implemented her own version of it, never reinvent a wheel that someone else probably has. So she tracked down a Rust library (or crate, if we're using Rust terminology) which promised to do the job. Jenna's expectation was that she could feed in her 5-dimensional point, and get back the z-index by simply doing something like let output = input.z_index(). Let's call the library morty_code, because we should focus more on the painful experience of working with a badly designed API than worry about calling out a library for a mildly niche language for a very specific problem domain.

That, of course, would be too easy. The code which Jenna needed to write to perform the core purpose of what the library claimed to do was this:

fn morton_encode_u8_5d_zdex (input: [u8; 5]) -> u64 {
    use zorder::*;
    let usize_bits = 8 * core::mem::size_of::<usize>();
    let transmute_input = |x: &u8| -> FromU8 { (*x).into() };
    input                     // Take the original input,
        .iter()               // element by element...
        .map(transmute_input) // Transform each to the custom input types zindex needs...
        .z_index()            // Compute the result...
        .unwrap()             // Panic if there's an error... (Can there be one? Who knows!)
        .iter_storage()       // Take the result usize by usize...
        .fold(0 as u64, |acc, a| (acc << usize_bits) | a as u64)
        // ...and finally, unify the iterator of usizes into a single u64.
        // Can you just FEEL the ergonomics?
}

Now, even if you don’t know Rust (which I don’t), this looks menacing, even before you read Jenna’s comments. Here’s the key thing: you can compute the Z-index using bitwise operations. The library author, however, didn’t understand this, or didn’t care, and instead used a different data structure: a vector of bits. The line where we define transmute_input invokes FromU8, which takes an 8-bit number and turns it into an 8-item vector of bits. And despite knowing that it will always need exactly 8 items to hold 8 bits, the actual implementation of FromU8 dynamically allocates that memory.

So, with that in mind, we can trace through the implementation. We take our 5 dimensions of 8-bit integers as input. We iterate across them, converting each to a vector-of-bits using .map(transmute_input). We can then calculate the z_index(), which comes back wrapped in a type we have to unwrap(). We chunk the result back up using that iter_storage() and then finally combine the chunks into a single u64 using fold to bitshift them into place.

If that seems like a lot of work to implement a simple algorithm first described in the 1960s, you'd be right. Jenna ran some performance tests comparing her naive implementation with the implementation from this library:

I checked the assembly that's emitted for a simple case of two u32s to one u64. A very naive version needed 600 machine instructions. morty_code needed more than three thousand. And since it contains multiple subroutines, morty_code turns out to be two orders of magnitude slower than the naive version.

But hey, we wouldn't want to use the naive version, because we'd have to worry about things like edge cases and faulty assumptions which surely means the library has to be more correct, right?

I whipped up a couple simple tests to ensure that the functions operate correctly. Surprise! The morty_code version doesn't. It ends up putting the high-significance bits at the end and the low-significance bits at the beginning. Printing the vector-of-bits directly shows the result correctly, but printing it after transforming it into a u64 shows the bits reversed.

Which is to say that the internal representation surprises you with its endianness. I suspect that it was that endian problem which initially led to the creation of the vector-of-bits type that’s used internally, but there are far easier ways to resolve conflicts with byte order.

Jenna contacted the original developer of the library, hoping to maybe help improve the experience for other developers.

This was the point at which I decided that the code has absolutely no redeeming features. A few fruitless posts of dialogue later, I realised that talking to TDWTF would be much more productive than talking to the maintainer. So... here we are.

Here we are, but where is "here" on the z-order curve?

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianRussell Coker: RISC-V and Qemu

RISC-V is the latest RISC architecture to become popular. It is the fifth RISC architecture from the University of California, Berkeley. It seems to be a competitor to ARM due to not having license fees or restrictions on alterations to the architecture (something you have to pay extra for when using ARM). RISC-V seems to be the most popular architecture to implement in an FPGA.

When I first tried to run RISC-V under QEMU it didn’t work, which was probably due to running Debian/Unstable on my QEMU/KVM system and there being QEMU bugs in Unstable at the time. I have just tried it again and got it working.

The Debian Wiki page about RISC-V is pretty good [1]. The instructions there got it going for me. One thing I wasted some time on before reading that page was trying to get a netinst CD image, which is what I usually do for setting up a VM. Apparently there isn’t RISC-V hardware that boots from a CD/DVD so there isn’t a Debian netinst CD image. But debootstrap can install directly from the Debian web server (something I’ve never wanted to do in the past) and that gave me a successful installation.

Here are the commands I used to setup the base image:

apt-get install debootstrap qemu-user-static binfmt-support debian-ports-archive-keyring

debootstrap --arch=riscv64 --keyring /usr/share/keyrings/debian-ports-archive-keyring.gpg --include=debian-ports-archive-keyring unstable /mnt/tmp

I first tried running RISC-V Qemu on Buster, but even ls didn’t work properly and the installation failed.

chroot /mnt/tmp bin/bash
# ls -ld .
/usr/bin/ls: cannot access '.': Function not implemented

When I ran it on Unstable, ls worked but strace didn’t work in a chroot; this gave enough functionality to complete the installation.

chroot /mnt/tmp bin/bash
# strace ls -l
/usr/bin/strace: test_ptrace_get_syscall_info: PTRACE_TRACEME: Function not implemented
/usr/bin/strace: ptrace(PTRACE_TRACEME, ...): Function not implemented
/usr/bin/strace: PTRACE_SETOPTIONS: Function not implemented
/usr/bin/strace: detach: waitpid(1602629): No child processes
/usr/bin/strace: Process 1602629 detached

When running the VM the operation was noticeably slower than the emulation of PPC64 and S/390x, which both ran at an apparently normal speed. When running on a server with an equivalent speed CPU, an ssh login was obviously slower due to the CPU time taken for encryption; an ssh connection from a system on the same LAN took 6 seconds to establish. I presume that because RISC-V is a newer architecture there hasn’t been as much effort put into optimising the Qemu emulation, and that a future version of Qemu will be faster. But I don’t think that Debian/Bullseye will give good Qemu performance for RISC-V; probably more changes are needed than can happen before the freeze. Maybe a version of Qemu with better RISC-V performance can be uploaded to backports some time after Bullseye is released.

Here’s the Qemu command I use to run RISC-V emulation:

qemu-system-riscv64 -machine virt -device virtio-blk-device,drive=hd0 -drive file=/vmstore/riscv,format=raw,id=hd0 -device virtio-blk-device,drive=hd1 -drive file=/vmswap/riscv,format=raw,id=hd1 -m 1024 -kernel /boot/riscv/vmlinux-5.10.0-1-riscv64 -initrd /boot/riscv/initrd.img-5.10.0-1-riscv64 -nographic -append net.ifnames=0 noresume security=selinux root=/dev/vda ro -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -device virtio-net-device,netdev=net0,mac=02:02:00:00:01:03 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Currently the program /usr/sbin/sefcontext_compile from the selinux-utils package needs execmem access on RISC-V while it doesn’t on any other architecture I have tested. I don’t know why, and support for debugging such things seems to be in the early stages of development; for example the execstack program doesn’t work on RISC-V now.

RISC-V emulation in Unstable seems adequate for people who are serious about RISC-V development. But if you want to just try a different architecture then PPC64 and S/390 will work better.

Planet DebianJohn Goerzen: The Good, Bad, and Scary of the Banning of Donald Trump, and How Decentralization Makes It All Better

It is undeniable that banning Donald Trump from Facebook, Twitter, and similar sites is a benefit for the moment. It may well save lives, perhaps lots of lives. But it raises quite a few troubling issues.

First, as EFF points out, these platforms have privileged speakers with power, especially politicians, over regular users. For years now, it has been obvious to everyone that Donald Trump has been violating policies on both platforms, and yet they did little or nothing about it. The result we saw last week was entirely foreseeable — and indeed, WAS foreseen, including by elements in those companies themselves. (ACLU also raises some good points)

Contrast that with how others get treated. Facebook, two days after the coup attempt, banned Benjamin Wittes, apparently because he mentioned an Atlantic article opposed to nutcase conspiracy theories. The EFF has also documented many more egregious examples: taking down documentation of war crimes, childbirth images, black activists showing the racist messages they received, women discussing online harassment, etc. The list goes on; YouTube, for instance, has often been promoting far-right violent videos while removing peaceful LGBTQ ones.

In short, have we simply achieved legal censorship by outsourcing it to dominant corporations?

It is worth pausing at this point to recognize two important principles:

First, that we do not see it as right to compel speech.

Secondly, that there exist communications channels and other services that nobody is calling on to suspend Donald Trump.

Let’s dive into those a little bit.

There have been no prominent calls for AT&T, Verizon, Gmail, or whomever provides Trump and his campaign with cell phones or email to suspend their service to him. Moreover, the gas stations that fuel his vehicles and the airports that service his plane continue to provide those services, and nobody has seriously questioned that, either. Even his Apple phone that he uses to post to Twitter remains, as far as I know, fully active.

Secondly, imagine you were starting up a small web forum focused on raising tomato plants. It is, and should be, well within your rights to keep tomato-haters out, as well as people that have no interest in tomatoes but would rather talk about rutabagas, politics, or Mars. If you are going to host a forum about tomatoes, you have the right to keep it a forum about tomatoes; you cannot be forced to distribute someone else’s speech. Likewise in traditional media, a newspaper cannot be forced to print every letter to the editor in full.

In law, there is a notion of a common carrier, that provides services to the general public without discrimination. Phone companies and ISPs fall under this.

Facebook, Twitter, and tomato sites don’t. But consider what happens if Facebook bans you. You might be using Facebook-owned Whatsapp to communicate with family and friends, and suddenly find yourself unable to ask someone to pick you up. Or your treasured family photos might be in Facebook-owned Instagram, lost forever. It’s not just Facebook; similar things happen with Google, locking people out of their phones and laptops, their emails, even their photos.

Is it right that Facebook and Google aren’t regulated as common carriers? Perhaps, or perhaps we need some line of demarcation between their speech-to-the-public services (Facebook timeline posts, YouTube) and private communication (Whatsapp, Gmail). It’s a thorny issue; should government be regulating speech instead? That’s also fraught. So is corporate control.

Decentralization Helps Dramatically

With email, you get to pick your email provider (yes, there are two or three big ones, but still plenty of others). Each email provider will have its own set of things it considers acceptable, and its own set of other servers and accounts it’s willing to exchange mail with. (It is extremely common for mail providers to choose not to accept mail from various other mail servers based on ISP, IP address, reputation, and so forth.)

What if we could do something like that for Twitter and Facebook?

You could join whatever instance you like. Maybe one instance is all about art and doesn’t talk about politics. Another is all about Free Software and has no advertising. And then there are plenty of open instances that accept anything that’s respectful. And, like email, people on one server can interact with those using another just as easily as if they were using the same one.

Well, this isn’t hypothetical; it already exists in the Fediverse. The most common option is Mastodon, and it so happens that a month ago I wrote about its benefits for other reasons, and included some links on getting started.

There is no reason that we must all let our online speech be controlled by companies with a profit motive to keep hate speech on their platforms. There is no reason that we must all have a single set of rules, or accept strong corporate or government control, either. The quality of conversation on Mastodon is far higher than either Twitter or Facebook; decentralization works and it’s here today.


Planet DebianDirk Eddelbuettel: BH 1.75.0-0: New upstream release, added Beast


Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over 100 individual libraries. The BH package provides a sizeable subset of header-only libraries for use by R.

Version 1.75.0 of Boost was released in December, right on schedule with their April, August and December releases. I now try to follow these releases at a lower (annual) cadence and prepared BH 1.75.0-0 in mid-December. Extensive reverse-depends checks revealed a need for changes in a handful of packages whose maintainers I contacted then. With one exception, everybody responded in kind and brought updated packages to CRAN, which permitted us to upload the package there two days ago. Thanks to this planned and coordinated upload, the package is now available on CRAN a mere two days later. My thanks to the maintainers of these packages for helping it along; these prompt responses really are appreciated. The version on CRAN is the same as the one announced via drat in this tweet asking for testing help. If you installed that version, you are still current, as no changes were required since December and CRAN now contains the same file.

This release adds one new library: Boost Beast, an http and websocket library built on top of Boost Asio. Other changes are highlighted below.

Changes in version 1.75.0-0 (2020-12-12)

  • Removed file NAMESPACE as the package has neither R code, nor a shared library to load

  • The file LICENSE_1_0.txt is now included (as requested in #73)

  • Added new beast library (as requested in #74)

  • Upgraded to Boost 1.75.0 (#75)

Via CRANberries, there is a diffstat report relative to the previous release. Its final line is quite impressive: 3485 files changed, 100838 insertions(+), 84890 deletions(-). Wow.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityUbiquiti: Change Your Password, Enable 2FA

Ubiquiti, a major vendor of cloud-enabled Internet of Things (IoT) devices such as routers, network video recorders, security cameras and access control systems, is urging customers to change their passwords and enable multi-factor authentication. The company says an incident at a third-party cloud provider may have exposed customer account information and credentials used to remotely manage Ubiquiti gear.

In an email sent to customers today, Ubiquiti Inc. [NYSE: UI] said it recently became aware of “unauthorized access to certain of our information technology systems hosted by a third party cloud provider,” although it declined to name that provider.

The statement continues:

“We are not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed. This data may include your name, email address, and the one-way encrypted password to your account (in technical terms, the passwords are hashed and salted). The data may also include your address and phone number if you have provided that to us.”

Ubiquiti has not yet responded to requests for more information, but the notice was confirmed as official in a post on the company’s user support forum.

The warning from Ubiquiti carries particular significance because the company has made it fairly difficult for customers using the latest Ubiquiti firmware to interact with their devices without first authenticating through the company’s cloud-based systems.

This has become a sticking point for many Ubiquiti customers, as evidenced by numerous threads on the topic in the company’s user support forums over the past few months.

“While I and others do appreciate the convenience and option of using hosted accounts, this incident clearly highlights the problem with relying on your infrastructure for authenticating access to our devices,” wrote one Ubiquiti customer today whose sentiment was immediately echoed by other users. “A lot of us cannot take your process for granted and need to keep our devices offline during setup and make direct connections by IP/Hostname using our Mobile Apps.”

To manage your security settings on a Ubiquiti device, log in to your account and click on ‘Security’ from the left-hand menu.

1. Change your password
2. Set a session timeout value
3. Enable 2FA


According to Ubiquiti’s investment literature, the company has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

This is a developing story that may be updated throughout the day.

Planet DebianBastian Venthur: Dear Apple,

In the light of WhatsApp’s recent move to enforce new Privacy Agreements onto its users, alternative messenger services like Signal are currently gaining some more momentum.

While this sounds good, it is hard to believe that this will be more than a dent in WhatsApp’s user base. WhatsApp is way too ubiquitous, and the whole point of using such a service for most users is to use the one that everyone is using. Unfortunately.

Convincing WhatsApp users to additionally install Signal is hard: they already have SMS for the few people that are not using WhatsApp, now expecting them to install a third app for the same purpose seems ridiculous.

Android mitigates this problem a lot by allowing to make other apps — like Signal — the default SMS/MMS app on the phone. Suddenly people are able to use Signal for SMS/MMS and Signal messages transparently. Signal is smart enough to figure out if the conversation partner is using Signal and enables encryption, video calls and other features. If not, it just falls back to plain old SMS. All in the same app, very convenient for the user!

I don’t really get why the same thing is not possible on iOS? Apple is well known for taking things like privacy and security for its users seriously, and this seems like a low-hanging fruit. So dear Apple, wouldn’t now be a good time to team up with WhatsApp-alternatives like Signal to help the users to make the right choice?

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 27)

Here’s part twenty-seven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Cryptogram Finding the Location of Telegram Users

Security researcher Ahmed Hassan has shown that spoofing Android’s “People Nearby” feature allows him to pinpoint the physical location of Telegram users:

Using readily available software and a rooted Android device, he’s able to spoof the location his device reports to Telegram servers. By using just three different locations and measuring the corresponding distance reported by People Nearby, he is able to pinpoint a user’s precise location.


A proof-of-concept video the researcher sent to Telegram showed how he could discern the address of a People Nearby user when he used a free GPS spoofing app to make his phone report just three different locations. He then drew a circle around each of the three locations with a radius of the distance reported by Telegram. The user’s precise location was where all three intersected.
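The circle-drawing step is plain trilateration: subtracting the first circle’s equation from the other two leaves a small linear system in the target’s coordinates. A minimal Rust sketch of that step (my own illustration, not the researcher’s tooling):

```rust
// Given three known positions p and the distances d reported to each,
// the target lies where the three circles intersect. Subtracting the
// first circle equation (x-x1)^2 + (y-y1)^2 = d1^2 from the other two
// cancels the quadratic terms, leaving a 2x2 linear system in (x, y).
fn trilaterate(p: [(f64, f64); 3], d: [f64; 3]) -> Option<(f64, f64)> {
    let (x1, y1) = p[0];
    let (x2, y2) = p[1];
    let (x3, y3) = p[2];
    let a1 = 2.0 * (x2 - x1);
    let b1 = 2.0 * (y2 - y1);
    let c1 = d[0] * d[0] - d[1] * d[1] + x2 * x2 - x1 * x1 + y2 * y2 - y1 * y1;
    let a2 = 2.0 * (x3 - x1);
    let b2 = 2.0 * (y3 - y1);
    let c2 = d[0] * d[0] - d[2] * d[2] + x3 * x3 - x1 * x1 + y3 * y3 - y1 * y1;
    let det = a1 * b2 - a2 * b1;
    if det.abs() < 1e-12 {
        return None; // the three spoofed positions must not be collinear
    }
    Some(((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det))
}

fn main() {
    // Spoofed positions (0,0), (10,0), (0,10); distances to a user at (3,4).
    let d = [5.0, 65.0f64.sqrt(), 45.0f64.sqrt()];
    let (x, y) = trilaterate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)], d).unwrap();
    println!("user is near ({:.1}, {:.1})", x, y); // prints roughly (3.0, 4.0)
}
```

This is why three spoofed positions suffice: with exact distances the system has a unique solution whenever the three anchors are not collinear.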


Fixing the problem — or at least making it much harder to exploit it — wouldn’t be hard from a technical perspective. Rounding locations to the nearest mile and adding some random bits generally suffices. When the Tinder app had a similar disclosure vulnerability, developers used this kind of technique to fix it.
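The rounding half of that mitigation is a one-liner; a sketch in Rust (the grid size and function name here are my own, purely illustrative, and the random-jitter half is left out since it just adds noise on top of the snapped value):

```rust
// Snap a reported coordinate to a coarse grid before sending it, so
// that the distances a client can observe never resolve the target
// finer than the grid spacing.
fn snap_to_grid(value_deg: f64, grid_deg: f64) -> f64 {
    (value_deg / grid_deg).round() * grid_deg
}

fn main() {
    // Roughly 0.01 degrees of latitude is about 1.1 km.
    let reported = snap_to_grid(52.337128, 0.01);
    println!("reported latitude: {}", reported);
}
```

With all users in a cell reporting the cell's centre, the trilateration trick can only recover the cell, not the address.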

Cryptogram On US Capitol Security — By Someone Who Manages Arena-Rock-Concert Security

Smart commentary:

…I was floored on Wednesday when, glued to my television, I saw police in some areas of the U.S. Capitol using little more than those same mobile gates I had — the ones that look like bike racks that can hook together — to try to keep the crowds away from sensitive areas and, later, push back people intent on accessing the grounds. (A new fence that appears to be made of sturdier material was being erected on Thursday.) That’s the same equipment and approximately the same amount of force I was able to use when a group of fans got a little feisty and tried to get backstage at a Vanilla Ice show.


There’s not ever going to be enough police or security at any event to stop people if they all act in unison; if enough people want to get to Vanilla Ice at the same time, they’re going to get to Vanilla Ice. Social constructs and basic decency, not lightweight security gates, are what hold everyone except the outliers back in a typical crowd.


When there are enough outliers in a crowd, it throws the normal dynamics of crowd control off; everyone in my business knows this. Citizens tend to hold each other to certain standards - which is why my 40,000-person town does not have 40,000 police officers, and why the 8.3 million people of New York City aren’t policed by 8.3 million police officers.

Social norms are the fabric that make an event run smoothly — and, really, hold society together. There aren’t enough police in your town to handle it if everyone starts acting up at the same time.

I like that she uses the term “outliers,” and I make much the same points in Liars and Outliers.

Cryptogram Cloning Google Titan 2FA keys

This is a clever side-channel attack:

The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip, which acts as a secure element that stores the cryptographic secrets. Next, an attacker connects the chip to hardware and software that take measurements as the key is being used to authenticate on an existing account. Once the measurement-taking is finished, the attacker seals the chip in a new casing and returns it to the victim.

Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.

By observing the local electromagnetic radiations as the chip generates the digital signatures, the researchers exploit a side channel vulnerability in the NXP chip. The exploit allows an attacker to obtain the long-term Elliptic Curve Digital Signature Algorithm (ECDSA) private key designated for a given account. With the crypto key in hand, the attacker can then create her own key, which will work for each account she targeted.

The attack isn’t free, but it’s not expensive either:

A hacker would first have to steal a target’s account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning — were it ever to happen in the wild — would likely be done only by a nation-state pursuing its highest-value targets.

That last line about “nation-state pursuing its highest-value targets” is just not true. There are many other situations where this attack is feasible.

Note that the attack isn’t against the Google system specifically. It exploits a side-channel attack in the NXP chip. Which means that other systems are probably vulnerable:

While the researchers performed their attack on the Google Titan, they believe that other hardware that uses the A700X, or chips based on the A700X, may also be vulnerable. If true, that would include Yubico’s YubiKey NEO and several 2FA keys made by Feitian.

Rondam RamblingsRon prognosticates: Trump will pardon the capitol rioters

I don't think I'm going very far out on a limb to make this prediction, but I just wanted to get it on the record before it happened.  It seems like some low-lying prophetic fruit that I just felt like picking this morning.  (Oh, and I also predict that he will issue blanket pardons to himself and his family.) As long as I'm writing, an administrative note: I am now moderating all comments on

Worse Than FailureIt's a Gift

Tyra was standing around the coffee maker with her co-workers when their phones all dinged with an email from management.

Edgar is no longer employed at Initech. If you see him on the property, for any reason, please alert security.

"Well, that's about time," Tyra said.

They had all been expecting an email like that. Edgar had been having serious conflicts with management. The team had been expanding recently, and along with the expansion, new processes and new tooling were coming online. Edgar hated that. He hated having new co-workers who didn't know the codebase as intimately as he did. "My technical knowledge is a gift!" He hated that they were moving to a CI pipeline that had more signoffs and removed his control over development. "My ability to react quickly to needed changes is a gift!" He hated that management - and fellow developers - were requesting more code coverage in their tests. "I write good code the first time, because I've got a gift for programming!"

These conflicts escalated, never quite to screaming, but voices were definitely raised. "You're all getting in the way," was a common refrain from Edgar, whether it was to his new peers or to management or to the janitor who was taking too long to clean the restroom. It seemed like everyone knew Edgar was going to get fired but Edgar.

Six months later, the team was humming along nicely. Pretty much no one thought about Edgar, except maybe to regale newbies with a tale of the co-worker from hell. One day, Tyra finished a requirement, ensured all the tests were green in their CI server, and then submitted a pull request. One of her peers reviewed the request, merged it, and pushed it along to their CD pipeline.

Fortunately for them, part of the CD step was to run the tests again; one of the tests failed. The failing test was not anything related to any of the changes in Tyra's PR. In fact, the previous commit passed the unit test fine, and the two versions were exactly the same in source control.

Tyra and her peers dug in, trying to see what might have changed in the CD environment, or what was maybe wrong about the test. Before long, they were digging through the CD pipeline scripts themselves. They hadn't been modified in months, but was there maybe a bad assumption somewhere? Something time based?

No. As it turned out, after many, many hours of debugging, there were a few "extra" lines in one of the shell scripts. It would randomly select one of the Python files in the commit, and a small percentage of the time, it would choose a random line in the file and, on that line, replace the spaces with tabs. Since whitespace is syntactically significant in Python, that created output which failed with an IndentationError.
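The failure mode is easy to reproduce (a toy snippet, not the actual sabotaged script):

```python
# Simulate the sabotage: one line's leading spaces become a tab.
clean = "def f():\n    x = 1\n    return x\n"
broken = clean.replace("    return x", "\treturn x")

compile(clean, "<demo>", "exec")    # compiles fine
try:
    compile(broken, "<demo>", "exec")
except IndentationError as err:     # TabError is a subclass of IndentationError
    print("build fails:", type(err).__name__)
```

Because the tampering happened at build time, the file in source control still looked identical, which is exactly why the two "same" commits behaved differently.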

A quick blame confirmed that Edgar had left them that little gift. As for how it had gone unnoticed for so long? Well, for starters, he had left during that awkward transition period when they were migrating new tools. The standard code-review/peer-review processes weren't fully settled, so he was able to sneak in the commit. The probability that it would tamper with a file was very low, and it wouldn't repeat itself on the next build.

It was hard to say how many times this had caused a problem. If a developer saw the unit test fail after accepting the PR, they may have just triggered a fresh build manually. But, more menacingly, they didn't have 100% unit test coverage, and there were some older Python files (mostly written by Edgar) which had no unit tests at all. How many times might they have pushed broken files to production, only to have mysterious failures?

In the end, Edgar's last "gift" to the team was the need to audit their entire CI/CD pipeline to see if he left any more little "surprises".

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Planet DebianEnrico Zini: Viewing OpenStreetMap

weeklyOSM posts lots of interesting links. Here are some beautiful views of OpenStreetMap edits and renderings:

Planet DebianTim Retout: Blog Posts

Planet DebianIustin Pop: Dealing with evil ads


I usually don’t mind ads, as long as they’re not very intrusive. I get that the current media model is basically ad-funded, and that unless I want to pay $1/month or so to 50 web sites, I have to accept ads, so I don’t run an ad-blocker.

Sure, sometimes they’re annoying (hey YT, mid-roll ads are borderline), but I’ve also seen many good ads, as in interesting or even funny ones. Well, I don’t think I ever bought anything as a direct result of ads, so I don’t know how useful ads are for the companies, but hey, what do I care.

Except… there are a few ad networks that run what I would say are basically revolting ads. Things I don’t want to ever accidentally see while eating, or things that really make you go WTF. Maybe you know them, maybe you don’t, but I guess there are people who don’t know how to clean their ears, or people for whom a fast 7-day weight-loss routine actually works.

Thankfully, most of the time I don’t browse sites which use these networks, but randomly they do “leak” even to sites I do browse. If I’m not very stressed already, I can ignore them, otherwise they really, really annoy me.

Case in point, I was on Slashdot, and because I was logged on and recently had mod points, the right side column had a check-box “disable ads”. That sidebar had some relatively meaningful ads, like a VPN subscription (not that I would use it, but it is a tech thing), or even a book about Kali Linux, etc. etc. So I click the “disable ads”, and the right column goes away. I scroll down happily, only to be met, at the bottom, by the “best way to clean your ear”, “the 50 most useless planes ever built” (which had a drawing of something that was for sure never ever built outside of movies), “you won’t believe how this child actor looks today”, etc.

Solving the problem

The above really, really pissed me off, so I went to search for “how to block ad network”. To my surprise, the fix was not that simple, for standard users at least.

Method 1: hosts file

The hosts file is reasonable as it is relatively cross-platform (Linux and Windows and Mac, I think), but how the heck do you edit hosts on your phone?

And furthermore, it has some significant downsides.

First, /etc/hosts lists individual hosts, so for an entire ad network, the example I had had two screens of host names. This is really unmaintainable, since rotating host names, or having a gazillion of them is trivial.

Second, it cannot return negative answers. I.e. you have to give each of those hosts a valid IPv4/IPv6, and have something either reply with 404 or another 4xx response, or not listen on port 80/443. Too annoying.

And finally, it’s a client-side solution, so one would have to replicate it across all clients in a home, and keep it in sync.
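For illustration, such a hosts-file block looks roughly like this (hypothetical host names; 0.0.0.0 is the usual convention for a non-routable sink address):

```
# /etc/hosts - one entry per host, each needing a concrete address;
# there is no way to say "this whole domain does not exist"
0.0.0.0  ads.adnetwork.example
0.0.0.0  cdn1.adnetwork.example
0.0.0.0  cdn2.adnetwork.example
# ...and two more screens of ever-rotating host names
```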

Method 2: ad-blockers

I dislike ad-blockers on principle, since they need wide permissions on all pages, but it is a recommended solution. However, to my surprise, one finds threads saying ad-blocker foo has whitelisted ad network bar, at which point you’re WTF? Why do I use an ad-blocker if they get paid by the lowest of the ad networks to show the ads?

And again, it’s a client-side solution, and one would have to deploy it across N clients, and keep them in sync, etc.

Method 3: HTTP proxy blocking

To my surprise, I didn’t find this mentioned in a quick internet search. Well, HTTP proxies have long gone the way of the dodo due to “HTTPS everywhere”, and while one can still use them even with HTTPS, it’s not that convenient:

  • you need to tunnel all traffic through them, which might result in bottlenecks (especially for media playing/maybe video-conference/etc.).
  • or even worse, there might be protocol issues/incompatibilities due to 100% tunneling.
  • running a proxy opens up some potential security issues on the internal network, so you need to harden the proxy as well, and maintain it.
  • you need to configure all clients to know about the proxy (via DHCP or manually), which might or might not work well, since it’s client-dependent.
  • you can only block at CONNECT level (host name), and you have to build up regexes for the host name.

On the good side, the actual blocking configuration is centralised, and the only distributed configuration is pointing the clients through the proxy.

While I used to run a proxy back in HTTP times, the gains were significant back then (media elements caching, downloads caching, all with a slow pipe, etc.), but today it’s not worth it, so I’ve stopped and won’t bring a proxy back just for this.
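For completeness, a CONNECT-level block in a proxy such as Squid would look roughly like this (hypothetical domain; dstdom_regex matches the host name taken from the CONNECT request):

```
# squid.conf (sketch)
acl evil_ads dstdom_regex -i \.adnetwork\.example$
http_access deny evil_ads
```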

Method 4: DNS resolver filtering

After thinking through all the options, I thought - hey, a caching/recursive DNS resolver is what most people with a local network run, right? How difficult is it to block at resolver level?

… and oh my, it is so trivial, for some resolvers at least. And yes, I didn’t know about this a week ago 😅

Response Policy Zones

Now, of course, there is a standard for this, called Response Policy Zone, which is supported across multiple resolvers. There are many tutorials on how to use RPZs to configure things, some of them quite detailed - e.g. this one, or a simple/more straightforward one here.

The upstream BIND documentation also explains things quite well here, so you can go that route as well. It looks a bit hairy to me though, but it works, and since it is a standard, it can be more easily deployed.

There are many discussions on the internet about how to configure RPZs, how to not even resolve the names (if you’re going to return something explicitly/statically), etc. so there are docs, but again it seems a bit overdone.
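As a hedged sketch of the RPZ route with BIND (hypothetical zone and domain; in RPZ semantics, CNAME . means "answer NXDOMAIN"):

```
// named.conf (sketch)
options {
    response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};

; db.rpz.local (sketch)
$TTL 300
@                   IN SOA localhost. root.localhost. (1 12h 15m 3w 2h)
                    IN NS  localhost.
adnetwork.example   IN CNAME .
*.adnetwork.example IN CNAME .
```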

Resolver hooks

There’s another way too, if your resolver allows scripting. For example, the PowerDNS resolver allows Lua scripting, and has a relatively simple API—at least, to me it looks way, way simpler than the RPZ equivalent.

After 20 minutes of reading the docs, I ended up with this, IMO trivial, solution (in a file named e.g. rules.lua):
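A minimal sketch of such a hook, assuming the PowerDNS recursor Lua API, with a placeholder domain standing in for the real ad network:

```lua
-- rules.lua: answer NXDOMAIN for an ad network and all its subdomains
-- (adnetwork.example is a placeholder; the real block list goes here)
badlist = newDS()
badlist:add({"adnetwork.example"})

function preresolve(dq)
  if badlist:check(dq.qname) then
    dq.rcode = pdns.NXDOMAIN
    return true   -- we answered; do not resolve upstream
  end
  return false    -- not matched; resolve normally
end
```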

… and that’s it. Well, enable it/load the file in the configuration, but nothing else. Syntax is pretty straightforward, matching by suffix here, and if you need more complex stuff, you can of course do it; it’s just Lua and a simple API.

I don’t see any immediate equivalent in Bind, so there’s that, but if you can use PowerDNS, then the above solution seems simple for simple cases, and could be extended if needed (not sure in which cases).

The only other thing one needs to do is to serve the local/custom resolver to all clients, whether desktop or mobile, and that’s it. DNS server is bread-and-butter in DHCP, so better support than a proxy, and once the host name has been (mis)resolved, nothing is involved anymore in the communication path. True, your name server might get higher CPU usage, but for a home network, this should not be a problem.

Can this filtering method (either RPZ or hooks) be worked around by ad networks? Sure, like anything. But changing the base domain is not fun. DNSSEC might break it (note that Bind RPZ can be configured to ignore DNSSEC), but I’m more worried about DNS-over-HTTPS, which I initially thought was done for the user, but now I’m not so sure anymore. Not being in control even of your own DNS resolver seems… evil 😈, but what do I know.

Happy browsing!

10 lines of Lua, and now for sure I’m going to get even fatter without the “this natural method will melt your belly fat in 7 days” information. Or I will just throw away banana peels without knowing what I could do with them.

After a few days, I asked myself “but ads are not so bad, why did I…” and then realised that yes, ads are not so bad anymore. And Slashdot actually loads faster 😜

So, happy browsing!

Planet DebianDirk Eddelbuettel: RcppArmadillo Minor update

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 802 other packages on CRAN.

This release was needed because we use the Matrix package for some (optional) tests related to sparse matrices, and a small and subtle change and refinement in the recent 1.3.0 release of Matrix required us to make an update for the testing. Nothing has changed in how we set up, or operate on, sparse matrices. My thanks to Binxiang and Martin Maechler for feedback and suggestions on the initial fixes that Binxiang and I set up independently. At the same time we upgrade some package internals related to continuous integration (for that, also see my blog post and video from earlier this week). Lastly Conrad sent in a one-line upstream fix for dealing with NaN in sign().

The full set of changes follows.

Changes in RcppArmadillo version (2021-01-08)

  • Correct one unit test for a change caused by Matrix 1.3.0 (Binxiang in #319 and Dirk in #322).

  • Suppress one further warning from Matrix (Dirk)

  • Apply an upstream NaN correction (Conrad in #321)

  • Added GitHub Actions CI using r-ci (Dirk)

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Rondam RamblingsThe rats are finally begining to flee the SS Trump

From The Washington Post: Whether President Trump is forced from office or serves out the remaining days of his term, he is now destined to slink out of the White House considerably diminished from the strapping, fearsome force he and his advisers imagined he would be in his post-presidency. In the wake of the mob attack on the Capitol that Trump incited, some allies have abandoned him, many

Rondam RamblingsArnold Schwarzenegger compares the Capitol attack to Kristallnacht

Arnold Schwarzenegger just released a video in which he compares the Jan 6 attack on the U.S. Capitol to Kristallnacht, and draws a straight line from the Proud Boys to the early Nazis by way of his own personal experience growing up in Austria in the long shadow of World War II.  The video gets a little corny and treacly towards the end, but the analogy is apt and the warning is one we would all

Rondam RamblingsThe Capitol attack was not the beginning, and it won't be the end

A dry run of the attack on the U.S. Capitol occurred the day before in northern California: Here in California’s rural, conservative northern counties — where people have long wanted to split from California and form a new state called Jefferson — the kind of anger and distrust of the government that Trump has fomented is on full display. And it is not likely to go away any time soon, because

Rondam RamblingsTrump and the Reverse Cargo Cult

Speaking of prophetic observations, here is a particularly profound one made by Hans Howe back in 2017: Trump administration lies constantly but doesn’t even attempt to make it seem like they aren’t lying.... Trump’s supporters don’t care about being lied to. You can point out the lies until you’re blue in the face, but it makes no difference to them. Why? Because it is just a game to them. The

Planet DebianKentaro Hayashi: Use external E-mail server for subdomain with Sakura Mailbox service

If you want to set up a subdomain, you can run an E-mail server on your own. But if you can't afford to set one up yourself, you need an external E-mail server.

In this article, I'll explain how to use an external mailbox service - the Sakura Mailbox service.


  • Owner of subdomain (
  • Have an account of Sakura mailbox service

I've chosen the Sakura mailbox service because of its maintenance cost (87 yen/month).

Configure dnsZoneEntry

Set dnsZoneEntry; that means you gpg --clearsign the entry and send it in. Here is an example:

fabre IN A
mail.fabre IN A
fabre IN MX 10

Here, fabre is the Web server and mail.fabre is the E-mail server (Sakura mailbox service) for my case.

Note that it varies for your case.


Configure Sakura mailbox service

Set the subdomain without transfer in the Domain/SSL menu. Then create each account for the subdomain. After a while, you can use the E-mail accounts. Yay!


Planet DebianJonathan McDowell: Free Software Activities for 2020

As a reader of Planet Debian I see a bunch of updates at the start of each month about what people are up to in terms of their Free Software activities. I’m not generally active enough in the Free Software world to justify a monthly report, but I did a report of my Free Software Activities for 2019 and thought I’d do another for 2020. I ended up not doing as much as last year; I put a lot of that down to fatigue about the state of the world and generally not wanting to spend time on the computer at the end of the working day.


2020 was unsurprisingly not a great year for conference attendance. I was fortunate enough to make it to FOSDEM and CopyleftConf 2020 - I didn’t speak at either, but had plenty of interesting hallway track conversations as well as seeing some good talks. I hadn’t been planning to attend DebConf20 due to time constraints, but its move to an entirely online conference meant I was able to attend a few talks at least. I have to say I don’t like virtual conferences as much as the real thing; it’s not as easy to have the casual chats at them, and it’s also harder to carve out the exclusive time when you’re at home. That said I spoke at NIDevConf this year, which was also fully virtual. It’s not a Free Software focussed conference, but there’s a lot of crossover in terms of technologies and I spoke on my experiences with Go, some of which are influenced by my packaging experiences within Debian.


Most of my contributions to Free software happen within Debian.

As part of the Data Protection Team I responded to various inbound queries to that team. Some of this involved chasing up other project teams who had been slow to respond - folks, if you’re running a service that stores personal data about people then you need to be responsive to requests about it.

The Debian Keyring was possibly my largest single point of contribution. We’re in a roughly 3 month rotation of who handles the keyring updates, and I handled 2020.02.02, 2020.03.24, 2020.06.24, 2020.09.24 + 2020.12.24

For Debian New Members I’m mostly inactive as an application manager - we generally seem to have enough available recently. If that changes I’ll look at stepping in to help, but I don’t see that happening. I continue to be involved in Front Desk, having various conversations throughout the year with the rest of the team, but there’s no doubt Mattia and Pierre-Elliott are the real doers at present.

In terms of package uploads I continued to work on gcc-xtensa-lx106, largely doing uploads to deal with updates to the GCC version or packaging (5, 6 + 7). sigrok had a few minor updates, libsigrok 0.5.2-2, libsigrokdecode 0.5.3-2, as well as a new upstream release of Pulseview 0.4.2-1 and a fix to cope with a change to Qt (0.4.2-2). Due to the sigrok-firmware requirement on sdcc I also continued to help out there, updating to 4.0.0+dfsg-1 and doing some fixups in 4.0.0+dfsg-2.

Despite still not writing any VHDL these days I continue to try and make sure ghdl is ok, because I found it a useful tool in the past. In 2020 that meant a new upstream release, 0.37+dfsg-1, along with a couple of more minor updates (0.37+dfsg-2 + 0.37+dfsg-3).

libcli had a new upstream release, 1.10.4-1, and I did a long overdue update to sendip to the latest upstream release, 2.6-1 having been poked about an outstanding bug by the Reproducible Builds folk.

OpenOCD is coming up to 4 years since its last stable release, but I did a snapshot upload to Debian experimental (0.10.0+g20200530-1) and a subsequent one to unstable (0.10.0+g20200819-1). There are also moves to produce a 0.11.0 release and I uploaded 0.11.0~rc1-1 as a result. libjaylink got a bump as a result (0.2.0-1) after some discussion with upstream.


On the subject of OpenOCD I’ve tried to be a bit more involved upstream. I’m not familiar enough with the intricacies of JTAG/SWD/the various architectures supported to contribute to the core, but I pushed the config for my HIE JTAG adapter upstream and try and review patches that don’t require in depth hardware knowledge.


I’ve been contributing to the Linux kernel for a number of years now, mostly just minor bits here and there for issues I hit. This year I spent a lot of time getting support for the MikoTik RB3011 router upstreamed. That included the basic DTS addition, fixing up QCA8K to support SGMII CPU connections, adding proper 802.1q VLAN support to QCA8K and cleaning up an existing QCOM ADM driver that’s required for the NAND. There were a number of associated bugfixes/minor changes found along the way too. It can be a little frustrating at times going round the review loop with submitting things upstream, but I do find it quite satisfying when it all comes together and I have no interest in weird vendor trees that just bitrot over time.

Software in the Public Interest

I haven’t sat on the board of SPI since 2015 but I was still acting as the primary maintainer of the membership website (with Martin Michlmayr as the other active contributor) and hosting it on my own machine. I managed to finally extricate myself from this role in August. I remain a contributing member.

Personal projects

2020 finally saw another release (0.6.0, followed swiftly by 0.6.1 to allow the upload of 0.6.1-1 to Debian) of onak. This release finally adds various improvements to deal with the hostility shown to the OpenPGP keyserver network in recent years, including full signature verification as an option.

I fixed an oversight in my Digoo/1-wire temperature decoder and a bug that turned up on ARM but not MIPS in my mqtt-arp code. I should probably package it for Debian (even if I don’t upload it), as I’m running it on my RB3011 now.

Planet DebianThorsten Alteholz: My Debian Activities in December 2020

FTP master

This month I only accepted 8 packages and like last month rejected 0. Despite the holidays 293 packages got accepted.

Debian LTS

This was my seventy-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 26h. During that time I did LTS uploads of:

  • [DLA 2489-1] minidlna security update for two CVEs
  • [DLA 2490-1] x11vnc security update for one CVE
  • [DLA 2501-1] influxdb security update for one CVE
  • [DLA 2511-1] highlight.js security update for one CVE

Unfortunately package slirp has the same version in Stretch and Buster. So I first had to upload slirp/1:1.0.17-11 to unstable, in order to be allowed to fix the CVE in Buster and to finally upload a new version to Stretch. Meanwhile the fix for Buster has been approved by the Release Team and I am waiting for the next point release now.

I also prepared a debdiff for influxdb, which will result in DSA-4823-1 in January.

As new CVEs appeared for openjpeg2, I did not do an upload yet. This is planned for January now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirtieth ELTS month.

During my allocated time I uploaded:

  • ELA-341-1 for highlight.js

As well as for LTS, I did not finish work on all CVEs of openjpeg2, so the upload is postponed to January.

Last but not least I did some days of frontdesk duties.

Unfortunately I also had to give back some hours.

Other stuff

This month I uploaded new upstream versions of:

I fixed one or two bugs in:

I improved packaging of:

Some packages just needed a source upload:

… and there have been even some new packages:

With these uploads I finished the libosmocom- and libctl-transitions.

The Debian Med Advent Calendar was again really successful this year. There was no new record, but with 109 the second-highest number of bugs was closed.

year    bugs closed
2011     63
2012     28
2013     73
2014      5
2015    150
2016     95
2017    105
2018     81
2019    104
2020    109

Well done everybody who participated. It is really nice to see that Andreas is no longer a lone wolf.

Rondam RamblingsFaith and Insurrection

Article III section 3 of the Constitution of the United States says: "Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort." Donald Trump, while wielding the authority of the Presidency, incited a violent insurrection against the government of the United States.  It cannot possibly be any clearer that he

Planet DebianLouis-Philippe Véronneau: puppetserver 6: a Debian packaging post-mortem

I have been a Puppet user for a couple of years now, first at work, and eventually for my personal servers and computers. Although it can have a steep learning curve, I find Puppet both nimble and very powerful. I also prefer it to Ansible for its speed and the agent-server model it uses.

Sadly, Puppet Labs hasn't been the most supportive upstream and tends to move pretty fast. Major versions rarely last for a whole Debian Stable release and the upstream .deb packages are full of vendored libraries.1

Since 2017, Apollon Oikonomopoulos has been the one doing most of the work on Puppet in Debian. Sadly, he's had less time for that lately and with Puppet 5 being deprecated in January 2021, Thomas Goirand, Utkarsh Gupta and I have been trying to package Puppet 6 in Debian for the last 6 months.

With Puppet 6, the old ruby Puppet server using Passenger is not supported anymore and has been replaced by puppetserver, written in Clojure and running on the JVM. That's quite a large change and although puppetserver does reuse some of the Clojure libraries puppetdb (already in Debian) uses, packaging it meant quite a lot of work.

Work in the Clojure team

As part of my efforts to package puppetserver, I had the pleasure to join the Clojure team and learn a lot about the Clojure ecosystem.

As I mentioned earlier, a lot of the Clojure dependencies needed for puppetserver were already in the archive. Unfortunately, when Apollon Oikonomopoulos packaged them, the leiningen build tool hadn't been packaged yet. This meant I had to rebuild a lot of packages, on top of packaging some new ones.

Since then, thanks to the efforts of Elana Hashman, leiningen has been packaged and lets us run the upstream testsuites and create .jar artifacts closer to those upstream releases.

During my work on puppetserver, I worked on the following packages:

List of packages
  • backport9
  • bidi-clojure
  • clj-digest-clojure
  • clj-helper
  • clj-time-clojure
  • clj-yaml-clojure
  • cljx-clojure
  • core-async-clojure
  • core-cache-clojure
  • core-match-clojure
  • cpath-clojure
  • crypto-equality-clojure
  • crypto-random-clojure
  • data-csv-clojure
  • data-json-clojure
  • data-priority-map-clojure
  • java-classpath-clojure
  • jnr-constants
  • jnr-enxio
  • jruby
  • jruby-utils-clojure
  • kitchensink-clojure
  • lazymap-clojure
  • liberator-clojure
  • ordered-clojure
  • pathetic-clojure
  • potemkin-clojure
  • prismatic-plumbing-clojure
  • prismatic-schema-clojure
  • puppetlabs-http-client-clojure
  • puppetlabs-i18n-clojure
  • puppetlabs-ring-middleware-clojure
  • puppetserver
  • raynes-fs-clojure
  • riddley-clojure
  • ring-basic-authentication-clojure
  • ring-clojure
  • ring-codec-clojure
  • shell-utils-clojure
  • ssl-utils-clojure
  • test-check-clojure
  • tools-analyzer-clojure
  • tools-analyzer-jvm-clojure
  • tools-cli-clojure
  • tools-reader-clojure
  • trapperkeeper-authorization-clojure
  • trapperkeeper-clojure
  • trapperkeeper-filesystem-watcher-clojure
  • trapperkeeper-metrics-clojure
  • trapperkeeper-scheduler-clojure
  • trapperkeeper-webserver-jetty9-clojure
  • url-clojure
  • useful-clojure
  • watchtower-clojure

If you want to learn more about packaging Clojure libraries and applications, I rewrote the Debian Clojure packaging tutorial and added a section about the quirks of using leiningen without a dedicated dh_lein tool.

Work left to get puppetserver 6 in the archive

Unfortunately, I was not able to finish the puppetserver 6 packaging work. It is thus unlikely it will make it into Debian Bullseye. If the issues described below are fixed, it would be possible to package puppetserver in bullseye-backports though.

So what's left?

jruby

Although I tried my best (kudos to Utkarsh Gupta and Thomas Goirand for the help), jruby in Debian is still broken. It does build properly, but the testsuite fails with multiple errors:

  • ruby-psych is broken (#959571)
  • there are some random java failures on a few tests (no clue why)
  • tests run by rakelib/rspec.rake fail to run, maybe because the --pattern command-line option isn't compatible with our version of rake? Utkarsh seemed to know why this happens.

jruby testsuite failures aside, I have not been able to use the jruby.deb the package currently builds in jruby-utils-clojure (its testsuite fails). I had the exact same failure with the (more broken) jruby version that is currently in the archive, which leads me to think this is a LOAD_PATH issue in jruby-utils-clojure. More on that below.

To try to bypass these issues, I tried to vendor jruby into jruby-utils-clojure. At first I understood vendoring meant including upstream pre-built artifacts (jruby-complete.jar) and shipping them directly.

After talking with people on the #debian-mentors and #debian-ftp IRC channels, I now understand why this isn't a good idea (and why it's not permitted in Debian). Many thanks to the people who were patient and kind enough to discuss this with me and give me alternatives.

As far as I now understand it, vendoring in Debian means "to have an embedded copy of the source code in another package". Code shipped that way still needs to be built from source. This means we need to build jruby ourselves, one way or another. Vendoring jruby in another package thus isn't terribly helpful.

If fixing jruby the proper way isn't possible, I would suggest trying to build the package using embedded code copies of the external libraries jruby needs to build, instead of trying to use the Debian libraries.2 This should make it easier to replicate what upstream does and to have a final .jar that can be used.

jruby-utils-clojure

This package is a first-level dependency for puppetserver and is the glue between jruby and puppetserver.

It builds fine, but the testsuite fails when using the Debian jruby package. I think the problem is caused by a jruby LOAD_PATH issue.

The Debian jruby package plays with the LOAD_PATH a little to try to use Debian packages instead of downloading gems from the web, as upstream jruby does. This seems to clash with the gem-home, gem-path, and jruby-load-path variables in the jruby-utils-clojure package. The testsuite plays around with these variables, and as a result some Ruby libraries can't be found.

I tried to fix this, but failed. Using the upstream jruby-complete.jar instead of the Debian jruby package, the testsuite passes fine.

This package could clearly be uploaded to NEW right now by ignoring the testsuite failures (we're just packaging static .clj source files in the proper location in a .jar).

puppetserver

jruby issues aside, packaging puppetserver itself is 80% done. Using the upstream jruby-complete.jar artifact, the testsuite fails with a weird Clojure error I'm not sure I understand, but I haven't debugged it for very long.

Upstream uses git submodules to vendor puppet (agent), hiera (3), facter and puppet-resource-api for the testsuite to run properly. I haven't touched that, but I believe we can either:

  • link to the Debian packages
  • fix the Debian packages if they don't include the right files (maybe in a new binary package that just ships part of the source code?)

Without the testsuite actually running, it's hard to know what files are needed in those packages.

What now

Puppet 5 is now deprecated.

If you or your organisation cares about Puppet in Debian,3 puppetserver really isn't far away from making it in the archive.

Very talented Debian Developers are always eager to work on these issues and can be contracted for very reasonable rates. If you're interested in contracting someone to help iron out the last issues, don't hesitate to reach out via one of the following:

As for me, I'm happy to say I got a new contract and will go back to teaching Economics for the Winter 2021 session. I might help out with some general Debian packaging work from time to time, but it'll be as a hobby instead of a job.


The work I did during the last 6 weeks would not have been possible without the support of the Wikimedia Foundation, who were gracious enough to contract me. My particular thanks to Faidon Liambotis, Moritz Mühlenhoff and John Bond.

Many, many thanks to Rob Browning, Thomas Goirand, Elana Hashman, Utkarsh Gupta and Apollon Oikonomopoulos for their direct and indirect help, without which all of this wouldn't have been possible.

  1. For example, the upstream package for the Puppet Agent vendors OpenSSL. 

  2. One of the problems of using Ruby libraries already packaged in Debian is that jruby currently only supports Ruby 2.5. Ruby libraries in Debian are currently expected to work with Ruby 2.7, with the transition to Ruby 3.0 planned after the Bullseye release. 

  3. If you run Puppet, you clearly should care: the .deb packages upstream publishes really aren't great and I would not recommend using them. 


Cory Doctorow: Mashapedia

Well this is pretty terrific: Pavel Anni was so taken with my 2020 novel ATTACK SURFACE (the third Little Brother novel) that he’s created “Mashapedia,” a chapter-by-chapter breakdown of the real world technologies in the tale.

Pavel is both comprehensive and comprehensible, with short definitions and links for the mundane (MIT Media Lab, EL wire, PGP) to the exotic (binary transparency, reverse shells, adversarial perturbation).

When I was an adolescent, my friend group traded secret knowledge as a kind of social currency – tricks for getting free payphone calls, or doubling the capacity of a floppy disc, or calling the White House switchboard.

I doted on books that promised more of the same: Paladin Press and Amok Catalog titles, Steal This Book, the Anarchist Cookbook, the Whole Earth Review and the Whole Earth Catalog.

But when I sat down in 2006 to write the first Little Brother book, I realized that facts were now cheap – anything could be discovered with a single search. The thing in short supply now was search terms – knowing what to search for.

As John Ciardi wrote,

The old crow is getting slow;
the young crow is not.
Of what the young crow does not know,
the old crow knows a lot.

The young crow flies above, below,
and rings around the slow old crow.
What does the fast young crow not know?

So I set out to write a book of realistic scenarios, dramatizing what tech COULD do, on the assumption that readers would glean those all-important search-terms from the tale, and that this could launch them on a voyage of discovery.

That’s the ethic I’ve stuck with through all three novels and the short stories in the series. It seems to have worked. Anni’s Mashapedia is the apotheosis of that plan: a comprehensive set of search terms masquerading as a glossary.

Anni’s hosted Mashapedia on Github, and you can amend, extend or contest his definitions by opening an issue in the repo. What a delight!

Worse Than Failure: Error'd: Nope, that was Prod

"They say you shouldn't test in prod... They aren't wrong." Dave P. writes.


Dave W. wrote, "I guess even the USPS's 'missing mail' site is missing in action."


"$69.99 instead of $69.99...Thank you, Microsoft?" writes Carlos.


Pascal wrote, "Looks like the test was more successful than they had planned."


"Looks like eBay is keeping it as simple as possible," Peter K. writes.


[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet Debian: Dirk Eddelbuettel: #32: Portable Continuous Integration using r-ci

Welcome to the 32nd post in the rarely raucous R recommendations series, or R4 for short. This post covers continuous integration, a topic near and dear to many of us who have come to recognise its added value.

The popular and widely-used service at Travis is undergoing changes driven by a hard-to-argue-with need for monetization. A fate that, if we’re honest, lies ahead for most “free” services, so who knows, maybe one day we will have to turn away from other currently ubiquitous services. Because one never knows, it can pay off not to get too tied to any one service. Which brings us to today’s post and my r-ci service, which allows me to run CI at Travis, at GitHub, at Azure, and on local Docker containers, as the video demonstrates. It will likely also work at GitLab and other services; I simply haven’t tried any others.

The slides are here. The r-ci website introduces r-ci at a high level. This repo at GitHub contains the underlying code, and can be used to raise issues, ask questions, or provide feedback.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Reproducible Builds (diffoscope): diffoscope 164 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 164. This version includes the following changes:

[ Chris Lamb ]
* Truncate jsondiff differences at 512 bytes lest they consume the entire page.
* Wrap our external call to cmp(1) with a profile (to match the internal call).
* Add a note regarding the specific ordering of the new
  all_tools_are_listed test.

[ Dimitrios Apostolou ]
* Performance improvements:
  - Improve speed of has_same_content by spawning cmp(1) less frequently.
  - Log whenever the external cmp(1) command is spawned.
  - Avoid invoking external diff for identical, short outputs.
* Rework handling of temporary files:
  - Clean up temporary directories as we go along, instead of at the end.
  - Delete FIFO files when the FIFO feeder's context manager exits.

[ Mattia Rizzolo ]
* Fix a number of potential crashes in --list-debian-substvars, including
  explicitly listing lipo and otool as external tools.
* Remove redundant code and let object destructors clean up after themselves.

[ Conrad Ratschan ]
* Add a comparator for Flattened Image Trees (FIT) files, a boot image format
  used by U-Boot.

You can find out more by visiting the project homepage.


Krebs on Security: Sealed U.S. Court Records Exposed in SolarWinds Breach

The ongoing breach affecting thousands of organizations that relied on backdoored products by network software firm SolarWinds may have jeopardized the privacy of countless sealed court documents on file with the U.S. federal court system, according to a memo released Wednesday by the Administrative Office (AO) of the U.S. Courts.

The judicial branch agency said it will be deploying more stringent controls for receiving and storing sensitive documents filed with the federal courts, following a discovery that its own systems were compromised as part of the SolarWinds supply chain attack. That intrusion involved malicious code being surreptitiously inserted into updates shipped by SolarWinds for some 18,000 users of its Orion network management software as far back as March 2020.

“The AO is working with the Department of Homeland Security on a security audit relating to vulnerabilities in the Judiciary’s Case Management/Electronic Case Files system (CM/ECF) that greatly risk compromising highly sensitive non-public documents stored on CM/ECF, particularly sealed filings,” the agency said in a statement published Jan. 6.

“An apparent compromise of the confidentiality of the CM/ECF system due to these discovered vulnerabilities currently is under investigation,” the statement continues. “Due to the nature of the attacks, the review of this matter and its impact is ongoing.”

The AO declined to comment on specific questions about their breach disclosure. But a source close to the investigation told KrebsOnSecurity that the federal court document system was “hit hard,” by the SolarWinds attackers, which multiple U.S. intelligence and law enforcement agencies have attributed as “likely Russian in origin.”

The source said the intruders behind the SolarWinds compromise seeded the AO’s network with a second stage “Teardrop” malware that went beyond the “Sunburst” malicious software update that was opportunistically pushed out to all 18,000 customers using the compromised Orion software. This suggests the attackers were targeting the agency for deeper access to its networks and communications.

The AO’s court document system powers a publicly searchable database called PACER, and the vast majority of the files in PACER are not restricted and are available to anyone willing to pay for the records.

But experts say many other documents stored in the AO’s system are sealed — either temporarily or indefinitely by the courts or parties to a legal matter — and may contain highly sensitive information, including intellectual property and trade secrets, or even the identities of confidential informants.

Nicholas Weaver, a lecturer at the computer science department at University of California, Berkeley, said the court document system doesn’t hold documents that are classified for national security reasons. But he said the system is full of sensitive sealed filings — such as subpoenas for email records and so-called “trap and trace” requests that law enforcement officials use to determine with whom a suspect is communicating via phone, when and for how long.

“This would be a treasure trove for the Russians knowing about a lot of ongoing criminal investigations,” Weaver said. “If the FBI has indicted someone but hasn’t arrested them yet, that’s all under seal. A lot of the investigative tools that get protected under seal are filed very early on in the process, often with gag orders that prevent [the subpoenaed party] from disclosing the request.”

The acknowledgement from the AO comes hours after the U.S. Justice Department said it also was a victim of the SolarWinds intruders, who took control over the department’s Office 365 system and accessed email sent or received from about three percent of DOJ accounts (the department has more than 100,000 employees).

The SolarWinds hack also reportedly jeopardized email systems used by top Treasury Department officials, and granted the attackers access to networks inside the Energy, Commerce and Homeland Security departments.

The New York Times on Wednesday reported that investigators are examining whether a breach at another software provider — JetBrains — may have precipitated the attack on SolarWinds. The company, which was founded by three Russian engineers in the Czech Republic, makes a tool called TeamCity that helps developers test and manage software code. TeamCity is used by developers at 300,000 organizations, including SolarWinds and 79 of the Fortune 100 companies.

“Officials are investigating whether the company, founded by three Russian engineers in the Czech Republic with research labs in Russia, was breached and used as a pathway for hackers to insert back doors into the software of an untold number of technology companies,” The Times said. “Security experts warn that the monthslong intrusion could be the biggest breach of United States networks in history.”

Under the AO’s new procedures, highly sensitive court documents filed with federal courts will be accepted for filing in paper form or via a secure electronic device, such as a thumb drive, and stored in a secure stand-alone computer system. These sealed documents will not be uploaded to CM/ECF.

“This new practice will not change current policies regarding public access to court records, since sealed records are confidential and currently are not available to the public,” the AO said.

James Lewis, senior vice president at the Center for Strategic and International Studies, said it’s too soon to tell the true impact of the breach at the court system, but the fact that they were apparently targeted is a “a very big deal.”

“We don’t know what the Russians took, but the fact that they had access to this system means they had access to a lot of great stuff, because federal cases tend to involve fairly high profile targets,” he said.

Krebs on Security: All Aboard the Pequod!

Like countless others, I frittered away the better part of Jan. 6 doomscrolling and watching television coverage of the horrifying events unfolding in our nation’s capital, where a mob of President Trump supporters and QAnon conspiracy theorists was incited to lay siege to the U.S. Capitol. For those trying to draw meaning from the experience, might I suggest consulting the literary classic Moby Dick, which simultaneously holds clues about QAnon’s origins and offers an apt allegory about a modern-day Captain Ahab and his ill-fated obsessions.

Many have speculated that Jim Watkins, the administrator of the online message board 8chan (a.k.a. 8kun), and/or his son Ron are in fact “Q,” the anonymous persona behind the QAnon conspiracy theory, which holds that President Trump is secretly working to save the world from a satanic cult of pedophiles and cannibals.

Last year, as I was scrutinizing the computer networks that kept QAnon online, researcher Ron Guilmette pointed out a tantalizing utterance from Watkins the younger which adds tenuous credence to the notion that one or both of them is Q.

We’ll get to how the Great White Whale (the Capitol?) fits into this tale in a moment. But first, a bit of background. A person identified only as “Q” has for years built an impressive following for the far-right conspiracy movement by leaving periodic “Q drops,” cryptic messages that QAnon adherents spend much time and effort trying to decipher and relate to current events.

Researchers who have studied more than 5,000 Q drops are convinced that there are two distinct authors of these coded utterances. The leading theory is that those identities corresponded to the aforementioned father-and-son team responsible for operating 8chan.

Jim Watkins, 56, is the current owner of 8chan, a community perhaps now best known as a forum for violent extremists and mass shooters. Watkins is an American pig farmer based in the Philippines; Ron reportedly resides in Japan.

In the aftermath of back-to-back mass shootings on Aug. 3 and Aug. 4, 2019 in which a manifesto justifying one of the attacks was uploaded to 8chan, Cloudflare stopped providing their content delivery network to 8chan. Several other providers quickly followed suit, leaving 8chan offline for months before it found a haven at a notorious bulletproof hosting facility in Russia.

One reason Q watchers believe Ron and Jim Watkins may share authorship over the Q drops is that while 8chan was offline, the messages from Q ceased. The drops reappeared only months later when 8chan rebranded as 8kun.


Here’s where the admittedly “Qonspiratorial” clue about the Watkins’ connection to Q comes in. On Aug. 5, 2019, Ron Watkins posted a Twitter message about 8chan’s ostracization which compared the community’s fate to that of the Pequod, the name of the doomed whaling ship in the Herman Melville classic “Moby Dick.”

“If we are still down in a few hours then maybe 8chan will just go clearnet and we can brave DDOS attacks like Ishmael on the Pequod,” Watkins the younger wrote.

Ishmael, the first-person narrator in the novel, is a somewhat disaffected American sailor who decides to try his hand at a whaling ship. Ishmael is a bit of a minor character in the book; very soon into the novel we are introduced to a much more interesting and enigmatic figure — a Polynesian harpooner by the name of Queequeg.

Apart from being a cannibal from the Pacific islands who has devoured many people, Queequeg is a pretty nice guy and shows Ishmael the ropes of whaling life. Queequeg is covered head to toe in tattoos, which are described by the narrator as the work of a departed prophet and seer from the cannibal’s home island.

Like so many Q drops, Queequeg’s tattoos tell a mysterious tale, but we never quite learn what that full story is. Indeed, the artist who etched them into Queequeg’s body is long dead, and the cannibal himself can’t seem to explain what it all means.

Ishmael describes Queequeg’s mysterious markings in this passage:

“…a complete theory of the heavens and earth, and a mystical treatise on the art of attaining truth; so that Queequeg in his own proper person was a riddle to unfold; a wondrous work in one volume; but whose mysteries not even himself could read, though his own live heart beat against them; and these mysteries were therefore destined in the end to moulder away with the living parchment whereon they were inscribed, and so be unsolved to the last.”


It’s perhaps fitting then that one of the most recognizable figures from the mob that stormed the U.S. Capitol on Wednesday was a heavily-tattooed, spear-wielding QAnon leader who goes by the name “Q Shaman” (a.k.a. Jake Angeli).

“Q Shaman,” a.k.a. Jake Angeli, at a Black Lives Matter event in Arizona (left) and Wednesday, confronted by U.S. Capitol Police. Image: Twitter, @KelemenCari.

“Angeli’s presence at the riot, along with others wearing QAnon paraphernalia, comes as the conspiracy-theory movement has been responsible for the popularization of Trump’s voter-fraud conspiracy theories,” writes Rachel E. Greenspan for Yahoo! News.

“As Q has become increasingly hands-off, giving fewer and fewer messages to his devotees, QAnon leaders like Angeli have gained fame and power in the movement,” Greenspan wrote.

If somehow Moby Dick was indeed the inspiration for the “Q” identity in QAnon, yesterday’s events at The Capitol were the inexorable denouement of a presidential term that increasingly came to be defined by conspiracy theories. In a somewhat prescient Hartford Courant op-ed published in 2018, author Steven Almond observed that Trump’s presidency could be best understood through the lens of the Pequod’s Captain Ahab. To wit:

“Melville is offering a mythic account of how one man’s virile bombast ensnares everyone and everything it encounters. The setting is nautical, the language epic. But the tale, stripped to its ribs, is about the seductive power of the wounded male ego, how naturally a ship steered by men might tack to its vengeful course.”

“Trump’s presidency has been, in its way, a retelling of this epic. Whether we cast him as agent or principal hardly matters. What matters is that Americans have joined the quest. In rapture or disgust, we’ve turned away from the compass of self-governance and toward the mesmerizing drama of aggression on display, the masculine id unchained and all that it unchains within us. With every vitriolic tweet storm and demeaning comment, Trump strikes through the mask.”


If all of the above theorizing reads like yet another crackpot QAnon conspiracy, that may be the inevitable consequence of my spending far too much time going down this particular rabbit hole (and re-reading Moby Dick in the process!).

In any case, none of this is likely to matter to the diehard QAnon conspiracy theorists themselves, says Mike Rothschild, a writer who specializes in researching and debunking conspiracy theories.

“Even if Jim Watkins was revealed as owning the board or making the posts, it wouldn’t matter,” Rothschild said. “Anything that happens that disconfirms Q being an official in the military industrial complex is going to help fuel their persecution complex.”

Rothschild has been working hard on finishing his next book, “The Storm is Upon Us: How QAnon Became a Movement, Cult, and Conspiracy Theory of Everything,” which is due to be published in October 2021. Who’s printing the book? Ten points if you guessed Melville House, an independent publisher named after Herman Melville.

Rondam Ramblings: My take on yesterday's insurrection

It was never a question of whether Donald Trump would destroy the Republican party, but when and how, and whether he would take the rest of the country (and possibly the world) down along with it.  The one silver lining to yesterday's horrific events in our nation's capitol is that we finally have the beginning of a real answer.  Mike Pence and Mitch McConnell finally joined the rest of the rats

Worse Than Failure: Going Backwards

Nearly 15 years ago, fresh out of college, Giuseppe was hired on at a mobile networking company. The company wasn't a software company, but since they were heavy in networking and could handle all sorts of weird technical problems there, software must basically be the same thing, so they also started producing applications to manage those networks.

It didn't take them long to discover that "building networks" and "writing software" are different skillsets, and so they did what every company with some room in the budget does: they hire a pack of Highly Paid Consultants to get their project on track.

Giuseppe joined the team just as the budget for consultants ran out. They "knowledge transfered" "everything" in a whirlwind of incomprehensible PowerPoint presentations, and then vanished back into whatever lairs HPCs come from, and left Giuseppe and the other new hires to figure out what was going on.

When the HPCs were hired, the company had extensively specified the requirements. Each requirement was expressed in terms like:

  • 12.2.23.a.1 The user will be able to reverse their journey through the app using the back button.
  • 12.2.23.a.2 The app will remember the last 5 screens the user has visited.

There was an elaborate tracking system, with multiple signoffs, to confirm who tested what and when. That specific requirement, 12.2.23.a.2, was signed as "passed" by the original HPC. Then it was passed off to a QA test, which also signed off. Then it went to the users, who submitted a defect: the back button often doesn't behave as expected. It drifted back to the original developer, who closed the defect as "can not replicate", QA signed off again, the user re-opened the ticket claiming the issue was still there, and round and round until the budget for consulting ran out and it was Giuseppe's turn to figure out what was actually going on.

There was no automated testing, and no formal test plan, but it didn't take Giuseppe long to figure out what the bug was. In fact, it took slightly more than six navigation actions to discover that he could replicate the bug consistently. It also explained why the HPC couldn't replicate the bug: they navigated through exactly six pages, and confirmed that if they hit the back button five times, they got the same pages they'd been through. If they'd gone to one more page, they would have seen that it didn't work. And the offending code was easy to spot:

private void OnPageNavigated(Page pageFrom)
{
    if (m_backStack.Count < 5)
    {
        m_backStack.Push(pageFrom);
    }
}

Instead of tracking the last five pages the user viewed, this tracked the first five pages they viewed, and ignored every navigation thereafter.
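The requirement wants the five *most recent* pages, which means evicting the oldest entry when a new one arrives, not refusing new entries once the stack is full. We never see the project's actual fix, but a minimal sketch in Java (the BackStack class and String page names here are illustrative, not from the codebase) could look like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Bounded back-history: keeps only the N most recent pages.
public class BackStack {
    private final Deque<String> stack = new ArrayDeque<>();
    private final int capacity;

    public BackStack(int capacity) {
        this.capacity = capacity;
    }

    // Records EVERY navigation; when full, the oldest page is evicted
    // so the newest one always fits.
    public void onPageNavigated(String pageFrom) {
        if (stack.size() == capacity) {
            stack.removeLast(); // drop the oldest entry
        }
        stack.push(pageFrom);   // newest entry goes on top
    }

    // Pops the most recently recorded page, or null if history is empty.
    public String goBack() {
        return stack.poll();
    }

    public static void main(String[] args) {
        BackStack history = new BackStack(5);
        for (int i = 1; i <= 7; i++) {
            history.onPageNavigated("page" + i);
        }
        // pages 1 and 2 were evicted; the five most recent remain
        System.out.println(history.goBack()); // page7
        System.out.println(history.goBack()); // page6
    }
}
```

The only difference from the buggy version is the eviction step: a size check that suppresses the push becomes a size check that makes room for it.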

This was just one of many bugs around the back button alone, and additional confusion about the specs. Many pages might open a modal dialog- should that be in the history? If it is, does it count against the 5 pages total? It ended up creating situations where the stack of navigation events needed to track more than 5 items, because of the fuzzy definition of what was a "page", but that created additional bugs because if you could navigate backwards more than five times, you were technically in violation of the spec.

Giuseppe adds:

As I've moved up the software engineering ladder I cringe about how the project was run and managed. Things like unit testing or CI didn't exist, all testing was extremely manual, performed by a dedicated team of 3 testers to an exhaustive specification. … As a [junior] developer nobody ever performed a code review on what I wrote, it went straight into the app. In the whole time I worked there only one person gave me feedback on my code when I screwed up OTA updates that amounted to 'don't do it again'. I was there for over 2 years, and it should be no surprise that the product still hadn't been signed off when I left.

It wasn't signed off when he left, and it probably hasn't been signed off to this day.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet Debian: Russell Coker: Monopoly the Game

The Smithsonian Mag has an informative article about the history of the game Monopoly [1]. The main point about Monopoly teaching about the problems of inequality is one I was already aware of, but there are some aspects of the history that I learned from the article.

Here’s an article about using a modified version of Monopoly to teach Sociology [2].

Maria Paino and Jeffrey Chin wrote an interesting paper about using Monopoly with revised rules to teach Sociology [3]. They publish the rules which are interesting and seem good for a class.

I think it would be good to have some new games which can teach about class differences. Maybe have an “Escape From Poverty” game where you have choices that include drug dealing to try and improve your situation or a cooperative game where people try to create a small business. While Monopoly can be instructive it’s based on the economic circumstances of the past. The vast majority of rich people aren’t rich from land ownership.

Planet Debian: John Goerzen: This Is How Tyrants Go: Alone

I remember reading an essay a month or so ago — sadly I forget where — talking about how things end for tyrants. If I were to sum it up, it would be with the word “alone.” Their power fading, they find that they had few true friends or believers; just others that were greedy for power or riches and, finding those no longer to be had, depart the sinking ship. The article looked back at examples like Nixon and examples from the 20th century in Europe and around the world.

Today we saw images of a failed coup attempt.

But we also saw hope.

Already senior staff in the White House are resigning. Ones that had been ardent supporters. In the end, just 6 senators supported the objection to the legitimate electors. Six. Lindsey Graham, Mike Pence, and Mitch McConnell all deserted Trump.

CNN reports that there are serious conversations about invoking the 25th amendment and removing him from office, because even Republicans are to the point of believing that America should not have two more weeks of this man.

Whether those efforts are successful or not, I don’t know. What I do know is that these actions have awakened many people, in a way that nothing else could for four years, to the dangers of Trump and, in the end, have bolstered the cause of democracy.

Hard work will remain but today, Donald Trump is in the White House alone, abandoned by allies and blocked by Twitter. And we know that within two weeks, he won’t be there at all.

We will get through this.


Cryptogram: Russia’s SolarWinds Attack and Software Security

The information that is emerging about Russia’s extensive cyberintelligence operation against the United States and other countries should be increasingly alarming to the public. The magnitude of the hacking, now believed to have affected more than 250 federal agencies and businesses — primarily through a malicious update of the SolarWinds network management software — may have slipped under most people’s radar during the holiday season, but its implications are stunning.

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame. Our insecure Internet infrastructure has become a critical national security risk — one that we need to take seriously and spend money to reduce.

President-elect Joe Biden’s initial response spoke of retaliation, but there really isn’t much the United States can do beyond what it already does. Cyberespionage is business as usual among countries and governments, and the United States is aggressively offensive in this regard. We benefit from the lack of norms in this area and are unlikely to push back too hard because we don’t want to limit our own offensive actions.

Biden took a more realistic tone last week when he spoke of the need to improve US defenses. The initial focus will likely be on how to clean the hackers out of our networks, why the National Security Agency and US Cyber Command failed to detect this intrusion and whether the 2-year-old Cybersecurity and Infrastructure Security Agency has the resources necessary to defend the United States against attacks of this caliber. These are important discussions to have, but we also need to address the economic incentives that led to SolarWinds being breached and how that insecure software ended up in so many critical US government networks.

Software has become incredibly complicated. Most of us don’t know all of the software running on our laptops and what it’s doing. We don’t know where it’s connecting to on the Internet — not even which countries it’s connecting to — and what data it’s sending. We typically don’t know what third-party libraries are in the software we install. We don’t know what software any of our cloud services are running. And we’re rarely alone in our ignorance. Finding all of this out is incredibly difficult.

This is even more true for software that runs our large government networks, or even the Internet backbone. Government software comes from large companies, small suppliers, open source projects and everything in between. Obscure software packages can have hidden vulnerabilities that affect the security of these networks, and sometimes the entire Internet. Russia’s SVR leveraged one of those vulnerabilities when it gained access to SolarWinds’ update server, tricking thousands of customers into downloading a malicious software update that gave the Russians access to those networks.

The fundamental problem is one of economic incentives. The market rewards quick development of products. It rewards new features. It rewards spying on customers and users: collecting and selling individual data. The market does not reward security, safety or transparency. It doesn’t reward reliability past a bare minimum, and it doesn’t reward resilience at all.

This is what happened at SolarWinds. A New York Times report noted the company ignored basic security practices. It moved software development to Eastern Europe, where Russia has more influence and could potentially subvert programmers, because it’s cheaper.

Short-term profit was seemingly prioritized over product security.

Companies have the right to make decisions like this. The real question is why the US government bought such shoddy software for its critical networks. This is a problem that Biden can fix, and he needs to do so immediately.

The United States needs to improve government software procurement. Software is now critical to national security. Any system for acquiring software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure they are sufficient to meet the security needs of the network they’re being installed in. Procurement contracts need to include security controls of the software development process. They need security attestations on the part of the vendors, with substantial penalties for misrepresentation or failure to comply. The government needs detailed best practices for government and other companies.

Some of the groundwork for an approach like this has already been laid by the federal government, which has sponsored the development of a “Software Bill of Materials” that would set out a process for software makers to identify the components used to assemble their software.

This scrutiny can’t end with purchase. These security requirements need to be monitored throughout the software’s life cycle, along with what software is being used in government networks.

None of this is cheap, and we should be prepared to pay substantially more for secure software. But there’s a benefit to these practices. If the government evaluations are public, along with the list of companies that meet them, all network buyers can benefit from them. The US government acting purely in the realm of procurement can improve the security of nongovernmental networks worldwide.

This is important, but it isn’t enough. We need to set minimum safety and security standards for all software: from the code in that Internet of Things appliance you just bought to the code running our critical national infrastructure. It’s all one network, and a vulnerability in your refrigerator’s software can be used to attack the national power grid.

The IOT Cybersecurity Improvement Act, signed into law last month, is a start in this direction.

The Biden administration should prioritize minimum security standards for all software sold in the United States, not just to the government but to everyone. Long gone are the days when we can let the software industry decide how much emphasis to place on security. Software security is now a matter of personal safety: whether it’s ensuring your car isn’t hacked over the Internet or that the national power grid isn’t hacked by the Russians.

This regulation is the only way to force companies to provide safety and security features for customers — just as legislation was necessary to mandate food safety measures and require auto manufacturers to install life-saving features such as seat belts and air bags. Smart regulations that incentivize innovation create a market for security features. And they improve security for everyone.

It’s true that creating software in this sort of regulatory environment is more expensive. But if we truly value our personal and national security, we need to be prepared to pay for it.

The truth is that we’re already paying for it. Today, software companies increase their profits by secretly pushing risk onto their customers. We pay the cost of insecure personal computers, just as the government is now paying the cost to clean up after the SolarWinds hack. Fixing this requires both transparency and regulation. And while the industry will resist both, they are essential for national security in our increasingly computer-dependent worlds.

This essay previously appeared on

Planet DebianUrvika Gola: Dog tails and tales from 2020

There is no denying that 2020 was a challenging year for everybody, including animals.

In India, animals such as dogs who mostly filled their bellies at street food stalls, were starving as there were no street eateries operating during a long long lockdown. I was in my home town, New Delhi, working from home like most of us.

During the month of July 2020, a dog near the place I live (we fondly called her Brownie) delivered 7 pups in wilderness.

I will never forget my first sight of them: inside a dirty, garbage-filled plot of land were the cutest, cleanest, tiniest balls of fur! All of them were toppled over as the land’s surface was uneven. The first instinct was to put them all together on a flat surface. After the search mission was complete, this was the sight:

Brownie and her litter put together in the same land where she gave birth.

The next day, I sought help from an animal-loving person to build a temporary shed for the puppies! We came and changed sheets, cleaned the surroundings and put out fresh water for Brownie until… it started raining heavily one night and we were worried whether the shed would sustain the heavy rainfall.
Next morning, the first thing was to check on the pups. Luckily, the pups were fine; however, the entire area and their bed were damp.

Without any second thought, the pups were moved from there to a safe house shelter, as it was predicted that the rains would continue for a few more weeks due to the monsoon. Soon, 2 months went by; from watching the pups crawl, open their eyes and bark for the first time, despite the struggles, it was a beautiful experience.
Brownie weaned the pups off and thus they were ready for adoption! However, my biggest fear was: would anyone come forward to adopt them?

With such thoughts running in parallel in my mind, I started to post about adoption for these 7 pups.
To my biggest surprise, one by one, 5 amazing humans came forward and decided to give these pups a better life than what they would get on the streets of India. I wouldn’t be able to express in words how grateful I am to all five dog parents who decided to adopt an Indian street puppy (Indie/Desi puppy), opening up the space in their hearts and homes for the pups!

One of the 5 adopted pups was adopted by a person who hails from the USA but is currently working in India. In India, despite so much awareness raised against breeders and their methods, people still prefer to go for a foreign-bred puppy and disregard Indian/Desi dogs. On the other hand, it’s so heartwarming to see foreigners who value the life of an Indian/Desi dog :)

The 5 Adopted Pups who now have a permanent loving family!

The adorable, “Robin”!
“Don” and his new big brother!

The naughty and handsome, “Swayze”!

First Pup who got adopted – “Pluto”
Playful and Beautiful, “Bella”!

If this isn’t perfect, I don’t know what is! God had planned loving families for them and they found them.
However, it’s been almost six months now and we haven’t found a permanent home for 2 of the 7 pups, but they have the perfect foster family taking care of them right now.

Meet Momo and Beesa,
2 out of the 7 pups, who are still waiting for a forever home, currently living with a loving foster family.

Vaccinations, Deworming is done.
Female pups, 6 months old.

Now that winter is here, along with one of my friends, who is also fostering the two pups, I arranged gunny-sack beds for our street’s stray dogs. Two NGOs, Lotus Indie Foundation and We Exist Foundation, who work in animal welfare in India, were providing dog beds to ground volunteers like us. We are fortunate that they selected us and helped us make winter less harsh for the stray dogs. However, the cold is such that I also purchased dog coats and put them on a few furries. After hours of running behind dogs and convincing them to wear coats, we managed to put them on a few.

Brownie, the mom dog!

This is a puppy!
She did not let us put a coat on her 😀

Another topic that needs more sensitivity is sterilization/neutering of dogs, a viable method cited by the government to control the dog population and end the suffering of puppies who die under the wheels of cars. However, the implementation is worrisome, as it’s not robust. In a span of 6 months, I managed to get 5 dogs sterilized in my area; the number is not big, but I feel it’s a good start as an individual 😊

When I see them now, healthier, happier, running around with no fear of getting attacked by other dogs, I can’t express the contentment I feel. For 2 of the dogs (Brownie and her friend) I got it done personally through a private vet. For the other 3, I got it done via the Municipal Corporation, which does it for free: you have to call them, and they come with dog catchers and a van and drop the dogs back in the same area afterwards, but volunteers like us have to be very vigilant and active during the whole process to follow up with them.

Dogs getting dropped off after sterilization.

My 2020 ended with this. I am not sure why I am even writing this on my blog, where I have mostly focused on my technical work and experiences, but this pandemic was challenging for everybody and what we planned couldn’t happen. Because of 2020, because of the pandemic, I was working from home in my city and I was able to help a few dogs in my area have a healthy life ahead! 😊

What I learned during this entire adventure was that there are a lot of sweet, sensitive, caring people that we are just yet to meet. Along the way, we will also meet insensitive and discouraging people who are unwilling to change or listen; ignore them and continue your good work.

Have 1 person by your side, it’s so much stronger than 10 against you.

Silver lining! Hope you all had some positive experiences despite the adversity faced by every single one of us.

Cryptogram APT Horoscope

This delightful essay matches APT hacker groups up with astrological signs. This is me:

Capricorn is renowned for its discipline, skilled navigation, and steadfastness. Just like Capricorn, Helix Kitten (also known as APT 35 or OilRig) is a skilled navigator of vast online networks, maneuvering deftly across an array of organizations, including those in aerospace, energy, finance, government, hospitality, and telecommunications. Steadfast in its work and objectives, Helix Kitten has a consistent track record of developing meticulous spear-phishing attacks.

Planet DebianJonathan Dowland: PaperWM

My PaperWM desktop, as I write this post.


Just before Christmas I decided to try out a GNOME extension I'd read about, PaperWM. It looked promising, but I was a little nervous about breaking my existing workflow, which was heavily reliant on the Put Windows extension.

It's great! I have had to carefully un-train some of my muscle memory but it seems to be worth it. It seems to strike a great balance between the rigidity of a tile-based window manager and a more traditional floating-windows one.

I'm always wary of coming to rely upon large extensions or plugins. The parent software is often quite hands-off about caring about supporting users of them, or breaking them by making API changes. Certainly those Firefox users who were heavily dependent on plugins prior to the Quantum fire-break are still very, very angry. (I actually returned to Firefox at that point, so I avoided the pain, and enjoy the advantages of the re-architecture). PaperWM hopefully is large enough and popular enough to avoid that fate.

Worse Than FailureCodeSOD: Copy Paste Paste Paste Paste

"Hey," someone on Russell F's team said. "We should add some keyboard navigation to our web app." That struck everyone as an "easy win", specifically because they were really talking about making "enter" and "escape" do something useful.

They wrote some quick requirements, passed it off, and the developer submitted a surprisingly long pull request. Russell passed a sample of three out of the dozens of methods:

function lr_control(event) {
    var current = document.getElementById('lr');
    var next = document.getElementById('rf');
    var back = document.getElementById('lf');
    if (event.key == "Enter") {
        divareInoperativeSensorsReplacedShow();
    }
    if (event.key == "Escape") {
        unCheck(current);
    }
    nav_control_clickable(current, next, back, event);
    return;
}

function rf_control(event) {
    var current = document.getElementById('rf');
    var next = document.getElementById('rr');
    var back = document.getElementById('lr');
    if (event.key == "Enter") {
        divareInoperativeSensorsReplacedShow();
    }
    if (event.key == "Escape") {
        unCheck(current);
    }
    nav_control_clickable(current, next, back, event);
    return;
}

function rr_control(event) {
    var current = document.getElementById('rr');
    var next = document.getElementById('yesReplace');
    var back = document.getElementById('rf');
    if (event.key == "Enter") {
        divareInoperativeSensorsReplacedShow();
    }
    if (event.key == "Escape") {
        unCheck(current);
    }
    nav_control_clickable(current, next, back, event);
    return;
}

At no point did the similarity between all these methods ever make the developer responsible think, "oh, there's gotta be an easier way." No, they just copied and pasted, again and again, changing only the document elements that they point at.

Russell supplied some comments on the request, and pointers on how to streamline this code.
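One possible streamlining — a sketch, not Russell's actual suggestion — is to drive every handler from a table of element IDs and pass in the DOM lookup and the helpers (`divareInoperativeSensorsReplacedShow`, `unCheck` and `nav_control_clickable` from the original snippet) as functions, so only the table varies:

```javascript
// Table of current/next/back element IDs lifted from the copy-pasted handlers.
const NAV_TABLE = {
  lr: { next: 'rf', back: 'lf' },
  rf: { next: 'rr', back: 'lr' },
  rr: { next: 'yesReplace', back: 'rf' },
};

// Factory producing one keyboard handler per table entry. The helpers are
// injected so the logic can be exercised without a browser DOM; in the real
// page they would be a document.getElementById wrapper,
// divareInoperativeSensorsReplacedShow, unCheck and nav_control_clickable.
function makeNavHandler(id, table, { lookup, onEnter, onEscape, onNav }) {
  return function (event) {
    const current = lookup(id);
    const next = lookup(table[id].next);
    const back = lookup(table[id].back);
    if (event.key === 'Enter') onEnter();
    if (event.key === 'Escape') onEscape(current);
    onNav(current, next, back, event);
  };
}
```

With that in place, `lr_control`, `rf_control` and the rest collapse to `makeNavHandler('lr', NAV_TABLE, helpers)` and so on, and supporting a new field means adding one row to the table instead of pasting another function.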

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Planet DebianRussell Coker: Planet Linux Australia

Linux Australia have decided to cease running the Planet installation on I believe that blogging is still useful and a web page with a feed of Australian Linux blogs is a useful service. So I have started running a new Planet Linux Australia on There has been discussion about getting some sort of redirection from the old Linux Australia page, but they don’t seem able to do that.

If you have a blog that has a reasonable portion of Linux and FOSS content and is based in or connected to Australia then email me on russell at to get it added.

When I started running this I took the old list of feeds from, deleted all blogs that didn’t have posts for 5 years and all blogs that were broken and had no recent posts. I emailed people who had recently broken blogs so they could fix them. It seems that many people who run personal blogs aren’t bothered by a bit of downtime.

As an aside I would be happy to setup the monitoring system I use to monitor any personal web site of a Linux person and notify them by Jabber or email of an outage. I could set it to not alert for a specified period (10 mins, 1 hour, whatever you like) so it doesn’t alert needlessly on routine sysadmin work and I could have it check SSL certificate validity as well as the basic page header.

Planet DebianRussell Coker: Weather and Boinc

I just wrote a Perl script to look at the Australian Bureau of Meteorology pages to find the current temperature in an area and then adjust BOINC settings accordingly. The Perl script (in this post after the break, which shouldn’t be in the RSS feed) takes the URL of a Bureau of Meteorology observation point as ARGV[0] and parses that to find the current (within the last hour) temperature. Then successive command line arguments are of the form “24:100” and “30:50” which indicate that at below 24C 100% of CPU cores should be used and below 30C 50% of CPU cores should be used. In warm weather having a couple of workstations in a room running BOINC (or any other CPU intensive task) will increase the temperature and also make excessive noise from cooling fans.

To change the number of CPU cores used the script changes /etc/boinc-client/global_prefs_override.xml and then tells BOINC to reload that config file. This code is a little ugly (it doesn’t properly parse XML, it just replaces a line of text) and could fail on a valid configuration file that wasn’t produced by the current BOINC code.

The parsing of the BoM page is a little ugly too, it relies on the HTML code in the BoM page – they could make a page that looks identical which breaks the parsing or even a page that contains the same data that looks different. It would be nice if the BoM published some APIs for getting the weather. One thing that would be good is TXT records in the DNS. DNS supports caching with specified lifetime and is designed for high throughput in aggregate. If you had a million IOT devices polling the current temperature and forecasts every minute via DNS the people running the servers wouldn’t even notice the load, while a million devices polling a web based API would be a significant load. As an aside I recommend playing nice and only running such a script every 30 minutes, the BoM page seems to be updated on the half hour so I have my cron jobs running at 5 and 35 minutes past the hour.

If this code works for you then that’s great. If it merely acts as an inspiration for developing your own code then that’s great too! BOINC users outside Australia could replace the code for getting meteorological data (or even interface to a digital thermometer). Australians who use other CPU intensive batch jobs could take the BoM parsing code and replace the BOINC related code. If you write scripts inspired by this please blog about it and comment here with a link to your blog post.

use strict;
use Sys::Syslog;

# St Kilda Harbour RMYS

my $URL = $ARGV[0];

open(IN, "wget -o /dev/null -O - $URL|") or die "Can't get $URL";

# skip ahead to the start of the observations table
while(<IN>)
{
  if($_ =~ /tr class=.rowleftcolumn/)
  {
    last;
  }
}

# extract the contents of the table cell with the given header name,
# or undef if this line doesn't contain that cell
sub get_data
{
  if(not $_[0] =~ /headers=.t1-$_[1]/)
  {
    return undef;
  }
  $_[0] =~ s/^.*headers=.t1-$_[1]..//;
  $_[0] =~ s/<.td.*$//;
  return $_[0];
}

my @datetime;
my $cur_temp = -100;

while(<IN>)
{
  if($_ =~ /^<.tr>$/)
  {
    last;
  }
  my $res;
  if($res = get_data($_, "datetime"))
  {
    @datetime = split(/\//, $res);
  }
  elsif($res = get_data($_, "tmp"))
  {
    $cur_temp = $res;
  }
}
close(IN);

if($#datetime != 1 or $cur_temp == -100)
{
  die "Can't parse BOM data";
}

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime();

if($mday - $datetime[0] > 1 or ($datetime[0] > $mday and $mday != 1))
{
  die "Date wrong\n";
}

my $mins;
my @timearr = split(/:/, $datetime[1]);
$mins = $timearr[0] * 60 + $timearr[1];
if($timearr[1] =~ /pm/)
{
  $mins += 720;
}
if($mday != $datetime[0])
{
  $mins += 1440;
}

if($mins + 60 < $hour * 60 + $min)
{
  die "page outdated\n";
}

# remaining arguments are "temperature:percentage" pairs, the highest
# threshold below the current temperature determines the CPU percentage
my %temp_hash;
foreach ( @ARGV[1..$#ARGV] )
{
  my @tmparr = split(/:/, $_);
  $temp_hash{$tmparr[0]} = $tmparr[1];
}
my @temp_list = sort(keys(%temp_hash));
my $percent = 0;
my $i;
for($i = $#temp_list; $i >= 0 and $temp_list[$i] > $cur_temp; $i--)
{
  $percent = $temp_hash{$temp_list[$i]};
}

my $prefs = "/etc/boinc-client/global_prefs_override.xml";
open(IN, "<$prefs") or die "Can't read $prefs";
my @prefs_contents;
while(<IN>)
{
  push(@prefs_contents, $_);
}
close(IN);

openlog("boincmgr-cron", "", "daemon");

my @cpus_pct = grep(/max_ncpus_pct/, @prefs_contents);
my $cpus_line = $cpus_pct[0];
$cpus_line =~ s/..max_ncpus_pct.$//;
$cpus_line =~ s/^.*max_ncpus_pct.//;
if($cpus_line == $percent)
{
  syslog("info", "Temp $cur_temp" . "C, already set to $percent");
  exit 0;
}
open(OUT, ">$prefs.new") or die "Can't write $prefs.new";
for($i = 0; $i <= $#prefs_contents; $i++)
{
  if($prefs_contents[$i] =~ /max_ncpus_pct/)
  {
    print OUT "   <max_ncpus_pct>$percent.000000</max_ncpus_pct>\n";
  }
  else
  {
    print OUT $prefs_contents[$i];
  }
}
close(OUT);
rename "$prefs.new", "$prefs" or die "can't rename";
system("boinccmd --read_global_prefs_override");
syslog("info", "Temp $cur_temp" . "C, set percentage to $percent");

Planet DebianBen Hutchings: Debian LTS work, December 2020

I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 9 hours from earlier months. I worked 16.5 hours this month, so I will carry over 8.5 hours to January. (Updated: corrected number of hours worked.)

I updated linux-4.19 to include the changes in the Debian 10.7 point release, uploaded the package, and issued DLA-2483-1 for this.

I picked some regression fixes from the Linux 4.9 stable branch to the linux package, and uploaded the package. This unfortunately failed to build on arm64 due to some upstream changes uncovering an old bug, so I made a second upload fixing that. I issued DLA-2494-1 for this.

I updated the linux packaging branch for stretch to Linux 4.9.249, but haven't made another package upload yet.

Krebs on SecurityHamas May Be Threat to 8chan, QAnon Online

In October 2020, KrebsOnSecurity looked at how a web of sites connected to conspiracy theory movements QAnon and 8chan were being kept online by DDoS-Guard, a dodgy Russian firm that also hosts the official site for the terrorist group Hamas. New research shows DDoS-Guard relies on data centers provided by a U.S.-based publicly traded company, which experts say could be exposed to civil and criminal liabilities as a result of DDoS-Guard’s business with Hamas.

Many of the IP address ranges in this map of QAnon and 8Chan-related sites are assigned to VanwaTech. Source:

Last year’s story examined how a phone call to Oregon-based CNServers was all it took to briefly sideline multiple websites related to 8chan/8kun — a controversial online image board linked to several mass shootings — and QAnon, the far-right conspiracy theory which holds that a cabal of Satanic pedophiles is running a global child sex-trafficking ring and plotting against President Donald Trump.

From that piece:

A large number of 8kun and QAnon-related sites (see map above) are connected to the Web via a single Internet provider in Vancouver, Wash. called VanwaTech (a.k.a. “OrcaTech“). Previous appeals to VanwaTech to disconnect these sites have fallen on deaf ears, as the company’s owner Nick Lim reportedly has been working with 8kun’s administrators to keep the sites online in the name of protecting free speech.

After that story, CNServers and a U.K.-based hosting firm called SpartanHost both cut ties with VanwaTech. Following a brief disconnection, the sites came back online with the help of DDoS-Guard, an Internet company based in Russia. DDoS-Guard is now VanwaTech’s sole connection to the larger Internet.

A review of the several thousand websites hosted by DDoS-Guard is revelatory, as it includes a vast number of phishing sites and domains tied to cybercrime services or forums online.

Replying to requests for comment from a CBSNews reporter following up on my Oct. 2020 story, DDoS-Guard issued a statement saying, “We observe network neutrality and are convinced that any activity not prohibited by law in our country has the right to exist.”

But experts say DDoS-Guard’s business arrangement with a Denver-based publicly traded data center firm could create legal headaches for the latter thanks to the Russian company’s support of Hamas.

In a press release issued in late 2019, DDoS-Guard said its services rely in part on a traffic-scrubbing facility in Los Angeles owned by CoreSite [NYSE:COR], a real estate investment trust which invests in “carrier-neutral data centers and provides colocation and peering services.”

This facilities map published by DDoS-Guard suggests the company’s network actually has at least two points of presence in the United States.

Hamas has long been named by the U.S. Treasury and State departments as a Specially Designated Global Terrorist (SDGT) organization. Under such a designation, any U.S. person or organization that provides money, goods or services to an SDGT entity could face civil and/or criminal prosecution and hefty fines ranging from $250,000 to $1 million per violation.

Sean Buckley, a former Justice Department prosecutor with the law firm Kobre & Kim, said U.S. persons and companies within the United States “are prohibited from any transaction or dealing in property or interests in property blocked pursuant to an entity’s designation as a SDGT, including but not limited to the making or receiving of any contribution of funds, goods, or services to or for the benefit of individuals or entities so designated.”

CoreSite did not respond to multiple requests for comment. But Buckley said companies can incur fines and prosecution for violating SDGT sanctions even when they don’t know that they are doing so.

In 2019, for example, a U.S. based cosmetics company was fined $1 million after investigators determined its eyelash kits were sourcing materials from North Korea, even though the supplier in that case told the cosmetics firm the materials had come from China.

“U.S. persons or companies found to willfully violate these regulations can be subject to criminal penalties under the International Emergency Economic Powers Act,” Buckley said. “However, even in the case that they are unaware they’re violating these regulations, or if the transaction isn’t directly with the sanctioned entity, these companies still run a risk of facing substantial civil and monetary penalties by the Department of Treasury’s Office of Foreign Asset Control if the sanctioned entity stands to benefit from such a transaction.”

DDoS-Guard said its partnership with CoreSite will help its stable of websites load more quickly and reliably for people visiting them from the United States. It is possible that when and if CoreSite decides it’s too risky to continue doing business with DDoS-Guard, sites like those affiliated with Hamas, QAnon and 8Chan may become more difficult to reach.

Meanwhile, DDoS-Guard customer VanwaTech continues to host a slew of sites promoting the conspiracy theory that the U.S. 2020 presidential election was stolen from President Donald Trump via widespread voting fraud and hacked voting machines, including maga[.]host, donaldsarmy[.]us, and donaldwon[.]com.

These sites are being used to help coordinate a protest rally in Washington, D.C. on January 6, 2021, the same day the U.S. Congress is slated to count electoral votes certified by the Electoral College, which in December elected Joseph R. Biden as the 46th president of The United States.

In a tweet late last year, President Trump urged his supporters to attend the Jan. 6 protest, saying the event “will be wild.”

8chan, which has rebranded as 8kun, has been linked to white supremacism, neo-Nazism, antisemitism, multiple mass shootings, and child pornography. The FBI in 2019 identified QAnon as a potential domestic terror threat, noting that some of its followers have been linked to violent incidents motivated by fringe beliefs.

Planet DebianBernd Zeimetz: Building reverse build dependencies in salsa CI

For the next library soname bump of gpsd I needed to rebuild all reverse dependencies. As this is a task I have to do very often, I came up with some code to generate (and keep up to date) an include for the GitLab CI. Right now it is rather uncommented and undocumented, but it works well. If you like it, MRs are very welcome.

The generated files are here:




Please do not abuse the salsa CI. Don’t build all of your 100 reverse dependencies with every commit!

Planet DebianSteve Kemp: Brexit has come

Nothing too much has happened recently, largely as a result of the pandemic killing a lot of daily interests and habits.

However as a result of Brexit I'm having to do some paperwork, apparently I now need to register for permanent residency under the terms of the withdrawal agreement, and that will supersede the permanent residency I previously obtained.

Of course as a UK citizen I've now lost the previously-available freedom of movement. I can continue to reside here in Helsinki, Finland, indefinitely, but I cannot now move to any other random EU country.

It has crossed my mind, more than a few times, that I should attempt to achieve Finnish citizenship. As a legal resident of Finland the process is pretty simple, I just need two things:

  • Prove I've lived here for the requisite number of years.
  • Pass a language test.

Of course the latter requirement is hard, I can understand a lot of spoken and written Finnish, but writing myself, and speaking a lot is currently beyond me. I need to sit down and make the required effort to increase my fluency. There is the alternative option of learning Swedish, which is a hack a lot of immigrants use:

  • Learning Swedish is significantly easier for a native English-speaker.
  • But the downside is that it would be learning a language solely to "cheat" the test, it wouldn't actually be useful in my daily life.

Finland has two official languages, and so the banks, the medical world, the tax-office, etc, are obliged to provide service in both. However daily life, ordering food at restaurants, talking to parents in the local neighborhood? Finnish, or English are the only real options. So if I went this route I'd end up in a weird situation where I had to learn a language to pass a test, but then would continue to need to learn more Finnish to live my life. That seems crazy, unless I were desperate for a second citizenship which I don't think I am.

Learning Finnish has not yet been a priority, largely because I work in English in the IT-world, and of course when I first moved here I was working (remotely) for a UK company, and didn't have the time to attend lessons (because they were scheduled during daytime, on the basis that many immigrants are unemployed). Later we had a child, which meant that early-evening classes weren't a realistic option either.

(Of course I learned a lot of the obvious things immediately upon moving, things like numbers, names for food, days of the week were essential. Without those I couldn't have bought stuff in shops and would have starved!)

On the topic of languages a lot of people talk about how easy it is for children to pick up new languages, and while that is broadly true it is also worth remembering just how many years of correction and repetition they have to endure as part of the process.

For example we have a child, as noted already, he is spoken to by everybody in Finnish. I speak to him in English, and he hears his mother and myself speaking English. But basically he's 100% Finnish with the exception of:

  • Me, speaking English to him.
  • His mother and I speaking English in his hearing.
  • Watching Paw Patrol.

If he speaks Finnish to me I pretend to not understand him, even when I do, just for consistency. As a result of that I've heard him tell strangers "Daddy doesn't speak Finnish" (in Finnish) when we've been stopped and asked for directions. He also translates what some other children have said into English for my benefit, which is adorable.

Anyway he's four, and he's pretty amazing at speaking to everybody in the correct language - he's outgrown the phase where he'd mix different languages in the same sentence ("more leipä", "saisinko milk") - when I took him to the UK he surprised and impressed me by being able to understand a lot of the heavy/thick accents he'd never heard before. (I'll still need to train him on Rab C. Nesbitt when he's a wee bit older, but so far no worries.)

So children learn languages, easily and happily? Yes and no. I've spent nearly two years correcting his English and he still makes the same mistake with gender. It's not a big deal, at all, but it's a reminder that while children learn this stuff, they still don't do it as easily as people imagine. I'm trying to learn and if I'd been corrected for two years over the same basic point you'd rightly think I was "slow", but actually that's just how it works. Learning languages requires a hell of a lot of practice, a lot of effort, and a lot of feedback/corrections.

Specifically Finnish doesn't have gendered pronouns, the same word is used for "he" and "she". This leads to a lot of Finnish people, adults and children, getting the pronouns wrong in English. In the case of our child he'll say "Mommy is sleeping, when he wake up?" In the case of adults I've heard people say "My girlfriend is a doctor, he works in a hospital", or "My dad is an accountant, she works for a big firm". As I say I've spent around two years making this correction to the child, and he's still nowhere near getting it right. Kinda adorable actually:

  • "Mommy is a woman, so we say 'when she wakes up'..."
  • "Adriana is a girl, so we say 'her bike'..."

Worse Than FailureThe Contract Position

Mandi didn't plan to take a staff job at a university. To the contrary, she'd heard some bad things: loads of office politics, budgets so thin you need quantum mechanics to describe their position, and bureaucracy thick enough to drown any project.

But one day, she met her old colleague Scot for lunch, and they got to chatting about his university job. "Oh, yeah, that's common enough," he said, "which is why my team isn't structured that way. We're doing in-house development of educational solutions, which is a fancy way of saying 'nobody understands what we do, so they leave us alone'."

Scot invited her to take a tour of his office, meet some of his co-workers, talk a little about the work they were doing. They were based in a rented office just at the edge of campus, sharing the floor with a few scrappy startups. It wasn't a fancy space, and it was a little cramped, but the first and last thing Mandi noticed was how happy everyone was to be there.

Augusta, the front-end lead, talked a little about their framework selection process, and how they made their choices, not based on what was new and trendy, but based on what felt like a really good fit for their subject matter. Harry, who handled the middleware, was happy to explain how he'd needed some time to get up to speed on the right cloud scaling options, but the team was there to support him, and they eventually got a great set of automation built which handled spikes but kept costs down. Quinn rhapsodized about how great it was to work closely with the end users, to really build the solution that worked best for them, and how exciting it was to see their requirements translate into implemented software with tangible benefits.

Unlike pretty much any place Mandi had ever seen, everyone was happy to be there. Everyone liked the work they were doing. Everyone felt empowered to make the best choices, to work through challenges with the rest of the team, and everyone enjoyed celebrating their successes together.

"I always have to bring people in," Scot said, "because nobody believes me when I tell them about how great my job is."

"Honestly, I still don't believe it," Mandi said.

"Well, I did have a bit of an ulterior motive. We're looking to scale up the team a bit, which means I'll have a position soon. It'll take a little bit to grind those gears - that has to go through the university hiring process, but I hope you apply. I think you'd be a great fit."

Mandi did apply, when the position finally opened up. It was a slow-moving interview process, mostly through the university HR department, but she met Scot one more time, early in the process. Then, she landed the job, a contract-to-hire position.

At that point, Scot didn't work there anymore. He had resigned, and since the team was actively working, and since the HR process was painfully slow, the HR department didn't hire a replacement as an employee - they hired a contractor. Technically, Mandi worked under the same contract, and thus her direct manager was Cyril, the new team lead.

There was just one problem with that: by both university policy and IRS rules, contract employees can't manage regular employees. So Cyril's title was just "scrum master", and he technically had no management authority. Which meant the regular employees ignored him.

Mandi and one other contractor reported to Cyril, but nobody else did.

The overall project lead, Ruthie, was also a contractor, but hired through a different contracting firm. Not only did she have no authority over regular employees, she had no authority over any other contractors. Nobody reported to her, but she was in a management role.

The result of this management omni-shambles was meetings. Loads of meetings. Daily standups became daily "take a load off, we'll be here awhiles". After the standup, Cyril would be pulled into meeting after meeting as every section of the department started pulling in different directions, so despite being the "scrum master", he had no idea what anyone on the team was doing. Ruthie threw meetings on everyone's calendar, which nobody attended, because nobody worked for Ruthie. The only way for a contractor to get a regular employee's attention was to schedule a meeting.

Above both Ruthie and Cyril was the technical lead for the entire campus IT department. He was the only person in the org chart that everyone technically reported to, but he had never been a fan of the entire "rent an off campus office and let smart people solve problems," approach. While he was the only one who could potentially set some direction, he had no interest in doing so. The one time Mandi was on a conference call with him, he excused himself, "This isn't really a priority for me right now, I have other issues I need to address that are more important."

Mandi stuck it out until the end of her contract period. She never received an offer for a full-time position, and frankly, she wouldn't have taken it anyway. Her fellow subcontractor, the only other person who reported to Cyril, did. So the HR hiring process can work, eventually, for some people.


Planet DebianRuss Allbery: New year haul

For once, I've already read and reviewed quite a few of these books.

Elizabeth Bear — Machine (sff)
Timothy Caulfield — Your Day, Your Way (non-fiction)
S.A. Chakraborty — The City of Brass (sff)
John Dickerson — The Hardest Job in the World (non-fiction)
Tracy Deonn — Legendborn (sff)
Lindsay Ellis — Axiom's End (sff)
Alix E. Harrow — The Once and Future Witches (sff)
TJ Klune — The House in the Cerulean Sea (sff)
Maria Konnikova — The Biggest Bluff (non-fiction)
Talia Lavin — Culture Warlords (non-fiction)
Yoon Ha Lee — Phoenix Extravagant (sff)
Yoon Ha Lee, et al. — The Vela (sff)
Michael Lewis — Flash Boys (non-fiction)
Michael Lewis — Losers (non-fiction)
Michael Lewis — The Undoing Project (non-fiction)
Megan Lindholm — Wizard of the Pigeons (sff)
Nathan Lowell — Quarter Share (sff)
Adrienne Martini — Somebody's Gotta Do It (non-fiction)
Tamsyn Muir — Princess Florinda and the Forty-Flight Tower (sff)
Naomi Novik — A Deadly Education (sff)
Margaret Owen — The Merciful Crow (sff)
Anne Helen Petersen — Can't Even (non-fiction)
Devon Price — Laziness Does Not Exist (non-fiction)
The Secret Barrister — The Secret Barrister (non-fiction)
Studs Terkel — Working (non-fiction)
Kathi Weeks — The Problem with Work (non-fiction)
Reeves Wiedeman — Billion Dollar Loser (non-fiction)

Rather a lot of non-fiction in this batch, much more than usual. I've been in a non-fiction mood lately.

So many good things to read!

Chaotic Idealism“It only kills the old and sick.”

I’ve heard a lot of people, usually conservatives, say that if COVID only kills elderly or sick high-risk people at high rates, then everyone else shouldn’t have to follow any preventative measures. The high-risk people should be the only ones who have to isolate.

The whole idea is pretty ableist. It implies that the lives of the high-risk should be sacrificed for the convenience and financial prosperity of everybody else. “But that’s not it,” they say. “High-risk people can still quarantine and be perfectly safe. I just shouldn’t have to change my lifestyle for something that doesn’t threaten me.”

Okay, so let’s take that as a premise: All high-risk people need to be able to isolate, all low-risk people should be completely unrestricted. What does that look like in reality, if we truly value the lives of the high-risk population?

First of all, there are more high-risk people than you think. Elders, cancer survivors, anyone with diabetes, kidney disease, heart disease; anyone immunocompromised, anyone with severe asthma or lung disease. That’s about 10% of the population (and I’m not even counting the moderate-risk folks for whom COVID is still more deadly than the flu). Those people need to be able to stay at home, completely isolated, because everyone else is taking no precautions at all. Therefore, we need to provide them their salaries–remember, many of them are working and they can no longer work outside the home–as well as make sure their medical needs are cared for safely. We also need to deliver their groceries and other home necessities. They will need to be given legal assurance that their jobs will be there for them when they leave isolation, and that they will not have their utilities cut or be evicted.

For various reasons, many of the high-risk cannot stay entirely alone. They live in nursing homes or prisons; they need home health care workers; they need regular doctors’ visits; some are children living with families. These doctors and home health care workers are now making house calls, since the people going for normal checkups are taking no precautions. To keep the high-risk safe, all doctors, home-health care workers, prison staff, and nursing home staff will also need to isolate completely. They will need the same precautions and support as a high-risk person. Also, they will not be able to interact with low-risk people taking no precautions. That means that doctors will now need to be divided up between those seeing high-risk patients (and isolating) and those seeing low-risk patients. You are likely to need to change doctors.

Let’s break up a few families while we’re at it. Any high-risk people, medical, or residential center staff living with family have a choice: Their entire families can go into isolation (as above: No working outside the home, no going into public spaces whatsoever), or they can find an apartment outside their home, move out, and isolate there, or move to specially designated hotels, where all of the residents are isolated. All of the hotel staff will have to isolate, too, of course. All those who work in a prison, nursing home, or as a home-health care worker, won’t see their families until herd immunity unless their families isolate totally with them. And they’ll need most of the same services: Grocery delivery, high-risk medical care, and free rent or hotel bill.

Of course, after the pandemic ends, everybody who has been in isolation will need help getting back to work. They’ll need financial assistance not just until herd immunity, but until they’re able to find another job or return to their old one. Some may never be able to return, since younger people have simply taken their positions; so we had better make sure they don’t starve.

Now, if you don’t want to do this set-up, that’s fine. But it’s the only way to protect high-risk people while allowing low-risk people to take no precautions whatsoever. If you don’t support it, though, either admit that everybody has to take precautions, or admit that you don’t think high-risk people’s lives are worth inconvenience and lowered profits.

Cryptogram Backdoor in Zyxel Firewalls and Gateways

This is bad:

More than 100,000 Zyxel firewalls, VPN gateways, and access point controllers contain a hardcoded admin-level backdoor account that can grant attackers root access to devices via either the SSH interface or the web administration panel.


Installing patches removes the backdoor account, which, according to Eye Control researchers, uses the “zyfwp” username and the “PrOw!aN_fXp” password.

“The plaintext password was visible in one of the binaries on the system,” the Dutch researchers said in a report published before the Christmas 2020 holiday.

Cryptogram Latest on the SVR’s SolarWinds Hack

The New York Times has an in-depth article on the latest information about the SolarWinds hack (not a great name, since it’s much more far-reaching than that).

Interviews with key players investigating what intelligence agencies believe to be an operation by Russia’s S.V.R. intelligence service revealed these points:

  • The breach is far broader than first believed. Initial estimates were that Russia sent its probes only into a few dozen of the 18,000 government and private networks they gained access to when they inserted code into network management software made by a Texas company named SolarWinds. But as businesses like Amazon and Microsoft that provide cloud services dig deeper for evidence, it now appears Russia exploited multiple layers of the supply chain to gain access to as many as 250 networks.
  • The hackers managed their intrusion from servers inside the United States, exploiting legal prohibitions on the National Security Agency from engaging in domestic surveillance and eluding cyberdefenses deployed by the Department of Homeland Security.
  • “Early warning” sensors placed by Cyber Command and the National Security Agency deep inside foreign networks to detect brewing attacks clearly failed. There is also no indication yet that any human intelligence alerted the United States to the hacking.
  • The government’s emphasis on election defense, while critical in 2020, may have diverted resources and attention from long-brewing problems like protecting the “supply chain” of software. In the private sector, too, companies that were focused on election security, like FireEye and Microsoft, are now revealing that they were breached as part of the larger supply chain attack.
  • SolarWinds, the company that the hackers used as a conduit for their attacks, had a history of lackluster security for its products, making it an easy target, according to current and former employees and government investigators. Its chief executive, Kevin B. Thompson, who is leaving his job after 11 years, has sidestepped the question of whether his company should have detected the intrusion.
  • Some of the compromised SolarWinds software was engineered in Eastern Europe, and American investigators are now examining whether the incursion originated there, where Russian intelligence operatives are deeply rooted.

Separately, it seems that the SVR conducted a dry run of the attack five months before the actual attack:

The hackers distributed malicious files from the SolarWinds network in October 2019, five months before previously reported files were sent to victims through the company’s software update servers. The October files, distributed to customers on Oct. 10, did not have a backdoor embedded in them, however, in the way that subsequent malicious files that victims downloaded in the spring of 2020 did, and these files went undetected until this month.


“This tells us the actor had access to SolarWinds’ environment much earlier than this year. We know at minimum they had access Oct. 10, 2019. But they would certainly have had to have access longer than that,” says the source. “So that intrusion [into SolarWinds] has to originate probably at least a couple of months before that ­- probably at least mid-2019 [if not earlier].”

The files distributed to victims in October 2019 were signed with a legitimate SolarWinds certificate to make them appear to be authentic code for the company’s Orion Platform software, a tool used by system administrators to monitor and configure servers and other computer hardware on their network.


Planet DebianIustin Pop: Year 2020 review

Year 2020. What a year! Sure, already around early January there were rumours/noise about Covid-19, but who would have thought where it would end up! Thankfully, none of my close or extended family was directly (medically) affected by Covid, so I/we had a privileged year compared to so many other people.

I thought about how to write a mini-summary, but prose is too difficult, so let’s just go month-by-month. Please note that my memory is fuzzy after 9 months cooped up in the apartment, so things could be off by ±1 month compared to what I wrote.



January

Ski weekend. Skiing is awesome! Cancelling a US work trip since there will be more opportunities soon (har har!).


February

Ski vacation. Yep, skiing is awesome. Can’t wait for next season (har har!). Discussions about Covid start in the office, but more along the lines of “is this scary or just interesting?” (yes, this was before casualties). Then things start escalating, work-from-home at least partially, etc. etc. Definitely not just “interesting” anymore.

In Garmin-speak, I got ~700+ “intensity minutes” in February (this correlates with activity time, but depending on the intensity of the effort, one wall-clock minute counts as 1 or 2 intensity minutes).


March

Sometime during the month, my workplace introduces mandatory WFH. I remember being the last person in our team in the office, on the last day we were allowed to work there, and cleaning my desk/etc., thinking “all this, and we’ll be back in 3 weeks or so”. Har har!

I buy a webcam, just in case WFH gets extended. And start to increase my sports - getting double the intensity minutes (1500+).


April

Switzerland enters the first, hard, lockdown. Or was it late March? Not entirely sure, but in my mind March was the opening, and April was the first main course.

It is challenging, having to juggle family and work and stressed schedule, but also interesting. Looking back, I think I liked April the most, as people were actually careful at that time.

I continue upgrading my “home office” - new sound system, so that I don’t have to plug in/plug out cables.

1700+ intensity minutes this month.


May

Continued WFH, somewhat routine now. My then internet provider started sucking hard, so I upgraded, with good results. I’m still happy, half a year later (quite happy, even).

Still going strong otherwise, but waiting for summer vacation, whatever it will be. A tiny bit more effort, so 1800 intensity minutes in May.


June

Switzerland relaxes the lockdown, but not my company, so as the rest of the family goes out and about, I start feeling alone in the apartment. And somewhat angry at it, which (counter-intuitively) impacts my sports, so I only get 1500 intensity minutes.

I go and buy a coffee machine—a real one, that takes beans and grinds them, so I get to enjoy the smell of freshly-ground coffee and the fun of learning about coffee beans, etc. But it occupies the time.

On the work/job front, I think at this time I finally got a workstation for home, instead of a laptop (which was ultra-portable too), so together with the coffee machine, it feels like a normal work environment. Well, modulo all the people. At least I’m not crying anymore every time I open a new tab in Chrome…


July

The situation is slowly getting better, but no, not at my company. Still mandatory WFH, with (if I recall correctly) one day per week allowed, and no meeting other people. I get angrier, but manage to channel my energy into sports, almost doubling my efforts in July - 2937 intensity minutes, not quite reaching the 3000 magic number.

I buy more stuff to clean and take care of my bicycles, which I don’t really use. So shopping therapy too.


August

The month starts with a one week family vacation, but I take a bike too, so I manage to put in some effort (it was quite nice riding TBH). A bit of change in the personal life (nothing unexpected), which complicates things a bit, but at this moment I really thought Switzerland was going to continue to decrease in infections/R-factor/etc. so things would get back to normal, right? My company expands the work-from-office part a bit, so I’m optimistic.

Sports wise, still going strong, 2500 intensity minutes, preparing for the single race this year.


September

The personal life changes from August start to stabilise, so things become routine again, and I finally get to do a race. Life was good for an extended weekend (well, modulo race angst, but that’s part of the fun), and I feel justified to take it slow the week after the race. And the week after that too.

I end up the month with close, but not quite, 1900 intensity minutes.


October

October starts with school holidays and a one week family vacation, but I feel demotivated. Everything is closing down again (well, modulo schools), and I actually have difficulty getting re-adjusted to no longer being alone in the apartment during the work hours.

I only get ~1000 intensity minutes in October, mainly thanks to good late autumn weather and outside rides. And I start playing way more computer games. I also sell my PS4, hoping to get a PS5 next month.


November

November continues to suck. I think my vacation in October was actually detrimental - it broke my rhythm, I don’t really do sport anymore, not consistently at least, so I only get 700+ intensity minutes. And I keep playing computer games, even though I missed the PS5 ordering window, so I switch to PC gaming.

My home office feels very crowded, so as kind of anti-shopping therapy, I sell tons of smallish stuff; can’t believe how much crap I kept around while not really using it.

I also manage to update/refresh all my Debian packages, since the next freeze approaches. Better than for previous releases, so it feels good.


December

December comes, end of the year, the much awaited vacation - which we decide to cancel due to the situation in the whole of Switzerland (and neighbouring countries). I basically only play computer games, and get a grand total of 345 activity minutes this month.

And since my weight is inversely correlated to my training, I’m basically back at my February weight, having lost all the gains I made during the year. I mean, having gained back all the fat I lost. Err, you know what I mean; I’m back close to my high-watermark, which is not good.


I was somehow hoping that the end of the year will allow me to reset and restart, but somehow - a few days into January - it doesn’t really feel so. My sleep schedule is totally ruined, my motivation is so-so, and I think the way I crashed in October was much harder/worse than I realised at the time, but in a way expected for this crazy year.

I have some projects for 2021 - or at least, I’m trying to make up a project list - in order to get a bit more structure in my continued “stuck inside the house” part, which is especially terrible when on-call. I don’t know how the next 3-6 months will evolve, but I’m thankful that so far, we are all healthy. Actually, I personally have been healthier physically than in other years, due to less contact with other people.

On the other side, thinking of all the health-care workers, or even service workers, my IT job is comfy and all I am is a spoiled person (I could write many posts on specifically this topic). I really need to up my willpower and lower my spoil level. Hints are welcome :(

I wish everybody a better year in 2021.

Planet DebianJan Wagner: Backing up Windows (the hard way)

Sometimes you need to do things you don't like and you don't know where you will end up.
In our household there exists one (production) system running Windows. Don't ask why, and please no recommendations on how to substitute it. Some things are hard to (ex)change, for example your love partner.

Looking into Backup with rsync on Windows (WSL), I needed a privileged PowerShell, so I first started an unprivileged one:


From there, you can start a privileged one:

Start-Process powershell -Verb runAs

Now you can follow the Instructions from Microsoft to install OpenSSH. Or just install the OpenSSH Server:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~

Check if a firewall rule was created (maybe you want to adjust it):

Get-NetFirewallRule -Name *ssh*

Start the OpenSSH server:

Start-Service sshd

Running OpenSSH server as service:

Set-Service -Name sshd -StartupType 'Automatic'

You can create the .ssh directory with the correct permissions by connecting to localhost and creating the known_hosts file.

ssh user@

When you intend to use public key authentication for users in the administrators group, have a look into How to Login Windows Using SSH Key Under Local Admin.

Indeed you can get rsync running via WSL. But why load tons of dependencies onto your system? With the installation of rsync I cheated a bit and used Chocolatey by running choco install rsync, but there is also an issue requesting rsync support for the OpenSSH server which includes an archive with an rsync.exe and libraries which may also fit. You can place those files for example into C:\Windows\System32\OpenSSH so they are in the PATH.

So here we are. Now I can solve all my other issues with BackupPC, Windows firewall and the network challenges to get access to the isolated dedicated network of the windows system.

Planet DebianJohn Goerzen: More Topics on Store-And-Forward (Possibly Airgapped) ZFS and Non-ZFS Backups with NNCP

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In my previous post, I introduced a way to use ZFS backups over NNCP. In this post, I’ll expand on that and also explore non-ZFS backups.

Use of nncp-file instead of nncp-exec

The previous example used nncp-exec (like UUCP’s uux), which lets you pipe stdin in, then queues up a request to run a given command with that input on a remote. I discussed that NNCP doesn’t guarantee order of execution, but that for the ZFS use case, that was fine since zfs receive would just fail (causing NNCP to try again later).

At present, nncp-exec stores the data piped to it in RAM before generating the outbound packet (the author plans to fix this shortly). That made it unusable for some of my backups, so I set it up another way: with nncp-file, the tool to transfer files to a remote machine. A cron job then picks them up and processes them.

On the machine being backed up, we have to find a way to encode the dataset to be received. I chose to do that as part of the filename, so the updated simplesnap-queue could look like this:


#!/bin/bash
set -e
set -o pipefail

DEST="`echo $1 | sed 's,^tank/simplesnap/,,'`"
FILE="bakfsfmt2-`date "+%s.%N".$$`_`echo "$DEST" | sed 's,/,@,g'`"

echo "Processing $DEST to $FILE" >&2
# stdin piped to this
zstd -8 - \
  | gpg --compress-algo none --cipher-algo AES256 -e -r 012345...  \
  | su nncp -c "/usr/local/nncp/bin/nncp-file -nice B -noprogress - 'backupsvr:$FILE'" >&2

echo "Queued $DEST to $FILE" >&2

I’ve added compression and encryption here as well; more on that below.
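The receiving side then has to invert this naming scheme. A minimal sketch of that decoding, assuming the filename format produced above (the example filename is made up, not from a real transfer):

```shell
# Hypothetical decode of a queued filename back into a dataset path.
# Everything after the first "_" is the dataset, with "/" encoded as "@".
FILE="bakfsfmt2-1609459200.123456789.4242_tank@data@projects"
DEST="$(echo "${FILE#*_}" | sed 's,@,/,g')"
echo "$DEST"
```

This mirrors the sed 's,/,@,g' in the queue script above.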

On the backup server, we would define a different incoming directory for each node in nncp.hjson. For instance:

host1: {
   incoming: "/var/local/nncp-backups-incoming/host1"
}

host2: {
   incoming: "/var/local/nncp-backups-incoming/host2"
}

I’ll present the scanning script in a bit.

Offsite Backup Rotation

Most of the time, you don’t want just a single drive to store the backups. You’d like to have a set. At minimum, one wouldn’t be plugged in so lightning wouldn’t ruin all your backups. But maybe you’d store a second drive at some other location you have access to (friend’s house, bank box, etc.)

There are several ways you could solve this:

  • If the remote machine is at a location with network access and you trust its physical security (remember that although it will store data encrypted at rest and will transport it encrypted, it will — in most cases — handle un-encrypted data during processing), you could of course send NNCP packets to it over the network at the same time you send them to your local backup system.
  • Alternatively, if the remote location doesn’t have network access or you want to keep it airgapped, you could transport the NNCP packets by USB drive to the remote end.
  • Or, if you don’t want to have any kind of processing capability remotely — probably a wise move — you could rotate the hard drives themselves, keeping one plugged in locally and unplugging the other to take it offsite.

The third option can be helped with NNCP, too. One way is to create separate NNCP installations for each of the drives that you store data on. Then, whenever one is plugged in, the appropriate NNCP config will be loaded and appropriate packets received and processed. The neighbor machine — the spooler — would just store up packets for the offsite drive until it comes back onsite (or, perhaps, your airgapped USB transport would do this). Then when it’s back onsite, all the queued up ZFS sends get replayed and the backups replicated.
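One way the per-drive configuration selection could be wired up is sketched below, assuming each drive mounts at a predictable path and that your NNCP version honors the NNCPCFG environment variable for choosing a configuration file (check that against your NNCP documentation; all paths and names here are made up):

```shell
# Hypothetical selector: point NNCP at whichever backup drive is mounted.
NNCPCFG=""
for disk in backupdisk1 backupdisk2; do
  if [ -d "/mnt/$disk/nncp" ]; then
    NNCPCFG="/mnt/$disk/nncp/nncp.hjson"
    export NNCPCFG
    break
  fi
done
echo "NNCP config: ${NNCPCFG:-no backup drive mounted}"
```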

Now, how might you handle this with NNCP?

The simple way would be to have each system generating backups send them to two destinations. For instance:

zstd -8 - | gpg --compress-algo none --cipher-algo AES256 -e -r 07D5794CD900FAF1D30B03AC3D13151E5039C9D5 \
  | tee >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk1:$FILE'") \
        >(su nncp -c "/usr/local/nncp/bin/nncp-file -nice B+5 -noprogress - 'backupdisk2:$FILE'") \
   > /dev/null

You could probably also more safely use pee(1) (from moreutils) to do this.

This has an unfortunate result of doubling the network traffic from every machine being backed up. So an alternative option would be to queue the packets to the spooling machine, and run a distribution script from it; something like this, in part:

if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi

logit "Scanning queue directory..."
for HOST in *; do
   for FILE in bakfsfmt2-*; do
           if [ -f "$FILE" ]; then
                   for BAKFS in backupdisk1 backupdisk2; do
                           runcommand nncp-file -nice B+5 -noprogress "$FILE" "$BAKFS:$HOST/$FILE"
                   done
                   runcommand rm "$FILE"
           else
                   logit "$HOST: Skipping $FILE since it doesn't exist"
           fi
   done
done

logit "Scan complete."

Security Considerations

You’ll notice that in my example above, the encryption happens as the root user, but nncp is called under su. This means that even if there is a vulnerability in NNCP, the data would still be protected by GPG. I’ll also note here that many sites run ssh as root unnecessarily; the same principles should apply there. (ssh has had vulnerabilities in the past as well). I could have used gpg’s built-in compression, but zstd is faster and better, so we can get good performance by using fast compression and piping that to an algorithm that can use hardware acceleration for encryption.

I strongly encourage considering transport, whether ssh or NNCP or UUCP, to be untrusted. Don’t run it as root if you can avoid it. In my example, the nncp user, which all NNCP commands are run as, has no access to the backup data at all. So even if NNCP were compromised, my backup data wouldn’t be. For even more security, I could also sign the backup stream with gpg and validate that on the receiving end.

I should note, however, that this conversation assumes that a network- or USB-facing ssh or NNCP is more likely to have an exploitable vulnerability than is gpg (which here is just processing a stream). This is probably a safe assumption in general. If you believe gpg is more likely to have an exploitable vulnerability than ssh or NNCP, then obviously you wouldn’t take this particular approach.

On the zfs side, the use of -F with zfs receive is avoided; this could lead to a compromised backed-up machine generating a malicious rollback on the destination. Backup zpools should be imported with -R or -N to ensure that a malicious mountpoint property couldn’t be used to cause an attack. I choose to use “zfs receive -u -o readonly=on” which is compatible with both unmounted backup datasets and zpools imported with -R (or both). To access the data in a backup dataset, you would normally clone it and access it there.

The processing script

So, let's put this all together and look at an example of a processing script that runs from cron as root and processes the incoming ZFS data.

set -e
set -o pipefail

# Log a message
logit () {
   logger -p info -t "`basename "$0"`[$$]" "$1"
}

# Log an error message
logerror () {
   logger -p err -t "`basename "$0"`[$$]" "$1"
}

# Log stdin with the given code.  Used normally to log stderr.
logstdin () {
   logger -p info -t "`basename "$0"`[$$/$1]"
}

# Run command, logging stderr and exit code
runcommand () {
   logit "Running $*"
   if "$@" 2> >(logstdin "$1") ; then
      logit "$1 exited successfully"
      return 0
   else
      RETVAL="$?"
      logerror "$1 exited with error $RETVAL"
      return "$RETVAL"
   fi
}

if ! [ -d "$INCOMINGDIR" ]; then
        logerror "$INCOMINGDIR doesn't exist"
        exit 0
fi

cd "$INCOMINGDIR"
if dotlockfile -r 0 -l -p "${LOCKFILE}"; then
  logit "Lock obtained at ${LOCKFILE} with dotlockfile"
  trap 'ECODE=$?; dotlockfile -u '"${EVAL_SAFE_LOCKFILE}"'; exit $ECODE' EXIT INT TERM
else
  logit "Could not obtain lock at $LOCKFILE; $0 likely already running."
  exit 0
fi


logit "Scanning queue directory..."
for HOST in *; do
    HOSTPATH="$HOST"
    # files like backupsfmt2-134.13134_dest
    for FILE in "$HOSTPATH"/backupsfmt2-[0-9]*_?*; do
        if [ ! -f "$FILE" ]; then
            logit "Skipping non-existent $FILE"
            continue
        fi

        # Now, $DEST will be HOST/DEST.  Strip off the @ also.
        DEST="`echo "$FILE" | sed -e 's/^.*backupsfmt2[^_]*_//' -e 's,@,/,g'`"

        if [ -z "$DEST" ]; then
            logerror "Malformed dest in $FILE"
            continue
        fi
        HOST2="`echo "$DEST" | sed 's,/.*,,g'`"
        if [ -z "$HOST2" ]; then
            logerror "Malformed DEST $DEST in $FILE"
            continue
        fi

        if [ ! "$HOST" = "$HOST2" ]; then
            logerror "$FILE: $HOST doesn't match $HOST2"
            continue
        fi

        logit "Processing $FILE to $STORE/$DEST"
        if runcommand gpg -q -d < "$FILE" | runcommand zstdcat | runcommand zfs receive -u -o readonly=on "$STORE/$DEST"; then
            logit "Successfully processed $FILE to $STORE/$DEST"
            runcommand rm "$FILE"
        else
            logerror "FAILED to process $FILE to $STORE/$DEST"
        fi
    done
done
Applying These Ideas to Non-ZFS Backups

ZFS backups made our job easier in a lot of ways:

  • ZFS can calculate a diff based on an efficiently-stored previous local state (snapshot or bookmark), rather than a comparison to a remote state (rsync)
  • ZFS "incremental" sends, while less efficient than rsync, are reasonably efficient, sending only changed blocks
  • ZFS receive detects and enforces that the incremental source on the local machine must match the incremental source of the original stream, enforcing ordering
  • Datasets using ZFS encryption can be sent in their encrypted state
  • Incrementals can be done without a full scan of the filesystem

Some of these benefits you just won't get without ZFS (or something similar like btrfs), but let's see how we could apply these ideas to non-ZFS backups. I will explore the implementation of them in a future post.

When I say "non-ZFS", I am being a bit vague as to whether the source, the destination, or both systems are running a non-ZFS filesystem. In general I'll assume that neither is ZFS.

The first and most obvious answer is to just tar up the whole system and send that every day. This is, of course, only suitable for small datasets on a fast network. These tarballs could be unpacked on the destination and stored more efficiently via any number of methods (hardlink trees, a block-level deduplicator like borg or rdedup, or even just simply compressed tarballs).
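For the hardlink-tree option, here is a minimal sketch of the idea (assuming GNU cp; this is essentially what rsnapshot-style tools do between daily trees):

```shell
cd "$(mktemp -d)"
mkdir day1
echo unchanged > day1/file
# Link rather than copy: day2/file shares day1/file's inode, so the
# second "day" of backups costs almost no extra disk space
cp -al day1 day2
[ day1/file -ef day2/file ] && echo "same inode"    # prints: same inode
```

Changed files would then be written fresh into the new tree, breaking the link only where necessary.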

To make the network trip more efficient, something like rdiff or xdelta could be used. A signature file could be stored on the machine being backed up (generated via tee/pee at stream time), and the next run could simply send an rdiff delta over NNCP. This would be quite network-efficient, but still would require reading every byte of every file on every backup, and would also require quite a bit of temporary space on the receiving end (to apply the delta to the previous tarball and generate a new one).
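A toy sketch of the signature/delta idea follows. Real rdiff (librsync) uses a rolling checksum that can match blocks at arbitrary byte offsets; this aligned-block version only illustrates why unchanged blocks never need to cross the network:

```shell
cd "$(mktemp -d)"
printf 'aaaabbbbccccdddd' > old.tar
printf 'aaaaXXXXccccdddd' > new.tar
# Signature: one checksum per fixed-size block of the old file
split -b 4 old.tar oldblk.
for b in oldblk.*; do md5sum < "$b"; done > old.sig
# Delta: count only the new blocks whose checksum is absent from the signature
split -b 4 new.tar newblk.
sent=0
for b in newblk.*; do
    grep -q "$(md5sum < "$b" | cut -d' ' -f1)" old.sig || sent=$((sent + 1))
done
echo "blocks to transmit: $sent"    # prints: blocks to transmit: 1
```

Only the one modified block would be sent; the receiver reassembles the rest from the previous tarball, which is where the temporary-space requirement comes from.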

Alternatively, a program that generates incremental backup files such as rdup could be used. These could be transmitted over NNCP to the backup server, and unpacked there. While perhaps less efficient on the network -- every file with at least one modified byte would be retransmitted in its entirety -- it avoids the need to read every byte of unmodified files or to have enormous temporary space. I should note here that GNU tar claims to have an incremental mode, but it has a potential data loss bug.
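As a point of comparison for the incremental-file approach, GNU tar's --listed-incremental mode behaves like this (keep the data-loss bug mentioned above in mind before relying on it for anything real):

```shell
cd "$(mktemp -d)"
mkdir data
echo one > data/a
# Level 0: full archive, recording state in the snapshot file
tar --listed-incremental=snap.0 -cf full.tar data
# Level 1 works against a copy of the level-0 snapshot
cp snap.0 snap.1
echo two > data/b
tar --listed-incremental=snap.1 -cf incr.tar data
# Any file with at least one modified byte is re-archived in its entirety;
# unchanged data/a is omitted from the incremental archive
tar -tf incr.tar
```

This is exactly the trade-off described above: no signature files or large temporary space, but whole changed files are retransmitted.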

There are also some tools with algorithms that may apply well in this use case: syrep and fssync being the two most prominent examples, though rdedup (mentioned above) and the nascent asuran project may also be combinable with other tools to achieve this effect.

I should, of course, conclude this section by mentioning btrfs. Every time I've tried it, I've run into serious bugs, and its status page indicates that only some of them have been resolved. I would not consider using it for something as important as backups. However, if you are comfortable with it, it is likely to be able to run in more constrained environments than ZFS and could probably be processed in much the same way as zfs streams.

Cryptogram Friday Squid Blogging: China Launches Six New Squid Jigging Vessels

From Pingtan Marine Enterprise:

The 6 large-scale squid jigging vessels are normally operating vessels that returned to China earlier this year from the waters of Southwest Atlantic Ocean for maintenance and repair. These vessels left the port of Mawei on December 17, 2020 and are sailing to the fishing grounds in the international waters of the Southeast Pacific Ocean for operation.

I wonder if the company will include this blog post in its PR roundup.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Searching for Giant Squid by Collecting Environmental DNA

The idea is to collect and analyze random DNA floating around the ocean, and using that to figure out where the giant squid are. No one is sure if this will actually work.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureCodeSOD: Generated Requests

If you've worked with developing software of any real complexity, you've probably come across a library or tool that does code generation. Instead of writing all the boring boiler-plate stuff yourself, you maybe throw a little configuration at the system and let it generate all those classes for you. I'd argue that, in most cases, that sort of thing is a code smell- it's not in and of itself bad, but it hints at missed abstractions. There's probably a general way to represent what you want to represent without generating a big pile of classes. The question is: is the additional abstraction worth it, or should you just generate code to get it done?

Russell was recently browsing the documentation for Amazon Web Services. The Java SDK allows you to send requests to AWS to automate things in their cloud. When you send a request, you need an AmazonWebServiceRequest object. Now, AmazonWebServiceRequest is itself an abstract class, so we need to send a concrete implementation, specific to the operation we want to perform. That means it's going to be one of:

AbortDocumentVersionUploadRequest, AbortEnvironmentUpdateRequest, AbortMultipartUploadRequest, AbortMultipartUploadRequest, AbortVaultLockRequest, AbstractPutObjectRequest, AcceptCertificateTransferRequest, AcceptDirectConnectGatewayAssociationProposalRequest, AcceptDomainTransferFromAnotherAwsAccountRequest, AcceptGrantRequest, AcceptHandshakeRequest, AcceptInboundCrossClusterSearchConnectionRequest, AcceptInputDeviceTransferRequest, AcceptInvitationRequest, … VerifySoftwareTokenRequest, VerifyTrustRequest, VerifyUserAttributeRequest, ViewBillingRequest, VoteOnProposalRequest, WithdrawByoipCidrRequest, WithdrawByoipCidrRequest, WriteRecordsRequest

I skipped a… few thousand of them in the middle there. You can get them all on the docs page. If you dig into any of the concrete implementations, they're all tagged as @Generated. I'm sure that there are more convenient methods which wrap around this, and that your average SDK user is never going to need to directly interact with any of these classes, but when you see a wall of generated classes, you can't help but wonder if this is the best approach. At a certain level, SDKs are meant to be understood by humans, and if you have about 9,000 auto-generated types, maybe you've lost the plot just a little bit.

And if you're worried about how you handle the response, don't be. Each of these request classes has a Result version as well. Well, not each of them- there's only 8,704 Result types. Now that's efficient.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianRuss Allbery: Review: The Once and Future Witches

Review: The Once and Future Witches, by Alix E. Harrow

Publisher: Redhook Books
Copyright: October 2020
ISBN: 0-316-42202-9
Format: Kindle
Pages: 515

Once upon a time there were three sisters.

They were born in a forgotten kingdom that smelled of honeysuckle and mud, where the Big Sandy ran wide and the sycamores shone white as knuckle-bones on the banks. The sisters had no mother and a no-good father, but they had each other; it might have been enough.

But the sisters were banished from their kingdom, broken and scattered.

The Once and Future Witches opens with Juniper, the youngest, arriving in the city of New Salem. The year is 1893, but not in our world, not quite; Juniper has witch-ways in her pocket and a few words of power. That's lucky for her because the wanted posters arrived before she did.

Unbeknownst to her or to each other, her sisters, Agnes and Bella, are already in New Salem. Agnes works in a cotton mill after having her heart broken one too many times; the mill is safer because you can't love a cotton mill. Bella is a junior librarian, meek and nervous and uncertain but still fascinated by witch-tales and magic. It's Bella who casts the spell, partly by accident, partly out of wild hope, but it was Juniper arriving in the city who provided the final component that made it almost work. Not quite, not completely, but briefly the lost tower of Avalon appears in St. George's Square. And, more importantly, the three sisters are reunited.

The world of the Eastwood sisters has magic, but the people in charge of that world aren't happy about it. Magic is a female thing, contrary to science and, more importantly, God. History has followed a similar course to our world in part because magic has been ruthlessly suppressed. Inquisitors are a recent memory and the cemetery has a witch-yard, where witches are buried unnamed and their ashes sown with salt. The city of New Salem is called New Salem because Old Salem, that stronghold of witchcraft, was burned to the ground and left abandoned, fit only for tourists to gawk at the supposedly haunted ruins. The women's suffrage movement is very careful to separate itself from any hint of witchcraft or scandal, making its appeals solely within the acceptable bounds of the church.

Juniper is the one who starts to up-end all of that in New Salem. Juniper was never good at doing what she was told.

This is an angry book that feels like something out of another era, closer in tone to a Sheri S. Tepper or Joanna Russ novel than the way feminism is handled in recent work. Some of that is the era of the setting, before women even had the right to vote. But primarily it's because Harrow, like those earlier works, is entirely uninterested in making excuses or apologies for male behavior. She takes an already-heated societal conflict and gives the underdogs magic, which turns it into a war. There is likely a better direct analogy from the suffrage movement, but the comparison that came to my mind was if Martin Luther King, Jr. proved ineffective or had not existed, and instead Malcolm X or the Black Panthers became the face of the Civil Rights movement.

It's also an emotionally exhausting book. The protagonists are hurt and lost and shattered. Their moments of victory are viciously destroyed. There is torture and a lot of despair. It works thematically; all the external solutions and mythical saviors fail, but in the process the sisters build their own strength and their own community and rescue themselves. But it's hard reading at times if you're emotionally invested in the characters (and I was very invested). Harrow does try to balance the losses with triumphs and that becomes more effective and easier to read in the back half of the book, but I struggled with the grimness at the start.

One particular problem for me was that the sisters start the book suspicious and distrustful of each other because of lies and misunderstandings. This is obvious to the reader, but they don't work through it until halfway through the book. I can't argue with this as a piece of characterization — it made sense to me that they would have reacted to their past the way that they did. But it was still immensely frustrating to read, since in the meantime awful things were happening and I wanted them to band together to fight. They also worry over the moral implications of the fate of their father, whereas I thought the only problem was that the man couldn't die more than once. There too, it makes sense given the moral framework the sisters were coerced into, but it is not my moral framework and it was infuriating to see them stay trapped in it for so long.

The other thing that I found troubling thematically is that Harrow personalizes evil. I thought the more interesting moral challenge posed in this book is a society that systematically abuses women and suppresses their power, but Harrow gradually supplants that systemic conflict with a villain who has an identity and a backstory. It provides a more straightforward and satisfying climax, and she does avoid the trap of letting triumph over one character solve all the broader social problems, but it still felt too easy. Worse, the motives of the villain turn out to be at right angles to the structure of the social oppression. It's just a tool he's using, and while that's also believable, it means the transfer of the narrative conflict from the societal to the personal feels like a shying away from a sharper political point. Harrow lets the inhabitants of New Salem off too easily by giving them the excuse of being manipulated by an evil mastermind.

What I thought Harrow did handle well was race, and it feels rare to be able to say this about a book written by and about white women. There are black women in New Salem as well, and they have their own ways and their own fight. They are suspicious of the Eastwood sisters because they're worried white women will stir up trouble and then run away and leave the consequences to fall on black women... and they're right. An alliance only forms once the white women show willingness to stay for the hard parts. Black women are essential to the eventual success of the protagonists, but the opposite is not necessarily true; they have their own networks, power, and protections, and would have survived no matter what the Eastwoods did. The book is the Eastwoods' story, so it's mostly concerned with white society, but I thought Harrow avoided both making black women too magical or making white women too central. They instead operate in parallel worlds that can form the occasional alliance of mutual understanding.

It helps that Cleopatra Quinn is one of the best characters of the book.

This was hard, emotional reading. It's the sort of book where everything has a price, even the ending. But I'm very glad I read it. Each of the three sisters gets their own, very different character arc, and all three of those arcs are wonderful. Even Agnes, who was the hardest character for me to like at the start of the book and who I think has the trickiest story to tell, becomes so much stronger and more vivid by the end of the book. Sometimes the descriptions are trying a bit too hard and sometimes the writing is not quite up to the intended goal, but some of the descriptions are beautiful and memorable, and Harrow's way of weaving the mythic and the personal together worked for me.

This is a more ambitious book than The Ten Thousand Doors of January, and while I think the ambition exceeded Harrow's grasp in a few places and she took a few thematic short-cuts, most of it works. The characters felt like living and changing people, which is not easy given how heavily the story structure leans on maiden, mother, and crone archetypes. It's an uncompromising and furious book that turns the anger of 1970s feminist SF onto themes that are very relevant in 2021. You will have to brace yourself for heartbreak and loss, but I think it's fantasy worth reading. Recommended.

Rating: 8 out of 10


Planet DebianEnrico Zini: COVID-19 vaccines

COVID-19 vaccination has started, and this site tracks progress in Italy. This site, world-wide.

Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine has a pretty good description of the BioNTech/Pfizer SARS-CoV-2 Vaccine, codon by codon, broken down in a way that I managed to follow.

From the same author, DNA seen through the eyes of a coder

Planet DebianEmmanuel Kasper: How to move a single VM between cloud providers

I have been running a small Debian VM for a decade, which I use for basic web and mail hosting. Since most of the VM setup was done manually rather than following the Infrastructure as Code pattern, it is faster to simply copy the filesystem when switching providers instead of reconfiguring everything.
The steps involved are:

1. create a backup of the filesystem using tar or rsync, excluding dynamic content
rsync --archive \
    --one-file-system --numeric-ids \
    --rsh "ssh -i private_key" root@server:/ /local_dir

tar -cvpzf backup.tar.gz \
--numeric-owner \
--exclude=/backup.tar.gz \
--one-file-system /

Notice here the --one-file-system switch which avoids backing up the content of mount points like /proc, /dev.
If you have extra partitions with a mounted filesystem, like /boot or /home, you need to add a separate backup for those.

2. create a new VM on the new cloud provider, verify you have a working console access, and power it off.
3. boot on the new cloud provider a rescue image
4. partition the disk image on the new provider.
5. mount the new root partition, and untar your backup on it. You could for instance push the local backup via rsync, or download the tar archive using https.
6. update network configuration and /etc/fstab
7. chroot into the target system, and reinstall grub

This works surprisingly well, and if you made your backup locally, you can test the whole procedure by building a test VM with your backup. Just replace the debootstrap step with a command like tar -xvpzf /path/to/backup.tar.gz -C /mount_point --numeric-owner
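The backup-and-restore round trip can be rehearsed locally with a throwaway directory standing in for the VM's root (all paths here are illustrative):

```shell
cd "$(mktemp -d)"
# A stand-in for the VM's root filesystem
mkdir -p src/etc
echo testhost > src/etc/hostname
# Back up with the same flags as above (no self-inclusion risk here,
# since the archive lands outside the tree being archived)
tar -cpzf backup.tar.gz --numeric-owner --one-file-system -C src .
# Restore into the "new VM's" mounted root partition
mkdir mount_point
tar -xpzf backup.tar.gz -C mount_point --numeric-owner
cat mount_point/etc/hostname    # prints: testhost
```

On the real target, the extraction would be followed by the fstab, network, and grub steps from the list above.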

Using this procedure, I moved from Hetzner (link in French language) to Digital Ocean, from Digital Ocean to Vultr, and now back at Hetzner.


Planet DebianJonathan Wiltshire: RCBW 21.1

Does software-properties-common really depend on gnupg, as described in #970124, or could it be python3-software-properties? Should it be Depends, or Recommends? And do you accept the challenge of finding out and preparing a patch (and even an upload) to fix the bug?

Planet DebianMartin-Éric Racine: Help needed: clean up and submit KMS driver for Geode LX to LKML

Ever since X.org switched to rootless operation, the days of the Geode driver have been numbered. The old codebase dates back to Geode's early days at Cyrix, was then updated by NSC to add support for their new GX2 architecture, from which AMD dropped GX1 support and added support for their new LX architecture. To put it mildly, that codebase is a serious mess.

However, at least the LX code comes with plenty of niceties, such as being able to detect when it runs on an OLPC XO-1 and to probe DDC pins to determine the optimal display resolution. This still doesn't make the codebase cruft-free.

Anyhow, most Linux distributions have dropped support for anything older than i686 with PAE, which essentially means that the GX2 code is just for show. Debian is one of very few distributions whose x86-32 port still ships with i686 without PAE. In fact, the lowest common denominator kernel on i386 is configured for Geode (LX).

A while back, someone had started working on a KMS driver for the Geode LX. Through word of mouth, I got my hands on a copy of their Git tree. The driver worked reasonably well, but the codebase needs some polishing before it could be included in the Linux kernel tree.

Hence this call for help:

Is there anyone with good experience of the LKML coding standards who would be willing to clean up the driver's code and submit the patch to the LKML?

Planet DebianJonathan Carter: Free Software Activities for 2020-12

Here’s a list of some Debian packaging work for December 2020.

2020-12-01: Sponsor package mangohud (0.6.1-1) for Debian unstable.

2020-12-01: Sponsor package spyne (2.13.16-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package python-xlrd (1.2.0-1) for Debian unstable (Python team request).

2020-12-01: Sponsor package buildbot for Debian unstable (Python team request).

2020-12-08: Upload package calamares to Debian unstable.

2020-12-09: Upload package btfs (2.23-1) to Debian unstable.

2020-12-09: Upload package feed2toot (0.15-1) to Debian unstable.

2020-12-09: Upload package gnome-shell-extension-harddisk-led (23-1) to Debian unstable.

2020-12-10: Upload package feed2toot (0.16-1) to Debian unstable.

2020-12-10: Upload package gnome-shell-extension-harddisk-led (24-1) to Debian unstable.

2020-12-13: Upload package xabacus (8.3.1-1) to Debian unstable.

2020-12-14: Upload package python-aniso8601 (8.1.0-1) to Debian unstable.

2020-12-19: Upload package rootskel-gtk (1.42) to Debian unstable.

2020-12-21: Sponsor package goverlay (0.4.3-1) for Debian unstable.

2020-12-21: Sponsor package pastel (0.2.1-1) for Debian unstable (Python team request).

2020-12-22: Sponsor package python-requests-toolbelt (0.9.1-1) for Debian unstable (Python team request).

2020-12-22: Upload kpmcore (20.12.0-1) to Debian unstable.

2020-12-26: Upload package bundlewrap (4.3.0-1) to Debian unstable.

2020-12-26: Review package python-strictyaml (1.1.1-1) (Needs some more work) (Python team request).

2020-12-26: Review package buildbot (2.9.3-1) (Needs some more work) (Python team request).

2020-12-26: Review package python-vttlib (0.9.1+dfsg-1) (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-formencode (2.0.0-1) for Debian unstable (Python team request).

2020-12-26: Sponsor package pylev (1.2.0-1) for Debian unstable (Python team request).

2020-12-26: Review package python-absl (Needs some more work) (Python team request).

2020-12-26: Sponsor package python-moreorless (0.3.0-2) for Debian unstable (Python team request).

2020-12-26: Sponsor package peewee (3.14.0+dfsg-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package pympler (0.9+dfsg1-1) for Debian unstable (Python team request).

2020-12-28: Sponsor package bidict (0.21.2-1) for Debian unstable (Python team request).

Planet DebianPaul Wise: FLOSS Activities December 2020


This month I didn't have any particular focus. I just worked on issues in my info bubble.





  • Debian: restart bacula director, ping some people about disk usage
  • Debian wiki: unblock IP addresses, approve accounts, update email for accounts with bouncing email


  • Respond to queries from Debian users and contributors on the mailing lists and IRC


All work was done on a volunteer basis.


Cryptogram Amazon Has Trucks Filled with Hard Drives and an Armed Guard

From an interview with an Amazon Web Services security engineer:

So when you use AWS, part of what you’re paying for is security.

Right; it’s part of what we sell. Let’s say a prospective customer comes to AWS. They say, “I like pay-as-you-go pricing. Tell me more about that.” We say, “Okay, here’s how much you can use at peak capacity. Here are the savings we can see in your case.”

Then the company says, “How do I know that I’m secure on AWS?” And this is where the heat turns up. This is where we get them. We say, “Well, let’s take a look at what you’re doing right now and see if we can offer a comparable level of security.” So they tell us about the setup of their data centers.

We say, “Oh my! It seems like we have level five security and your data center has level three security. Are you really comfortable staying where you are?” The customer figures, not only am I going to save money by going with AWS, I also just became aware that I’m not nearly as secure as I thought.

Plus, we make it easy to migrate and difficult to leave. If you have a ton of data in your data center and you want to move it to AWS but you don’t want to send it over the internet, we’ll send an eighteen-wheeler to you filled with hard drives, plug it into your data center with a fiber optic cable, and then drive it across the country to us after loading it up with your data.

What? How do you do that?

We have a product called Snowmobile. It’s a gas-guzzling truck. There are no public pictures of the inside, but it’s pretty cool. It’s like a modular datacenter on wheels. And customers rightly expect that if they load a truck with all their data, they want security for that truck. So there’s an armed guard in it at all times.

It’s a pretty easy sell. If a customer looks at that option, they say, yeah, of course I want the giant truck and the guy with a gun to move my data, not some crappy system that I develop on my own.

Lots more about how AWS views security, and Keith Alexander’s position on Amazon’s board of directors, in the interview.

Found on Slashdot.

Cryptogram On the Evolution of Ransomware

Good article on the evolution of ransomware:

Though some researchers say that the scale and severity of ransomware attacks crossed a bright line in 2020, others describe this year as simply the next step in a gradual and, unfortunately, predictable devolution. After years spent honing their techniques, attackers are growing bolder. They’ve begun to incorporate other types of extortion like blackmail into their arsenals, by exfiltrating an organization’s data and then threatening to release it if the victim doesn’t pay an additional fee. Most significantly, ransomware attackers have transitioned from a model in which they hit lots of individuals and accumulated many small ransom payments to one where they carefully plan attacks against a smaller group of large targets from which they can demand massive ransoms. The antivirus firm Emsisoft found that the average requested fee has increased from about $5,000 in 2018 to about $200,000 this year.

Ransomware is a decades-old idea. Today, it’s increasingly profitable and professional.

Planet DebianJonathan Wiltshire: WordPress in a subdirectory

For many years now I’ve had WordPress installed as a subdirectory of my site but appearing at the domain level, i.e. /wordpress/index.php is transparently presented as the homepage. This is done by setting the “WordPress Address” and “Site Address” settings and then mapping requests which do not match an existing file or directory through as a PHP pathinfo, using Apache’s mod_rewrite rules in a .htaccess file or server configuration.

In this way most of the site is WordPress’s dynamic pages and posts, but WordPress itself is neatly contained and random static resources such as /screentest/1024.GIF work as expected.

Those .htaccess rules were originally hand-crafted but didn’t take account of changing recommendations, e.g. the Authorization header. When I rearranged matters recently I decided to take advantage of WordPress’s own generated rules and ditch my old rules.

They look like this:

# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated, and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress

If .htaccess is writable, WordPress adds these rules automatically. Unfortunately, when I switched to them and not my own, the site broke.

Crucially the generated rules do not cater for WordPress being in a subdirectory. No matter what combination of settings I tried, local changes to the rules (like RewriteRule . /wordpress/index.php [L]) were rewritten back to the original values at random times and clearly couldn’t be relied upon to stay as I need them.

This feels like a bug: my intuition is that if the “WordPress Address” and “Site Address” settings are different, but within the same domain, the rules should be generated to take care of that.

One option is to make the .htaccess file read-only, which WordPress detects and avoids trying to make changes at all. But this rather defeats the object of letting it take care of future changes as the software and specifications change.

The second option, which I stuck with, is to add an index.php file with the following contents:

<?php
/* Standard stub from WordPress's "Giving WordPress Its Own Directory"
   documentation, with the /wordpress/ subdirectory path filled in. */
define( 'WP_USE_THEMES', true );
require __DIR__ . '/wordpress/wp-blog-header.php';

Now the generated rules, though incorrect, result in the right behaviour and won’t break on every setting change.

Planet Debian: Russ Allbery: 2020 Book Reading in Review

In 2020, I finished and reviewed 42 books, two more than 2019 (although I had finished but not reviewed two books at the end of 2019, so the total is even more similar than that). This is the best year for reading in terms of book count since 2012, despite being seriously distracted by a new job, a pandemic, and US political meltdowns. Those distractions do show up in the drop in page count.

If it weren't for the pandemic, the count would have been higher. Just as I got into a rhythm of reading while I exercised, gyms became a bad idea for the rest of the year. Treadmills are good for reading; long walks around the neighborhood not so much. That time went to podcasts instead, which I'm not too sad about but which don't prompt reviews.

Finding the mental space and energy to write reviews proved as much of a challenge as finding time to read this year, and I once again had to do some catch-up at the end of the year. To the extent that I have goals for 2021, it's to tighten up the elapsed time between finishing a book and writing a review so that the reviews don't pile up.

I read one book this year that I rated 10 out of 10: Michael Lewis's The Fifth Risk, which is partly about the US presidential transition and is really about what the US government does and what sort of people make careers in civil service. This book is brilliant, fascinating, and surprisingly touching, and I wish it were four times as long. If anything, it's even more relevant today as we enter another transition than it was when Lewis wrote it or when I read it.

There were so many 9 out of 10 ratings this year that it's hard to know where to start. I read the last Murderbot novella by Martha Wells (Exit Strategy) and then the first Murderbot novel (Network Effect), both of which were everything I was hoping for. Murderbot's sarcastic first-person voice continues to be a delight, and I expect Network Effect to take home several 2021 awards. I'm eagerly awaiting the next novel, Fugitive Telemetry, currently scheduled for the end of April, 2021.

Also on the fiction side were Alix E. Harrow's wonderful debut novel The Ten Thousand Doors of January, a fierce novel about family and claiming power that will hopefully win the 2020 Mythopoeic Award (which was delayed by the pandemic), and TJ Klune's heart-warming The House in the Cerulean Sea, my feel-good novel of the year. Finally, Tamsyn Muir's Gideon the Ninth and Harrow the Ninth were a glorious mess in places, but I had more fun reading and discussing those books than I've had with any novel in a very long time.

On the non-fiction side, Tressie McMillan Cottom's Thick is the best collection of sociology that I've read. It's not easy reading, but that book gave me a new-found appreciation and understanding of sociology and what it's trying to accomplish. Gretchen McCulloch's Because Internet is much easier reading but similarly perspective-enhancing, helping me understand (among other things) how choice of punctuation and capitalization expands the dynamic range of meaning in informal text conversation. Finally, Nick Pettigrew's Anti-Social is a funny, enlightening, and sobering look at the process of addressing low-level unwanted behavior that's directly relevant to the current conflicts over the role of policing in society.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

Planet Debian: Chris Lamb: OpenUK Influencer 2021

After a turbulent 2020, I am very grateful to have been chosen in OpenUK's 2021 honours as one of the 100 top influencers in the UK's open technology community, which recognises contributions to open source software, open data and open hardware.

Congratulations to all of the other open source heroes and heroines who were also listed — am looking forward to an exciting year together.

Planet Debian: Andrej Shadura: Transitioning to a new OpenPGP key

Following dkg’s example, I decided to finally transition to my new ed25519/cv25519 key.

Unlike Daniel, I’m not yet trying to split identities, but I’m using this chance to drop old identities I no longer use. My new key only has my main email address and the Debian one, and only those versions of my name I still want around.

My old PGP key (at the moment in the Debian keyring) is:

pub   rsa4096/0x6EA4D2311A2D268D 2010-10-13 [SC] [expires: 2021-11-11]
uid                   [ultimate] Andrej Shadura <>
uid                   [ultimate] Andrew Shadura <>
uid                   [ultimate] Andrew Shadura <>
uid                   [ultimate] Andrew O. Shadoura <>
uid                   [ultimate] Andrej Shadura <>
sub   rsa4096/0xB2C0FE967C940749 2010-10-13 [E]
sub   rsa3072/0xC8C5F253DD61FECD 2018-03-02 [S] [expires: 2021-11-11]
sub   rsa2048/0x5E408CD91CD839D2 2018-03-10 [S] [expires: 2021-11-11]

This is the key I've been using in Debian from the very beginning, and its copies on the SKS keyserver network still have my first DD signature from angdraug:

sig  sig   85EE3E0E 2010-12-03 __________ __________ Dmitry Borodaenko <>
sig  sig   CB4D38A9 2010-12-03 __________ __________ Dmitry Borodaenko <>

My new PGP key is:

pub   ed25519/0xE8446B4AC8C77261 2016-06-13 [SC] [expires: 2022-06-25]
uid                   [ultimate] Andrej Shadura <>
uid                   [ultimate] Andrew Shadura <>
uid                   [ultimate] Andrej Shadura <>
uid                   [ultimate] Andrew Shadura <>
uid                   [ultimate] Andrei Shadura <>
sub   cv25519/0xD5A55606B6539A87 2016-06-13 [E] [expires: 2022-06-25]
sub   ed25519/0x52E0EA6F91F1DB8A 2016-06-13 [A] [expires: 2022-06-25]

If you signed my old key and are reading this, please consider signing my new key; if you feel you need to re-confirm this, feel free to contact me; otherwise, a copy of this statement signed by both the old and new keys is available here.

I have uploaded this new key to, and also published it through WKD and the SKS network. Both keys can also be downloaded from my website.
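For anyone acting on the request above, a typical verification-and-signing session looks roughly like this (a sketch using standard GnuPG commands; the key ID is taken from the listing above, but always verify the full fingerprint out of band before certifying anything):

```shell
# Fetch the new key from the configured keyserver (it is also published
# via WKD and on the author's website).
gpg --recv-keys 0xE8446B4AC8C77261

# Display the full fingerprint and compare it against a trusted source.
gpg --fingerprint 0xE8446B4AC8C77261

# Certify the key: --sign-key makes an exportable signature,
# --lsign-key a local (non-exportable) one.
gpg --sign-key 0xE8446B4AC8C77261
```

An exportable signature can then be returned to the key owner or pushed to a keyserver with gpg --send-keys.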

Planet Debian: Utkarsh Gupta: FOSS Activities in December 2020

Here’s my (fifteenth) monthly update about the activities I’ve done in the F/L/OSS world.


This was my 24th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Amongst a lot of things, this month was crazy, hectic, adventurous, and the last of 2020; more on some parts later this month.
I finally finished my 7th semester (FTW!) and moved on to my last one! That said, I had been busy with other things™ but still did a bunch of Debian stuff.

Here are the following things I did this month:

Uploads and bug fixes:

Other $things:

  • Attended the Debian Ruby team meeting.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored golang-github-gorilla-css for Fedrico.

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my fifteenth month as a Debian LTS and sixth month as a Debian ELTS paid contributor.
I was assigned 26.00 hours for LTS and 38.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

  • Issued DLA 2474-1, fixing CVE-2020-28928, for musl.
    For Debian 9 Stretch, these problems have been fixed in version 1.1.16-3+deb9u1.
  • Issued DLA 2481-1, fixing CVE-2020-25709 and CVE-2020-25710, for openldap.
    For Debian 9 Stretch, these problems have been fixed in version 2.4.44+dfsg-5+deb9u6.
  • Issued DLA 2484-1, fixing #969126, for python-certbot.
    For Debian 9 Stretch, these problems have been fixed in version 0.28.0-1~deb9u3.
  • Issued DLA 2487-1, fixing CVE-2020-27350, for apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.11. The update was prepared by the maintainer, Julian.
  • Issued DLA 2488-1, fixing CVE-2020-27351, for python-apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.2. The update was prepared by the maintainer, Julian.
  • Issued DLA 2495-1, fixing CVE-2020-17527, for tomcat8.
    For Debian 9 Stretch, these problems have been fixed in version 8.5.54-0+deb9u5.
  • Issued DLA 2488-2, for python-apt.
    For Debian 9 Stretch, these problems have been fixed in version 1.4.3. The update was prepared by the maintainer, Julian.
  • Issued DLA 2508-1, fixing CVE-2020-35730, for roundcube.
    For Debian 9 Stretch, these problems have been fixed in version 1.2.3+dfsg.1-4+deb9u8. The update was prepared by the maintainer, Guilhem.

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Front-desk duty from 21-12 until 27-12 and from 28-12 until 03-01 for both LTS and ELTS.
  • Triaged openldap, python-certbot, lemonldap-ng, qemu, gdm3, open-iscsi, gobby, jackson-databind, wavpack, cairo, nsd, tomcat8, and bouncycastle.
  • Marked CVE-2020-17527/tomcat8 as not-affected for jessie.
  • Marked CVE-2020-28052/bouncycastle as not-affected for jessie.
  • Marked CVE-2020-14394/qemu as postponed for jessie.
  • Marked CVE-2020-35738/wavpack as not-affected for jessie.
  • Marked CVE-2020-3550{3-6}/qemu as postponed for jessie.
  • Marked CVE-2020-3550{3-6}/qemu as postponed for stretch.
  • Marked CVE-2020-16093/lemonldap-ng as no-dsa for stretch.
  • Marked CVE-2020-27837/gdm3 as no-dsa for stretch.
  • Marked CVE-2020-{13987, 13988, 17437}/open-iscsi as no-dsa for stretch.
  • Marked CVE-2020-35450/gobby as no-dsa for stretch.
  • Marked CVE-2020-35728/jackson-databind as no-dsa for stretch.
  • Marked CVE-2020-28935/nsd as no-dsa for stretch.
  • Auto EOL’ed libpam-tacplus, open-iscsi, wireshark, gdm3, golang-go.crypto, jackson-databind, spotweb, python-autobahn, asterisk, nsd, ruby-nokogiri, linux, and motion for jessie.
  • General discussion on LTS private and public mailing list.

Other $things! \o/

Bugs and Patches

Well, I did report some bugs and issues and also sent some patches:

  • Issue #44 for github-activity-readme, asking for a feature request to set custom committer’s email address.
  • Issue #711 for git2go, reporting build failure for the library.
  • PR #89 for rubocop-rails_config, bumping RuboCop::Packaging to v0.5.
  • Issue #36 for rubocop-packaging, asking to try out mutant :)
  • PR #212 for cucumber-ruby-core, bumping RuboCop::Packaging to v0.5.
  • PR #213 for cucumber-ruby-core, enabling RuboCop::Packaging.
  • Issue #19 for behance, asking to relax constraints on faraday and faraday_middleware.
  • PR #37 for rubocop-packaging, enabling tests against ruby3.0! \o/
  • PR #489 for cucumber-rails, bumping RuboCop::Packaging to v0.5.
  • Issue #362 for nheko, reporting a crash when opening the application.
  • PR #1282 for paper_trail, adding RuboCop::Packaging amongst other used extensions.
  • Bug #978640 for nheko Debian package, reporting a crash, as a result of libfmt7 regression.

Misc and Fun

Besides squashing bugs and submitting patches, I did some other things as well!

  • Participated in my first Advent of Code event! :)
    Whilst it was indeed fun, I didn’t really complete it. No reason, really. But I’ll definitely come back stronger next year, heh! :)
    All the solutions thus far could be found here.
  • Did a couple of reviews for some PRs and triaged some bugs here and there, meh.
  • Also did some cloud debugging, not so fun if you ask me, but cool enough to make me want to do it again! ^_^
  • Worked along with pollo, zigo, ehashman, rlb, et al for puppet and puppetserver in Debian. OMG, they’re so lovely! <3
  • Ordered some interesting books to read January onward. New year resolution? Meh, not really. Or maybe. But nah.
  • Also did some interesting stuff this month but can’t really talk about it now. Hopefully sooooon.

Until next time.
:wq for today.

Worse Than Failure: Error'd: Winter ...Delivered!

"I wanted to find out how much snow Providence got overnight and apparently, Amazon wants to sell me some of it," Francis B. writes.


John C. wrote, "Yes, Windstream, I have both questions and feedback on this."


"Well, I mean, both haggis and nappies are kind of intestine related," writes Andy.


Robert F. writes, "My propane tank is full of 84E84b089... Are you sure that's not my septic tank?"


"In an attempt to connect with more voters, it looks like Trump is growing a placeholder moustache," wrote Hugh S.




Sam Varghese: Farewell, annus horribilis

Charles Stross: So you say you want a revolution

Today is December 6th. The UK left the EU most of a year ago, but a transition agreement is in effect; it expires on December 31st, and negotiations between the British government and the EU appear to be on the rocks.

How did we get here and where are we going?

It's clear now that the Conservative party has, since 2017, succumbed to entryism by a faction of extreme right wing xenophobes who, in alliance with a cadre of rapacious disaster capitalists, intend to ram through a Brexit that does not involve any kind of working trade agreement with the EU. The 2019 leadership contest and Alexander de Pfeffel Johnson's subsequent election victory handed control over the UK government to this shit-show of idiots, racists, and grifters, led by a second-rate Donald Trump cosplayer with a posh accent and a penchant for quoting half-remembered classical Greek poetry (badly).

The 80-seat Conservative majority in the House of Commons puts Johnson in an invidious position: he's simultaneously on top of the external opposition (Labour—the SNP have effectively been silenced for the past 4 years in Westminster by a tacit Tory/Labour agreement to pretend they don't exist) but vulnerable to internal opposition by the back-bench Tory headbangers. Even if he wanted a trade deal (I'm pretty sure he doesn't) Johnson can't put one in front of his own party without risking a rebellion and leadership challenge. And for Johnson, gamesmanship is everything: it's all about being Prime Minister, not doing the job of Prime Minister. His catastrophic and indecisive handling of COVID19, coupled with outrageous cronyism and blatant corruption in the testing and PPE contract process, is a mirror to his conduct of Brexit and the clearest possible demonstration that he's an unfit person to lead the nation in its biggest peacetime crisis since 1945, much less two crises of that magnitude, one of them self-inflicted, running simultaneously.

Johnson is an arch-bullshitter. He doesn't want to look as if he's aiming for a cliff-edge Brexit situation, but it's his goal (he hasn't done his homework and doesn't understand or care about the long-term implications), so he's participating in negotiations in bad faith. It's clear that his intention is to equivocate until one of the EU27 (probably France, at this point) walks out in exasperation: at this point, he can declare that there's no scope for a Brexit deal but blame the EU for the subsequent economic and political catastrophe. Because never accepting blame for anything is as much Boris Johnson's shtick as Donald Trump's.

A lot of bullshit swirls around the issue of the common fisheries policy. Fisheries are a £400M industry in the UK at this point—chicken feed. But it's a great pretext to drive a stake in the ground (or seabed) and declare an irreconcilable gulf that cannot be bridged. The English have folk memories of the 1970s Cod War with Iceland, and of interminable arguments over the CFP in the 1980s. Johnson's going to play power chords in the key of taking back control over an industry that's worth less than Edinburgh's tourist shops, and he's going to use it as a pretext to run down the clock.

Incidentally, Brexit is best understood (by non-Brits) as an English nationalist project—English nationalism always labels itself as British, because English identity presumptively appropriates and subsumes everything else in the four kingdoms on a whim. It's a romantic utopian dream of autonomy and supremacy, and it's unachievable (that world never existed in the first place). Because it's an impossible dream Brexit can never fail, it can only be betrayed. And so backsliding will not be tolerated in the leadership. Which is why Johnson is on the hook: if he slackens in his zeal his own party will turn on him, like Trumpists turning on a Republican legislator who concedes that Joe Biden might have actually won the presidential election.

I'm calling it for "no deal" at this point. So what happens next?

January 1st: a new customs regime comes into effect around the main ports at Dover, Harwich, Felixstowe, etc. Trucks can't enter Kent without completed export forms, using an IT system that HMRC say won't be ready until March. There's a giant lorry park there, with queuing for 7000 vehicles, which has no toilet facilities and has already flooded a couple of times (it's on a flood plain). Moreover it assumes the average customs delay will be only a couple of hours. I think this is optimistic in the extreme: there may well be tailbacks all the way to the M25 (London's orbital motorway) and reports of fresh produce perishing before it can clear customs in either direction.

It's worth noting that our container ports are already logjammed, with huge delays building up: stuff is getting stuck there either as a side-effect of COVID19 lockdown or in anticipation of Brexit. But it's going to get much, much worse after January 1st.

EHIC cards stop working and health insurance for British travellers in the EU will promptly cost about 20-25% more. Folks who own holiday/second homes in Spain and France (or elsewhere) will suddenly discover they can't spend more than 3 months out of any 6 month period living in their homes, and that if they rent them out they'll be liable for high levels of tax. (Me, I probably won't be able to attend SF conventions in the EU without applying for a visa: it is, after all, a work-related activity and we've lost free movement rights too.)

With no trade agreement, WTO standard tariffs on imports from the EU come into force. These tariffs, contra lies spread in the British press (which are almost wholly in the pocket of pro-Brexit media oligarchs) will come out of the pockets of consumers—meaning the 40% of our food that is imported from the EU will cost 10-30% more, a tax that falls disproportionately on the poor.

I can't help thinking that this tax on food is part of the plan: we've seen rampant cronyism and corruption in the Johnson government, and if they continue on form they will want to raise money somewhere. Stealing it from the mouths of starving children would be nothing new in view of the Tory resistance to funding school meals for kids with laid-off parents due to COVID19. Expect the money to be spent on commissioning studies and products for streamlining import/export facilities at our ports—outsourced on unadvertised contracts to the chumocracy, of course. (That's just the latest blatant example: a sinecure at the institutionally racist and misogynist Home Office for a close friend of the PM's fiancée.)

But that's a digression ...

Weird shortages will show up on the British high street almost immediately. Cut flowers, for example, are almost overwhelmingly imported on overnight ferries from nurseries in the Netherlands: expect Interflora to take a huge hit, and many high street florists to shutter, permanently. Those displays of cut flowers near the entrances of supermarkets will be a thing of the past. So will cheap "basics" ranges of canned food: they're already vanishing from supermarket shelves, in a move that is probably intended to prepare consumers for the coming sticker shock as average food prices rise 15-20% in a month.

There will be a near-crisis in Northern Ireland as more than 200 border crossings have to either carry out customs inspections or close. The Good Friday agreement is in jeopardy if free movement across the border goes. More to the point, about half the food wholesalers supplying shops in the North have announced that they're just going to give up that market, unless streamlined arrangements can be made. Some businesses will simply become non-viable: milk, for example, is in some cases currently trucked across the border multiple times between the farm gate and the dairy (as roads wind across the border) and butter or cheese processing involves movement between facilities on different sides of an arbitrary line on a map that requires VAT and duty assessment at each step. But let's ignore Northern Ireland for a bit.

In England, Nissan have already very politely indicated that they will stop producing cars in the UK in the event of no deal being reached: their supply chains are integrated across the EU. One example given a year ago was that components of the transmission of a (BMW-manufactured) Mini cross the UK/EU border half a dozen times before it is bolted onto the car, as specialized operations are carried out at facilities in different nations. Brexit seems likely to impose additional manufacturing overheads of 10-20% on the automobile industry. I expect major car plants to begin to close by mid-January. We can also probably say goodbye to continued production of Airbus components in the UK—the exact same logistic headaches apply. That's a £105Bn industry and an £11Bn industry both on the brink of non-viability due to Brexit.

Banking is going to be hit as the EU has no intention of allowing external banks to continue trading in Euros. Many of the major investment banks have already carried out nameplate moves to Dublin, Paris, and Frankfurt: that process will accelerate rapidly.

Note that these industries—aviation and automobile manufacturing, financial services, imports—are not the same as the industries that took a hit from COVID19 (transport, hospitality, retail). So the impact of a no-deal Brexit is additive to the impact of COVID19; the one cannot be used to conceal the other.

Great! Disaster capitalism ahoy!

During the initial months of the COVID19 pandemic, Chancellor Rishi Sunak rolled out a gigantic financial aid package (or at least it looked gigantic: it turns out that more of it stuck to the fingers of large corporate Conservative donors than to anyone else's wallet) to keep the economy alive albeit in suspended animation through months of lockdown. In so doing, he got the British public used to the idea of Keynesian stimulus again. The public understanding of economics is primitive, with many people thinking of government revenue (taxation and spending) in terms of a family budget, rather than realizing that money is something that governments can create or destroy pretty much at will (the drawbacks being inflation/deflation) and that financial institutions can also generate by creating and leveraging debt. This was used to drive public support for austerity between 2008 and 2015, a policy which led to over 100,000 excess deaths in the UK, reports of unemployed people starving to death, of benefits claimants being sanctioned for non-completion of employment interviews (while they were on their way to a hospital due to a heart attack brought on by the stress of the interview), and similar. Popular support for austerity has mostly wilted, but COVID19 provided a brisk refresher course in Keynesian stimulus spending—and also gave the new generation of Tory looters a refresher course in corrupt self-dealing and cronyism.

What we're going to see next is a British government "emergency bailout for the economy" to "cushion the impact of Brexit". Of course, most of this will go to their corporate backers and friends, in the biggest Mafia style bust-out since 2008.

By March the random food shortages and chaos on the motorways should be coming under control as a "new normal" prevails. People will get used to empty shelves, after all. There will be a significant economic shock, but printing half a trillion pounds of virtual banknotes (and handing half of it to their friends) should paper over the cracks for a bit. The Tory press will blame half of the disruption on COVID19, and the other half on the EU (who did not vote for Brexit). The vaccine roll-out is ring-fenced: the government have already announced that they'll deploy military logistics capacity to ensure the Turkish-designed and German-manufactured doses arrive in the UK without getting stuck in the lorry queues at Felixstowe.

There will of course be an ongoing drip of outrageous news to keep the papers happy. I expect the Home Office to savagely turn on EU residents in the UK who have applied for and received indefinite leave to remain, because that's what the Home Office always does (did I mention institutional racism earlier?). They're already logjammed with visa applications with insane consequences (here's a nurse who's been driven out of the NHS by Home Office visa fuck-uppery). It's going to turn hostile to everyone—if you don't have a passport already, you should probably make sure you've got one and it's up to date just in case you need to prove that you're legally entitled to live here. (Oops, that's just another £120 tax bill coming due.) Industries will go bust or shut down progressively, not all on January 1st, so there'll be plenty of stuff to keep the news cycle going. MPs will of course denounce foreign investors who pull out as traitors or slackers.

Some time in February/March, the Falkland Isles will hit a crisis point. 80% of their income is based on exports from squid fisheries to Spain and Portugal—which are in the EU, and which mean Falklands-caught squid will be priced out of the market due to tariffs. Expect to see the RAF sailing or flying in food parcels to Port Stanley.

I nearly forgot to mention Gibraltar (which voted Remain by about 98%, and was ignored). The Rock is going to be in trouble; there may be some sort of arrangement by which Gibraltar is conveniently ignored by HM Government and treated as being outside the UK so that the frontier can be kept open, but otherwise Gibraltar is going to need Berlin Air-Lift style supply shipments from January 2nd. Of course the current shower, despite all their bloviation about sovereignty and taking back control, will probably be happy to sell Gibraltar back to Spain in due course (once attention is elsewhere): see also the Thatcher government's dealings with Argentina in 1980-81.

During March/April there's going to be the distraction of another storm on the horizon.

Scotland voted "remain" by a 62/38 margin; support for EU membership has hardened since the referendum, and seems to have transferred to support for independence (and possible re-accession). In 15 consecutive opinion polls since the November 2019 general election, support for Scottish independence has never dropped below 50% (and has been as high as 58%). There is going to be a Scottish parliamentary election on May 6th and the most recent polling shows the SNP getting over 55% of the vote, with more votes than the Conservatives and Labour combined: they're likely to receive a large absolute majority in Holyrood. The SNP have made a commitment to an early post-Brexit independence referendum a manifesto pledge. (They're not the only party to do so: the Scottish Green Party—disclaimer: I am a member—have also done so.) The Greens also regularly get seats in Holyrood: the upshot is that there will almost inevitably be a government with a commitment to holding an independence referendum as soon as possible.

The constitutional position here gets murky, fast. Johnson has stated that he will refuse to grant a Section 30 order permitting a binding referendum on independence. But he's always capable of reversing himself. More to the point, a non-binding consultative referendum may be legally within the powers of the Scottish parliament, using the legislative framework left over from 2014. A significant majority voting "leave" in a non-binding referendum would be a horrible problem for Johnson—the 2016 Brexit referendum was also "non-binding, consultative".

This is not a blog essay about Scottish independence, so please don't start discussing it in the comments: I'm just noting that it's the next political crisis currently scheduled to hit the UK after Brexit (although it could always be pre-empted by the Northern Ireland troubles re-igniting, some other part of the UK seceding, COVID21 putting in an unwelcome appearance, a dinosaur-killer asteroid, and so on).

Your takeaway should be that the UK has just been through 13 very turbulent months (from the 2019 general election upset, via COVID19, to the current mess), but we're only just approaching the threshold of a year that looks likely to continue COVID19 for the first half (at least), with added economic crisis, probable civil disobedience and unrest, a risk of the NHS collapsing, a possible run on Sterling, and then a constitutional crisis as one or more parts of the United Kingdom gear up for a secession campaign.

Happy Christmas! Now tell me what I've missed?

Charles Stross: Submarine coming through!

I'm in a holding pattern on the blog just now because I'm frantically busy with end-of-year work: publisher production departments like to clear their desks before the office shuts down for a fortnight, and they expect to come back to a full in-tray on January 4th, and of course authors don't have families or friends to socialize with, so why not share the joy of a tight deadline copy edit check with your loved ones this festive season?

That's not actually what I'm working on right now, but I am actually working way more than usual, and as a result I hope to have good news to share with you early in the new year.

As for the blog?

I ought to blog about either an update on my COVID19 forecast, or an update on my Brexit forecast, but those are basically boring and disgusting and demoralizing and would take precious brain cells away from bringing you the next book, so naaah. If you feel like updating me with your predictions for 2021 in the comments below, though, go right ahead.

Finally, I have one thought to leave you with. Apparently the Washington Post ran a write-in competition for the best summary of 2020, and the winner (a nine year old from the mid-west) came up with this totally accurate description: "2020 was like taking care to look both ways before you cross the road, and then being hit by a submarine."

Happy solstice!

Worse Than Failure: Best of…: Best of 2020: Web Server Installation

While this year has felt endless, there are projects which will feel like they take forever. As we wrap up our tour of the best of 2020, let's visit an endless project. Original -- Remy

Connect the dots puzzle

Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support is this referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Chaotic Idealism“COVID is only a risk if you’re old or sick! If you’re young and healthy, there’s no need to take precautions!”

Okay, let’s take that as a premise: All high-risk people need to be able to isolate, all low-risk people should be completely unrestricted. Let’s see what that looks like in reality.

First of all, there are more high-risk people than you think. Elders, cancer survivors, anyone with diabetes, kidney disease, heart disease; anyone immunocompromised, anyone with severe asthma or lung disease. That’s about 10% of the population. Those people need to be able to stay at home, completely isolated, because everyone else is taking no precautions at all.

Therefore, we need to provide them their salaries–remember, many of them are working and they can no longer work outside the home–as well as make sure their medical needs are cared for safely. We also need to deliver their groceries and other home necessities. They will need to be given legal assurance that their jobs will be there for them when they leave isolation, and that they will not have their utilities cut or be evicted.

Many of the high-risk will need assistance. They live in nursing homes or prisons; they need home health care workers; they need regular doctors’ visits; some are children living with families–these doctors and home health care workers are now making house calls, by the way, since the people going for normal checkups are taking no precautions. To keep the high-risk safe, all doctors, home-health care workers, prison staff, and nursing home staff will also need to isolate completely. They will need the same precautions as a high-risk person.

Also, they will not be able to interact with low-risk people taking no precautions. That means that doctors will now need to be divided up between those seeing high-risk patients (and isolating) and those seeing low-risk patients. You are likely to need to change doctors.

Oh, and let’s break up a few families while we’re at it. Any high-risk people, medical, or residential center staff living with family have a choice: Their entire families can go into isolation (as above: No working outside the home, no going into public spaces whatsoever), or they can find an apartment outside their home and move out and isolate there. Or they may move to specially designated hotels, where all of the residents are isolated. All of the hotel staff will have to isolate, too, of course. Regardless, if you work in a prison, nursing home, or as a home-health care worker, you won’t see your family until herd immunity. And you’ll need all the same services: Grocery delivery, high-risk medical care, and free rent or hotel bill.

Now, if you don’t want to do this set-up, that’s fine. But it’s the only way to protect high-risk people while allowing low-risk people to take no precautions whatsoever. If you don’t support it, though, either admit that everybody has to take precautions, or admit that you don’t think high-risk people’s lives are worth inconvenience and lowered profits.

Cryptogram Brexit Deal Mandates Old Insecure Crypto Algorithms

In what is surely an unthinking cut-and-paste issue, page 921 of the Brexit deal mandates the use of SHA-1 and 1024-bit RSA:

The open standard s/MIME as extension to de facto e-mail standard SMTP will be deployed to encrypt messages containing DNA profile information. The protocol s/MIME (V3) allows signed receipts, security labels, and secure mailing lists… The underlying certificate used by s/MIME mechanism has to be in compliance with X.509 standard…. The processing rules for s/MIME encryption operations… are as follows:

  1. the sequence of the operations is: first encryption and then signing,
  2. the encryption algorithm AES (Advanced Encryption Standard) with 256 bit key length and RSA with 1,024 bit key length shall be applied for symmetric and asymmetric encryption respectively,
  3. the hash algorithm SHA-1 shall be applied.
  4. s/MIME functionality is built into the vast majority of modern e-mail software packages including Outlook, Mozilla Mail as well as Netscape Communicator 4.x and inter-operates among all major e-mail software packages.

And s/MIME? Bleah.
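For readers auditing their own configurations, the mandated combination fails even a minimal policy check. A sketch in Python; the threshold values are my own illustrative choices reflecting common current guidance, not anything from the treaty text:

```python
# Minimal sketch: flag signature parameters that fail a modern baseline.
# The sets and minimums below are illustrative assumptions.
WEAK_HASHES = {"md5", "sha1"}
MIN_RSA_BITS = 2048

def acceptable(hash_name: str, rsa_bits: int) -> bool:
    """Return True only if both the hash and the RSA key size pass."""
    return hash_name.lower() not in WEAK_HASHES and rsa_bits >= MIN_RSA_BITS

acceptable("sha1", 1024)    # the mandated combination -> False
acceptable("sha256", 3072)  # a contemporary choice -> True
```

Either weakness alone is disqualifying: SHA-1 collisions are practical, and RSA-1024 is within reach of well-resourced attackers.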

Worse Than FailureBest of…: Best of 2020: Science Is Science

You do not need formal training from a compsci program or similar before you're allowed to be a developer. But sometimes, when your job role already contains "engineer" in the title, people think you can handle any engineering task. As we continue our review of the best of 2020, here's a tale of misapplied human resources. Original --Remy

Oil well

Bruce worked for a small engineering consultant firm providing custom software solutions for companies in the industrial sector. His project for CompanyX involved data consolidation for a new oil well monitoring system. It was a two-phased approach: Phase 1 was to get the raw instrument data into the cloud, and Phase 2 was to aggregate that data into a useful format.

Phase 1 was completed successfully. When it came time to write the business logic for aggregating the data, CompanyX politely informed Bruce's team that their new in-house software team would take over from here.

Bruce and his team smelled trouble. They did everything they could think of to persuade CompanyX not to go it alone when all the expertise rested on their side. However, CompanyX was confident they could handle the job, parting ways with handshakes and smiles.

Although Phase 2 was officially no longer on his plate, Bruce had a suspicion borne from experience that this wasn't the last he'd hear from CompanyX. Sure enough, a month later he received an urgent support request via email from Rick, an electrical engineer.

We're having issues with our aggregated data not making it into the database. Please help!!

Rick Smith
Lead Software Engineer

"Lead Software Engineer!" Bruce couldn't help repeating out loud. Sadly, he'd seen this scenario before with other clients. In a bid to save money, their management would find the most sciency people on their payroll and would put them in charge of IT or, worse, programming.

Stifling a cringe, Bruce dug deeper into the email. Rick had written a Python script to read the raw instrument data, aggregate it in memory, and re-insert it into a table he'd added to the database. Said script was loaded with un-parameterized queries, filters on non-indexed fields, and SELECT * FROM queries. The aggregation logic was nothing to write home about, either. It was messy, slow, and a slight breeze could take it out. Bruce fired up the SQL profiler and found a bigger issue: a certain query was failing every time, throwing the error Cannot insert the value NULL into column 'requests', table 'hEvents'; column does not allow nulls. INSERT fails.
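The failure mode is easy to reproduce and to fix. Here is a sketch using Python's sqlite3 as a stand-in for the production database; the table and column names come from the error message, while the driver, the column type, and the fallback value of 0 are my own illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Stand-in for the real table; only the NOT NULL constraint matters here.
cur.execute("CREATE TABLE hEvents (requests INTEGER NOT NULL)")

def insert_event(cur, requests):
    # Parameterized insert (unlike the string-built queries described
    # above), with None coalesced to 0 so the NOT NULL column never
    # receives a NULL and the INSERT never fails.
    cur.execute(
        "INSERT INTO hEvents (requests) VALUES (?)",
        (requests if requests is not None else 0,),
    )

insert_event(cur, None)  # the case that was raising the error
insert_event(cur, 7)
```

Whether 0 is the right default is a business question; the point is that the error disappears once the NULL is handled deliberately rather than passed through.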

Well, that seemed straightforward enough. Bruce replied to Rick's email, asking if he knew about the error.

Rick's reply came quickly, and included someone new on the email chain. Yes, but we couldn't figure it out, so we were hoping you could help us. Aaron is our SQL expert and even he's stumped.

Product support was part of Bruce's job responsibilities. He helpfully pointed out the specific query that was failing and described how to use the SQL profiler to pinpoint future issues.

Unfortunately, CompanyX's crack new in-house software team took this opportunity to unload every single problem they were having on Bruce, most of them just as basic or even more basic than the first. The back-and-forth email chain grew to epic proportions, and had less to do with product support than with programming education. When Bruce's patience finally gave out, he sent Rick and Aaron a link to the W3Schools SQL tutorial page. Then he talked to his manager. Agreeing that things had gotten out of hand, Bruce's manager arranged for a BA to contact CompanyX to offer more formal assistance. A teleconference was scheduled for the next week, which Bruce and his manager would also be attending.

When the day of the meeting came, Bruce and his associates dialed in—but no one from CompanyX did. After some digging, they learned that the majority of CompanyX's software team had been fired or reassigned. Apparently, the CompanyX project manager had been BCC'd on Bruce's entire email chain with Rick and Aaron. Said PM had decided a new new software team was in order. The last Bruce heard, the team was still "getting organized." The fate of Phase 2 remains unknown.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Krebs on SecurityHappy 11th Birthday, KrebsOnSecurity!

Today marks the 11th anniversary of KrebsOnSecurity! Thank you, Dear Readers, for your continued encouragement and support!

With the ongoing disruption to life and livelihood wrought by the Covid-19 pandemic, 2020 has been a fairly horrid year by most accounts. And it’s perhaps fitting that this was also a leap year, piling an extra day onto a trip around the sun that most of us probably can’t wait to see in the rearview mirror.

But it was hardly a dull one for computer security news junkies. In almost every category — from epic breaches and ransomware to cybercrime justice and increasingly aggressive phishing and social engineering scams — 2020 was a year that truly went to eleven.

Almost 150 stories here this past year generated nearly 9,000 responses from readers (although about 6 percent of those were on just one story). Thank you all for your thoughtful engagement, wisdom, news tips and support.

I’d like to reprise a note from last year’s anniversary post concerning ads. A good chunk of the loyal readers here are understandably security- and privacy-conscious, and many block advertisements by default — including the ads displayed here.

KrebsOnSecurity does not run third-party ads and has no plans to change that; all of the creatives you see on this site are hosted in-house, are purely image-based, and are vetted first by Yours Truly. Love them or hate ’em, these ads help keep the content at KrebsOnSecurity free to any and all readers. If you’re currently blocking ads here, please consider making an exception for this site.

In case you missed them, some of the most popular feature/enterprise stories on the site this year (in no particular order) included:

The Joys of Owning an ‘OG’ Email Account
Confessions of an ID Theft Kingpin (Part II)
Why and Where You Should Plant Your Flag
Thinking of a Career in Cybersecurity? Read This
Turn on MFA Before Crooks Do it for You
Romanian Skimmer Gang in Mexico Outed by KrebsOnSecurity Stole $1.2 Billion
Who’s Behind the ‘Web Listings’ Mail Scam?
When in Doubt: Hang Up, Look Up, & Call Back
Riding the State Unemployment Fraud Wave
Would You Have Fallen for this Phone Scam?


Worse Than FailureBest of…: Best of 2020: The Time-Delay Footgun

As we revisit the best articles of 2020, have you been wondering why 2020 has been such a… colorful year? Maybe the developer responsible wrote a bad version check. Original --Remy

A few years back, Mike worked at Initech. Initech has two major products: the Initech Creator and the Initech Analyzer. The Creator, as the name implied, let you create things. The Analyzer could take what you made with the Creator and test them.

For business reasons, these were two separate products, and it was common for one customer to have many more Creator licenses than Analyzer licenses, or upgrade them each on a different cadence. But the Analyzer depended on the Creator, so someone might have wildly different versions of the two tools installed.

Initech wasn’t just incrementing the version number and charging for a new seat every year. Both products were under active development, with a steady stream of new features. The Analyzer needed to be smart enough to check what version of Creator was installed, and enable/disable the appropriate set of features. Which meant the Analyzer needed to check the version string.

From a user’s perspective, the version numbers were simple: a new version was released every year, numbered for the year. So the 2009 release was version 9, the 2012 was version 12, and so on. Internally, however, they needed to track finer-grained versions, patch levels, and whether the build was intended as an alpha, beta, or release version. This meant that they looked more like “12.3g31”.

Mike was tasked with prepping Initech Analyzer 2013 for release. Since the company used an unusual version numbering schema, they had also written a suite of custom version parsing functions, in the form: isCreatorVersion9_0OrLater, isCreatorVersion11_0OrLater, etc. He needed to add isCreatorVersion12_0OrLater.

“Hey,” Mike suggested to his boss, “I notice that all of these functions are unique, we could make a general version that uses a regex.”

“No, don’t do that,” his boss said. “You know what they say, ‘I had a problem, so I used regexes, now I have two problems.’ Just copy-paste the version 11 version, and use that. It uses string slicing, which performs way better than regex anyway.”

“Well, I think there are going to be some problems-”

“It’s what we’ve done every year,” his boss said. “Just do it. It’s the version check, don’t put any thought into it.”

“Like, I mean, really problems- the way it-”

His boss cut him off and spoke very slowly. “It is just the version check. It doesn’t need to be complicated. And we know it can’t be wrong, because all the tests are green.”

Mike did not just copy the version 11 check. He also didn’t use regexes, but patterned his logic off the version 11 check, with some minor corrections. But he did leave the version 11 check alone, because he wasn’t given permission to change that block of code, and all of the tests were green.

So how did isCreatorVersion11_0OrLater work? Well, given a version string like 9.0g57 or 10.0a12, or 11.0b9, it would start by checking the second character. If it was a ., clearly we had a single digit version number which must be less than 11. If the second character was a 0, then it must be 10, which clearly is also less than 11, and there couldn't possibly be any numbers larger than 11 which have a "0" as their second character. Any other number must be greater than or equal to 11.
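In code, the check described above comes out something like this (a Python sketch; the story doesn't name the original language), alongside the general regex form Mike had proposed:

```python
import re

def is_version_11_or_later_buggy(version: str) -> bool:
    # Faithful to the described logic: inspect only the second character.
    if version[1] == ".":   # "9.0g57" -> single digit, so < 11
        return False
    if version[1] == "0":   # "10.0a12" -> assumed to be 10, so < 11
        return False        # ...but "20.0a1" lands here too. Footgun.
    return True             # "11.0b9", "12.3g31", ...

def is_version_or_later(version: str, major: int) -> bool:
    # One way Mike's regex suggestion could have looked: actually
    # parse the major version number and compare it numerically.
    m = re.match(r"(\d+)\.", version)
    return m is not None and int(m.group(1)) >= major

is_version_11_or_later_buggy("12.3g31")  # True, as intended
is_version_11_or_later_buggy("20.0a1")   # False -- the 2020 failure
is_version_or_later("20.0a1", 11)        # True
```

The string-slicing version really is faster, for whatever that's worth in a check that runs once at startup.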

Mike describes this as a “time-delayed footgun”. Because it was “right” for about a decade. Unfortunately, Initech Analyzer 2020 might be having some troubles right now…

Mike adds:

Now, I no longer work at Initech, so unfortunately I can’t tell you the fallout of what happened when that foot-gun finally went off this year.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Worse Than FailureBest of…: Best of 2020: Copy/Paste Culture

As per usual, we'll be spending a few days looking back at some of our favorite stories of the year. We start with a visit to a place where copy/pasting isn't just common, it's part of the culture. Original -- Remy

Mark F had just gone to production on the first project at his new job: create a billables reconciliation report that an end-user had requested a few years ago. It was clearly not a high priority, which was exactly why it was the perfect item to assign to a new programmer.

"Unfortunately," the end user reported, "it just doesn't seem to be working. It's running fine on test, but when I run it on the live site I'm getting a SELECT permission denied on the object fn_CalculateBusinessDays message. Any idea what that means?"

The problem was fairly obvious, and Mark knew exactly what the error meant. But the solution wasn't so obvious. Why did the GRANT script work fine in test, but not in production? How can he check to see what the GRANTS are in production? Is there someone specific he should ask to get permission to look himself? Does the DBA team use a sort of ticketing system maybe? Is this even the right approach? Who on his team could he even ask?

Fortunately, Mark had the perfect venue to ask these sorts of questions: the weekly one-on-one with his team lead, Jennifer. Although he had a few years of coding experience under his belt, he was brand new to The Enterprise and, specifically, to how large organizations worked. Jennifer definitely wasn't the most technical person he'd met, but she was super helpful in "getting unblocked" as he was learning to say.

"Huh", Jennifer answered in their meeting, "first off, why do you even need a function to calculate the business days between two dates?"

"This seems like something pretty common in our reports," Mark responded, "and this way, if the logic ever changes, we only need to change it in one place."

Jennifer gave a mystified look and smiled, "Changes? I don't think the 7-day week is going to change anytime soon, nor is the fact that Saturday and Sunday are weekends."

"Well, umm," Mark definitely didn't expect that response. He was surprised to have to explain the basic principles of code reuse to his supposed mentor, "you see, this way we don't have to constantly rewrite the logic in all the places, so the code is a bit simpler."

"Why don't you just copy/paste the calculation code in your queries?" she rhetorically asked. "That seems like it'd be a lot simpler to me. And that's what I always do…. But if you really want to get the DBAs involved, your best contact is that dba-share email address. They are super-slow to respond to project tickets, but everyone sees that box and they will quickly triage from there."
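For reference, the helper under debate boils down to very little code, which is exactly why opinions about it differed. A Python sketch of a plain weekends-only version (the actual fn_CalculateBusinessDays was a SQL function whose body the story never shows):

```python
from datetime import date, timedelta

def business_days(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) in the half-open range [start, end)."""
    count = 0
    d = start
    while d < end:
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            count += 1
        d += timedelta(days=1)
    return count

business_days(date(2020, 12, 28), date(2021, 1, 4))  # 5
```

Holiday handling, which matters later in the story, is exactly the kind of logic that drifts once every report carries its own pasted copy.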

Needless to say, he didn't follow Jennifer's programming advice. She was spot on about how to work with the DBA team. That tip alone saved Mark weeks of frustration and escalation, and helped him network with a lot more people inside The Enterprise over the years.


Mark's inside connections helped, and he eventually found himself leading a team of his own. That meant a lot more responsibilities, but he found it was pretty gratifying to help others "get unblocked" in The Enterprise.

One day, while enjoying a short vacation on a beach far, far away from the office, Mark got a frantic call from one of his team members. An end-user was panicked about a billables reconciliation report that had been inaccurate for months. The auditors had discovered the discrepancies and needed answers right away.

  • "So far as I can tell," his mentee said, "this report is using a fn_CalculateBusinessDays function, which does all sorts of calculations for holidays, but they already prorate those on the report."

The problem was fairly obvious, and Mark knew exactly what happened. Someone must have changed the logic on that function to work for their needs. But changing it back would mean breaking someone else's report. And the whole idea of a function seemed strange, because that would mean taking a dependen--

The junior programmer interrupted his stream of thought.

"I think I should just add an argument to the function to not include holidays," he said. "That's really simple to do, and we can just edit our report to use that argument."

"Ehhh," Mark hesitated, "the logic is so simple. Why don't you just copy/paste the business day calculation? That's the simplest solution… that's what I do all the time."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Chaotic IdealismYou’re not “the only one” doing the right thing: A Mathematical Perspective

No, not hard math. Just counting. Don’t worry, math-haters.

Anyway, I recently realized this–and it should have been obvious, but I’m a bit slow on the uptake sometimes:

You know how you go out to pick up your groceries, or you scroll down social media, and it seems like you’re the only one wearing a mask, refusing to go into large crowds, staying home, etc.? Like everybody else is going out, ignoring all common sense, giving up on their grandparents and everybody else’s, and you’re all alone? Discouraging, right?

Well… it only looks like that. It can look like most people aren’t staying home even if the majority of people are, like you, staying home and wearing masks and being sensible.

Think about a group of 100 people. Let’s say 80 of those people are being smart, staying home, and going out to shop or go to the doctor’s or meet someone outdoors and distanced, about once a month. But 20 of those people are the plague-spreaders, and they happily go without masks. They visit public venues about every other day, just like they did before the pandemic.

The reality: 80% of people in this population are following COVID-prevention guidelines.

So every month, you have 80 people going out once each, and 20 people going out 15 times apiece.

You go out to the store, masked up; or you walk down the sidewalk. You look at the other people who are also out. This month, 80 of those trips outside the home will be from the wise people, and 300 (20 × 15) will be from the foolish people.

It looks like only about a fifth of the people out in public are being smart, like you are. But that’s not true. The foolish people are the minority. But they’re simply more visible. They go out more, so you’re much more likely to see them.

What it looks like: Only 21% of the people in a particular public space are following COVID-prevention guidelines… even though 80% of the population is doing so.
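The counting above, spelled out in a few lines of Python (purely to restate the arithmetic):

```python
smart, foolish = 80, 20
smart_trips = smart * 1       # one outing a month each
foolish_trips = foolish * 15  # roughly every other day

visible_share = smart_trips / (smart_trips + foolish_trips)
print(f"{visible_share:.0%} of the people you see are the careful ones")
# -> 21%, even though 80% of the population is careful
```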

And that’s why, even if most people are staying home, it can look like practically nobody is.

You are not alone.

Oh, and I totally sneaked in some multiplication there. Muahaha….


Worse Than FailureCodeSOD: Classic WTF: 2012

As we enjoy our holiday today, in this seemingly unending year of 2020, our present to you is a blast from 2012, the year the world was supposed to end. Original --Remy

"Most people spend their New Year's Eve watching the ball drop and celebrating the New Year," writes Jason, "and actually, that's what I planned to do, too. Instead, I found myself debugging our licensing activation system." "Just as I was about to leave the office, I received a torrent of emails with the subject 'License Activation Failed'. One or two every now and then is expected, but dozens and dozens at four o'clock on New Year's Eve... not so good. It took me a moment to realize the significance of 4:00PM, but then it hit me: I'm on Pacific Time, which is UTC -8 hours.

"The error message that was filling up our logs was simply 'INVALID DATE' and for the life of me I couldn't figure out why. Our license code was a 32-bit number that represented the expiration date of the license and the features in the license. 7 of those bits represented the year since 2000, so obviously the date was fine up until 2127. After hours and hours of digging through PL/SQL, Java, JavaScript, Ruby, and some random shell scripts, I found the following.


Jason continued, "nowhere in the code was any indication why 12 would not work, so I took it out. Figuring it couldn't make things any worse, I published the code and tested the registration system. It worked. In the end, a meaningless IF statement had shut down our renewals business... just because."
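The actual snippet never made it into the story, but the details it does give (a 32-bit license code, 7 bits for the year since 2000, and an IF statement that mysteriously rejects 12) are enough for a hypothetical reconstruction. To be clear: the bit layout, function names, and error string below are all invented for illustration; only the 7-bit year and the year-12 check come from Jason's account.

```python
# Hypothetical reconstruction of the licensing bug -- not the original code.
# Assumed layout: the low 7 bits of the 32-bit license code hold the
# expiry year as an offset from 2000 (so valid through 2000 + 127 = 2127).

def decode_year(license_code: int) -> int:
    """Extract the expiry year from the (assumed) low 7 bits."""
    return 2000 + (license_code & 0x7F)

def validate(license_code: int) -> str:
    offset = decode_year(license_code) - 2000
    # The "meaningless IF" from the story: year offset 12 (i.e. 2012)
    # is rejected for no documented reason.
    if offset == 12:
        return "INVALID DATE"
    return "OK"

# Licenses expiring in 2011 validate fine; at midnight UTC on
# New Year's Eve, every license encoding year 12 starts failing.
print(validate(11), validate(12))
```

With a check like that buried somewhere in the stack, everything works until the first license encoding year 12 arrives, which is exactly the midnight-UTC failure mode Jason describes.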

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Cryptogram Russia’s SolarWinds Attack

Recent news articles have all been talking about the massive Russian cyberattack against the United States, but that’s wrong on two accounts. It wasn’t a cyberattack in international relations terms, it was espionage. And the victim wasn’t just the US, it was the entire world. But it was massive, and it is dangerous.

Espionage is internationally allowed in peacetime. The problem is that both espionage and cyberattacks require the same computer and network intrusions, and the difference is only a few keystrokes. And since this Russian operation isn’t at all targeted, the entire world is at risk — and not just from Russia. Many countries carry out these sorts of operations, none more extensively than the US. The solution is to prioritize security and defense over espionage and attack.

Here’s what we know: Orion is a network management product from a company named SolarWinds, with over 300,000 customers worldwide. Sometime before March, hackers working for the Russian SVR — previously known as the KGB — hacked into SolarWinds and slipped a backdoor into an Orion software update. (We don’t know how, but last year the company’s update server was protected by the password “solarwinds123” — something that speaks to a lack of security culture.) Users who downloaded and installed that corrupted update between March and June unwittingly gave SVR hackers access to their networks.

This is called a supply-chain attack, because it targets a supplier to an organization rather than an organization itself — and can affect all of a supplier’s customers. It’s an increasingly common way to attack networks. Other examples of this sort of attack include fake apps in the Google Play store, and hacked replacement screens for your smartphone.

SolarWinds has removed its customer list from its website, but the Internet Archive saved it: all five branches of the US military, the state department, the White House, the NSA, 425 of the Fortune 500 companies, all five of the top five accounting firms, and hundreds of universities and colleges. In an SEC filing, SolarWinds said that it believes “fewer than 18,000” of those customers installed this malicious update, another way of saying that more than 17,000 did.

That’s a lot of vulnerable networks, and it’s inconceivable that the SVR penetrated them all. Instead, it chose carefully from its cornucopia of targets. Microsoft’s analysis identified 40 customers who were infiltrated using this vulnerability. The great majority of those were in the US, but networks in Canada, Mexico, Belgium, Spain, the UK, Israel and the UAE were also targeted. This list includes governments, government contractors, IT companies, thinktanks, and NGOs — and it will certainly grow.

Once inside a network, SVR hackers followed a standard playbook: establish persistent access that will remain even if the initial vulnerability is fixed; move laterally around the network by compromising additional systems and accounts; and then exfiltrate data. Not being a SolarWinds customer is no guarantee of security; this SVR operation used other initial infection vectors and techniques as well. These are sophisticated and patient hackers, and we’re only just learning some of the techniques involved here.

Recovering from this attack isn’t easy. Because any SVR hackers would establish persistent access, the only way to ensure that your network isn’t compromised is to burn it to the ground and rebuild it, similar to reinstalling your computer’s operating system to recover from a bad hack. This is how a lot of sysadmins are going to spend their Christmas holiday, and even then they can’t be sure. There are many ways to establish persistent access that survive rebuilding individual computers and networks. We know, for example, of an NSA exploit that remains on a hard drive even after it is reformatted. Code for that exploit was part of the Equation Group tools that the Shadow Brokers — again believed to be Russia — stole from the NSA and published in 2016. The SVR probably has the same kinds of tools.

Even without that caveat, many network administrators won’t go through the long, painful, and potentially expensive rebuilding process. They’ll just hope for the best.

It’s hard to overstate how bad this is. We are still learning about US government organizations breached: the state department, the treasury department, homeland security, the Los Alamos and Sandia National Laboratories (where nuclear weapons are developed), the National Nuclear Security Administration, the National Institutes of Health, and many more. At this point, there’s no indication that any classified networks were penetrated, although that could change easily. It will take years to learn which networks the SVR has penetrated, and where it still has access. Much of that will probably be classified, which means that we, the public, will never know.

And now that the Orion vulnerability is public, other governments and cybercriminals will use it to penetrate vulnerable networks. I can guarantee you that the NSA is using the SVR’s hack to infiltrate other networks; why would they not? (Do any Russian organizations use Orion? Probably.)

While this is a security failure of enormous proportions, it is not, as Senator Richard Durbin said, “virtually a declaration of war by Russia on the United States.” While President-elect Biden said he will make this a top priority, it’s unlikely that he will do much to retaliate.

The reason is that, by international norms, Russia did nothing wrong. This is the normal state of affairs. Countries spy on each other all the time. There are no rules or even norms, and it’s basically “buyer beware.” The US regularly fails to retaliate against espionage operations — such as China’s hack of the Office of Personnel Management (OPM) and previous Russian hacks — because we do it, too. Speaking of the OPM hack, the then director of national intelligence, James Clapper, said: “You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.”

We don’t, and I’m sure NSA employees are grudgingly impressed with the SVR. The US has by far the most extensive and aggressive intelligence operation in the world. The NSA’s budget is the largest of any intelligence agency. It aggressively leverages the US’s position controlling most of the internet backbone and most of the major internet companies. Edward Snowden disclosed many targets of its efforts around 2014, which then included 193 countries, the World Bank, the IMF and the International Atomic Energy Agency. We are undoubtedly running an offensive operation on the scale of this SVR operation right now, and it’ll probably never be made public. In 2016, President Obama boasted that we have “more capacity than anybody both offensively and defensively.”

He may have been too optimistic about our defensive capability. The US prioritizes and spends many times more on offense than on defensive cybersecurity. In recent years, the NSA has adopted a strategy of “persistent engagement,” sometimes called “defending forward.” The idea is that instead of passively waiting for the enemy to attack our networks and infrastructure, we go on the offensive and disrupt attacks before they get to us. This strategy was credited with foiling a plot by the Russian Internet Research Agency to disrupt the 2018 elections.

But if persistent engagement is so effective, how could it have missed this massive SVR operation? It seems that pretty much the entire US government was unknowingly sending information back to Moscow. If we had been watching everything the Russians were doing, we would have seen some evidence of this. The Russians’ success under the watchful eye of the NSA and US Cyber Command shows that this is a failed approach.

And how did US defensive capability miss this? The only reason we know about this breach is because, earlier this month, the security company FireEye discovered that it had been hacked. During its own audit of its network, it uncovered the Orion vulnerability and alerted the US government. Why don’t organizations like the Departments of State, Treasury and Homeland Security regularly conduct that level of audit on their own systems? The government’s intrusion detection system, Einstein 3, failed here because it doesn’t detect new sophisticated attacks — a deficiency pointed out in 2018 but never fixed. We shouldn’t have to rely on a private cybersecurity company to alert us of a major nation-state attack.

If anything, the US’s prioritization of offense over defense makes us less safe. In the interests of surveillance, the NSA has pushed for an insecure cell phone encryption standard and a backdoor in random number generators (important for secure encryption). The DoJ has never relented in its insistence that the world’s popular encryption systems be made insecure through back doors — another hot point where attack and defense are in conflict. In other words, we allow for insecure standards and systems, because we can use them to spy on others.

We need to adopt a defense-dominant strategy. As computers and the internet become increasingly essential to society, cyberattacks are likely to be the precursor to actual war. We are simply too vulnerable when we prioritize offense, even if we have to give up the advantage of using those insecurities to spy on others.

Our vulnerability is magnified as eavesdropping may bleed into a direct attack. The SVR’s access allows them not only to eavesdrop, but also to modify data, degrade network performance, or erase entire networks. The first might be normal spying, but the second certainly could be considered an act of war. Russia is almost certainly laying the groundwork for future attack.

This preparation would not be unprecedented. There’s a lot of attack going on in the world. In 2010, the US and Israel attacked the Iranian nuclear program. In 2012, Iran attacked the Saudi national oil company. North Korea attacked Sony in 2014. Russia attacked the Ukrainian power grid in 2015 and 2016. Russia is hacking the US power grid, and the US is hacking Russia’s power grid — just in case the capability is needed someday. All of these attacks began as a spying operation. Security vulnerabilities have real-world consequences.

We’re not going to be able to secure our networks and systems in this no-rules, free-for-all every-network-for-itself world. The US needs to willingly give up part of its offensive advantage in cyberspace in exchange for a vastly more secure global cyberspace. We need to invest in securing the world’s supply chains from this type of attack, and to press for international norms and agreements prioritizing cybersecurity, like the 2018 Paris Call for Trust and Security in Cyberspace or the Global Commission on the Stability of Cyberspace. Hardening widely used software like Orion (or the core internet protocols) helps everyone. We need to dampen this offensive arms race rather than exacerbate it, and work towards cyber peace. Otherwise, hypocritically criticizing the Russians for doing the same thing we do every day won’t help create the safer world in which we all want to live.

This essay previously appeared in the Guardian.

Worse Than FailureCodeSOD: Classic WTF: Developer Carols

It's the holiday season, which means over the next few days, we'll be reviewing some of the best of 2020, if anything about 2020 can be considered "the best", and maybe some other surprises. To kick things off, we're going to pull from the far-off year of Christmas 2017, and return to our Developer Carols. That year, we ran them too late to go caroling, and this year, nobody outside of New Zealand should be going caroling, keeping with our tradition of meeting the requirements but delivering absolutely no value. (Original)

It’s Christmas, and thus technically too late to actually go caroling. Like any good project, we’ve delivered close enough to the deadline to claim success, but late enough to actually be useless for this year!

Still, enjoy some holiday carols specifically written for our IT employees. Feel free to annoy your friends and family for the rest of the day.

Push to Prod (to the tune of Joy To the World)

Joy to the world,
We’ve pushed to prod,
Let all,
record complaints,
“This isn’t what we asked you for,”
“Who signed off on these requirements,”
“Rework it,” PMs sing,
“Rework it,” PMs sing,
“Work over break,” the PMs sing.

Backups (to the tune of Deck the Halls)

Back the system up to tape drives,
Fa la la la la la la la la,
TAR will make the tape archives,
Fa la la la la la la la la,
Recov'ry don't need no testing,
Fa la la la la la la la la la,
Pray it works upon requesting,
Fa la la la la la la la la

Ode to CSS (to the tune of Silent Night)

Vertical height,
Align to the right,
Aid my fight,
Round the corners,
Flattened design,
Please work this time,
It won't work in IE,
Never in goddamn IE

The Twelve Days of The Holiday Shift (to the tune of The Twelve Days of Christmas)

On my nth day of helpdesk, the ticket sent to me:
12 write arms leaping
11 Trojans dancing
10 bosses griping
9 fans not humming
8 RAIDs not striping
7 WANs a-failing
6 cables fraying
5 broken things
4 calling users
3 missing pens
2 turtled drives
and a toner cartridge that is empty.

(Contributed by Charles Robinson)

Here Comes a Crash Bug (to the tune of Here Comes Santa Claus)

Here comes a crash bug,
Here comes a crash bug,
Find th’ culprit with git blame,
Oh it was my fault,
It’s always my fault,
Patch and push again.

Issues raisin’, users ’plainin’,
Builds are failin’ tonight,
So hang your head and say your prayers,
For a crash bug comes tonight.

WCry the Malware (to the tune of Frosty the Snowman)

WCry the Malware, was a nasty ugly worm,
With a cryptolock and a bitcoin bribe,
Spread over SMB

WCry the Malware, is a Korean hack they say,
But the NSA covered up the vuln,
To use on us one day

There must have been some magic in that old kill-switch they found,
For when they register’d a domain,
The hack gained no more ground

WCry the Malware, was as alive as he could be,
Till Microsoft released a patch,
To fix up SMB

(Suggested by Mark Bowytz)

Oh Come All Ye Web Devs (to the tune of Oh Come All Ye Faithful)

Oh come, all ye web devs,
Joyful and triumphant,
Oh come ye to witness,
JavaScript's heir:

Come behold TypeScript,
It’s just JavaScript,
But we can conceal that,
But we can conceal that,
But we can conceal that,
With our toolchain

Thanks to Jane Bailey for help with scansion. Where it's right, thank her, where it's wrong, blame me.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram How China Uses Stolen US Personnel Data

Interesting analysis of China’s efforts to identify US spies:

By about 2010, two former CIA officials recalled, the Chinese security services had instituted a sophisticated travel intelligence program, developing databases that tracked flights and passenger lists for espionage purposes. “We looked at it very carefully,” said the former senior CIA official. China’s spies “were actively using that for counterintelligence and offensive intelligence. The capability was there and was being utilized.” China had also stepped up its hacking efforts targeting biometric and passenger data from transit hubs…

To be sure, China had stolen plenty of data before discovering how deeply infiltrated it was by U.S. intelligence agencies. However, the shake-up between 2010 and 2012 gave Beijing an impetus not only to go after bigger, riskier targets, but also to put together the infrastructure needed to process the purloined information. It was around this time, said a former senior NSA official, that Chinese intelligence agencies transitioned from merely being able to steal large datasets en masse to actually rapidly sifting through information from within them for use….

For U.S. intelligence personnel, these new capabilities made China’s successful hack of the U.S. Office of Personnel Management (OPM) that much more chilling. During the OPM breach, Chinese hackers stole detailed, often highly sensitive personnel data from 21.5 million current and former U.S. officials, their spouses, and job applicants, including health, residency, employment, fingerprint, and financial data. In some cases, details from background investigations tied to the granting of security clearances — investigations that can delve deeply into individuals’ mental health records, their sexual histories and proclivities, and whether a person’s relatives abroad may be subject to government blackmail — were stolen as well….

When paired with travel details and other purloined data, information from the OPM breach likely provided Chinese intelligence potent clues about unusual behavior patterns, biographical information, or career milestones that marked individuals as likely U.S. spies, officials say. Now, these officials feared, China could search for when suspected U.S. spies were in certain locations — and potentially also meeting secretly with their Chinese sources. China “collects bulk personal data to help it track dissidents or other perceived enemies of China around the world,” Evanina, the top U.S. counterintelligence official, said.


But after the OPM breach, anomalies began to multiply. In 2012, senior U.S. spy hunters began to puzzle over some “head-scratchers”: In a few cases, spouses of U.S. officials whose sensitive work should have been difficult to discern were being approached by Chinese and Russian intelligence operatives abroad, according to the former counterintelligence executive. In one case, Chinese operatives tried to harass and entrap a U.S. official’s wife while she accompanied her children on a school field trip to China. “The MO is that, usually at the end of the trip, the lightbulb goes on [and the foreign intelligence service identifies potential persons of interest]. But these were from day one, from the airport onward,” the former official said.

Worries about what the Chinese now knew precipitated an intelligence community-wide damage assessment surrounding the OPM and other hacks, recalled Douglas Wise, a former senior CIA official who served as deputy director of the Defense Intelligence Agency from 2014 to 2016. Some worried that China might have purposefully secretly altered data in individuals’ OPM files to later use as leverage in recruitment attempts. Officials also believed that the Chinese might sift through the OPM data to try and craft the most ideal profiles for Chinese intelligence assets seeking to infiltrate the U.S. government­ — since they now had granular knowledge of what the U.S. government looked for, and what it didn’t, while considering applicants for sensitive positions. U.S. intelligence agencies altered their screening procedures to anticipate new, more finely tuned Chinese attempts at human spying, Wise said.


Kevin RuddLetter: Foreign Influence Transparency Scheme

Mr Chris Moraitis
Attorney-General’s Department
3-5 National Circuit
Barton ACT 2600

23 December 2020

Dear Mr Moraitis,

Re: Foreign Influence Transparency Scheme Act

I refer to your letter of 25 November 2020, which expressed the view that I have registration obligations under this scheme’s special imposition on former cabinet ministers, despite the fact that I undertake no activities on behalf of a foreign principal and I am not an agent of foreign influence.

This view comes more than a year after my lawyer first contacted you in September 2019 to clarify my obligations. He did so at my initiative, despite his view that I had nothing to register.

Peculiarly, despite having provided a list of my international roles, you declined my request to say which activities needed to be registered. This puts me in the invidious position of needing to declare every country with which I have contact or risk prosecution.

I reiterate that I am not an agent of foreign influence. I engage internationally as an individual, a scholar, a commentator, a former world leader and in my roles with international non-government institutions – not on behalf of any foreign state, their entities or their representatives.

Your suggestion that public activities, such as live broadcast interviews with the BBC and Radio New Zealand, may be registrable defies the Attorney-General’s statement that officials would interpret this Act with “common sense”. It is ridiculous to imagine that merely being interviewed by the BBC makes one an agent of UK government influence, not least if they use this platform to frankly criticise the UK government, as I often do.

I wholly support this legislation, but your sweeping interpretation of what constitutes an “arrangement” with a foreign principal potentially captures any engagement I have with any foreign government, or those tangentially connected to them. This absurd interpretation will have immediate implications for all former cabinet ministers. Nonetheless, I am complying with this interpretation by disclosing on the public register that, since your letter, I have communicated with entities or individuals that are closely associated with these jurisdictions:

• Barbados

• Brazil

• Canada

• China

• Costa Rica

• Denmark

• Ecuador

• Egypt

• France

• Germany

• Greece

• Hong Kong

• India

• Indonesia

• Israel

• Italy

• Japan

• Jordan

• Kazakhstan

• Republic of Korea

• Mexico

• Oman

• Pakistan

• Peru

• Philippines

• Poland

• Portugal

• Russia

• Saudi Arabia

• Singapore

• Spain

• Sweden

• United Arab Emirates

• United Kingdom

• United States of America


I have done so in one or other of the following capacities:

• President, Asia Society Policy Institute, United States

• Incoming President, Asia Society, United States

• Chair, International Peace Institute, United States

• Member, Jesus College, Oxford University, United Kingdom

• Research Student, Oxford University, United Kingdom

• Visiting Fellow, University of Toronto, Canada

• Visiting Fellow, Harvard Kennedy School, United States

• Senior Fellow, Paulson Institute, United States

• Distinguished Fellow, Centre for Strategic & International Studies, United States

• Senior Fellow and International Advisory Panel Member, Chatham House, United Kingdom

• Member, External Advisory Group to the International Monetary Fund Managing Director

• Member, Bloomberg New Economy Forum Advisory Board, United States

• Global Chair, Water and Sanitation For All

• Member, Comprehensive Nuclear Test Ban Organisation Global Eminent Persons Group

• Co-Chair, Chicago Council of Foreign Relations Task Force on Preventing Nuclear Proliferation, United States

• Member, Stephen A. Schwarzman Education Foundation, United States, which provides scholarships for students to attend China’s Tsinghua University

• Member, Morgan Stanley Sustainability Advisory Board, United States

• Board Member, Center for International Governance Innovation, Canada

• Board Member, Sir Bani Yas Forum, United Arab Emirates

• Chairman, Global Alliance of Sharing Economy (North America)

• Principal, Kevin Rudd & Associates, Australia

• Former Prime Minister of Australia

• Private resident of New York City, United States

I will update this list, and the list of countries with which I have contact, as best I am aware. It is entirely possible, given the international nature of my work and the sheer number of foreigners whom I meet, that I will have contact with individuals who have connections of which I would be unaware.

I expect that, having taken such an expansive, unrealistic interpretation in my case, you will require the same of all former cabinet ministers who engage with foreigners including Mr Abbott and Mr Howard.

Further, this expansive interpretation has given me pause to consider how the legislation might affect others in our society, including the media, given the arrangements that sometimes exist between the Australian media outlets and foreign governments, officials and entities. For instance, Rupert Murdoch’s News Corporation is well known for cultivating partnerships with governments, foreign and domestic, where both sides understand that favourable coverage is being exchanged for political and commercial favours.

To this end, I have sought advice from Bret Walker SC about how this legislation might affect news outlets in Australia. Mr Walker’s advice, which I have attached, indicates that such outlets may be required to disclose confidential arrangements to disseminate information in Australia on behalf of a foreign principal. Mr Walker highlights the case of News Corporation’s Sharri Markson and her exclusive report on May 2, 2020, of a ‘dossier’ that was said to be produced by ‘Western governments’. One may suspect, as I do, that this news story was placed in the Australian media by the Trump Administration so that it could influence Australian public opinion. If so, Ms Markson and News Corporation would apparently be obliged to register as potential agents of foreign influence in Australia (and disclose these sources to the government).

Mr Walker’s advice raises a clear matter of public concern. I therefore raise it as a potential matter for your department and minister to consider in consultation with officials, media organisations and, importantly, the journalists’ union, the MEAA.

Given the matters discussed in this letter, including the implications for former cabinet ministers and for media organisations such as News Corporation, it is my intention to place this letter in the public domain. I also intend to publish a summary of these sentiments on the public register.

Yours sincerely,

Hon Kevin Rudd AC


Attachment: Advice from Bret Walker SC – Dec 2020

Photo: Ryan Miller/Capture Imaging for Pacific Council

The post Letter: Foreign Influence Transparency Scheme appeared first on Kevin Rudd.

Cryptogram Friday Squid Blogging: Linguine allo Scoglio Recipe

Delicious seafood pasta dish — includes squid — from America’s Test Kitchen.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Small Giant Squid Washes Ashore in Japan

A ten-foot giant squid has washed ashore on the Western coast of Japan.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Investigating the Navalny Poisoning

Bellingcat has investigated the near-fatal poisoning of Alexey Navalny by the Russian FSB back in August. The details display some impressive traffic analysis. Navalny got a confession out of one of the poisoners, displaying some masterful social engineering.

Lots of interesting opsec details in all of this.

Charles StrossDead Lies Dreaming: Spoilers

I've been head-down in the guts of a novel this month, hence lack of blogging: purely by coincidence, I'm working on the next-but-one sequel to Dead Lies Dreaming.

Which reminds me that Dead Lies Dreaming came out nearly a month ago, and some of you probably have read it and have questions!

So feel free to ask me anything about the book in the comments below.

(Be warned that (a) there will probably be spoilers, and (b) I will probably not answer questions that would supply spoilers for the next books in the ongoing project.)

Cryptogram Cellebrite Can Break Signal

Cellebrite announced that it can break Signal. (Note that the company has heavily edited its blog post, but the original — with lots of technical details — was saved by the Wayback Machine.)

News article. Slashdot post.

The whole story is puzzling. Cellebrite’s details will make it easier for the Signal developers to patch the vulnerability. So either Cellebrite believes it is so good that it can break whatever Signal does, or the original blog post was a mistake.

EDITED TO ADD (12/22): Signal’s Moxie Marlinspike takes serious issue with Cellebrite’s announcement. I have urged him to write it up, and will link to it when he does.

EDITED TO ADD (12/23): I need to apologize for this post. I finally got the chance to read all of this more carefully, and it seems that all Cellebrite is doing is reading the texts off of a phone they can already access. So this has nothing to do with Signal at all. So: never mind. False alarm. Apologies, again.

Kevin RuddMFW: We Need To Talk About Murdoch


The post MFW: We Need To Talk About Murdoch appeared first on Kevin Rudd.

Kevin RuddHacked Off: Murdoch Royal Commission


The post Hacked Off: Murdoch Royal Commission appeared first on Kevin Rudd.

Worse Than FailureWhat the Fun Holiday Activity: A Visit From "Coding Cats"?

Thanks again to everyone who submitted a holiday tale for our What the Fun Holiday special. Like all good holiday traditions, our winner indulges in a bit of nostalgia for a Christmas classic, by adapting the classic "A Visit From St. Nicolas", a trick we've done ourselves. But like a good WTF, Lee R also mixes in some frustration and anger, and maybe a few inside jokes that we're all on the outside of. Still, for holiday spirit, this tale can't be beat.

Now, our normal editorial standards avoid profanity, but in the interests of presenting the story as Lee submitted it, we're going to suspend that rule for today. It is, after all, the holidays, and we're all miserable.

In December 2018, our distributed Sports app team at FOX was up against the wall. We needed to release a new version with pay-per-view streaming before an immovable sporting event date in Q1. I frequently explained away my bugs and other failures by blaming our cats, since they walk on the keyboard and "write the code."

I decided the team needed to loosen up and (again blaming the "codin' cats") posted the following on our main dev Slack channel right before Christmas break.

Has anyone seen this error? error CS1524: Expected catch - lame attempt at humor not wrapped in a try – nottolaugh block

with apologies to Clement Clarke Moore

T’was the night before boxing when all throughout FOX
All the VP’s were praying that someone would watch
The backlog was groomed, we had our ducks in a row
And visions of Team Pages meant app usage would grow
Then a bad feeling – oh no, not the plan?!
Zac’s calling from LA: the shit just hit the fan
Forget about Team Pages and all of that rot
The ‘wheels’ just bought boxing! (believe it or not)
I guess that’s ok .. is it really that new?
uh streaming on Delta and you guessed it: pay per view
Streaming without BAM? well now – that’s a great gift!
No, you don’t get it – you have to write it in Swift
But we use Xamarin – our shared code is like glue
We’re sorry, you know these guys don’t have a clue
But don’t worry – we’re talkin’ big bucks – I mean fees
This is straight from the top – the very biggest of cheese
Well the schedule – I heard that it’s end of Q2
We can probably get there with James, Greg and Sue
Guess again amigo: time to size – let’s throw darts
Forget about June – would you believe March?
Get crackin, get codin, so what if it’s ugly
Do it yourself or send it to Willow tree
On Battalions, On Regiments, On Squads (what a scheme)
How’s this for innovation: we just call it a “team”
Doesn’t have to be pretty, watch us rake in the cash
and no unit tests - just keep slingin’ hash
that’s close enough – stop - let’s call it a day
Our asses are covered with Ben and Sam on Q/A
We did it – we pulled together – that’s always the key
Now one last question – can we go to Disney?

papa: sorry we didn’t get the video code done. Slick got into the Christmas Catnip again – he didn’t just walk on the keyboard, he was flyin’. this looks like maybe .. Lisp? hopefully you can patch it up and do the PR for us.

-- Smokey, Twoface, Slick, and Benzira (the codin’ cats)

Really the main WTF was just the schedule. This post eventually found its way to management, and I learned that (thankfully) they do have a sense of humor.

"Management had a real sense of humor" is a true Christmas Miracle.



Cryptogram Eavesdropping on Phone Taps from Voice Assistants

The microphones on voice assistants are very sensitive, and can snoop on all sorts of data:

In Hey Alexa what did I just type? we show that when sitting up to half a meter away, a voice assistant can still hear the taps you make on your phone, even in presence of noise. Modern voice assistants have two to seven microphones, so they can do directional localisation, just as human ears do, but with greater sensitivity. We assess the risk and show that a lot more work is needed to understand the privacy implications of the always-on microphones that are increasingly infesting our work spaces and our homes.

From the paper:

Abstract: Voice assistants are now ubiquitous and listen in on our everyday lives. Ever since they became commercially available, privacy advocates worried that the data they collect can be abused: might private conversations be extracted by third parties? In this paper we show that privacy threats go beyond spoken conversations and include sensitive data typed on nearby smartphones. Using two different smartphones and a tablet we demonstrate that the attacker can extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This shows that remote keyboard-inference attacks are not limited to physical keyboards but extend to virtual keyboards too. As our homes become full of always-on microphones, we need to work through the implications.

LongNowPodcast: The Future of Breathing | James Nestor

The Long Now Foundation · James Nestor – The Future of Breathing

Drawing on thousands of years of medical texts and recent cutting-edge studies in pulmonology, psychology, biochemistry, and human physiology, journalist James Nestor questions the conventional wisdom of what we thought we knew about our most basic biological function, breathing.

Nestor tracks down men and women exploring the science behind ancient breathing practices like Pranayama, Sudarshan Kriya, and Tummo and teams up with pulmonary specialists to scientifically test long-held beliefs about how we breathe. His inquiry leads to the understanding that breathing is in many ways as important as what we eat, how much we exercise, or whatever genes we’ve inherited.

Listen on Apple Podcasts.

Listen on Spotify.

Worse Than FailureWhat the Fun Holiday Activity: The Gift of the Consultant and The Holiday Push

As we roll into the last few days before Christmas, it's time to share what our readers sent in for our "What the Fun Holiday" contest. It was a blast going through the submissions to see what our holiday experiences looked like.

Before we dig in to our contest winners, our first honorable mention is to David N, who shared with us "The Worm Before Christmas", a classic from 1988 that was new to us.

Our first runner up story comes from Mike. This is an Easter Tale, complete with a death, a resurrection, and thirty pieces of silver (or whatever amount you give highly paid consultants).

We had a workflow engine that desperately needed to be rewritten. We brought in an outside consultant to rewrite it. For months, we heard how crappy the original code was and how he didn't understand how it could even work. Every week, we would hear how he shaved large percentages of time off of the engine. He would brag "I cut 127% off of the processing time", then the following week another 42%, and so on. All the while, he was bad-mouthing the current development staff.
We put his code into QA and all seemed to go well. We really thought we had a vastly improved workflow engine. Since there would be little to no volume on Easter weekend, we picked that Friday night to put his code into production.
All went well, until Saturday morning. None of the workflows were running properly and large chunks of logic were missing. The consultant assured us it was a minor issue and that instead of rolling back, we should leave it there and give him time to fix it. Saturday came and went with half a dozen 'fixes' that didn't fix anything. On Easter Sunday, I got a call that we were rolling everything back.
That was the easy part.
Now we had to identify which workflows were triggered and try to retrigger them. It was a SAAS operation and the DBs were multi-tenant as well. Needless to say, there was no Easter dinner for any of us. We spent Sunday and Monday undoing what he had done.
When we looked into his code, we saw that it was only processing the workflows. But it didn't do any of the events based on the workflow outcome. So, his huge percentage gains didn't exist. Our huge percentage gain was showing him the door.
At least we didn't see the Easter Bunny, but this Easter Egg was more than enough.

When you're coming up on a big holiday break, there's always a push to get things done in time. No one wants to come back to a gigantic backlog after a holiday. That little push is a gift you give your team- but sometimes that gift isn't appreciated. B shares their story:

It was around the time of our favorite holiday: Christmas.
My employer was having a nice get together with all employees after work. It was scheduled at 5 p.m. If our work was done, we could share some holiday cheer with our peers.
As usual, I started quite early in the morning and worked hard to finish my project to earn my three weeks of holiday. I was nearly finished with my work at around 1 p.m.
Now, my co-worker, "Lou", knew about my long holiday and needed something finished before my leave so that he could continue to work on it. He estimated it should take me half an hour, one hour tops. So I wrapped up my current project in 15 minutes and started with Lou's top priority work. At 5:30 p.m. I was nowhere near finished and called him (his status in the booking system which tracks the working hours showed him online and working). No response from him. Another co-worker in his room told me he had already gone to the party an hour ago.
Being a professional, I wrote him an email with the current status and the next steps. By the time I got to the party it was 6:15 p.m. and dear old Lou had already left. I drank some hot wine, chatted with my fellows and then went home to enjoy Christmas with my family.
Now the real WTF was on my return, three weeks later. The first thing Lou told me when I saw him and asked about the project was: "I did not have time for it." I was speechless. He had 10 days to work on it, "made" me work longer than needed and all just to leave it to rot.
At least I learned not to believe his priorities. And it was the first drop that caused me to hunt for, and eventually get, a better job.

Thanks so much to all our submitters. Mike and B, someone will be reaching out soon to get your prizes out to you, and tomorrow, we'll reveal our grand prize winner, and maybe kick off a new holiday tradition?



Chaotic IdealismSolitude

I’m plodding down the sidewalk. It’s trying to rain, but not quite succeeding. I’m wearing a coat, hat, mask, and sunglasses. The mask isn’t strictly necessary; my neighborhood isn’t crowded, especially in the middle of the day; but I’ve found that it keeps my nose warm. Besides, if people see me wearing a mask, hopefully it’ll become more normal.

My neighborhood is at that awkward age long after “new”, but long before “historic”. The houses are small, 1950s, identical. Window, door, window, window. A copy-paste suburb.

Every house has a fence. They are fence people, dog people, privacy people. I make a point of waving to everyone I see: Half-turn, hand up, wave once, wave twice, “Hi, neighbor!”, no need to smile because of the mask, but it’s implied. I’m used to greeting people I don’t recognize; I’m faceblind, so, out of context, I only recognize my closest friends and family anyway. And maybe, if I greet people, it will become more normal, and we’ll become more able to say, “Hey, can I help?” or, “Hey, can you help?”

I walk every day when I’m not too tired and I manage to kick myself out the door, which means I walk about three days a week. It’s a mile around my neighborhood. I figure that if I can make it in fifteen minutes without feeling tired, I’ll stay in shape for when I return to the library, when we are all vaccinated and I can finally volunteer again. You might not think library work is physically tiring, but that is not an opinion held by anyone who has ever carried seventeen books while dodging toddlers and looking for the large-print biography of Beatrix Potter requested by a patron who surprised them while they were trying to keep the cookbooks from invading and conquering the diet-and-nutrition section.

These are also flag people, so there’s a flag every few houses. Americans are entirely too much in love with their flags. The Biden yard signs were all taken down long ago, but increasingly sad-looking Trump signs still stick out of lawns here and there. One house flies an American flag printed with an image of Trump sporting Photoshopped muscles, holding an assault rifle, explosions in the background. Another flies a black-and-white flag with one blue stripe, meant to support the police. I wonder if they realize they’re not supposed to modify the flag. Or, for that matter, fly a tattered flag in the rain, which some of them are also doing. Not that this is illegal, nor should it be; but it’s considered awfully disrespectful. Well, I’m not a natural-born citizen, only naturalized; what do I know?

When I return home, it’s to the sleepy blinks of my cat Christy, who has lately reclaimed her territory from foster cat Katniss, adopted out last week to a middle-aged woman who wanted some company. Katniss is the last cat I will foster for CACHS, because CACHS lost funding this year. We used to have a no-kill county shelter with full service for feral cats. Come January 1, we’ll have only a dogcatcher. Another casualty of COVID.

These days, I’m alone most of the time, but it’s okay. People used to tell me that if you stayed alone too long, you’d go insane. I always thought I must be an exception, but I had no way of proving it because the world kept forcing me to socialize. Turns out I was right. I’m happy enough with my cat, my books, my computer, my work on the disability memorial web sites, and the crocheted blankets I’m making for donation. I strongly suspect that solitary confinement isn’t torture in and of itself–it’s the lack of stimulation, the lack of things to do and think about, that people find impossible to cope with.

But this is a socially-focused world. People have been told for a very long time that they can’t live without socializing and, for that matter, can’t live without romance or sex. Maybe that’s part of why so many people in my country are ignoring safety regulations during the pandemic; they feel like it’s unconscionable to deny them their bars and parties because they don’t understand that solitude is perfectly possible. Not that I envy the extroverts right now–but it’s perfectly possible to socialize online. I know that an introvert like me will never quite understand what it’s like, but it can’t be impossible. I miss my library; I wish I could go there. I used to go twice a week. Can’t they stay home, just a little longer? People are dying. Where is their compassion?

But in the end I can only control my own actions, which means feeling a little bit at loose ends, missing the library, but in general, doing okay. At least I can refuse to be part of the problem.

Worse Than FailureCodeSOD: All About the Details

Dora's AngularJS team (previously) wanted to have their display be "smart" enough to handle whether or not you were editing a list of "Users" or just one "User", so they implemented a function to turn a plural word into a singular word.

Which, of course, we already know the WTF implementation of it: they just chop off the last letter. So "Potatoes" becomes "Potatoe", "Moose" becomes the noise cows make, and I'm just left saying "oh, gees". But they managed to make it worse than that.

The actual display might be used like so:

<label> Edit {{$ctrl.singular}}</label>

singular is a property on the controller for this component. But how does that get populated?

function getSingular() {
    if ($ctrl.type) {
        $ctrl.singular = $ctrl.type === 'details'
            ? $ctrl.type
            : $ctrl.type.slice(0, -1);
    }
}

So, it's important to note, getSingular isn't a "get" method in the conventional sense. It populates the singular property, but doesn't return anything. Hopefully this method gets called sometime before we try and display $ctrl.singular: it won't crash or anything if it isn't, but it also won't display any useful information.

For extra weirdness, though, it also has an extra gate: "details" is always plural. This means you might use it like so: Edit User {{$ctrl.singular}} to display "Edit User Details", and it will always be plural.

It's a bad implementation of a bad idea, with a confusing method name, inconvenient calling semantics, and a trap-door that could easily surprise you if you're not details oriented. That's a lot of bad code in a tight space.
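For contrast, here's a minimal sketch (not from Dora's codebase) of how a singularize helper could avoid the chop-a-letter trap: consult an explicit irregular-word map first, handle a couple of common suffixes, and leave anything unrecognized alone rather than guessing. The word list and suffix rules here are illustrative, not exhaustive.

```javascript
// Hypothetical singularizer: irregulars first, then suffix rules,
// then give up gracefully.
const IRREGULAR = {
  moose: 'moose',
  people: 'person',
  details: 'details', // the original code's special case, kept plural
};

function singularize(word) {
  const lower = word.toLowerCase();
  // hasOwnProperty guards against accidentally matching inherited
  // object keys like "constructor".
  if (Object.prototype.hasOwnProperty.call(IRREGULAR, lower)) {
    return IRREGULAR[lower];
  }
  if (lower.endsWith('oes')) return word.slice(0, -2);       // potatoes -> potato
  if (lower.endsWith('ies')) return word.slice(0, -3) + 'y'; // entries -> entry
  if (lower.endsWith('s') && !lower.endsWith('ss')) {
    return word.slice(0, -1);                                // users -> user
  }
  return word; // unknown shape: leave it alone rather than guess
}
```

It's still not real pluralization (English doesn't have an algorithm for that), but at least the failure mode is "unchanged word" instead of "Potatoe".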

Pay attention to this space over the next few days: We'll be announcing our winners of our What the Fun Holiday special, and sharing those stories.

MEMPV vs Mplayer

After writing my post about VDPAU in Debian [1] I received two great comments from anonymous people. One pointed out that I should be using VA-API (also known as VAAPI) on my Intel based Thinkpad and gave a reference to an Arch Linux Wiki page; as usual the Arch Linux Wiki is awesome and I learnt a lot of great stuff there. I also found the Debian Wiki page on Hardware Video Acceleration [2] which has some good information (unfortunately I had already found all that out through more difficult methods first, I should read the Debian Wiki more often).

It seems that mplayer doesn’t support VAAPI. The other comment suggested that I try the mpv fork of Mplayer, which does support VAAPI, though that feature is disabled by default in Debian.

I did a number of tests playing different videos on my laptop running Debian/Buster with Intel video and my workstation running Debian/Unstable with ATI video. The first thing I noticed is that mpv was unable to use VAAPI on my laptop, and that VDPAU won’t decode VP9 videos on my workstation; most 4K videos from YouTube seem to be VP9. So in most cases hardware decoding isn’t going to help me.
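For anyone wanting to repeat these tests: mpv's hardware decoding is opt-in, and the option below is from the mpv manual, though whether a given codec is actually offloaded depends on the GPU and driver as discussed here. A sketch of the relevant config:

```
# ~/.config/mpv/mpv.conf
# Try hardware decoding, falling back to software decoding when the
# codec (eg VP9 on older UVD hardware) isn't supported.
hwdec=auto
# Or force one API to test it specifically:
# hwdec=vaapi
# hwdec=vdpau
```

Running mpv with -v shows whether a hardware decoder was actually selected, and the vainfo and vdpauinfo tools list the codec profiles the driver claims to support.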

The Wikipedia page about Unified Video Decoder [3] shows that only VCN (Video Core Next) supports VP9 decoding while my R7-260x video card [4] has version 4.2 of the Unified Video Decoder which doesn’t support VP9, H.265, or JPEG. Basically I need a new high-end video card to get VP9 decoding and that’s not something I’m interested in buying now (I only recently bought this video card to do 4K at 60Hz).

The next thing I noticed is that, for my combination of hardware and software at least, mpv takes about 2/3 of the CPU time that mplayer does on every video I tested. So it seems that using mpv will save me 1/3 of the power and heat from playing videos on my laptop, and save me 1/3 of the CPU power on my workstation in the worst case, while sometimes saving significantly more than that.


To summarise quite a bit of time experimenting with video playing and testing things: I shouldn’t think too much about hardware decoding until VP9 hardware is available (years away for me). But mpv provides some real benefits right now on the same hardware, though I’m not sure why.



The Hetzner server that hosts my blog among other things has 2*256G SSDs for the root filesystem. The smartctl and smartd programs report both SSDs as in FAILING_NOW state for the Wear_Leveling_Count attribute. I don’t have a lot of faith in SMART. I run it because it would be stupid not to consider data about possible drive problems, but don’t feel obliged to immediately replace disks with SMART errors when not convenient (if I ordered a new server and got those results I would demand replacement before going live).

Doing any sort of SMART scan will cause service outage. Replacing devices means 2 outages, 1 for each device.

I noticed the SMART errors 2 weeks ago, so I guess that the SMART claims that both of the drives are likely to fail within 24 hours have been disproved. The system is running BTRFS so I know there aren’t any unseen data corruption issues and it uses BTRFS RAID-1 so if one disk has an unreadable sector that won’t cause data loss.
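As a reference point for checking this kind of thing, smartctl -H prints the overall health verdict and smartctl -A the attribute table (both from smartmontools; the device name below is an example). An attribute is reported as failing when its normalised VALUE has dropped to or below THRESH, which is easy to check mechanically; the sample line below is illustrative, not taken from the server in question.

```shell
# Reading SMART data (needs root):
#   smartctl -H /dev/sda
#   smartctl -A /dev/sda
#
# smartctl -A columns are: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH
# TYPE UPDATED WHEN_FAILED RAW_VALUE.  Flag any attribute whose
# normalised VALUE is at or below THRESH, demonstrated here against
# an illustrative sample line:
sample='177 Wear_Leveling_Count 0x0013 001 001 005 Pre-fail Always FAILING_NOW 1343'
echo "$sample" | awk '$4 + 0 <= $6 + 0 { print $2, "value", $4, "at/below threshold", $6 }'
```

The WHEN_FAILED column is what produces the FAILING_NOW status that smartd alerts on.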

Currently Hetzner Server Bidding has ridiculous offerings for systems with SSD storage. Search for a server with 16G of RAM and SSD storage and the minimum prices are only 2E cheaper than a new server with 64G of RAM and 2*512G NVMe. In the past Server Bidding has had servers with specs not much smaller than the newest systems going for rates well below the costs of the newer systems. The current Hetzner server is under a contract from Server Bidding which is significantly cheaper than the current Server Bidding offerings, so financially it wouldn’t be a good plan to replace the server now.


I have just released a new version of etbe-mon [1] which has a new monitor for SMART data (smartctl). It also has a change to the sslcert check to search all IPv6 and IPv4 addresses for each hostname, makes the freespace check look for the filesystem mountpoint, and makes the smtpswaks check use the latest swaks command-line.

For the new smartctl check there is an option to treat “Marginal” alert status from smartctl as errors and there is an option to name attributes that will be treated as marginal even if smartctl thinks they are significant. So now I have my monitoring system checking the SMART data on the servers of mine which have real hard drives (not VMs) and aren’t using RAID hardware that obscures such things. Also it’s not alerting me about the Wear_Leveling_Count on that particular Hetzner server.


Kevin RuddTranscript: Wang Yi at Asia Society



State Councilor and Minister of Foreign Affairs Wang Yi

18 December 2020


Kevin Rudd

We bring you this broadcast to the Asia Society family, friends and supporters from around the world. We’re joined tonight by our centres in Washington, in Houston, in Los Angeles and San Francisco, as well as our centers around the world in Tokyo, in Seoul, in Hong Kong, Manila, Sydney, Melbourne, of course in Mumbai and Zurich. And there are many others who are joining this gathering as well. This is a special event and we are honored to have with us the foreign minister of the People’s Republic of China, Wang Yi, who is also of course, a respected member of the State Council of China. And we’re also joined by Ambassador Cui Tiankai, the Ambassador of the People’s Republic of China to the United States of America. Let me make just a few remarks of welcome in Chinese. And then, let me outline our context for this evening’s presentation by the foreign minister.


(Greetings to you all. I’m Kevin Rudd. I’m the President of the Asia Society. It is a great honor for us today to have the opportunity to host Mr. Wang Yi, Minister of Foreign Affairs of the People’s Republic of China. Asia Society has a long history of 65 years. We were founded in the 1950s during the Cold War. But our founder, John D. Rockefeller III, was a visionary. He believed that starting in the 1950s, the United States should maintain a good relationship with Asia, including a good relationship between the United States and China. Of course there have been some problems in US-China relations recently. But as Asia Society, we intend to carry on our mission. So tonight, we are particularly interested in the content of Foreign Minister Wang Yi’s address.)

As someone who has been a student of US-China relations for the last 40 years, I’d like to say that it gives me no pleasure to point out that the current state of the US-China relationship is probably the worst that we’ve seen in nearly half a century. Despite all the twists and turns that we’ve seen over those decades, many positive things have been achieved over 50 years. But of course, new problems and new difficulties have arisen. 

When we look to the future, there will be a long debate about why the US-China relationship has ended up in the difficulty it currently faces. There will be a long debate about what China has done that is different, what America has done that is different, and how strategic circumstances more generally have changed. The key challenge, however, is what we will now do for this extraordinary decade of the 2020s which lies ahead of us.

Broadly speaking, I see several possible scenarios. One, we see China work within the framework of the international rules based order that we’ve been developing together since 1945. The second is that as China’s power grows, the rest of the world, including the United States, increasingly adjusts to an order which is much more accommodating of Chinese interests and values, and where China’s leadership becomes more apparent. There’s a third scenario too, which is that through international machinery, like the G20 and other bodies through the United Nations, China and the United States work collaboratively with other countries to ensure that the order is stable, accommodates our interests and our values, and still preserves the fundamental principles of open societies, of open trade, and of course, of open systems.

These I think are the scenarios which lie before us. Of course, some of our colleagues, including Graham Allison from Harvard University who also joins us in the audience this evening, have outlined a further scenario. And that is, in fact, we may be, according to Thucydides’s Trap, destined for war. I don’t believe that is so, but Graham’s analysis is a sobering one. He points to the structural tensions between established powers and rising powers, and whether in fact it creates a near inevitable dynamic which pushes them towards crisis, conflict, and war. That’s the sobering alternative to the other scenarios I just outlined. In fact, when I was working with Graham at Harvard University, five years or so ago before coming to the Asia Society, I remember working on a paper entitled “constructive realism,” 建设性的现实主义, a set of principles about how we could govern the future of the US-China relationship. With areas defined with red lines, with areas of strategic competition, and with areas of strategic cooperation. It’s good to see some of those principles now alive in some of the thinking around the world at present as we embark upon this new period of the Biden administration.

And finally, before turning to Minister Wang, as a former Prime Minister of Australia, it would be remiss of me not to mention the impact of US-China relations on third countries around the world. US allies like Japan and the Republic of Korea, but also Australia itself, which now finds itself very much in the firing line of tensions in the bilateral relationship. And the question which these countries have is how do we navigate the shoals of all of this for the future, and particularly in the Australian case, how do we take the temperature down and navigate a creative and constructive way through current tensions.

So I’m very pleased tonight in Australia, this morning in America, and early evening in Beijing (given we are working in multiple time zones) that we have this opportunity to hear from the man himself: Wang Yi, the Foreign Minister of the People’s Republic of China.

Foreign Minister Wang is a man of enormous diplomatic experience. And he has traveled the world extensively since becoming Chinese Foreign Minister. Prior to that as a professional Chinese diplomat, he has extensive experience in many countries, and particularly in Japan. So, Foreign Minister, you are a welcome guest here at the Asia Society. We welcome you to address our Asia Society family and community around the world. Over to you minister.

Wang Yi

Thank you very much, Mr. Kevin Rudd. 

You are a household name in China, a famous international advocate. And you have made important contributions to friendship and cooperation between China and Australia. I want to congratulate you on the fact that you are going to be the President of Asia Society. Actually, Asia Society is not unfamiliar to me. I received the former president, and many friends from Asia Society, in Beijing. We engaged in in-depth and open discussions on many occasions. I really feel that as long as we engage in face to face discussions, we can reach a lot of consensus on wide ranging issues; there are no insurmountable difficulties or obstacles between us. Under your stewardship, we hope that Asia Society will continue to care for and support the growth of China-US relations, and we will be happy to engage friends of Asia Society for discussions and conversations to build deeper understanding.

While the outgoing 2020 has witnessed the sudden onslaught of COVID-19, a pandemic that has upended the world in almost all aspects, countries have come to realize more than ever that global challenges require enhanced international coordination and cooperation, and that major countries in particular should lead by example.

However, as you have mentioned in your opening remarks, China-US relations have spiraled down to the lowest level since the establishment of diplomatic ties 41 years ago. This is not something that we would like to see, because clearly it is not in the interest of the Chinese and American peoples, nor is it helpful when global efforts are needed to overcome the difficulties.

In retrospect, our 2020 might have witnessed the greatest damage to the international order and international relations. Among many others, we see power politics jeopardizing international stability. Arbitrary interference in others’ internal affairs and unilateral sanctions have become the biggest destabilizing factors to regional and global security. We see protectionism jeopardizing international trade as backlash against globalization is gaining momentum. There are more barriers impeding trade and investment. Global industrial supply chains are on the cusp of breakdown. We see unilateralism jeopardizing international cooperation. The “go it alone approach” and walking away from international commitments have fractured and crippled the international system, and have dragged on international efforts by all countries against global challenges.

And we see McCarthyism resurging and jeopardizing normal international exchanges, and those with radical and entrenched political bias seek to label and stigmatize open and lawful political parties and institutions of other countries, and use ideology to disrupt or even sever normal international engagement, aiming at starting a new Cold War, and forming a new Iron Curtain.

While the risks and challenges facing us are unprecedented, China and the United States are the two largest economies. We are permanent members of the UN Security Council, and we are the largest developing and developed country, respectively. We always believe that what we should do is to form the right perception about one another, act in line with the trend of the times, and heed the aspirations of the international community. We need to step up to our responsibilities as major countries. And at the same time, work together with other countries to overcome difficulties, meet challenges, and pursue development.

I know all of you follow China’s diplomacy closely and the style and future direction of which has been the subject of ongoing discussions. What I would like to underline is that China follows an independent foreign policy of peace and seeks to engage other countries for friendship and cooperation on the basis of the five principles of peaceful coexistence. China is committed to bringing happiness to the Chinese people and contributing to progress of humanity and seeks to play a constructive role for world peace and development.

I would like to say it again that China has no intention to compete for hegemony. We never interfere in other’s internal affairs. We don’t export our system or model. Not in the least do we seek spheres of influence.  That’s what we did in the past. That’s what we are going to do in the future.

What are the focus and priorities of China’s diplomacy? First of all, China’s diplomacy is for the development of the nation, as China remains a developing country. As President Xi Jinping has said, the Chinese people’s aspiration for a better life is the goal that we endeavor toward with relentless efforts. All rural residents living under the current poverty line and all designated counties have shaken off poverty this year. This means that extreme poverty is eliminated for the first time in China’s history of several thousand years. We are proud of such achievement. And what we have achieved is a big contribution to the poverty reduction efforts of all humanity. At the same time, we are sober minded about the long journey ahead if we are to lock in the gains against poverty and bring prosperity to all the people. China’s diplomacy, which starts at home, naturally should help promote the overall development of the country and the new development paradigm. So, the first priority of China’s diplomacy is to serve the sustainable development of the nation.

Second, China’s diplomacy is for win win outcomes. It’s not part of the Chinese culture to seek only our own development or put our interest above others, nor is it our philosophy to play the zero sum game and be the winner that takes all. What we are committed to is a win win strategy of opening up, to ensure that all countries will come out as winners. The experience of China’s 40 plus years of “reform and opening up” is a strong testament to that. All the businesses and countries that have worked together with China have achieved shared development and prosperity. This is what we believe will make diplomacy more sustainable and more popular. We will open up further, broader, deeper, and with higher standards, and we will share opportunities and benefits with all countries for win win and greater development.

Third, China’s diplomacy is for equity. Having experienced great humiliation in history, China truly understands how important equity is. We believe that countries, irrespective of their size and strength, are all equal members of the international community. The big and strong must not bully the small and weak. We believe that all countries enjoy equal rights to development. Developed countries have achieved development, and we congratulate them on that. But developed countries need to help developing countries increase their capacity for self-development, and developing countries should not be kept forever at the lower end of the industrial and value chains.

We believe that global affairs should be handled by all countries through consultation and that international rules should be made by all countries on an equal footing. There should be more democracy in international relations.

China stays committed to developing a relationship based on coordination, cooperation, and stability with the United States under the principle of no conflict, no confrontation, mutual respect, and win-win cooperation. And China has been working in good faith toward that goal.

Regrettably, however, when we turn on TVs, read newspapers, and access new media, we often see senior US officials pointing fingers at China. And there is no evidence to support their accusations. They are merely irresponsible presumptions of guilt and emotional lashing out. The fundamental reason behind all this is that some US politicians have strategic miscalculations about China.

First, they choose to ignore the vast common interests and room for cooperation between the two countries and insist that China is a main threat. But they get this wrong at the very beginning. The ensuing government strategy that mobilizes all available resources to take on China is going in the wrong direction. China is not a threat to the United States: it was not, is not, and will not be a threat to the United States.

Second, out of ideological bias, they seek to defame the Communist Party of China. The CPC, as the constitutionally recognized ruling party of China, has a close bond and a shared future with the Chinese people. An attack on the CPC is an attack on the 1.4 billion Chinese people. So, it is not going to succeed. It is doomed to fail.

Third, they hope that maximum pressure will make China give in. China was once bullied by Western powers, but those days are long gone. Power politics will only make the Chinese people more resolved in their response.

And fourth, they attempt to build an international coalition against China. But this is the age of globalization. The interests of all countries are so intertwined that the overwhelming majority of them do not want to take sides, let alone be forced into confrontation with China. Facts have proved and will continue to prove that these attempts will lead nowhere and find no support, because they deny the fruitful cooperation between China and the United States over the past 40-plus years of diplomatic engagement, write off decades of efforts by the Chinese and American people to grow this relationship, and dismiss the ardent hope of the international community for peaceful coexistence between China and the United States. This grave and difficult situation in China-US relations is not something we want to see.

The two sides should learn from the ups and downs since the establishment of diplomatic relations. In particular, it is important that US policy toward China returns to objectivity and sensibility as early as possible. I wish to stress that China’s policy toward the United States has always been stable and consistent. We have always believed that, with the deeply interwoven interests between the two countries, neither can do without the other, remodel the other, or replace the other. The bilateral relationship is no zero-sum game, and the success of one does not have to entail the other’s failure.

While China-US cooperation can make great things happen for the two countries and the entire world, China-US confrontation would definitely spell disaster not only for the two countries, but for humanity as a whole. The giant vessel of the China-US relationship carries not only the wellbeing of the 1.7 billion Chinese and American people, but also the interests of the over 7 billion people in the world.

I believe we all agree that the time has come to decide the future course of this giant vessel. As President Xi Jinping wrote in his congratulatory message to President-Elect Joe Biden, it is hoped that the two sides will work together in the spirit of no conflict, no confrontation, mutual respect, and win-win cooperation; focus on cooperation, manage differences, move China-US relations forward in a sound and steady manner, and, together with other countries and the international community, advance the noble cause of world peace and development. This is how we see our relationship and what we expect of the relationship.

I hope that the US side will join us, on the basis of mutual respect, through dialogue and consultation, and by way of deepening our common interests and enhancing the support of the people, to rebuild the strategic framework for the healthy and steady growth of China-US relations. This, I believe, is also what Ambassador Cui Tiankai shared with friends in Washington DC. In my video discussion last week with friends from the US-China Business Council, I talked about the importance for China and the United States of restarting dialogue, returning bilateral relations to the right track, and rebuilding mutual trust. We hope that we will expand cooperation and manage differences through dialogue. We have noted the four priorities laid out by President-Elect Joe Biden. We believe that COVID response, economic recovery, and climate change provide space for cooperation between our two countries. And the most pressing task at the moment is to jointly tackle the pandemic. We in China stand ready to continue to do what we can to support the US as needed.

I don’t know whether all of you are aware that China has provided over 40 billion face masks to the United States. That is, on average, every American citizen has received over 100 face masks made in China. The two countries could also strengthen cooperation on sharing diagnostic and therapeutic experience, on PPE production, and on vaccine research, manufacturing, and distribution. We could also leverage our respective strengths to support COVID response in third countries and contribute to a global community of health for all.

Climate change is another important area of cooperation. China is steadfast in following the new development philosophy and building an ecological civilization. We are committed to achieving green, low-carbon, and sustainable development. To this end, we will faithfully implement the Paris Agreement on climate change to fulfill our responsibility to future generations and our obligations to the international community. At the Climate Ambition Summit on December 12, President Xi Jinping announced China’s objectives and policy measures to scale up its nationally determined contributions. We have also noted that President-Elect Joe Biden has pledged to bring the US back to the Paris Agreement after taking office. We welcome more active actions from the US side to this end. China and the US can come together again to facilitate international cooperation on climate change and make our due contributions. As the two largest economies, China and the United States need to strengthen macroeconomic dialogue, and China is ready to do so, to coordinate our policies and contribute to global growth and financial stability.

That said, we in China never shy away from our differences. Our stance is that the two sides should constructively manage the prominent and important issues, based on a right perception of each other.

First, on ideological issues. We hope both sides will respect each other’s choice of system and development path. Four decades ago, the leaders of China and the US shook hands across the vast Pacific. It was fundamental to us because both countries recognized the importance of mutual respect and of seeking common ground while putting aside differences. The goal of China-US engagement is not to remold the other in one’s own image, still less to defeat the other side, but to seek and expand convergent interests. Both the Chinese and American systems were chosen by their people, and the systems are deeply rooted in their respective historical and cultural traditions. If the US’s China policy were to remodel or even subvert China, it would not be achievable; it would be Mission Impossible, and it leads nowhere. The right approach is to respect each other’s political system and development path, continue to maintain peaceful coexistence, and promote win-win cooperation.

Second, on issues concerning national sovereignty and territorial integrity. All China’s internal affairs involve China’s core interests. Be it under the UN Charter or under the three binding China-US joint communiques, none of these issues shall be subject to foreign interference. Some politicians have fabricated a great deal of false information about Xinjiang and Tibet, and the executive branch and Congress have, on this basis, exercised long-arm jurisdiction over Chinese businesses and individuals. Such moves seriously violate international law and defy international justice and conscience. As an independent, sovereign state, China naturally has to respond.

We shall not allow the law of the jungle to govern our world again. For our foreign friends who truly care about China and wish to know more about Xinjiang, Tibet, and other parts of China, we are always ready to share with them the facts about what is truly happening in China. We welcome all of you joining this video link today from various countries to visit China, including the two autonomous regions, at your convenience. There, you will see firsthand a situation different from what you hear and see in the news. You will see a Xinjiang and Tibet that enjoy social progress, ethnic harmony, freedom of religious belief, and a vibrant economy.

Third, on trade issues. We need to replace confrontation and sanctions with dialogue and consultation. China-US trade is mutually beneficial in nature. What drives trade is market demand, not imposed deals. There are no winners in trade wars. What has happened proves that pressuring others with tariffs only boomerangs: one only hurts oneself in the end. The two sides need to remove manmade barriers and instill positive expectations for the sound development of bilateral economic and trade cooperation.

Let me stress here that the Chinese market will continue to grow and is expected to become the largest and most vibrant in the world. This means China can and needs to buy more products from the United States for which there is active demand in the Chinese market, and it is just a matter of time before the trade imbalance is eased. As for the concern about structural issues, let me say that China is firmly advancing reform according to its own timeline. At present, we are firmly advancing supply-side structural reform. In the meantime, China is implementing in good faith the common understanding in the phase one trade agreement in this respect. China has set a clear goal of building a new system of open economy of the highest standards, and we have taken domestic reform into more sophisticated fields of institutions and rules. China’s reform does not stop at policies. As time passes, the legitimate concerns expressed by various parties will be properly resolved, because first and foremost, resolving such concerns meets China’s own needs in building a modern system.

We urge the US side to stop overstretching the notion of national security, stop the arbitrary suppression of Chinese companies. Just in recent days, the executive branch of the US administration has been expanding the list of sanctions against Chinese companies. This is unacceptable. We hope the US side will take a sober minded approach and provide an open, fair, and nondiscriminatory environment for Chinese businesses and investors.

Fourth, on maritime issues. We need to strive to turn frictions into cooperation. I believe this is totally achievable, because, for one thing, there has never been a problem with the freedom of navigation or overflight in the South China Sea. There has never been a single instance where normal navigation or overflight was impeded. China’s position is crystal clear: we will continue to work with other countries to maintain the freedom of navigation and overflight under international law.

China will speed up consultations with ASEAN countries toward a Code of Conduct in the South China Sea (COC), which will regulate behavior on the sea and underline the principle of peaceful settlement of disputes. The conclusion of the COC will be compatible with universally recognized international law, including UNCLOS. It will not affect the legitimate and lawful maritime rights and interests of countries outside this region, because the COC is not worked out by China alone; it will be an agreement reached by 11 countries working together.

The Chinese government is always open to candid communication and dialogue on maritime issues with the US side. This door remains open. Chinese and US experts and scholars may also engage in in-depth discussions on the applicability and technicalities of UNCLOS and other international laws and rules, to avoid misunderstanding and misjudgment.

China and the United States share converging interests in maintaining freedom of navigation, protecting the marine environment, and harnessing marine resources. The two sides can well explore possibilities of cooperation in those areas, engage in positive interactions on maritime issues, and add positive elements to China-US relations.

Fifth, on people-to-people exchange. We need to remove restrictions as soon as possible. Friendship between our peoples provides the social foundation for China-US relations. People with vision in both countries should jointly reject attempts to disrupt people-to-people contact and create a cultural decoupling between the two countries.

We need to work together to encourage and support people from all sectors in increasing exchanges and mutual understanding. To view all Chinese students, experts, and scholars in the US as spy suspects actually says more about the mentality of the accusers and their lack of confidence. China has no intention of picking a fight with the United States, whether in diplomacy, media, or any other field. It is important that people with vision in both countries jointly oppose stigmatizing people-to-people exchange and politicizing normal contact, and remove stumbling blocks to such contact and exchange.

Friends, as long as the two countries act with a sense of responsibility to history and humanity, bear in mind the fundamental interests of the two peoples and the whole world, and stay committed to the principles of mutual respect, equality, seeking common ground while shelving differences, and win-win cooperation, the giant vessel of China-US relations will be able to stay on the right course, steer clear of hidden shoals and rocks, navigate through counter currents and stormy waves, and achieve the goal of mutual benefit for the two countries and a win for all the world.

I’ve made a quite lengthy speech. Thank you for your patience. Thank you for listening. I’m sure you have a lot of views to share with me. I would like to hear your views and also your questions. Thank you.

Kevin Rudd

Thank you very much. From the Foreign Minister, a very comprehensive speech outlining China’s view on the potential  future of the bilateral relationship with the United States under the new Biden administration. 

The Foreign Minister has made great reference to the great ship of US-China relations. I have this mental picture of this enormous ocean liner out on the seas. And sometimes the waves are like this. And sometimes, you look for where the lifeboats are. But still, this great ocean liner is afloat. That is the important thing. The second visual metaphor, which the Foreign Minister used, which I thought was particularly interesting, was it affects not just the future of the 1.7 billion passengers on board the ocean liner, but the flotilla of smaller ships, who are also in the convoy. And that is the rest of the world. The other 5 billion of us who have an interest in what you 1.7 billion actually do. So I think, Foreign Minister, it’s a very good analogy. 

It’s been a wide ranging speech, and we heard from the Foreign Minister, his description of China’s worldview, and its overall approach to diplomacy.  We also heard from the Minister, his particular view of the United States, both the challenges and the opportunities. I noted in particular what the Minister said about the stated priorities of the incoming Biden administration. He referred to the possibility of collaboration in three of the four priority areas which the incoming president has identified. 

Foreign Minister Wang spoke in particular about the potential for pandemic collaboration. I think, from the perspective of the rest of the world, it would be a wonderful thing. If the vaccine could be treated not as an issue of geopolitics, but could be treated as being a global common good, and that for all the people of the world, all 7 billion people in the world, that the United States and China could collaborate to ensure that we have global distribution of the vaccines from various countries which are now available, I would commend that as an approach. 

The Foreign Minister also spoke about President Biden’s priority on climate change. He’s right to identify that as a priority. Those of us who know the Vice President, and those around him, know that this is not an ephemeral concern; it is a fundamental concern of the incoming administration. And so Xi Jinping’s recent statement about China achieving carbon neutrality by 2060, and what Foreign Minister Wang just mentioned in terms of China’s adjustment of its “nationally determined contribution” for the decade ahead, are important. And again, I think from the perspective of others around the world, the Europeans, the Japanese, and those who want a good outcome from the Paris Agreement for the planet, we would commend both of you to put all your energy into this, because our kids and our grandkids depend on it.

You also spoke candidly about the difficulties that you see emerging with the US-China relationship and how this needs to be tackled with candor and vigor.  You spoke about the radically different views on human rights in both Xinjiang and Tibet. This will require greater and more direct engagement between the two sides and all sides on that, because there are quite different views.

On the question of trade, all of us in the Asia Pacific region want to see more free trade, not more protectionism. We welcome China’s membership of RCEP. We would like to see America join RCEP.

I would also like to see both America and China join the TPP. The thing about trade is, if we have a good system for regulating the rules for the World Trade Organization, then it is good for all countries because free trade raises all boats. So, this could become an area of genuine collaboration rather than of continuing conflict with the United States. 

And finally, on the other area you mentioned: maritime disputes, and maritime confrontation as opposed to collaboration. Foreign Minister, as you know as an experienced diplomat, and Ambassador Cui, this will be the hardest question to resolve, because the perception gap is so huge. But finding a way of stabilizing what can sometimes be dangerous confrontation in the skies and on the seas of the South China Sea, and in and around Taiwan, is important for the restabilization of the relationship.

Finally, Minister, you commented on Chinese students around the world. It is a view which I personally completely share. We should be opening the doors wider to each other’s students, not closing doors to them. Other matters of state concern can be attended to by the agencies of state responsible for those areas. But we have a huge mutual interest around the world in our students being welcomed in all countries – Chinese students in the United States and around the world, and foreign students in China. Because this creates the people-to-people building blocks for the long term future. And I hope we can turn that corner in the period ahead, so that we see no evidence of any racism or racial prejudice or McCarthyism in any form in any part of the world towards Chinese students, and frankly towards foreign students, as they would experience sometimes, even in China.

To conclude, Foreign Minister Wang, thank you so much for this comprehensive presentation. You spoke eloquently about the need to rebuild a strategic framework for the US-China relationship. That framework needs early attention from both sides. If we’re going to have strategic competition between the United States and China, let it at least be managed strategic competition within a common framework, one which also provides opportunity for massive areas of cooperation at the same time. You also referred to the alternative of not having such a framework. You spoke before about the danger of a new iron curtain. Those were stark words. I don’t think that’s what China wants. I don’t believe that’s what the people of the United States or the government of the United States want either. But it requires all of our effort, our diplomacy, and goodwill to make sure that that scenario is avoided.

The truth is, as someone from a third country, my judgement is that both the United States and China will each have to change some of their present policy positions if we are to share a common future. And I think the rest of the world will seek to work within such a framework as well.

To conclude also, Minister, I enjoyed very much your earlier comments about the fact that face to face dialogue, once COVID makes that possible, enables us to solve most problems. And I hope that is possible between the United States and China, as soon as possible. And once again, to put in a pitch for my own country Australia: face to face contact between the Chinese government and the Australian government as well. Thank you so much, Minister, for these comments and for your remarks to us today.

The post Transcript: Wang Yi at Asia Society appeared first on Kevin Rudd.

ME: Testing VDPAU in Debian

VDPAU is the Video Decode and Presentation API for Unix [1]. I noticed an error with mplayer “Failed to open VDPAU backend cannot open shared object file: No such file or directory”. Googling that turned up Debian Bug #869815 [2], which suggested installing the packages vdpau-va-driver and libvdpau-va-gl1 and setting the environment variable “VDPAU_DRIVER=va_gl” to enable VDPAU.
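
That fix boils down to two steps, which can be sketched as shell commands (the video file name here is just a placeholder):

```shell
# Install the VA-API backed VDPAU driver packages suggested in bug #869815
sudo apt install vdpau-va-driver libvdpau-va-gl1

# Point libvdpau at the va_gl backend for a single mplayer invocation
VDPAU_DRIVER=va_gl mplayer video.mp4

# Or set it for the rest of the shell session
export VDPAU_DRIVER=va_gl
```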

The command vdpauinfo from the vdpauinfo package shows the VDPAU capabilities; it showed that VDPAU was working with va_gl.

When mplayer was getting the error about a missing i915 driver it took 35.822s of user time and 1.929s of system time to play Self Control by Laura Branigan [3] (a good music video to watch several times while testing IMHO) on my Thinkpad Carbon X1 Gen1 with Intel video and a i7-3667U CPU. When I set “VDPAU_DRIVER=va_gl” mplayer took 50.875s of user time and 4.207s of system time but didn’t have the error.
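
A comparison like the one above can be reproduced with the shell’s time builtin (the file name is hypothetical and the exact numbers will vary by machine):

```shell
# Default driver (produced the missing-driver error on this machine)
time mplayer video.mp4

# va_gl backend: no error, but more CPU time
time env VDPAU_DRIVER=va_gl mplayer video.mp4
```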

It’s possible that other applications on my Thinkpad might benefit from VDPAU with the va_gl driver, but it seems unlikely that any will benefit to such a degree that it makes up for mplayer taking more time. It’s also possible that the Self Control video I tested with was a worst case scenario, but even so, taking almost 50% more CPU time makes it unlikely that other videos would get a benefit.
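
For reference, the relative overhead can be computed from the user+system timings quoted above:

```shell
# (50.875+4.207)s with va_gl vs (35.822+1.929)s without: ~46% more CPU time
awk 'BEGIN { printf "%.0f%% more CPU time\n", ((50.875 + 4.207) / (35.822 + 1.929) - 1) * 100 }'
```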

For this sort of video (640×480 resolution) it’s not a problem, 38 seconds of CPU time to play a 5 minute video isn’t a real problem (although it would be nice to use less battery). For a 1600×900 resolution video (the resolution of the laptop screen) it took 131 seconds of user time to play a 433 second video. That’s still not going to be a problem when playing on mains power but will suck a bit when on battery. Most Thinkpads have Intel video and some have NVidia as well (which has issues from having 2 video cards and from having poor Linux driver support). So it seems that the only option for battery efficient video playing on the go right now is to use a tablet.

On the upside, screen resolution is not increasing at a comparable rate to Moore’s law so eventually CPUs will get powerful enough to do all this without using much electricity.


Cryptogram Friday Squid Blogging: Christmas Squid Memories

Stuffed squid for Christmas Eve.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecurityVMware Flaw a Vector in SolarWinds Breach?

U.S. government cybersecurity agencies warned this week that the attackers behind the widespread hacking spree stemming from the compromise at network software firm SolarWinds used weaknesses in other, non-SolarWinds products to attack high-value targets. According to sources, among those was a flaw in software virtualization platform VMware, which the U.S. National Security Agency (NSA) warned on Dec. 7 was being used by Russian hackers to impersonate authorized users on victim networks.

On Dec. 7, 2020, the NSA said “Russian state-sponsored malicious cyber actors are exploiting a vulnerability in VMware Access and VMware Identity Manager products, allowing the actors access to protected data and abusing federated authentication.”

VMware released a software update to plug the security hole (CVE-2020-4006) on Dec. 3, and said it learned about the flaw from the NSA.

The NSA advisory (PDF) came less than 24 hours before cyber incident response firm FireEye said it discovered attackers had broken into its networks and stolen more than 300 proprietary software tools the company developed to help customers secure their networks.

On Dec. 13, FireEye disclosed that the incident was the result of the SolarWinds compromise, which involved malicious code being surreptitiously inserted into updates shipped by SolarWinds for users of its Orion network management software as far back as March 2020.

In its advisory on the VMware vulnerability, the NSA urged patching it “as soon as possible,” specifically encouraging the National Security System, Department of Defense, and defense contractors to make doing so a high priority.

The NSA said that in order to exploit this particular flaw, hackers would already need to have access to a vulnerable VMware device’s management interface — i.e., they would need to be on the target’s internal network (provided the vulnerable VMware interface was not accessible from the Internet). However, the SolarWinds compromise would have provided that internal access nicely.

In response to questions from KrebsOnSecurity, VMware said it has “received no notification or indication that the CVE 2020-4006 was used in conjunction with the SolarWinds supply chain compromise.”

VMware added that while some of its own networks used the vulnerable SolarWinds Orion software, an investigation has so far revealed no evidence of exploitation.

“While we have identified limited instances of the vulnerable SolarWinds Orion software in our environment, our own internal investigation has not revealed any indication of exploitation,” the company said in a statement. “This has also been confirmed by SolarWinds own investigations to date.”

On Dec. 17, DHS’s Cybersecurity and Infrastructure Security Agency (CISA) released a sobering alert on the SolarWinds attack, noting that CISA had evidence of additional access vectors other than the SolarWinds Orion platform.

CISA’s advisory specifically noted that “one of the principal ways the adversary is accomplishing this objective is by compromising the Security Assertion Markup Language (SAML) signing certificate using their escalated Active Directory privileges. Once this is accomplished, the adversary creates unauthorized but valid tokens and presents them to services that trust SAML tokens from the environment. These tokens can then be used to access resources in hosted environments, such as email, for data exfiltration via authorized application programming interfaces (APIs).”

Indeed, the NSA’s Dec. 7 advisory said the hacking activity it saw involving the VMware vulnerability “led to the installation of a web shell and follow-on malicious activity where credentials in the form of SAML authentication assertions were generated and sent to Microsoft Active Directory Federation Services (ADFS), which in turn granted the actors access to protected data.”

Also on Dec. 17, the NSA released a far more detailed advisory explaining how it has seen the VMware vulnerability being used to forge SAML tokens, this time specifically referencing the SolarWinds compromise.

Asked about the potential connection, the NSA said only that “if malicious cyber actors gain initial access to networks through the SolarWinds compromise, the TTPs [tactics, techniques and procedures] noted in our December 17 advisory may be used to forge credentials and maintain persistent access.”

“Our guidance in this advisory helps detect and mitigate against this, no matter the initial access method,” the NSA said.

CISA’s analysis suggested the crooks behind the SolarWinds intrusion were heavily focused on impersonating trusted personnel on targeted networks, and that they’d devised clever ways to bypass multi-factor authentication (MFA) systems protecting networks they targeted.

The bulletin references research released earlier this week by security firm Volexity, which described encountering the same attackers using a novel technique to bypass MFA protections provided by Duo for Microsoft Outlook Web App (OWA) users.

Duo’s parent Cisco Systems Inc. responded that the attack described by Volexity didn’t target any specific vulnerability in its products. As Ars Technica explained, the bypass involving Duo’s protections could have just as easily involved any of Duo’s competitors.

“MFA threat modeling generally doesn’t include a complete system compromise of an OWA server,” Ars’ Dan Goodin wrote. “The level of access the hacker achieved was enough to neuter just about any defense.”

Several media outlets, including The New York Times and The Washington Post, have cited anonymous government sources saying the group behind the SolarWinds hacks was known as APT29 or “Cozy Bear,” an advanced threat group believed to be part of the Russian Federal Security Service (FSB).

SolarWinds has said almost 18,000 customers may have received the backdoored Orion software updates. So far, only a handful of customers targeted by the suspected Russian hackers behind the SolarWinds compromise have been made public — including the U.S. Commerce, Energy and Treasury departments, and the DHS.

No doubt we will hear about new victims in the public and private sector in the coming days and weeks. In the meantime, thousands of organizations are facing incredibly costly, disruptive and time-intensive work in determining whether they were compromised and if so what to do about it.

The CISA advisory notes the attackers behind the SolarWinds compromises targeted key personnel at victim firms — including cyber incident response staff, and IT email accounts. The warning suggests organizations that suspect they were victims should assume their email communications and internal network traffic are compromised, and rely upon or build out-of-band systems for discussing internally how they will proceed to clean up the mess.

“If the adversary has compromised administrative level credentials in an environment—or if organizations identify SAML abuse in the environment, simply mitigating individual issues, systems, servers, or specific user accounts will likely not lead to the adversary’s removal from the network,” CISA warned. “In such cases, organizations should consider the entire identity trust store as compromised. In the event of a total identity compromise, a full reconstitution of identity and trust services is required to successfully remediate. In this reconstitution, it bears repeating that this threat actor is among the most capable, and in many cases, a full rebuild of the environment is the safest action.”

Kevin RuddStatement: Major General the Honourable Michael Jeffery AC, AO (Mil), CVO, MC (Retd)

Thérèse and I were deeply saddened today to learn of the death of Michael Jeffery, the former Governor-General of Australia.

Michael was a good Governor-General. He was appointed to the office after a period of controversy concerning his predecessor and worked hard to return the office of Governor-General to one of universal respect.

Michael was appointed by my predecessor, John Howard. Despite that fact, he dealt with my government with complete courtesy, professionalism and respect.

Michael had a distinguished military career. He was also passionate about land conservation and the environment and understood deeply the problems of water scarcity on our vast continent. Michael drew deeply on his proud, West Australian experience, knowing well that water is the lifeblood of our country.

Michael was supported by his wife Marlena. They were both kind and gentle people who were always prepared to take time with others, whatever their position in life.

It was on Michael’s personal recommendation that I nominated Quentin Bryce to become Governor-General. He had dealt with Quentin during her period as Governor of Queensland and was deeply impressed by her qualities. He therefore played a significant role in bringing about Australia’s first woman Governor-General.

Thérèse and I pass our deepest condolences to Marlena and their entire family.

The post Statement: Major General the Honourable Michael Jeffery AC, AO (Mil), CVO, MC (Retd) appeared first on Kevin Rudd.

Cryptogram NSA on Authentication Hacks (Related to SolarWinds Breach)

The NSA has published an advisory outlining how “malicious cyber actors” are “manipulating trust in federated authentication environments to access protected data in the cloud.” This is related to the SolarWinds hack I have previously written about, and represents one of the techniques the SVR is using once it has gained access to target networks.

From the summary:

Malicious cyber actors are abusing trust in federated authentication environments to access protected data. The exploitation occurs after the actors have gained initial access to a victim’s on-premises network. The actors leverage privileged access in the on-premises environment to subvert the mechanisms that the organization uses to grant access to cloud and on-premises resources and/or to compromise administrator credentials with the ability to manage cloud resources. The actors demonstrate two sets of tactics, techniques, and procedures (TTP) for gaining access to the victim network’s cloud resources, often with a particular focus on organizational email.

In the first TTP, the actors compromise on-premises components of a federated SSO infrastructure and steal the credential or private key that is used to sign Security Assertion Markup Language (SAML) tokens (TA0006, T1552, T1552.004). Using the private keys, the actors then forge trusted authentication tokens to access cloud resources. A recent NSA Cybersecurity Advisory warned of actors exploiting a vulnerability in VMware Access and VMware Identity Manager that allowed them to perform this TTP and abuse federated SSO infrastructure. While that example of this TTP may have previously been attributed to nation-state actors, a wealth of actors could be leveraging this TTP for their objectives. This SAML forgery technique has been known and used by cyber actors since at least 2017.

In a variation of the first TTP, if the malicious cyber actors are unable to obtain an on-premises signing key, they would attempt to gain sufficient administrative privileges within the cloud tenant to add a malicious certificate trust relationship for forging SAML tokens.

In the second TTP, the actors leverage a compromised global administrator account to assign credentials to cloud application service principals (identities for cloud applications that allow the applications to be invoked to access other cloud resources). The actors then invoke the application’s credentials for automated access to cloud resources (often email in particular) that would otherwise be difficult for the actors to access or would more easily be noticed as suspicious (T1114, T1114.002).

This is an ongoing story, and I expect to see a lot more about TTP — nice acronym there — in coming weeks.

Related: Tom Bossert has a scathing op-ed on the breach. Jack Goldsmith’s essay is worth reading. So is Nick Weaver’s.

Worse Than FailureError'd: Something Has Gone Wrong

"Ooh! That moment when you're listening to music and you get a vague pop-up message that fills you with existential dread," writes Noah B.


"You know, Domino's, you're right...I'm just not feeling null today," writes Mark W.


Greg wrote, "If you're going to output diagnostic data in Production, you might want to pick a section that isn't next to a (dubious?) reassurance about security."


"Well to be fair, it doesn't say to enter 'numbers' or 'letters'," Pascal wrote.


Benjamin writes, "Maybe write a line of code to check if it's already expired before sending it...? Just a thought."


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

METhinkpad Storage Problem

For a while I’ve had a problem with my Thinkpad X1 Carbon Gen 1 [1] where storage pauses for 30 seconds, and it’s become more common recently. Unfortunately everything seems to depend on storage (ideally a web browser playing a video from the Internet could keep doing so with no disk access, but that’s not what happens).

echo 3 > /sys/block/sda/device/timeout

I’ve put the above in /etc/rc.local, which should make recovery from storage problems take only 3 seconds instead of 30. The problem hasn’t recurred since that change, so I don’t know whether it works as desired. The Thinkpad is running BTRFS and no data is reported as being lost, so it’s not a problem to use it at this time.
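An alternative to rc.local is a udev rule, which applies the setting whenever the kernel discovers the device rather than once at boot. A sketch, assuming the internal SSD always enumerates as sda (the rule filename is hypothetical):

```shell
# Hypothetical /etc/udev/rules.d/60-sata-timeout.rules
# Set the SCSI command timeout to 3 seconds when the disk appears,
# instead of writing to /sys/block/sda/device/timeout from /etc/rc.local.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sda", ATTR{device/timeout}="3"
```

After adding the rule, "udevadm control --reload" followed by "udevadm trigger" (or a reboot) should apply it.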

The bottom of this post has an example of the errors I’m getting. A friend said that his searches for such errors shows people claiming that it’s a “SATA controller or cable” problem, which for a SATA device bolted directly to the motherboard means it’s likely to be a motherboard problem (IE more expensive to fix than the $289 I paid for the laptop almost 3 years ago).

A Lenovo forum discussion says that the X1 Carbon Gen1 uses an unusual connector that’s not M.2 SATA and not mSATA [2]. This means that a replacement is going to be expensive, $100US for a replacement from eBay when a M.2 SATA or NVMe device would cost about $50AU from my local store. It also means that I can’t use a replacement for anything else. If my laptop took a regular NVMe I’d buy one and test it out, if it didn’t solve the problem a spare NVMe device is always handy to have. But I’m not interested in spending $100US for a part that may turn out to be useless.

I bought the laptop expecting that there was nothing I could fix inside it. While it is theoretically possible to upgrade the CPU etc in a laptop most people find that the effort and expense makes it better to just replace the entire laptop. With the ultra light category of laptops the RAM is soldered on the motherboard and there are even fewer options for replacing things. So while it’s annoying that Lenovo didn’t use a standard interface for this device (they did so for other models in the range) it’s not a situation that I hadn’t expected.

Now the question is what to do next. The Thinkpad has a SD socket and micro SD cards are getting large capacities nowadays. If it won’t boot from a SD card I could boot from USB and then run from SD. So even if the internal SATA device fails entirely it should still be useful. I wonder if I could find a Thinkpad with a similar problem going cheap as I’m pretty sure that a Windows user couldn’t make any storage device usable the way I can with Linux.

I’ve looked at prices on auction sites but haven’t seen a Thinkpad X1 Carbon going for anywhere near the price I got this one. The nearest I’ve seen is around $350 for a laptop with significant damage.

While I am a little unhappy at Lenovo’s choice of interface, I’ve definitely got a lot more than $289 of value out of this laptop. So I’m not really complaining.

[315041.837612] ata1.00: status: { DRDY }
[315041.837613] ata1.00: failed command: WRITE FPDMA QUEUED
[315041.837616] ata1.00: cmd 61/20:48:28:1e:3e/00:00:00:00:00/40 tag 9 ncq dma 16384 out
                         res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4
[315041.837617] ata1.00: status: { DRDY }
[315041.837618] ata1.00: failed command: READ FPDMA QUEUED
[315041.837621] ata1.00: cmd 60/08:50:e0:26:84/00:00:00:00:00/40 tag 10 ncq dma 4096 in
                         res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4
[315041.837622] ata1.00: status: { DRDY }
[315041.837625] ata1: hard resetting link
[315042.151781] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[315042.163368] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) filtered out
[315042.163370] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[315042.163372] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[315042.183332] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) filtered out
[315042.183334] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[315042.183336] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[315042.193332] ata1.00: configured for UDMA/133
[315042.193789] sd 0:0:0:0: [sda] tag#10 FAILED Result: hostbyte=DID_OK
[315042.193791] sd 0:0:0:0: [sda] tag#10 Sense Key : Illegal Request [current]
[315042.193793] sd 0:0:0:0: [sda] tag#10 Add. Sense: Unaligned write command
[315042.193795] sd 0:0:0:0: [sda] tag#10 CDB: Read(10) 28 00 00 84 26 e0 00 00 08 00
[315042.193797] print_req_error: I/O error, dev sda, sector 8660704
[315042.193810] ata1: EH complete

Cryptogram More on the SolarWinds Breach

The New York Times has more details.

About 18,000 private and government users downloaded a Russian tainted software update — a Trojan horse of sorts — that gave its hackers a foothold into victims’ systems, according to SolarWinds, the company whose software was compromised.

Among those who use SolarWinds software are the Centers for Disease Control and Prevention, the State Department, the Justice Department, parts of the Pentagon and a number of utility companies. While the presence of the software is not by itself evidence that each network was compromised and information was stolen, investigators spent Monday trying to understand the extent of the damage in what could be a significant loss of American data to a foreign attacker.

It’s unlikely that the SVR (a successor to the KGB) penetrated all of those networks. But it is likely that they penetrated many of the important ones. And that they have buried themselves into those networks, giving them persistent access even if this vulnerability is patched. This is a massive intelligence coup for the Russians and failure for the Americans, even if no classified networks were touched.

Meanwhile, CISA has directed everyone to remove SolarWinds from their networks. This is (1) too late to matter, and (2) likely to take many months to complete. Probably the right answer, though.

This is almost too stupid to believe:

In one previously unreported issue, multiple criminals have offered to sell access to SolarWinds’ computers through underground forums, according to two researchers who separately had access to those forums.

One of those offering claimed access over the Exploit forum in 2017 was known as “fxmsp” and is wanted by the FBI “for involvement in several high-profile incidents,” said Mark Arena, chief executive of cybercrime intelligence firm Intel471. Arena informed his company’s clients, which include U.S. law enforcement agencies.

Security researcher Vinoth Kumar told Reuters that, last year, he alerted the company that anyone could access SolarWinds’ update server by using the password “solarwinds123”.

“This could have been done by any attacker, easily,” Kumar said.

Neither the password nor the stolen access is considered the most likely source of the current intrusion, researchers said.

That last sentence is important, yes. But the sloppy security practice is likely not an isolated incident, and speaks to the overall lack of security culture at the company.

And I noticed that SolarWinds has removed its customer page, presumably as part of its damage control efforts. I quoted from it. Did anyone save a copy?

EDITED TO ADD: Both the Wayback Machine and Brian Krebs have saved the SolarWinds customer page.


Kevin RuddBBC NewsHour: Australia-China Tariffs

16 DECEMBER 2020

 Topic: Australian WTO action on barley

The post BBC NewsHour: Australia-China Tariffs appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: All the Angles

Web frameworks are a double edged sword. They are, as a rule, bloated, complicated, opinionated and powerful. You can get a lot done, so long as you stick to the framework's "happy path", and while you can wander off and probably make it work, there be dragons. You also run into a lot of developers who, instead of learning the underlying principles, just learn the framework. This means they might not understand broader web development, but can do a lot with Angular.

And then you might have developers who don't understand broader web development or the framework they're using.

Dora has someone on their team who meets those criteria.

The first sign there was a problem was this line in a view object:

<a href class="btn" ng-click="doSomething()" ng-disabled="$ctrl.numFormErrors > 0">save form</a>

First off, ng-disabled adds the disabled attribute to the DOM element, which doesn't do anything to anchor tags. The goal here is to disable the "save" button if there are validation errors, and already the goal has been missed. However, that's not the weirdest part of this. Angular provides loads of helper variables and functions, including a form.$invalid which helpfully tells you if there are validation errors. So where is $ctrl.numFormErrors coming from?

$scope.$watch("form.$error", function (errors) {
    $ctrl.numFormErrors = 0;
    $ctrl.fieldsWithErrors = [];
    _.forEach(errors, function (errs) {
        for (var i = 0; i < errs.length; i++) {
            if ($ctrl.fieldsWithErrors.indexOf(errs[i].$name) < 0) {
                $ctrl.fieldsWithErrors.push(errs[i].$name);
                $ctrl.numFormErrors++;
            }
        }
    });
}, true);

Oh, that's simple enough. So much clearer than using the built in form.$invalid.

If you're not "up" on Angular, and without diving too deep on the mechanics, $scope.$watch registers a callback: every time form.$error changes, we invoke this function. In this callback, we clear out our $ctrl.numFormErrors and $ctrl.fieldsWithErrors. The keys of errors are validation failures, like maxlength and pattern, so we use the lodash library to forEach through each of those keys.

The values are arrays of which fields have the given error, so we for loop across errs (I guess we didn't want to use lodash again?). If we haven't already tracked an error for the field with this $name, we add it to our array and increment the numFormErrors field.

Now, every time the user edits the form, we'll have a list of exactly which fields are invalid, and an exact count about how many there are. That's not something Angular makes obvious, so we've accomplished something, right?

Well, the only problem is that this code never actually uses $ctrl.fieldsWithErrors and it only ever checks whether $ctrl.numFormErrors > 0, so no: we didn't need to do any of this.

Dora threw out the $watch and just replaced the anchor with a button done the "Angular" way:

<button class="btn" ng-click="doSomething()" ng-disabled="form.$invalid">save form</button>
[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityMalicious Domain in SolarWinds Hack Turned into ‘Killswitch’

A key malicious domain name used to control potentially thousands of computer systems compromised via the months-long breach at network monitoring software vendor SolarWinds was commandeered by security experts and used as a “killswitch” designed to turn the sprawling cybercrime operation against itself, KrebsOnSecurity has learned.

Austin, Texas-based SolarWinds disclosed this week that a compromise of its software update servers earlier this year may have resulted in malicious code being pushed to nearly 18,000 customers of its Orion platform. Many U.S. federal agencies and Fortune 500 firms use(d) Orion to monitor the health of their IT networks.

On Dec. 13, cyber incident response firm FireEye published a detailed writeup on the malware infrastructure used in the SolarWinds compromise, presenting evidence that the Orion software was first compromised back in March 2020. FireEye said hacked networks were seen communicating with a malicious domain name — avsvmcloud[.]com — one of several domains the attackers had set up to control affected systems.

As first reported here on Tuesday, there were signs over the past few days that control over the domain had been transferred to Microsoft. Asked about the changeover, Microsoft referred questions to FireEye and to GoDaddy, the current domain name registrar for the malicious site.

Today, FireEye responded that the domain seizure was part of a collaborative effort to prevent networks that may have been affected by the compromised SolarWinds software update from communicating with the attackers. What’s more, the company said the domain was reconfigured to act as a “killswitch” that would prevent the malware from continuing to operate in some circumstances.

“SUNBURST is the malware that was distributed through SolarWinds software,” FireEye said in a statement shared with KrebsOnSecurity. “As part of FireEye’s analysis of SUNBURST, we identified a killswitch that would prevent SUNBURST from continuing to operate.”

The statement continues:

“Depending on the IP address returned when the malware resolves avsvmcloud[.]com, under certain conditions, the malware would terminate itself and prevent further execution. FireEye collaborated with GoDaddy and Microsoft to deactivate SUNBURST infections.”

“This killswitch will affect new and previous SUNBURST infections by disabling SUNBURST deployments that are still beaconing to avsvmcloud[.]com. However, in the intrusions FireEye has seen, this actor moved quickly to establish additional persistent mechanisms to access victim networks beyond the SUNBURST backdoor.

This killswitch will not remove the actor from victim networks where they have established other backdoors. However, it will make it more difficult for the actor to leverage the previously distributed versions of SUNBURST.”

It is likely that given their visibility into and control over the malicious domain, Microsoft, FireEye, GoDaddy and others now have a decent idea which companies may still be struggling with SUNBURST infections.

The killswitch revelations came as security researchers said they’d made progress in decoding SUNBURST’s obfuscated communications methods. Chinese cybersecurity firm RedDrip Team published their findings on Github, saying its decoder tool had identified nearly a hundred suspected victims of the SolarWinds/Orion breach, including universities, governments and high tech companies.

Meanwhile, the potential legal fallout for SolarWinds in the wake of this breach continues to worsen. The Washington Post reported Tuesday that top investors in SolarWinds sold millions of dollars in stock in the days before the intrusion was revealed. SolarWinds’s stock price has fallen more than 20 percent in the past few days. The Post cited former enforcement officials at the U.S. Securities and Exchange Commission (SEC) saying the sales were likely to prompt an insider trading investigation.

Kevin RuddTimes Radio: Britain, Australia and Europe

17 DECEMBER 2020

Topics: Brexit, China, trade, Covid, UK Labour

The post Times Radio: Britain, Australia and Europe appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Stringing Your Date Along

One of the best parts of doing software development is that you're always learning something new. Like, for example, I thought I'd seen every iteration on bad date handling code. But today, I learned something new.

Katharine picked up a pile of tickets, all related to errors with date handling. For months, the code had been running just fine, but in November there was an OS upgrade. "Ever since," the users complained, "it's consistently off by a whole month!" This was a bit of a puzzle, as there's nothing in an OS upgrade that should cause date strings to be consistently off by a whole month. Clearly, there must be a bug, but why did it only now start happening? Katharine pulled up the C code, and checked.

static const char *month_name[] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul" "Aug", "Sep", "Oct", "Nov", "Dec" };

It took a few passes before Katharine spotted the problem. Once she did, though, it was perfectly clear that the OS upgrade had nothing to do with it, and the userbase clearly had not been checking the output of the program since July.

If you haven't spotted it yet, note the missing "," after "Jul". In C, adjacent string literals are concatenated, so when a line ends with a string and the next line starts with a string they merge into one, and this calendar goes from "Jun" straight to "JulAug", making the eighth month of the year "Sep".

Even after making the fix, several of the users were happy that "They fixed whatever the OS upgrade broke," but wished "they didn't change things that were working just fine!"

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Krebs on SecuritySolarWinds Hack Could Affect 18K Customers

The still-unfolding breach at network management software firm SolarWinds may have resulted in malicious code being pushed to nearly 18,000 customers, the company said in a legal filing on Monday. Meanwhile, Microsoft should soon have some idea which and how many SolarWinds customers were affected, as it recently took possession of a key domain name used by the intruders to control infected systems.

On Dec. 13, SolarWinds acknowledged that hackers had inserted malware into a service that provided software updates for its Orion platform, a suite of products broadly used across the U.S. federal government and Fortune 500 firms to monitor the health of their IT networks.

In a Dec. 14 filing with the U.S. Securities and Exchange Commission (SEC), SolarWinds said roughly 33,000 of its more than 300,000 customers were Orion customers, and that fewer than 18,000 customers may have had an installation of the Orion product that contained the malicious code. SolarWinds said the intrusion also compromised its Microsoft Office 365 accounts.

The initial breach disclosure from SolarWinds came five days after cybersecurity incident response firm FireEye announced it had suffered an intrusion that resulted in the theft of some 300 proprietary software tools the company provides to clients to help secure their IT operations.

On Dec. 13, FireEye published a detailed writeup on the malware infrastructure used in the SolarWinds compromise, presenting evidence that the Orion software was first compromised back in March 2020. FireEye didn’t explicitly say its own intrusion was the result of the SolarWinds hack, but the company confirmed as much to KrebsOnSecurity earlier today.

Also on Dec. 13, news broke that the SolarWinds hack resulted in attackers reading the email communications at the U.S. Treasury and Commerce departments.

On Dec. 14, Reuters reported the SolarWinds intrusion also had been used to infiltrate computer networks at the U.S. Department of Homeland Security (DHS). That disclosure came less than 24 hours after DHS’s Cybersecurity and Infrastructure Security Agency (CISA) took the unusual step of issuing an emergency directive ordering all federal agencies to immediately disconnect the affected Orion products from their networks.


Security experts have been speculating as to the extent of the damage from the SolarWinds hack, combing through details in the FireEye analysis and elsewhere for clues about how many other organizations may have been hit.

And it seems that Microsoft may now be in perhaps the best position to take stock of the carnage. That’s because sometime on Dec. 14, the software giant took control over a key domain name — avsvmcloud[.]com — that was used by the SolarWinds hackers to communicate with systems compromised by the backdoored Orion product updates.

Armed with that access, Microsoft should be able to tell which organizations have IT systems that are still trying to ping the malicious domain. However, because many Internet service providers and affected companies are already blocking systems from accessing that malicious control domain or have disconnected the vulnerable Orion services, Microsoft’s visibility may be somewhat limited.

Microsoft has a long history of working with federal investigators and the U.S. courts to seize control over domains involved in global malware menaces, particularly when those sites are being used primarily to attack Microsoft Windows customers.

Microsoft dodged direct questions about its visibility into the malware control domain, suggesting those queries would be better put to FireEye or GoDaddy (the current domain registrar for the malware control server). But in a response on Twitter, Microsoft spokesperson Jeff Jones seemed to confirm that control of the malicious domain had changed hands.

“We worked closely with FireEye, Microsoft and others to help keep the internet safe and secure,” GoDaddy said in a written statement. “Due to an ongoing investigation and our customer privacy policy, we can’t comment further at this time.”

FireEye declined to answer questions about exactly when it learned of its own intrusion via the Orion compromise, or approximately when attackers first started offloading sensitive tools from FireEye’s network. But the question is an interesting one because its answer may speak to the motivations and priorities of the hackers.

Based on the timeline known so far, the perpetrators of this elaborate hack would have had a fairly good idea back in March which of SolarWinds’ 18,000 Orion customers were worth targeting, and perhaps even in what order.

Alan Paller, director of research for the SANS Institute, a security education and training company based in Maryland, said the attackers likely chose to prioritize their targets based on some calculation of risk versus reward.

Paller said the bad guys probably sought to balance the perceived strategic value of compromising each target with the relative likelihood that exploiting them might result in the entire operation being found out and dismantled.

“The way this probably played out is the guy running the cybercrime team asked his people to build a spreadsheet where they ranked targets by the value of what they could get from each victim,” Paller said. “And then next to that they likely put a score for how good the malware hunters are at the targets, and said let’s first go after the highest priority ones that have a hunter score of less than a certain amount.”

The breach at SolarWinds could well turn into an existential event for the company, depending on how customers react and how SolarWinds is able to weather the lawsuits that will almost certainly ensue.

“The lawsuits are coming, and I hope they have a good general counsel,” said James Lewis, senior vice president at the Center for Strategic and International Studies. “Now that the government is telling people to turn off [the SolarWinds] software, the question is will anyone turn it back on?”

According to its SEC filing, total revenue from the Orion products across all customers — including those who may have had an installation of the Orion products that contained the malicious update — was approximately $343 million, or roughly 45 percent of the firm’s total revenue. SolarWinds’ stock price has fallen 25 percent since news of the breach first broke.

Some of the legal and regulatory fallout may hinge on what SolarWinds knew or should have known about the incident, when, and how it responded. For example, Vinoth Kumar, a cybersecurity “bug hunter” who has earned cash bounties and recognition from multiple companies for reporting security flaws in their products and services, posted on Twitter that he notified SolarWinds in November 2019 that the company’s software download website was protected by a simple password that was published in the clear on SolarWinds’ code repository at Github.

Andrew Morris, founder of the security firm GreyNoise Intelligence, said that as of Tuesday evening SolarWinds still hadn’t removed the compromised Orion software updates from its distribution server.

Another open question is how or whether the incoming U.S. Congress and presidential administration will react to this apparently broad cybersecurity event. CSIS’s Lewis says he doubts lawmakers will be able to agree on any legislative response, but he said it’s likely the Biden administration will do something.

“It will be a good new focus for DHS, and the administration can issue an executive order that says federal agencies with regulatory authority need to manage these things better,” Lewis said. “But whoever did this couldn’t have picked a better time to cause a problem, because their timing almost guarantees a fumbled U.S. response.”

Cryptogram How the SolarWinds Hackers Bypassed Duo’s Multi-Factor Authentication

This is interesting:

Toward the end of the second incident that Volexity worked involving Dark Halo, the actor was observed accessing the e-mail account of a user via OWA. This was unexpected for a few reasons, not least of which was the targeted mailbox was protected by MFA. Logs from the Exchange server showed that the attacker provided username and password authentication like normal but were not challenged for a second factor through Duo. The logs from the Duo authentication server further showed that no attempts had been made to log into the account in question. Volexity was able to confirm that session hijacking was not involved and, through a memory dump of the OWA server, could also confirm that the attacker had presented a cookie tied to a Duo MFA session named duo-sid.

Volexity’s investigation into this incident determined the attacker had accessed the Duo integration secret key (akey) from the OWA server. This key then allowed the attacker to derive a pre-computed value to be set in the duo-sid cookie. After successful password authentication, the server evaluated the duo-sid cookie and determined it to be valid. This allowed the attacker with knowledge of a user account and password to then completely bypass the MFA set on the account. It should be noted this is not a vulnerability with the MFA provider and underscores the need to ensure that all secrets associated with key integrations, such as those with an MFA provider, should be changed following a breach.

Again, this is not a Duo vulnerability. From ArsTechnica:

While the MFA provider in this case was Duo, it just as easily could have involved any of its competitors. MFA threat modeling generally doesn’t include a complete system compromise of an OWA server. The level of access the hacker achieved was enough to neuter just about any defense.

Cryptogram Another Massive Russian Hack of US Government Networks

The press is reporting a massive hack of US government networks by sophisticated Russian hackers.

Officials said a hunt was on to determine if other parts of the government had been affected by what looked to be one of the most sophisticated, and perhaps among the largest, attacks on federal systems in the past five years. Several said national security-related agencies were also targeted, though it was not clear whether the systems contained highly classified material.


The motive for the attack on the agency and the Treasury Department remains elusive, two people familiar with the matter said. One government official said it was too soon to tell how damaging the attacks were and how much material was lost, but according to several corporate officials, the attacks had been underway as early as this spring, meaning they continued undetected through months of the pandemic and the election season.

The attack vector seems to be a malicious update in SolarWinds’ “Orion” IT monitoring platform, which is widely used in the US government (and elsewhere).

SolarWinds’ comprehensive products and services are used by more than 300,000 customers worldwide, including military, Fortune 500 companies, government agencies, and education institutions. Our customer list includes:

  • More than 425 of the US Fortune 500
  • All ten of the top ten US telecommunications companies
  • All five branches of the US Military
  • The US Pentagon, State Department, NASA, NSA, Postal Service, NOAA, Department of Justice, and the Office of the President of the United States
  • All five of the top five US accounting firms
  • Hundreds of universities and colleges worldwide

I’m sure more details will become public over the next several weeks.

EDITED TO ADD (12/15): More news.

Kevin RuddBBC World: Australia-China Relations

15 DECEMBER 2020

 Topic: Chinese trade action against Australian coal

The post BBC World: Australia-China Relations appeared first on Kevin Rudd.


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 26)

Here’s part twenty-six of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


LongNow“Lockdown Gardening” Is The New Archaeological Frontier in Britain

Few things inspire someone to take a longer view on history than the possibility of treasure in their own backyard. With people taking to their gardens under pandemic lockdown came more than 47,000 reported archaeological finds in England and Wales.

Meanwhile, the British government just announced their plans to broaden what counts as “treasure” under law, expanding the definition to include items such as Bronze Age axes, Iron Age cauldrons, and medieval weapons and jewelry. Their goal: to keep priceless history out of private collections.

But giving museums dibs on historical artifacts does not seem to diminish their market value or their appeal to landowners and prospectors. The Earth’s surface, now a layered document of not just prehistoric but modern human activity, continues to reveal its secrets under the combined exfoliating power of human curiosity and opportunism. “Lockdown gardening” and the varied other forms of digging done by idle hands in 02020 contribute to the Golden Age of archaeology and paleontology we live through as a consequence of rapid global development and our sheer busybodyness:

In 2017, 1,267 pieces went through the process in which a [UK] committee determines whether an item should be considered a treasure, up from 79 pieces in 1997.


Cory DoctorowDaddy-Daughter Podcast, 2020 Edition

When my daughter Poesy was four, her nursery school let us know that they were shutting down a day before my wife’s office closed for the holidays, leaving us with a childcare problem. Since I worked for myself, I took the day off and brought her to my office, where we recorded a short podcast, singing Rudolph the Red-Nosed Reindeer (a frankly amazing rendition!).

We’ve done it every year since, except for 2016 when I had mic problems. Now she’s 12, and we’ve just recorded our eighth installment, and as always, it was a highlight of my holiday season. She says that singing is way too cringe, so instead she’s got a ten-minute tutorial on how to ride a horse.

Here’s this year’s recording, and here are the years gone by: