Planet Russell


Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In May, we again put aside 2100 EUR to fund Debian projects. No proposals for new projects were received, so please do not hesitate to submit a proposal if there is a project that could benefit from the funding!

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In May, 12 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned and 12h from April), thus carrying over 19h to June.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 16h (out of 13.5h assigned plus 4.5h from April), thus carrying over 2h to June.
  • Chris Lamb did 18h (out of 18h assigned).
  • Holger Levsen’s work was coordinating/managing the LTS team; he did 5.5h and gave back 6.5h to the pool.
  • Markus Koschany did 15h (out of 29.75h assigned and 15h from April), thus carrying over 29.75h to June.
  • Ola Lundqvist did 12h (out of 12h assigned and 4.5h from April), thus carrying over 4.5h to June.
  • Roberto C. Sánchez did 7.5h (out of 27.5h assigned and 27h from April), and gave back 47h to the pool.
  • Sylvain Beucler did 29.75h (out of 29.75h assigned).
  • Thorsten Alteholz did 29.75h (out of 29.75h assigned).
  • Utkarsh Gupta did 29.75h (out of 29.75h assigned).

Evolution of the situation

In May we released 33 DLAs and mostly skipped our public IRC meeting at the end of the month. In June we’ll have another team meeting using video, as outlined on our LTS meeting page.
Also, two months ago we announced that Holger would step back from his coordinator role and today we are announcing that he is back for the time being, until a new coordinator is found.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Worse Than Failure: CodeSOD: A Date With Yourself

Once upon a time, someone wanted to add a banner to a web page. They also wanted the banner to only appear after a certain date. Jack stumbled across their implementation when trying to understand why the banner would vanish for two weeks at the start of every month.

// get date
var MyDate = new Date();
var MyDateString;
MyDate.setDate(MyDate.getDate());
MyDateString = ('0' + MyDate.getDate()).slice(-2)
             + '-' + ('0' + (MyDate.getMonth()+1)).slice(-2)
             + '-' + MyDate.getFullYear();
if (MyDateString > '13-04-2014') {
    // do stuff...
}

So, let's just start with the bad date handling, complete with hacked-together string padding. We convert the actual date to a date string, and then compare against another date string instead of comparing actual dates. Yes, very bad, very useless, and clearly the source of the bug that got Jack's attention: since these are string comparisons, '01-05-2021' is "before" '13-04-2014'.
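The trap is easy to reproduce outside JavaScript too; here is a quick Python sketch of the same failure, using the post's own dates:

```python
from datetime import date

# Dates formatted DD-MM-YYYY, like the banner code builds them
earlier = date(2014, 4, 13).strftime("%d-%m-%Y")  # '13-04-2014'
later = date(2021, 5, 1).strftime("%d-%m-%Y")     # '01-05-2021'

# Chronologically, May 2021 is long after April 2014...
assert date(2021, 5, 1) > date(2014, 4, 13)

# ...but as strings the comparison stops at the first differing
# character, and '0' < '1', so the "later" date sorts first.
assert later < earlier

# Year-first (ISO 8601) strings compare correctly, which is why
# YYYY-MM-DD is the only string format that is safe to compare.
assert "2021-05-01" > "2014-04-13"
```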

But I had to skip over something important to get there.

MyDate.setDate(MyDate.getDate());

I love that line. It's useless. It has nothing to do with what the code is actually trying to do. It's the sort of thing that even a developer who doesn't understand anything would have to read and wonder why it's there.

But it's there. It isn't going anywhere.



Cryptogram: Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet Debian: Ben Hutchings: Debian LTS work, May 2021

In May I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 4.5 hours from earlier months. I worked 16 hours and will carry over the remainder.

I finished reviewing the futex code in the PREEMPT_RT patchset for Linux 4.9, and identified several places where it had been mis-merged with the recent futex security fixes. I sent a patch for these upstream, which was accepted and applied in v4.9.268-rt180.

I have continued updating the Linux 4.9 package to later upstream stable versions, and backported some missing security fixes. I have still not made a new upload, but intend to do so this week.

Planet Debian: Jonathan Dowland: Opinionated IkiWiki v1

It's been more than a year since I wrote about Opinionated IkiWiki, a pre-configured, containerized deployment of Ikiwiki with opinions. My intention was to make something that is easy to get up and running if you are more experienced with containers than with IkiWiki.

I haven't yet switched to Opinionated IkiWiki for this site, but that was my goal, and I think it's mature enough now that I can migrate over at some point, so it seems a good time to call it Version 1.0. I have been using it for my own private PIM systems for a while now.

You can pull built images from here:

The source lives here:

A description of some of the changes made to the IkiWiki version lives here:

Cryptogram: The Supreme Court Narrowed the CFAA

In a 6-3 ruling, the Supreme Court just narrowed the scope of the Computer Fraud and Abuse Act:

In a ruling delivered today, the court sided with Van Buren and overturned his 18-month conviction.

In a 37-page opinion written and delivered by Justice Amy Coney Barrett, the court explained that the “exceeds authorized access” language was, indeed, too broad.

Justice Barrett said the clause was effectively making criminals of most US citizens who ever used a work resource to perform unauthorized actions, such as updating a dating profile, checking sports scores, or paying bills at work.

What today’s ruling means is that the CFAA cannot be used to prosecute rogue employees who have legitimate access to work-related resources, which will need to be prosecuted under different charges.

The ruling does not apply to former employees accessing their old work systems because their access has been revoked and they’re not “authorized” to access those systems anymore.


It’s a good ruling, and one that will benefit security researchers. But the confusing part is footnote 8:

For present purposes, we need not address whether this inquiry turns only on technological (or “code-based”) limitations on access, or instead also looks to limits contained in contracts or policies.

It seems to me that this is exactly what the ruling does address. The court overturned the conviction because the defendant was not limited by technology, but only by policies. So that footnote doesn’t make any sense.

I have written about this general issue before, in the context of adversarial machine learning research.

Planet Debian: Enrico Zini: Pipelining

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Running actions on a server is nice, but a network round trip for each action is not very efficient. If I need to run a linear sequence of actions, I can stream them all to the server, and then read replies streamed from the server as they get executed.

This technique is called pipelining and one can see it used, for example, in Redis, or Mitogen.
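Transilience's transport aside, the shape of the trick can be shown with a toy in-process pipeline: the client streams every action into a queue before reading any reply (the queue setup and the string "protocol" are made up for the sketch):

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Stand-in for the remote side: execute actions in order, stream results back."""
    while True:
        action = inbox.get()
        if action is None:  # end-of-stream marker
            break
        outbox.put(f"done: {action}")

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
remote = threading.Thread(target=worker, args=(inbox, outbox))
remote.start()

# Pipelining: send every action without waiting for individual replies...
actions = ["install fail2ban", "write jail.local", "restart fail2ban"]
for action in actions:
    inbox.put(action)
inbox.put(None)

# ...then read the replies back in order, paying one round trip overall.
results = [outbox.get() for _ in actions]
remote.join()
```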


Ansible has the concept of "Roles" as a series of related tasks: I'll play with that. Here's an example role to install and setup fail2ban:

from transilience import role
from transilience.actions import builtin

class Role(role.Role):
    def main(self):
        self.add(builtin.apt(
            name=["fail2ban"],
            state="present",
        ), name="install fail2ban")

        self.add(builtin.copy(
            dest="/etc/fail2ban/jail.local",
            # Jail names below are illustrative
            content="""
[postfix]
enabled = true

[sshd]
enabled = true
""",
        ), name="configure fail2ban")

I prototyped roles as classes, with methods that push actions down the pipeline. If an action fails, all further actions for the same role won't be executed, and will be marked as skipped.

Since skipping is applied per-role, it means that I can blissfully stream actions for multiple roles to the server down the same pipe, and errors in one role will stop executing that role and not others. Potentially I can get multiple roles going with a single network round-trip:


import sys
from transilience.system import Mitogen
from transilience.runner import Runner

def main():
    system = Mitogen("my server", "ssh", hostname="", username="root")

    runner = Runner(system)

    # Send roles to the server
    runner.add_role("fail2ban")

    # Run until all roles are done
    runner.main()

if __name__ == "__main__":
    sys.exit(main())
That looks like a playbook, using Python as glue rather than YAML.
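The per-role skip rule described above can be sketched without any of the transport machinery (role and action names here are illustrative):

```python
def run_pipeline(stream):
    """Run (role, name, fn) triples in order. After the first failure in a
    role, mark that role's remaining actions as skipped; other roles
    interleaved on the same stream keep executing."""
    failed_roles = set()
    log = []
    for role, name, fn in stream:
        if role in failed_roles:
            log.append((role, name, "skipped"))
            continue
        try:
            fn()
            log.append((role, name, "ok"))
        except Exception:
            failed_roles.add(role)
            log.append((role, name, "failed"))
    return log

def broken():
    raise RuntimeError("apt: package not found")

log = run_pipeline([
    ("fail2ban", "install", lambda: None),
    ("prosody", "install", broken),
    ("fail2ban", "configure", lambda: None),  # unaffected by prosody's failure
    ("prosody", "configure", lambda: None),   # skipped: its role already failed
])
```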

Decision making in roles

Besides filing a series of actions, a role may need to take decisions based on the results of previous actions, or on facts discovered from the server. In that case, we need to wait until the results we need come back from the server, and then decide if we're done or if we want to send more actions down the pipe.

Here's an example role that installs and configures Prosody:

from transilience import actions, role
from transilience.actions import builtin
from .handlers import RestartProsody

class Role(role.Role):
    """
    Set up prosody XMPP server
    """
    def main(self):
        self.add(actions.facts.Platform(), then=self.have_facts)

        self.add(builtin.apt(
            name=["certbot", "python-certbot-apache"],
            state="present",
        ), name="install support packages")

        self.add(builtin.apt(
            name=["prosody", "prosody-modules", "lua-sec", "lua-event", "lua-dbi-sqlite3"],
            state="present",
        ), name="install prosody packages")

    def have_facts(self, facts):
        facts = facts.facts  # Malkovich Malkovich Malkovich!

        domain = facts["domain"]
        ctx = {
            "ansible_domain": domain,
        }

        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"chat.{domain}", "-n", "--apache"],
        ), name="obtain chat certificate")

        with self.notify(RestartProsody):
            # Destination paths below are typical prosody defaults
            self.add(builtin.copy(
                dest="/etc/prosody/prosody.cfg.lua",
                content=self.template_engine.render_file("roles/prosody/templates/prosody.cfg.lua", ctx),
            ), name="write prosody configuration")

            self.add(builtin.copy(
                dest="/etc/prosody/firewall.pfw",
                content=self.template_engine.render_file("roles/prosody/templates/firewall.pfw", ctx),
            ), name="write prosody firewall")

    # ...

This files some general actions down the pipe, with a hook that says: when the results of this action come back, run self.have_facts().

At that point, the role can use the results to build certbot command lines, render prosody's configuration from Jinja2 templates, and file further actions down the pipe.

Note that this way, while the server is potentially still busy installing prosody, we're already streaming prosody's configuration to it.

If anything goes wrong with the installation of prosody's packages, the role will be marked as failed, and all further actions of the same role, even those filed by have_facts(), will be skipped.

Notify and handlers

In the previous example self.notify() also appears: that's my attempt to model the equivalent of Ansible's handlers. If any of the actions inside the with block produce changes, then the RestartProsody role will be executed, potentially filing more actions at the end of the playbook.

The runner will take care of collecting all the triggered role classes in a set, which discards duplicates, and then running the main() method of all resulting roles, which will cause more actions to be filed down the pipe.
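Since Python classes are hashable, the dedup is just set membership; a minimal sketch with hypothetical handler roles:

```python
class Handler:
    """Base for handler roles; records which main() methods ran."""
    ran = []

    @classmethod
    def main(cls):
        cls.ran.append(cls.__name__)

class RestartProsody(Handler):
    pass

class ReloadApache(Handler):
    pass

# Three changed actions notify handlers, two of them the same class...
triggered = set()
for handler_cls in (RestartProsody, ReloadApache, RestartProsody):
    triggered.add(handler_cls)

# ...but each handler's main() runs exactly once.
for handler_cls in triggered:
    handler_cls.main()
```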

Action conditions

Sometimes some actions are only meaningful as consequences of other actions. Let's take, for example, enabling buster-backports as an extra apt source:

        a = self.add(builtin.copy(
            dest="/etc/apt/sources.list.d/debian-buster-backports.list",
            content="deb [arch=amd64] http://deb.debian.org/debian buster-backports main contrib",
        ), name="enable backports")

        self.add(builtin.apt(
            update_cache=True,
        ), name="update after enabling backports",
           # Run only if the previous copy changed anything
           when={a: ResultState.CHANGED},
        )
Here we want to update Apt's cache, which is a slow operation, only after we actually write /etc/apt/sources.list.d/debian-buster-backports.list. If the file was already there from a previous run, we can skip downloading the new package lists.

The when= argument adds an annotation to the action that is sent down the pipeline, saying that it should only be run if the state of a previous action matches the given one.

In this case, when on the remote it's the turn of "update after enabling backports", it gets skipped unless the state of the previous "enable backports" action is CHANGED.
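The remote-side check boils down to comparing recorded result states; a sketch with an illustrative ResultState enum:

```python
import enum

class ResultState(enum.Enum):
    CHANGED = "changed"
    NOOP = "noop"
    SKIPPED = "skipped"

def should_run(when, states):
    """when maps an earlier action's name to the state it must be in."""
    return all(states.get(action) == state for action, state in when.items())

when = {"enable backports": ResultState.CHANGED}

# File already existed: the copy was a no-op, so the apt update is skipped.
assert not should_run(when, {"enable backports": ResultState.NOOP})

# File was written this run: the cache update goes ahead.
assert should_run(when, {"enable backports": ResultState.CHANGED})
```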

Effects of pipelining

I ported enough of Ansible's modules to be able to run the provisioning scripts of my VPS entirely via Transilience.

This is the playbook run as plain Ansible:

$ time ansible-playbook vps.yaml
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    2m10.072s
user    0m33.149s
sys 0m10.379s

This is the same playbook run with Ansible sped up via the Mitogen backend, which makes Ansible more bearable:

$ export ANSIBLE_STRATEGY=mitogen_linear
$ time ansible-playbook vps.yaml
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    0m24.428s
user    0m8.479s
sys 0m1.894s

This is the same playbook ported to Transilience:

$ time ./provision
real    0m2.585s
user    0m0.659s
sys 0m0.034s

Doing nothing went from 2 minutes down to 3 seconds!

That's the kind of running time that finally makes me comfortable with maintaining my VPS by editing the playbook only, and never logging in to mess with the system configuration by hand!

Next steps

I'm quite happy with what I have: I can now maintain my VPS with a simple script with quick iterative cycles.

I might use it to develop new playbooks, and port them to ansible only when they're tested and need to be shared with infrastructure that needs to rely on something more solid and battle tested than a prototype provisioning system.

I might also keep working on it as I have more interesting ideas that I'd like to try. I feel like Ansible reached some architectural limits that are hard to overcome without a major redesign, and are in many way hardcoded in its playbook configuration. It's nice to be able to try out new designs without that baggage.

I'd love it if even just the library of Transilience actions could grow, and gain widespread use. Ansible modules standardized a set of management operations, that I think became the way people think about system management, and should really be broadly available outside of Ansible.

If you are interested in playing with Transilience, such as:

  • polishing the packaging, adding a setup.py, publishing to PyPI, packaging in Debian
  • adding example playbooks
  • porting more Ansible modules to Transilience actions
  • improving the command line interface
  • test other ways to feed actions to pipelines
  • test other pipeline primitives
  • add backends besides Local and Mitogen
  • prototype a parser to turn a subset of YAML playbook syntax into Transilience actions
  • adopt it into your multinational organization infrastructure to speed up provisioning times by orders of magnitude at the cost of the development time that it takes to turn this prototype into something solid and road tested
  • create a startup and get millions in venture capital to disrupt the provisioning ecosystem

do get in touch or send a pull request! :)

Cryptogram: TikTok Can Now Collect Biometric Data

This is probably worth paying attention to:

A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.

Planet Debian: Enrico Zini: Use ansible actions in a script

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

I like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

This doesn't look easy to do with Ansible code as it is. Also, the code quality of various Ansible modules doesn't fit something I'd want in a standard library of cross-platform provisioning functions.

Modeling Actions

I want to keep the declarative, idempotent aspect of describing actions on a system. A good place to start could be a hierarchy of dataclasses that hold the same parameters as ansible modules, plus a run() method that performs the action:

import uuid
from dataclasses import dataclass, field

# Result, ResultState and transilience.system are provided elsewhere in Transilience

@dataclass
class Action:
    """
    Base class for all action implementations.

    An Action is the equivalent of an ansible module: a declarative
    representation of an idempotent operation on a system.

    An Action can be run immediately, or serialized, sent to a remote system,
    run, and sent back with its results.
    """
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    result: Result = field(default_factory=Result)

    def summary(self):
        """Return a short text description of this action"""
        return self.__class__.__name__

    def run(self, system: transilience.system.System):
        """Perform the action"""
        self.result.state = ResultState.NOOP

I like that Ansible tasks have names, and I hate having to give names to trivial tasks like "Create directory /foo/bar", so I added a summary() method so that trivial tasks like that can take care of naming themselves.

Dataclasses make it possible to introspect fields and annotate them with extra metadata, and together with docstrings, I can make actions reasonably self-documenting.
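As a sketch of what that self-documentation can look like (field names and metadata keys here are hypothetical, not Transilience's actual schema):

```python
from dataclasses import dataclass, field, fields

@dataclass
class File:
    """Set attributes of a file, or create it."""
    path: str = field(default="", metadata={"doc": "Path of the file to manage"})
    state: str = field(default="file", metadata={"doc": "One of: file, touch, absent"})

def describe(action_cls) -> str:
    """Build help text from the docstring plus per-field metadata."""
    lines = [action_cls.__doc__.strip()]
    for f in fields(action_cls):
        lines.append(f"  {f.name}: {f.metadata.get('doc', '(undocumented)')}")
    return "\n".join(lines)

print(describe(File))
```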

I ported some of Ansible's modules over: see complete list in the git repository.

Running Actions in a script

With a bit of glue code I can now run Ansible-style functions from a plain Python script:


from transilience.runner import Script

script = Script()

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")

Running Actions remotely

Dataclasses come with an asdict() function that makes them trivially serializable. If their members stick to data types that can be serialized with Mitogen, and the run() implementation doesn't use non-pure, non-stdlib Python modules, then I can trivially run actions on all sorts of remote systems using Mitogen:


from transilience.runner import Script
from transilience.system import Mitogen

script = Script(system=Mitogen("my server", "ssh", hostname="", username="user"))

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")
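The serialization round trip can be sketched with a toy action and JSON standing in for Mitogen's own serializer:

```python
import json
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class Touch:
    """Toy action: touch a file at the given path."""
    path: str
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))

action = Touch(path="/tmp/test0")

# asdict() reduces the action to plain dicts and strings...
wire = json.dumps(asdict(action))

# ...so the remote end can rebuild an identical dataclass, run it,
# and ship it back the same way with its results filled in.
clone = Touch(**json.loads(wire))
assert clone == action
```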

How fast would that be, compared to Ansible?

$ time ansible-playbook test.yaml
real    0m15.232s
user    0m4.033s
sys 0m1.336s

$ time ./test_script

real    0m4.934s
user    0m0.547s
sys 0m0.049s

With a network round-trip for each single operation I'm already 3x faster than Ansible, and it can run on nspawn containers, too!

I always wanted to have a library of ansible modules usable in normal scripts, and I've always been angry with Ansible for not bundling their backend code in a generic library. Well, now there's the beginning of one!

Sweet! Next step, pipelining.

Planet Debian: Enrico Zini: My gripes with Ansible

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Musing about Ansible

I like infrastructure as code.

I like to be able to represent an entire system as text files in a git repository, and to be able to use that to recreate the system, from my Virtual Private Server, to my print server and my stereo, to build machines, to other kinds of systems I might end up setting up.

I like that the provisioning work I do on a machine can be self-documenting and replicable at will.

The good

For that I quite like Ansible, in principle: simple (in theory) YAML files describe a system in (reasonably) high-level steps, and it can be run on (almost) any machine that happens to have a simple Python interpreter installed.

I also like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

The bad

Unfortunately, Ansible is slow. Running the playbook on my VPS takes about 3 whole minutes even if I'm just changing a line in a configuration file.

This means that most of the time, instead of changing that line in the playbook and running it, to then figure out after 3 minutes that it was the wrong line, or I made a spelling mistake in the playbook, I end up logging into the server and editing in place.

That defeats the whole purpose, but that level of latency between iterations is just unacceptable to me.

The ugly

I also think that Ansible has outgrown its original design, and the supposedly declarative, idempotent YAML has become a full declarative scripting language in disguise, whose syntax is extremely awkward and verbose.

If I'm writing declarative descriptions, YAML is great. If I'm writing loops and conditionals, I want to write code, not templated YAML.

I also keep struggling trying to use Ansible to provision chroots and nspawn containers.

A personal experiment: Transilience

There's another thing I like in Ansible: it's written in Python, which is a language I'm comfortable with. Compared to other platforms, it's one that I'm more likely to be able to control beyond being a simple user.

What if I can port Ansible modules into a library of high-level provisioning functions, that I can just run via normal Python scripts?

What if I can find a way to execute those scripts remotely and not just locally?

I've started writing some prototype code, and the biggest problem is, of course, finding a name.

Ansible comes from Ursula K. Le Guin's Hainish Cycle novels, where it is a device that allows its users to communicate near-instantaneously over interstellar distances. Traveling, however, is still constrained by the speed of light.

Later in the same universe, the novels A Fisherman of the Inland Sea and The Shobies' Story, talk about experiments with instantaneous interstellar travel, as a science Ursula Le Guin called transilience:

Transilience: n. A leap across or from one thing to another [1913 Webster]

Transilience. I like everything about this name.

Now that the hardest problem is solved, the rest is just a simple matter of implementation details.

Planet Debian: François Marier: Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git:// source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common
</VirtualHost>

<VirtualHost *:443>

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/
    SSLCertificateKeyFile /etc/letsencrypt/live/

    Redirect permanent /
</VirtualHost>

<VirtualHost *:80>

    Redirect permanent /
</VirtualHost>

and the common config I put in /etc/fmarier-org/blog-common:


DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/francois.conf:

<IfModule mod_brotli.c>
    AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml text/css text/javascript application/javascript
    BrotliCompressionQuality 4
</IfModule>

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
</VirtualHost>

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri ; style-src 'self' 'unsafe-inline' ; img-src 'self' ; script-src https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:


Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using:

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for.

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.
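As a sketch of what rejecting unsigned pushes could look like (hypothetical; git exports GIT_PUSH_CERT to receive hooks only when the client pushed with --signed, and a real hook would also verify GIT_PUSH_CERT_STATUS is "G" for a good signature):

```shell
# Hypothetical check a pre-receive hook could perform; git sets
# GIT_PUSH_CERT only for pushes made with "git push --signed".
check_signed_push() {
    if [ -z "$GIT_PUSH_CERT" ]; then
        echo "rejected: push was not signed (use git push --signed)" >&2
        return 1
    fi
    return 0
}

unset GIT_PUSH_CERT
check_signed_push 2>/dev/null || echo "unsigned push rejected"
GIT_PUSH_CERT="(certificate)" check_signed_push && echo "signed push accepted"
```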

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does still rely on unsafe-inline for stylesheets. I tried to remove this, but the calls that would need to be allowed seemed to be located deep within jQuery, so I gave up. Patches for this would of course be very welcome.
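For reference, such a policy can be set from Apache's mod_headers; this is an illustrative example, not the site's exact header:

```apache
# Scripts are locked down, but stylesheets still need 'unsafe-inline'
# because of the styles jQuery injects (illustrative policy).
Header always set Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'"
```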

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.
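One possible workaround (an untested mod_rewrite sketch; /empty.atom is a hypothetical placeholder feed that would need to be created) would be to serve an empty feed instead of the 404:

```apache
# If a post has no comment feed yet, serve a placeholder empty feed
# instead of a 404 (hypothetical sketch).
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?posts/.+/comments\.atom$ /empty.atom [L]
```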

Worse Than FailureCodeSOD: Experience is Integral

Behind every code WTF is a process WTF. For example, Charles W was recently tasked with updating some file-handling code to match changes in the underlying file-format it parses. This is the C code which parses an integer:

if ((*p == '-' || ('0' <= *p && '9' >= *p)) && retCode == -256) {
    retCode = 0;
    p = _tcsrev(p);
    if (*p == ' ')
        p++;
    for (i = 0; '0' <= *p && '9' >= *p; i++) {
        retCode += (int)pow(10, (double)i) * ((int)*p - 0x30);
        p++;
    }
    if (*p == '-')
        retCode *= -1;
}

This code takes the basic, CS101 approach to parsing strings into integers, which is to go character by character and multiply by the appropriate power of ten. While no program should ever do this when there are perfectly fine built-ins, like wcstol, this isn't an utter disaster of a code block. That said, this is C and it's doing a bunch of pointer manipulation, so it's certainly not safe code. Malformed inputs could almost certainly ruin your day here. It's bad and dangerous code.

But the code isn't the WTF. Note how this is very much the approach a college student or novice programmer might take? Well, fifteen years ago, Charles's employer hired on a college freshman doing a "work study" program. The student was "very good at programming", and so someone told them to implement the file parsing code and then let them go to town. No one checked in beyond "make sure it compiles and runs".

The result is a gigantic blob of code that looks like it was written by a talented but inexperienced college student. Wheels are reinvented. The difficult solution is usually preferred to the simple and clear one. It swings wildly from "overly modularized" to "it's okay if functions are 500 lines long", and "everything is documented" to "I forgot how to put in comments today".

And this is the code that they've been shipping for fifteen years. Charles is among the first to touch it. Which, hey, good on that mystery college student for writing code that was at least reliable enough that nobody had to deal with it before now. Shame on the employer, however, who tried to get important work done on the cheap by hiring inexperienced labor and providing them no supervision.

This is a case where, sure, the code isn't good, but the WTF is how that code got written. Charles adds, "That student is apparently working as a programmer somewhere…", and I hope that along the way they've found employers that could provide them guidance instead of just heaving them into the deep end and getting lucky.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianSergio Durigan Junior: I am not on Freenode anymore

This is a quick public announcement to say that I am not on the Freenode IRC network anymore. My nickname (sergiodj), which was more than a decade old, has just been deleted (along with every other nickname and channel in that network) from their database today, 2021-06-14.

For your safety, you should assume that everybody you knew at Freenode is not there either, even if you see their nicknames online. Do not trust without verifying. In fact, I would strongly encourage that you do not join Freenode anymore: their new policies are absolutely questionable and their disregard for their users is blatant.

If you would like to chat with me, you can find me at OFTC (preferred) and Libera.

Cory DoctorowThe Rent’s Too Damned High

This week on my podcast, my latest Medium column, The Rent’s Too Damned High, about the long con of convincing Americans that they will grow prosperous through housing wealth, not labor rights.



Planet DebianVincent Fourmond: Solution for QSoas quiz #2: averaging several Y values for the same X value

This post describes two similar solutions to the Quiz #2, using the data files found there. The two solutions described here rely on split-on-values. The first solution is the one that came naturally to me, and is by far the most general and extensible, but the second one is shorter, and doesn't require external script files.

Solution #1

The key to both solutions is to separate the original data into a series of datasets that each contain data at a single fixed value of x (which corresponds here to a fixed pH), and then to process each dataset one by one to extract the average and standard deviation. This first step is done thus:
QSoas> load kcat-vs-ph.dat
QSoas> split-on-values pH x /flags=data
After these commands, the stack contains a series of datasets bearing the data flag, each containing a single column of data, as can be seen from the beginning of a show-stack command:
QSoas> k
Normal stack:
	 F  C	Rows	Segs	Name	
#0	(*) 1	43	1	'kcat-vs-ph_subset_22.dat'
#1	(*) 1	44	1	'kcat-vs-ph_subset_21.dat'
#2	(*) 1	43	1	'kcat-vs-ph_subset_20.dat'
Each of these datasets has a meta-data named pH whose value is the original x value from kcat-vs-ph.dat. Now, the idea is to run a stats command on the resulting datasets, extracting the average value of x and its standard deviation, together with the value of the pH meta-data. The most natural and general way to do this is to use run-for-datasets, with the following script file (named process-one.cmds):
stats /meta=pH /output=true /stats=x_average,x_stddev
So the command looks like:
QSoas> run-for-datasets process-one.cmds flagged:data
This command produces an output file containing, for each flagged dataset, a line with x_average, x_stddev, and pH. Then it is just a matter of loading the output file and shuffling the columns into the right order to get the data in the requested form. Overall, this looks like this:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
output result.dat /overwrite=true
run-for-datasets process-one.cmds flagged:data
l result.dat
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
The slight improvement over what is described above is the use of the output command to write the output to a dedicated file (here result.dat), instead of out.dat and ensuring it is overwritten, so that no data remains from previous runs.

Solution #2

The second solution is almost the same as the first one, with two improvements:
  • the stats command can work with datasets other than the current one, by supplying them to the /buffers= option, so that it is not necessary to use run-for-datasets;
  • the use of the output file can be replaced by the use of the accumulator.
This yields the following, smaller, solution:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
stats /meta=pH /accumulate=* /stats=x_average,x_stddev /buffers=flagged:data
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.


Planet DebianNorbert Preining: Future of Cinnamon in Debian

OK, this is not an easy post. I have been maintaining Cinnamon in Debian for quite some time, since around the time version 4 came out. The soon (hahaha) to be released Bullseye will carry the last release of the 4-track, but version 5 is already waiting. After Bullseye, the future of Cinnamon in Debian currently looks bleak.

Since my switch to KDE/Plasma, I haven't used Cinnamon in months. Only occasionally have I tested new releases, but I never gave them a real-world test. Having left Gnome3 for its complete lack of usability for pro-users, I escaped to Cinnamon and found a good home there for quite some time, using modern technology but keeping user interface changes conservative. For a long time I hadn't even contemplated using KDE, having been burned during the bad days of KDE3/4, when bloat-as-bloat-can-be was the best description.

What a revelation it was that KDE/Plasma was more lightweight, faster, responsive, integrated, customizable, all in all simply great. Since my switch to KDE/Plasma I don't think I have missed anything from the Gnome3 or Cinnamon world, not even for a second.

And that means I will most probably NOT package Cinnamon 5, nor do any real packaging work on Cinnamon for Debian in the future. Of course, I will try to keep maintaining the current set of packages for Bullseye, but for the next release, I think it is time that someone new steps in. Cinnamon packaging taught me a lot about how to deal with multiple related packages, which is of great use in the KDE packaging world.

If someone steps forward, I will surely be around for support and help, but as long as nobody takes the banner, it will mean the end of Cinnamon in Debian.

Please contact me if you are interested!

Planet DebianKentaro Hayashi: has moved to Team Infrastructure

Today, has moved to Team Infrastructure

So far, it was sponsored by FOSSHOST, which has provided us a VPS instance since January 2021. It was located at the OSU Open Source Lab. It worked pretty well. Thanks, FOSSHOST, for the sponsorship!

Now, it uses a VPS instance which is provided by Team Infrastructure (still not DSA-managed). It is hosted at Hetzner Cloud.

About

It is an experimental service to demonstrate how to improve the user experience of finding and fixing bugs related to Debian unstable, for making the "unstable life" comfortable.

Thanks to the Team for sponsoring!

Planet DebianJunichi Uekawa: Wrote a quick hack to open chroot in emacs tramp.

Wrote a quick hack to open a chroot in emacs tramp. I wrote a mode for cros_sdk and it was relatively simple, so I figured that chroot must be easier. I could write one in about 30 minutes. I need to mount proc and home inside the chroot to make it useful, but here goes: chroot-tramp.el.


Planet DebianMike Gabriel: New: The Debian BBB Packaging Team (and: Kurento Media Server goes Debian)

Today, Fre(i)e Software GmbH has been contracted for packaging Kurento Media Server for Debian. This packaging project will be funded by GUUG e.V. (the German Unix User Group e.V.). A big thanks to the people from GUUG e.V. for making this packaging project possible.

About Kurento Media Server

Kurento is an open source software project providing a platform suitable for creating modular applications with advanced real-time communication capabilities. To learn more about Kurento, please visit the Kurento project website:

Kurento is part of FIWARE. For further information on the relationship of FIWARE and Kurento check the Kurento FIWARE Catalog Entry. Kurento is also part of the NUBOMEDIA research initiative.

Kurento Media Server is a WebRTC-compatible server that processes audio and video streams, doing composable pipeline-based processing of media.

About BigBlueButton

As some of you may know, Kurento Media Server is one of the core components of the BigBlueButton software, an "Open Source Virtual Classroom Software".

The context of the KMS funding is, after several other steps, getting the complete software component stack of BigBlueButton (aka BBB) into Debian some day, so that we can provide BBB as native Debian packages, on Debian. (Currently, one needs to use an Ubuntu version that is always already a bit outdated.)

Due to this greater context, I just created the Debian BBB Packaging Team on

Outlook and Appreciation

The current project (uploading Kurento Media Server to Debian) will very likely be extended to one year of package maintenance for all Kurento Media Server components in Debian. Extending this maintenance funding to a second year has also been discussed and seems a possible option.

Probably most Debian Developer colleagues will agree with me when I say that Debian packaging is not a one-time shot that ends once the first uploads of software packages have landed and settled. Debian package maintenance is a long term responsibility and requires long term commitment. I am very glad that the people at GUUG e.V. are on the same page with me (with us) regarding this. This is much and dearly appreciated. Thank you!!!

What else?

Well, we have also talked about another BigBlueButton component that is not yet in Debian: FreeSwitch. But more of that, when time has come.

How to Join the Debian BBB Packaging Team?

Please ping me via IRC (sunweaver on OFTC IRC) or [matrix] (

How to Support the Debian BBB Packaging Team?

If you, your organization, your company, your municipality, your university, etc. feels like supporting the effort of packaging BigBlueButton for Debian, please get in touch with:

And yes, the company homepage is not online, yet, but it is in the makings...

Mike (aka sunweaver)

Planet DebianLisandro Damián Nicanor Pérez Meyer: Firsts steps into QML

After years of using and maintaining Qt there was a piece of the SDK that I never got to use as a developer: QML. Thanks to ICS I took the free (in the sense of cost) course QML Programming — Fundamentals and Beyond.

It consists of seven sessions, which can be easily done in a few days. I did them all in 4 days, but with enough time available you can do them even faster. Of course some previous knowledge of Qt comes handy.

The only drawback was the need for a corporate e-mail address in order to register (or at least the webpage says so). Apart from that it is really worth the effort. So, if you are planning on getting into QML, this is definitely a nice way to start.

Chaotic IdealismShould libraries seek more current replacements for books that mention “Asperger’s”?

A lot of autistic people don’t like the term “Asperger’s” very much anymore, ever since the evidence came to light that Hans Asperger was a eugenicist who made the argument that his (verbal, intelligent) boys were valuable to the Third Reich, but also sent more disabled children to institutions, where they died from neglect or were murdered. (The research was summarized in a book called “Asperger’s Children”, which I cannot recommend highly enough. Asperger’s here in the title refers to the doctor himself.)

The trouble is that this is recent information, and many good books about autism were written when “Asperger’s” was the term popularized by Lorna Wing to describe autism that did not affect one’s language ability or ability to care for oneself.

This was needed because before “Asperger syndrome”, autism was thought to be always severe, very rare, and always associated with extreme disability. People with less-extreme symptoms were being overlooked, and without a diagnosis or any help they often ended up jobless, homeless, and mentally ill.

So “Asperger’s” did do its duty as a diagnosis–we needed it–but with the recent revelations about Hans Asperger being a eugenicist rather than simply a doctor who made excuses for his patients, the specific term has become a little bit troublesome to us. Many of us do still use it, but it is increasingly gaining an association with the functioning labels that deny help to the “high-functioning” and agency to the “low-functioning”.

Asperger’s was merged into autism spectrum disorder primarily because it is not medically distinguishable from classic autism. Although people diagnosed with Asperger’s don’t have a speech delay, they do have unusual speech and communication problems; and although they don’t have delays in basic ADLs, they often have serious problems with other aspects of independent living. And when someone diagnosed with Asperger’s is evaluated according to the DSM-IV criteria of Autistic Disorder, they fit those criteria more than 90% of the time.

One of the problems the autism community faces, internally, is something we call “Aspie supremacy”. These are people–often quite young people, teenagers and twenty-somethings still dependent on the ableist framework they were raised in–who declare themselves to have Asperger’s, not autism, because they are smart and talented and not disabled, and therefore are superior to other autistics–and perhaps even to neurotypicals.

This is a problem because they are assuming that disability means one cannot be talented, cannot be smart; and that one must be either inferior or superior to others. And of course it means leaving behind anyone who cannot mask their autism enough to be included in the upper “Aspie” class. It is essentially Asperger’s eugenics, and yes, it does trouble us greatly, especially since these people are often deeply hurt by years of bullying, abuse, and ableist exclusion, and want to solve the problem by taking themselves out of the “disability” category rather than by advocating for disability rights.

I am only one autistic person and this is only one perspective. I will leave it to the librarians to use this information to judge whether, and which, books should be updated.

Planet DebianColin Watson: SSH quoting

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.
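The subshell effect is easy to reproduce locally without ssh (using plain sh in place of bash -l here so that login profiles don't add noise):

```shell
# Mimic what the server does: the outer shell runs two commands, and
# the cd happens inside a nested shell that immediately exits.
cd /
sh -c 'sh -c cd /tmp; pwd'
# prints "/" - the outer pwd never saw the cd
```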

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'

Shell parsing is hard.

Kevin RuddHawke Centre: Kevin Rudd and Quentin Dempster AM discuss disinformation in the media

10 JUNE 2021

The post Hawke Centre: Kevin Rudd and Quentin Dempster AM discuss disinformation in the media appeared first on Kevin Rudd.

Kevin RuddBBC World News: Kevin Rudd on the G7 summit

10 JUNE 2021

The post BBC World News: Kevin Rudd on the G7 summit appeared first on Kevin Rudd.

Kevin RuddCNN: Kevin Rudd on Coronavirus, Climate and China ahead of the G7 summit.

10 JUNE 2021

Becky Anderson
Let’s bring in Kevin Rudd, former Prime Minister of Australia; he is now the president of the Asia Society Policy Institute and has recently spoken out about China. Kevin Rudd joins us now live from Brisbane, Australia. Mr. Rudd, thank you very much for joining us. Let’s start off, if I could, with the discussion that’s being centered here at the G7: the three C’s, that’s Coronavirus, climate, as well as China. If I can start with China: Prime Minister Scott Morrison has had, I think it’s fair to say, some very big rhetoric against China. What do you think he wants to hear from the G7 on the question of China?

Kevin Rudd
Well, the Australian bilateral relationship with China has gone through a very difficult period over the last year or so. And you’ve seen a lot of rhetoric from the Australian Prime Minister. You’ve also seen a lot of, shall we say, retaliatory positions taken by the Chinese government against Australia. I think the key question for the G7 is where they wish to land in terms of their collective position in establishing a new modus vivendi with the Chinese government under President Xi Jinping. There, of course, the outstanding questions will be future stability in the Taiwan Strait, the standing questions of human rights, and working with China at the same time on climate change.

Becky Anderson
Yeah, and as you well know, Mr. Rudd Europe is not as hawkish against China, for economic reasons, let’s say. So do you think the Mr. Morrison’s message will resonate with the rest of the leaders here at the G7?

Kevin Rudd
I’m not sure whether Mr. Morrison’s message will resonate or not. I think President Biden’s approach to the G7 meeting will be to achieve as much of a common position in terms of the G7’s relationship with the future of China on both human rights questions on security questions, but also critically on climate questions as well. Remember, within this overall frame, that both prime minister Johnson and President Biden will be working towards a large outcome at the Climate Change Conference in Glasgow at the end of this year, I think achieving a solid G7 outcome on climate, therefore, including leveraging China to do more will be critical in terms of the overall outcome of the summit.

Becky Anderson
Oh, I’ve heard Mr. Morrison call for a way to blunt China’s economic coercion. And one way he thinks that you can counter Chinese competition is to reform the WTO, the World Trade Organization, how likely do you think is this to happen? How likely you think others will follow suit?

Kevin Rudd
I think the difficulty with WTO reform processes, for those of us familiar with them, is that they take forever, frankly, because they’re all achieved on the basis of consensus. The bottom line in terms of questions of economic coercion is trying to achieve a more evenly balanced relationship between Australia and China into the future. That is difficult. But at the same time, when any individual state encounters, shall we say, bilateral economic coercion from China, it’s important that there be a collective position in response to that. I think that is the way through here. And it will be interesting to see what the G7 summit leaders arrive at by way of consensus on that. If you look at the draft, as it were, indicative language being floated, it’s quite unusual in its forward-leaning nature on China. It would be the first time, for example, that a G7 summit communiqué dealt explicitly with the Taiwan Strait and explicitly with questions on Xinjiang. So I think, therefore, there is a sharpening in the G7 position. At the same time, as you rightly pointed out, the Europeans have a different perspective on China, both on trade and investment relations, but critically they are also all united in wanting to work with China on climate. This is a complex challenge.

Becky Anderson
Yeah, it’s more about those shared values, isn’t it? Let’s talk, if I may, Mr. Rudd, about COVID. Australia has been, I think it’s fair to say, praised for its ability to largely stamp out the virus. It has very strict border controls, which even limit citizens from returning. Borders, from what I understand, are not expected to open anytime soon. And vaccination rates are very low. Meanwhile, Australia’s emergency response to the virus is becoming, some would say, unsustainable. And no roadmap so far has been presented for reopening. So what do you think is, or should be, Australia’s COVID exit strategy here?

Kevin Rudd
Well, within the G7 context from which you’re speaking, of course, there’s a broader obligation in terms of the COVID crisis, which was to agree on a package of measures for vaccination policy towards the developing world. But let’s step away from that to your specific question on Australia. I think it’s going to take some time for Australia to, as it were, reopen its borders, in large part because the medical authorities in Australia, the chief medical officers of the federal and state governments, have been fairly uniform in their advice in terms of keeping international borders highly restricted in the current period. You’re right to point out the slow pace of vaccination within this country, which has been quite slow against OECD standards; it does slow the pace as well in terms of border reopening. So therefore, I think when Prime Minister Morrison says we’re not looking towards the border reopening until into 2022, he’s probably speaking the truth there. It could however be brought forward if the vaccination rate here was rapidly increased. And there I think the government in Australia has been lacking.

Becky Anderson
Let’s, let’s talk climate change, if we may. What we have heard, what we’ve been hearing from leaders is a commitment to net zero by 2050. That’s all we hear, net zero by 2050. Mr. Morrison has, however, being somewhat resistant to set more ambitious climate commitments. Do you think he’ll be swayed by other leaders here at the G7 Mr. Rudd?

Kevin Rudd
Well, prior to leaving Australia, Mr. Morrison was very clear that he does not intend to be swayed either by President Biden or by others. But I think Prime Minister Morrison may well get mugged by reality here, by which I mean the European Union is moving towards a border carbon tax, for want of a better term. That is, for nations who are not doing enough to reduce their greenhouse gas emissions over time, either in terms of their mid-century commitment on carbon neutrality or what’s called the near-term ambition to reduce over the 2020s through until 2030, a border adjustment tax or a border carbon tax would be imposed. Mr. Morrison is going to have to address that reality from the Europeans and potentially from the Americans as well. And therefore he would have to explain to the Australian community that by being recalcitrant on climate change ambition himself, he is willing to wear the economic penalties which flow from the rest of the world. I think that’ll be a hard message for him to sell to these communities.

Becky Anderson
Yeah, I suspect so too. Kevin Rudd, the former Prime Minister of Australia, I appreciate you taking the time to speak to us, sir.

The post CNN: Kevin Rudd on Coronavirus, Climate and China ahead of the G7 summit. appeared first on Kevin Rudd.

Kevin RuddProject Syndicate: The Virus Next Time

Echoing recommendations made by earlier commissions that studied the growing risk of pandemics and the inadequate global system for dealing with them, the Independent Panel for Pandemic Preparedness and Response has released precisely the policy blueprint that we need. World leaders must not dither in implementing it.

BRISBANE – As more developed countries begin to feel as though they have made it to the other side of the COVID-19 crisis, two striking realities are coming into view. First, one can clearly see just how vulnerable many developing countries still are to rapidly escalating outbreaks of the type we are witnessing in India. The results of failing to distribute the most effective vaccines equitably and strategically are being laid bare.

Second, with more dangerous and contagious variants continuing to emerge, we do not have the luxury of delaying work toward a new international system for pandemic preparedness and response. We must start that project immediately. And fortunately, the Independent Panel for Pandemic Preparedness and Response (IPPR), chaired by former New Zealand Prime Minister Helen Clark and former Liberian President Ellen Johnson Sirleaf, has just published a blueprint for how to do it.

The question now is whether governments are ready not just to listen but to act. The answer will determine whether we can prevent future epidemics from becoming global catastrophes. I know from my own government’s experience during the 2009 swine flu (H1N1) pandemic that it is crucial to confront these crises with immediate, far-reaching, and coordinated action. Thanks to eight months of work by the IPPR, policymakers now have a comprehensive set of recommendations for transforming how we manage pandemic risks.

Chief among the panel’s proposals is a call for pandemic preparedness and response to be elevated to the highest level of political leadership through a new Global Health Threats Council, which should be based at the United Nations headquarters in New York. The panel has also proposed an International Financing Facility for Pandemic Preparedness and Response to help share the burden in future global health crises. Either through direct contributions or a kind of assessed contribution, this mechanism would fund both ongoing preparedness and rapid-response measures in low- and middle-income countries.

The IPPR has offered the kind of emphatic, dispassionate, and actionable guidance that governments need and – in this case – have demanded through the World Health Organization. Four years ago, the Independent Commission on Multilateralism (ICM, which I chaired) tried to raise the alarm about the growing threat of pandemics in its report Global Pandemics and Global Public Health. We were aghast at the poor state of the global health architecture at a time when cross-border health crises were becoming more frequent and posing unprecedented risks. Those risks have since materialized in the form of the COVID-19 pandemic.

In addition to issuing a clear warning, the commission’s report made a series of bold recommendations to strengthen the multilateral system in the face of potential global health crises. Its proposals for clearer rules for verification and early-warning mechanisms have now been echoed in the IPPR’s recommendations, as has its call for a more empowered independent WHO secretariat. We are still waiting for progress on all of these fronts.

We cannot afford to let the IPPR’s report fall on the same deaf ears. And yet, that is exactly what seems to be happening. The 74th World Health Assembly just voted to spend six months studying the panel’s report before even considering taking any action. Such delays are simply unacceptable.

The COVID-19 crisis has borne out an uncomfortable truth that is emphasized in the IPPR’s report: namely, that many of the national and global institutions established to deal with global pandemics are not fit for purpose, or have not been properly activated. From the moment in late 2019 and early 2020 when the existing International Health Regulations failed, the COVID-19 outbreak became a global catastrophe. And since then, our national and global economic responses have been too slow, tepid, and uncoordinated – a failure that the post-2008 G20 architecture was supposed to prevent.

The current crisis could still become much worse before it gets any better. We are already witnessing a breakdown of global supply chains, which will lead to terrible economic, political, and public-health outcomes. We need to get back on track now so that we can fight not only future pandemics but also this one.

The IPPR’s report could not be timelier. The G7 summit in Cornwall on June 11-13 is an opportunity to concentrate our efforts with backing from the highest political levels. COVID-19 has been costly for all of us. The ICM’s 2017 report anticipated that we would be here one day and identified the solutions we would need to implement. Let us use the IPPR’s findings to enact meaningful reforms and show real leadership, so that this pandemic will be the last one to catch us off guard.

Photo: Vaccinating children at the native village of Mille Tres, Panama (Jothenomad/FLICKR).

The post Project Syndicate: The Virus Next Time appeared first on Kevin Rudd.

Planet DebianMike Gabriel: Linux on Acer Spin 3

Recently, I bought an Acer Spin 3 Convertible Notebook for the company and provided it to Robert Tari for his daily work on Ayatana Indicators (which currently is funded by the UBports Foundation via my company Fre(i)e Software GmbH).

Some days ago Robert reported back about a sleepless night he spent with that machine... He got stuck with a tricky issue regarding the installation of Manjaro GNU/Linux on that machine, one that could -- in the end -- be resolved by a not so well documented trick.

Before anyone else spends another sleepless night on this, we thought we'd better share Robert's solution.

So, the below applies to the Acer Spin 3 series (and probably to other Spin models, perhaps even some other Acer laptops):

Acer Spin 3 Pre-Inst Cheat Codes

Before you even plug in the USB install media:

  1. Go to UEFI settings (i.e. BIOS for us elderly people) [F2]
  2. Security -> Set Supervisor Password [Enabled]
  3. Enter the password you'll use
  4. Boot -> Secure Boot -> [Disabled] (you can't disable it without a set supervisor password)
  5. Exit -> Exit Saving Changes
  6. Restart and go to UEFI settings again [F2]
  7. Main -> [Now press CTRL + S] -> VMD Controller -> [Disabled]
  8. Exit -> Exit Saving Changes
  9. Now plug in the install USB and restart

Especially the disabling of the VMD controller is essential. Otherwise, GRUB won't find any partitions or EFI-registered boot items after the installation and drops into the EFI recovery shell.

Robert hasn't tested the Wacom pen that comes with the device, nor the fingerprint reader, yet.

Everything else works out-of-the-box.

Mike Gabriel (aka sunweaver)

Worse Than FailureError'd: Unspoken

It's been quite a few years since I was last in Silicon Valley. So it wouldn't surprise me at all if some enterprising restaurateur has unveiled a trendy pub and stolen all the humorous thunder from Sean's submission. I'll be more surprised if they haven't.

Says Sean K. "I love trying new beers, but these varieties remind me too much of work." 418's all around! Alas, the keg of 204 has kicked.



Meanwhile, we can't count on Eric K. anymore. "Port 31 of the HP switch is offline, but which one?" he counterfactually queries.



Preparing for some post-pandemic adventure, peripatetic Matias is raring to explore far from home. "Thanks for offering to take me to the local site that I am currently browsing!" he declares.



Reader Rob H. shares a slightly Kafkaesque encounter. "Microsoft must have a different definition of optional than I do. One of the 'optional' settings in Edge is turned on, but it's disabled so that I can't turn it off!"



But Mike T. tops it with this winner of a Catch-22. "NVidia thinks my old password is so bad I can't even change it."



[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Cryptogram FBI/AFP-Run Encrypted Phone

For three years, the Federal Bureau of Investigation and the Australian Federal Police owned and operated a commercial encrypted phone app, called AN0M, that was used by organized crime around the world. Of course, the police were able to read everything — I don’t even know if this qualifies as a backdoor. This week, the world’s police organizations announced 800 arrests based on text messages sent over the app. We’ve seen law enforcement take over encrypted apps before: for example, EncroChat. This operation, code-named Trojan Shield, is the first time law enforcement managed an app from the beginning.

If there is any moral to this, it’s one that all of my blog readers should already know: trust is essential to security. And the number of people you need to trust is larger than you might originally think. For an app to be secure, you need to trust the hardware, the operating system, the software, the update mechanism, the login mechanism, and on and on and on. If one of those is untrustworthy, the whole system is insecure.

It’s the same reason blockchain-based currencies are so insecure, even if the cryptography is sound.

Kevin RuddVale Duncan Pegg MP

With Duncan Pegg, Premier Palaszczuk and others at his community farewell in Sunnybank in May 2021.

They don’t make local representatives more decent than Duncan Pegg.

Duncan wasn’t lured into political life by ego or material gain; his overriding ambition was to improve the lives of the people he served. Beneath his softly spoken exterior beat the heart of a first-class fighter, driven by his commitment to tolerance, equity and justice for all.

I knew Duncan for most of his life. As a young activist, Duncan joined my team in the federal seat of Griffith to help defend fairness and multiculturalism against the rising forces of injustice and Hansonism.

Over the past decade, it’s been my honour to return the favour by campaigning alongside Duncan for election and re-election to the seat of Stretton in the Queensland Parliament.

The communities of Stretton love Duncan dearly and recognise him as a man of authenticity, integrity and compassion. When I speak to members of the Chinese community on the southside, they sing Duncan’s praises.

Even when Duncan was in the fight of his life against cancer, he poured every ounce of available strength into fighting for his community. It’s not hard to find someone in Stretton whose life was personally improved by Duncan’s advocacy.

When I went to visit Duncan at the Canossa Hospital last week, we had a long conversation about his life and contribution to local community. Duncan was enormously proud of what he’d achieved for his local schools. He was also proud and passionate about the deep bonds he had forged with his local multicultural communities. And he wanted, in particular, to be remembered to them.

To lose Duncan at such a young age is a profound loss for the southside of Brisbane, for the parliament and for Queensland as a whole.

The post Vale Duncan Pegg MP appeared first on Kevin Rudd.

Planet DebianPetter Reinholdtsen: Nikita version 0.6 released - free software archive API server

I am very pleased to be able to share with you the announcement of a new version of the archiving system Nikita published by its lead developer Thomas Sødring:

It is with great pleasure that we can announce a new release of nikita, version 0.6. This release makes new record keeping functionality available. This really is a maturity release, both in terms of functionality and code. Considerable effort has gone into refactoring the codebase and simplifying the code. Notable changes for this release include:

  • Significantly improved OData parsing
  • Support for business specific metadata and national identifiers
  • Continued implementation of domain model and endpoints
  • Improved testing
  • Ability to export and import from arkivstruktur.xml

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata and we hope that we can share this with you soon. This is an interesting project as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData, this means the scope and use of the archive is significantly increased and will showcase both the flexibility and power of Noark.

I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access-control and full text indexing of documents.

My sincere thanks to everyone who has contributed to this release!

- Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

  • Refactor metadata entity search
  • Remove redundant security configuration
  • Make OpenAPI documentation work
  • Change database structure / inheritance model to a more sensible approach
  • Make it possible to move entities around the fonds structure
  • Implemented a number of missing endpoints
  • Make sure yml files are in sync
  • Implemented/finalised storing and use of
    • Business Specific Metadata
    • Norwegian National Identifiers
    • Cross Reference
    • Keyword
    • StorageLocation
    • Author
    • Screening for relevant objects
    • ChangeLog
    • EventLog
  • Make generation of updated docker image part of successful CI pipeline
  • Implement pagination for all list requests
    • Refactor code to support lists
    • Refactor code for readability
    • Standardise the controller/service code
  • Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
  • Improved Continuous Integration (CI) approach via gitlab
  • Changed conversion approach to generate tagged PDF documents
  • Updated dependencies
    • For security reasons
    • Brought codebase to spring-boot version 2.5.0
    • Remove import of unnecessary dependencies
    • Remove non-used metrics classes
  • Added new analysis to CI including
  • Implemented storing of Keyword
  • Implemented storing of Screening and ScreeningMetadata
  • Improved OData support
    • Better support for inheritance in queries where applicable
    • Brought in more OData tests
    • Improved OData/hibernate understanding of queries
    • Implement $count, $orderby
    • Finalise $top and $skip
    • Make sure & is used between query parameters
  • Improved Testing in codebase
    • A new approach for integration tests to make test more readable
    • Introduce tests in parallel with code development for TDD approach
    • Remove test that required particular access to storage
  • Implement case-handling process from received email to case-handler
    • Develop required GUI elements (digital postroom from email)
    • Introduced leader, quality control and postroom roles
  • Make PUT requests return 200 OK not 201 CREATED
  • Make DELETE requests return 204 NO CONTENT not 200 OK
  • Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
  • Upgrade Gitlab CI to use python > 3 for CI scripts
  • Bug fixes
    • Fix missing ALLOW
    • Fix reading of objects from jar file during start-up
    • Reduce the number of warnings in the codebase
    • Fix delete problems
    • Make better use of cascade for "leaf" objects
    • Add missing annotations where relevant
    • Remove the use of ETAG for delete
    • Fix missing/wrong/broken rels discovered by runtest
    • Drop unofficial convertFil (konverterFil) end point
    • Fix regex problem for dateTime
    • Fix multiple static analysis issues discovered by coverity
    • Fix proxy problem when looking for object class names
    • Add many missing translated Norwegian to English (internal) attribute/entity names
    • Change UUID generation approach to allow code also set a value
    • Fix problem with Part/PartParson
    • Fix problem with empty OData search results
    • Fix metadata entity domain problem
  • General Improvements
    • Makes future refactoring easier as coupling is reduced
    • Allow some constant variables to be set from property file
    • Refactor code to make reflection work better across codebase
    • Reduce the number of @Service layer classes used in @Controller classes
    • Be more consistent on naming of similar variable types
    • Start printing rels/href if they are applicable
    • Cleaner / standardised approach to deleting objects
    • Avoid concatenation when using StringBuilder
    • Consolidate code to avoid duplication
    • Tidy formatting for a more consistent reading style across similar class files
    • Make throw a log.error message, not an info message
    • Make throw print the log value rather than printing in multiple places
    • Add some missing pronom codes
    • Fix time formatting issue in Gitlab CI
    • Remove stale / unused code
    • Use only UUID datatype rather than combination String/UUID for systemID
    • Mark variables final and @NotNull where relevant to indicate intention
  • Change Date values to DateTime to maintain compliance with Noark 5 standard
  • Domain model improvements using Hypersistence Optimizer
    • Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
    • Fix OneToOne performance issues
    • Fix ManyToMany performance issues
    • Add missing bidirectional synchronization support
    • Fix ManyToMany performance issue
  • Make List and Set use the final keyword to avoid potential problems during update operations
  • Changed internal URLs, replaced "hateoas-api" with "api".
  • Implemented storing of Precedence.
  • Corrected handling of screening.
  • Corrected _links collection returned for list of mixed entity types to match the specific entity.
  • Improved several internal structures.
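As an illustration of the OData query options mentioned in the release notes ($filter, $top, $skip, $orderby), a request against a Noark 5 web-API server such as nikita might look like the following sketch; the endpoint path and attribute values here are illustrative, not taken from the release notes:

```
GET /api/arkivstruktur/mappe?$filter=contains(tittel,'byggesak')&$top=10&$skip=20&$orderby=tittel
```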

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianVincent Bernat: Serving WebP & AVIF images with Nginx

WebP and AVIF are two image formats for the web. They aim to produce smaller files than JPEG and PNG. They both support lossy and lossless compression, as well as alpha transparency. WebP was developed by Google and is a derivative of the VP8 video format.1 It is supported on most browsers. AVIF is using the newer AV1 video format to achieve better results. It is supported by Chromium-based browsers and has experimental support for Firefox.2

Without JavaScript, I can’t tell what your browser supports.

Converting and optimizing images

For this blog, I am using the following shell snippets to convert and optimize JPEG and PNG images. Skip to the next section if you are only interested in the Nginx setup.

JPEG images

JPEG images are converted to WebP using cwebp.

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp

They are converted to AVIF using avifenc from libavif:

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif

Then, they are optimized using jpegoptim built with Mozilla’s improved JPEG encoder, via Nix. This is one reason I love Nix.

jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs>{}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all

PNG images

PNG images are down-sampled to 8-bit RGBA-palette using pngquant. The conversion reduces file sizes significantly while being mostly invisible.

find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force

Then, they are converted to WebP with cwebp in lossless mode:

find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp

No conversion is done to AVIF: lossless compression is not as efficient as pngquant and lossy compression is only marginally better than what I get with WebP.

Keeping only the smallest files

I am only keeping WebP and AVIF images if they are at least 10% smaller than the original format: decoding is usually faster for JPEG and PNG; and JPEG images can be decoded progressively.3

for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done

I only keep AVIF images if they are smaller than WebP.

for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done

We can compare how many images are kept when converted to WebP or AVIF:

printf "     %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf " ${format:u} %10s %10s %10s\n" \
    $(find media/images -name "*.$format" | wc -l) \
    $(find media/images -name "*.$format.webp" | wc -l) \
    $(find media/images -name "*.$format.avif" | wc -l)
done

AVIF is better than MozJPEG for most JPEG files while WebP beats MozJPEG only for one file out of two:

       Original       WebP       AVIF
 PNG         64         47          0
 JPG         83         40         74

Further reading

I didn’t detail my choices for quality parameters and there is not much science in it. Here are two resources providing more insight on AVIF:

Serving WebP & AVIF with Nginx

To serve WebP and AVIF images, there are two possibilities:

  1. use <picture> to let the browser pick the format it supports, or
  2. use content negotiation to let the server send the best-supported format.

I use the second approach. It relies on inspecting the Accept HTTP header in the request. For Chrome, it looks like this:

Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8

I configure Nginx to serve AVIF image, then the WebP image, and fallback to the original JPEG/PNG image depending on what the browser advertises:4

http {
  map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  ".webp";
  }
  map $http_accept $avif_suffix {
    default        "";
    "~image/avif"  ".avif";
  }

  server {
    # […]
    location ~ ^/images/.*\.(png|jpe?g)$ {
      add_header Vary Accept;
      try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
    }
  }
}

For example, let’s suppose the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:

  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

If the browser supports both AVIF and WebP, Nginx walks the following list:

  • /images/ont-box-orange@2x.jpg.webp.avif (it never exists)
  • /images/ont-box-orange@2x.jpg.avif
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

Eugene Lazutkin explains in more detail how this works. I have only presented a variation of his setup supporting both WebP and AVIF.
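Outside of Nginx, the same selection logic can be sketched in a few lines of Python. This is a simplified model of the map/try_files mechanics above, with hypothetical file names; the two map blocks become suffix lookups on the Accept header, and try_files becomes a first-match scan over candidate file names:

```python
# Simplified model of the Nginx content negotiation described above.
def pick_variant(path, accept_header, existing_files):
    avif = ".avif" if "image/avif" in accept_header else ""
    webp = ".webp" if "image/webp" in accept_header else ""
    # Same candidate order as the try_files directive
    candidates = [path + avif + webp, path + avif, path + webp, path]
    for candidate in candidates:
        if candidate in existing_files:
            return candidate
    return None  # Nginx would answer 404 here

files = {"/images/box.jpg", "/images/box.jpg.webp", "/images/box.jpg.avif"}
print(pick_variant("/images/box.jpg", "image/avif,image/webp,*/*", files))  # /images/box.jpg.avif
print(pick_variant("/images/box.jpg", "image/webp,*/*", files))             # /images/box.jpg.webp
print(pick_variant("/images/box.jpg", "*/*", files))                        # /images/box.jpg
```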

  1. VP8 is only used for lossy compression. Lossless compression uses an unrelated format. ↩︎

  2. Firefox support was scheduled for Firefox 86 but because of the lack of proper color space support, it is still not enabled by default. ↩︎

  3. Progressive decoding is not planned for WebP but could be implemented using low-quality thumbnail images for AVIF. See this issue for a discussion. ↩︎

  4. The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround but Internet Explorer’s market share is now so small that it is pointless to implement it. ↩︎

Kevin RuddThe Project: Kevin Rudd on Morrison’s tired tune on climate ahead of the G7

09 JUNE 2021

The post The Project: Kevin Rudd on Morrison’s tired tune on climate ahead of the G7 appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Quite the Event

A few years back, Alvin was in college, and took his first formal summer job as a programmer. It was supposed to be just some HTML layout work, and the designer handed him a few sample pages. "Just do it like this."

The "sample pages" were a mishmash of random indentation, huge swathes of commented out code, and all those other little traits of "someone's just coding in production, aren't they?" that crop up in files. Still, it was just some HTML layout work, so how hard could it be?

Not too hard- until Alvin started popping up the DOM inspector. Every time he opened debugging windows in his browser, the page started misbehaving. Menus stopped staying popped open, or would pop open at seemingly random intervals. He could refresh, and it would go away, but the next time he opened (or closed) the DOM inspector, the problems would come back.

In the one JavaScript file in the project, Alvin found this:

$(document).ready(websitefunction);
$(window).resize(websitefunction);

function websitefunction(){
  $.slidebars();
  $('select').each(function(){
    back_position = $(this).width()-20;
    back_position1 = back_position+'px 13px';
    $(this).css('background-position',back_position1);
  });
  $('.btn-default').click(function(){
    $(this).parent().find('.btn-default').removeClass('active');
    $(this).addClass('active');
  });
  $('.slb_btn1').click(function(){ $('.slb_row1').show(); });
  $('.slb_btn2').click(function(){ $('.slb_row1').hide(); $('.slb_row2').show(); });
  $('.slb_btn3').click(function(){ $('.slb_row2').hide(); });
  $('.slb_btn4').click(function(){ $('.slb_row2').show(); });
  if($(document).width()>770){
    $('.submenu').width($(document).width());
    for(i=1;i<5;i++){
      arrow_left = $('.sub_'+i).offset().left+ $('.sub_'+i).width()/2;
      $('.arrow'+i).css({'left':arrow_left});
    }
    $('.sub_1').mouseover( function(){ $('.submenu').hide(); $('.pro_service').slideDown(100) });
    $('.sub_2').mouseover( function(){ $('.submenu').hide(); $('.clien_parner').slideDown(100) });
    $('.sub_3').mouseover( function(){ $('.submenu').hide(); $('.help_service').slideDown(100) } )
    $('.sub_4').mouseover( function(){ $('.submenu').hide(); $('.my_profile').slideDown(100) } );
    $('.submenu').click(function(e){ e.stopPropagation(); });
    $(document).click(function(){ $('.submenu').slideUp(100); });
  }
}

There's a lot to dislike about this wall of event handler registrations, but it's the first two lines that really tell us the story:

$(document).ready(websitefunction); $(window).resize(websitefunction);

When the document is ready, invoke the websitefunction which registers all the event handlers. This is a reasonable and normal thing to do. When the window is resized- we do the same thing, without clearing out any of the existing event handlers. Each time the window gets resized (whether by pulling up debugging tools or by just resizing the browser window) we duplicate the existing event handlers.

If nothing else, Alvin learned some examples of what not to do on that summer job.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianLouis-Philippe Véronneau: New Desktop Computer

I built my last desktop computer what seems like ages ago. In 2011, I was in a very different place, both financially and as a person. At the time, I was earning minimum wage at my school's café to pay rent. Since the café was owned by the school cooperative, I had an employee discount on computer parts. This gave me a chance to build my first computer from spare parts at a reasonable price.

After 10 years of service1, the time has come to upgrade. Although this machine was still more than capable for day to day tasks like browsing the web or playing casual video games, it started to show its limits when time came to do more serious work.

Old computer specs:

CPU: AMD FX-8530
Memory: 8GB DDR3 1600MHz
Motherboard: ASUS TUF SABERTOOTH 990FX R2.0
Storage: Samsung 850 EVO 500GB SATA

I first started considering an upgrade in September 2020: David Bremner was kindly fixing a bug in ledger that kept me from balancing my books and since it seemed like a class of bug that would've been easily caught by an autopkgtest, I decided to add one.

After adding the necessary snippets to run the upstream testsuite (an easy task I've done multiple times now), I ran sbuild and ... my computer froze and crashed. Somehow, what I thought was a simple Python package was maxing all the cores on my CPU and using all of the 8GB of memory I had available.2

A few months later, I worked on jruby and the builds took 20 to 30 minutes — long enough to completely disrupt my flow. The same thing happened when I wanted to work on lintian: the testsuite would take more than 15 minutes to run, making quick iterations impossible.

Sadly, the pandemic completely wrecked the computer hardware market and prices here in Canada have only recently started to go down again. As a result, I had to wait longer than I would've liked so as not to pay scalper prices.

New computer specs:

CPU: AMD Ryzen 5900X
Memory: 64GB DDR4 3200MHz
Motherboard: MSI MPG B550 Gaming Plus
Storage: Corsair MP600 500 GB Gen4 NVME

The difference between the two machines is pretty staggering: I've gone from a CPU with 2 cores and 8 threads, to one with 12 cores and 24 threads. Not only that, but single-threaded performance has also vastly increased in those 10 years.

A good example would be building grammalecte, a package I've recently sponsored. I feel it's a good benchmark, since the build relies on single-threaded performance for the normal Python operations, while being threaded when it compiles the dictionaries.

On the old computer:

Build needed 00:10:07, 273040k disk space

And as you can see, on the new computer the build time has been significantly reduced:

Build needed 00:03:18, 273040k disk space

Same goes for things like the lintian testsuite. Since it's a very multi-threaded workload, it now takes less than 2 minutes to run; a 750% improvement.

All this to say I'm happy with my purchase. And — lo and behold — I can now build ledger without a hitch, even though it maxes my 24 threads and uses 28GB of RAM. Who would've thought...

Screen capture of htop showing how much resources ledger takes to build

  1. I managed to fry that PC's motherboard in 2016 and later replaced it with a brand new one. I also upgraded the storage along the way, from a very cheap cacheless 120GB SSD to a larger Samsung 850 EVO SATA drive. 

  2. As it turns out, ledger is mostly written in C++ :) 

Planet DebianDirk Eddelbuettel: #33: Collaborative Editing and Execution in Shared Byobu Sessions

Welcome to the 33rd post in the rigorously raconteuring R recommendations series, or R4 for short. This post is also a post in the T4 series of tips, tricks, tools, and toys as it picks up and extends earlier posts on byobu. And it fits nicely in the more recent ESS-Intro series as we show some Emacs. You can find earlier R4 posts here, and the T4 posts here; the ESS-Intro series is here.

The focus of this short video (and slides) is on collaboration using files, but also entire sessions, execution and all aspects of joint exploration, development or debugging. Anything you can do in a terminal you can also do shared in a terminal. The video contains a brief lightning talk, and a shared session jointly with Grant McDermott and Vincent Arel-Bundock. My big big thanks to both of them for prodding and encouragement, as well as fearless participation in the joint section of the video:

The corresponding pdf slides are here.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianMichael Prokop: efivars is gone with Debian/bullseye #newinbullseye

Continuing with #newinbullseye, it’s worth being aware of, that efivars is gone with the kernel version shipped as of Debian/bullseye.

Quoting from the Debian wiki:

The Linux kernel gives access to the UEFI configuration variables via a set of files under /sys, using two different interfaces.

The older interface was showing files under /sys/firmware/efi/vars, and this is what was used by default in both Wheezy and Jessie.

The new interface is efivarfs, which will expose things in a slightly different format under /sys/firmware/efi/efivars.
This is the new preferred way of using UEFI configuration variables, and Debian switched to it by default from Stretch onwards.

Now, CONFIG_EFI_VARS is no longer enabled in Debian due to commit 20146398c4 (shipped as such with Debian kernel package versions >=5.10.1-1~exp1).

As a result, the kernel module efivars is no longer available on systems running Debian kernels >=5.10 (which includes Debian/bullseye). Now, when running such a system in EFI mode, chroot-ing into a system and executing e.g. efibootmgr, it might fail with:

# efibootmgr
EFI variables are not supported on this system.

This is caused by /sys/firmware/efi/vars no longer being available, because of the disabled CONFIG_EFI_VARS. To get this working again, you need to make efivarfs available via:

# mount -t efivarfs efivarfs /sys/firmware/efi/efivars

Then efibootmgr and further tools relying on efivars should work again.
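To make this mount persistent across reboots, an /etc/fstab entry along the following lines should do (a sketch derived from the mount command above; verify against your system before relying on it):

```
efivarfs  /sys/firmware/efi/efivars  efivarfs  defaults  0  0
```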

FYI: if you’re a user of Grml’s grml-chroot tool, this is going to be handled out of the box for you.

Planet DebianThorsten Alteholz: My Debian Activities in May 2021

FTP master

This month I accepted 85 and rejected 6 packages. The overall number of packages that got accepted was only 88. Yeah, Debian is frozen but hopefully will unfreeze soon.

Debian LTS

This was my eighty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 29.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2650-1] exim4 security update for 17 CVEs
  • [DLA 2665-1] ring security update for one CVE
  • [DLA 2669-1] libxml2 security update for one CVE
  • the fix for tnef/CVE-2019-18849 had been approved and I could do the PU-upload

I also made some progress with gpac, where I am struggling with dozens of issues.

Last but not least I did some days of frontdesk duties, which for whatever reason was rather time-consuming this month.

Debian ELTS

This month was the thirty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-420-1 for exim4
  • ELA-435-1 for python2.7
  • ELA-436-1 for libxml2

I also made some progress with python3.4.

Last but not least I did some days of frontdesk duties.

Other stuff

On my neverending golang challenge I again uploaded some packages, either to NEW or as source uploads.

Last but not least I adopted gnucobol.

Planet DebianEnrico Zini: Ansible recurse and follow quirks

I'm reading Ansible's builtin.file sources for, uhm, reasons, and the use of follow stood out to my eyes. Reading on, not only that. I feel like the ansible codebase needs a serious review, at least in essential core modules like this one.

In the file module documentation it says:

This flag indicates that filesystem links, if they exist, should be followed.

In the recursive_set_attributes implementation instead, follow means "follow symlinks to directories", but if a symlink to a file is found, it does not get followed, kind of.

What happens is that ansible will try to change the mode of the symlink, which makes sense on some operating systems. And it does try to use lchmod if present. But if not, this happens:

# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
    os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))

So it tries doing chmod on the symlink, and if that changed the mode of the actual file, switch it back.

I would have appreciated a comment documenting on which systems a hack like this makes sense. As it is, it opens a very short time window in which a symlink attack can make a system file vulnerable, and an exception thrown by the second stat will leave it vulnerable permanently.
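For what it's worth, Python's stdlib exposes a race-free way to do this where the platform supports it; a sketch of that approach (not Ansible's code, just the underlying facility):

```python
import os

def set_link_mode(path, mode):
    """Change the mode of a symlink itself without ever touching its target.

    On platforms without lchmod-style support (e.g. Linux), symlink modes
    are not meaningful, so do nothing instead of racing on the target's
    permissions like the snippet above.
    """
    if os.chmod in os.supports_follow_symlinks:
        os.chmod(path, mode, follow_symlinks=False)
```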

What about follow following links during recursion: how does it avoid loops? I don't see a cache of (device, inode) pairs visited. Let's try:

fatal: [localhost]: FAILED! => {"changed": false, "details": "maximum recursion depth exceeded", "gid": 1000, "group": "enrico", "mode": "0755", "msg": "mode must be in octal or symbolic form", "owner": "enrico", "path": "/tmp/test/test1", "size": 0, "state": "directory", "uid": 1000}

Ok, it, uhm, delegates handling that to the Python stack size. I guess it means that a ln -s .. foo in a directory that gets recursed will always fail the task. Fun!
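A loop-safe recursion is not hard to write: track the (st_dev, st_ino) pairs already visited, which is exactly the cache whose absence is noted above. A minimal sketch:

```python
import os

def walk_following_symlinks(path, seen=None):
    """Return all paths reachable from `path`, following directory symlinks,
    breaking cycles (e.g. `ln -s .. foo`) via a set of visited inodes."""
    if seen is None:
        seen = set()
    st = os.stat(path)  # os.stat follows symlinks
    key = (st.st_dev, st.st_ino)
    if key in seen:
        return []  # already visited this inode: stop, don't loop
    seen.add(key)
    found = [path]
    if os.path.isdir(path):
        for name in sorted(os.listdir(path)):
            found += walk_following_symlinks(os.path.join(path, name), seen)
    return found
```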

More quirks

Turning a symlink into a hardlink is considered a noop if the symlink points to the same file:

- hosts: localhost
  tasks:
    - name: create test file
      file:
        path: /tmp/testfile
        state: touch
    - name: create test link
      file:
        path: /tmp/testlink
        state: link
        src: /tmp/testfile
    - name: turn it into a hard link
      file:
        path: /tmp/testlink
        state: hard
        src: /tmp/testfile


$ ansible-playbook test3.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [create test file] *****************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [create test link] *****************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [turn it into a hard link] *********************************************************************************************************************************************************************************************
ok: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************************************************************
localhost                  : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

More quirks

Converting a directory into a hardlink should work, but it doesn't because unlink is used instead of rmdir:

- hosts: localhost
  tasks:
    - name: create test dir
      file:
        path: /tmp/testdir
        state: directory
    - name: turn it into a symlink
      file:
        path: /tmp/testdir
        state: hard
        src: /tmp/
        force: yes


$ ansible-playbook test4.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [create test dir] ******************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [turn it into a symlink] ***********************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "gid": 1000, "group": "enrico", "mode": "0755", "msg": "Error while replacing: [Errno 21] Is a directory: b'/tmp/testdir'", "owner": "enrico", "path": "/tmp/testdir", "size": 0, "state": "directory", "uid": 1000}

PLAY RECAP ******************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

More quirks

This is hard to test, but it looks like if source and destination are hardlinks to the same inode numbers, but on different filesystems, the operation is considered a successful noop:

It should probably be something like:

if (st1.st_dev, st1.st_ino) == (st2.st_dev, st2.st_ino):
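That check is exactly what Python already ships as os.path.samefile; a sketch spelling it out (illustrative, not a patch against the Ansible source):

```python
import os

def is_same_file(path1, path2):
    """True only when both paths name the same inode on the same device."""
    st1 = os.stat(path1)
    st2 = os.stat(path2)
    return (st1.st_dev, st1.st_ino) == (st2.st_dev, st2.st_ino)
```

(The stdlib's os.path.samefile performs this same (device, inode) comparison.)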

Worse Than FailureCodeSOD: Getting Overloaded With Details

Operator overloading is one of those "dangerous" features. Like multi-parent inheritance, it can potentially create some really expressive, easy to read code, or it can create a disaster of incomprehensible nonsense.

In C++, many core classes use operator overloading, most notably the I/O classes, which reuse (or abuse) the bitshift operators into stream operators. So, for example, one possible way of converting a string into an integer might be to do something like this:

bool str2val(const string &s, int &v) { std::istringstream iss(s); istream &i = (iss >> v); bool b = (!(i.fail() || i.bad())) && iss.eof(); return b; }

As I'm reconstructing this example from our submitter's notes, any error in this code is mine.

This particular example converts a string into an istringstream, wrapping an input stream around a string. Then, using the >> operator, it attempts to read the contents of that stream into an integer variable. This, again, is one of those operator overloads. It also returns an istream object, which we can then use to check the status of the conversion.

Now, this particular example comes from Gaetan, who didn't implement it because it was the best way to convert strings to integers. Instead, this is a compromise solution, that I included first so that we can understand what the code is supposed to do.

Gaetan's interaction with this code started, as so many legacy projects do, with "it doesn't compile." A fifteen year old code block, which had compiled just fine under GCC 4.8.0 stopped compiling under GCC 5.5.0.

Gaetan pulled up the code, and found this:

bool str2val(const string &s, int &v) { std::istringstream iss(s); bool b = iss.operator >>(v).operator void*() != 0 && iss.eof(); return b; }

This code does the same thing as the above block, but without any hint of readability. Instead of using the overloaded operator as an operator, the original developer instead invokes it like a function, and then invokes a conversion to void*, which is utter nonsense. In fact, it was that specific conversion which GCC 5+ balked on, but which GCC 4.8.0 did not mind in the least. The comparison against 0 was just a hack-y way to see if the fail/bad bits were set.

There's no reason to write the code this way, except perhaps because you have a really deep understanding of C++ syntax and really want to show it off. Gaetan is not a C++ wizard, so when re-implementing this code in a way that compiled, they did their best to re-implement it as closely as possible to the original implementation.

Just, y'know, in a way that was readable, that compiled, and didn't depend on some type-punning magic to work.
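For comparison, the contract being implemented here, "the whole string must parse as an integer, or fail", is a one-liner in most languages; a Python rendering of it (mine, not Gaetan's):

```python
def str2val(s):
    """Return (ok, value); ok is True only if the whole string parses as an int."""
    try:
        return True, int(s)
    except ValueError:
        return False, 0
```

Note that int() tolerates surrounding whitespace, much as >> skips leading whitespace before reading the number.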

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Planet DebianSteinar H. Gunderson: Encoding AVIF from Perl

AVIF is basically AV1 in a HEIF container; it's one of several contenders (among WebP, WebP2, HEIC, JPEG XL and probably others) for the next-generation still image codec on the web.

I wanted to try it out, but it turns out that ImageMagick (which is what my image gallery uses behind the scenes) in Debian is too old to support writing it, and besides, I'm not sure if I'd trust it to get everything right. Here's the code for 4:2:0:

      my ($fh, $raw_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.yuv');
      # Write a Y4M header, so that we get the chroma siting and color space correct.
      printf $fh "YUV4MPEG2 W%d H%d F25:1 Ip A1:1 C420jpeg XYSCSS=420JPEG XCOLORRANGE=FULL\nFRAME\n", $nwidth, $nheight;
      my %parms = (
        file => $fh,
        filename => $raw_filename,
        'sampling-factor' => '2x2'
      );
      # (the image is written out through ImageMagick with %parms here)

      my $ivf_filename;
      ($fh, $ivf_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.ivf');
      close($fh);
      system('aomenc', '--quiet', '--cpu-used=0', '--bit-depth=10', '--end-usage=q', '--cq-level=10',
             '--target-bitrate=0', '--good', '--aq-mode=1', '--matrix-coefficients=bt601',
             '-o', $ivf_filename, $raw_filename);
      unlink($raw_filename);
      system('MP4Box', '-quiet', '-add-image', "$ivf_filename:primary", '-ab', 'avif', '-ab', 'miaf', '-new', $cachename);

I eventually figured 4:2:0 wasn't going to cut it for all of my images, though, so I switched to 4:4:4, which confusingly needs rather different settings on the ImageMagick side (not on the aomenc side):

      my ($fh, $raw_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.ycbcr');
      # Write a Y4M header, so that we get the chroma range correct.
      printf $fh "YUV4MPEG2 W%d H%d F25:1 Ip A1:1 C444 XYSCSS=444 XCOLORRANGE=FULL\nFRAME\n", $nwidth, $nheight;
      my %parms = (
        file => $fh,
        filename => $raw_filename,
        interlace => 'Plane'
      # and so on...

Unfortunately, I am unable to find a setting where AVIF 4:4:4 consistently outperforms regular JPEG 4:4:4 (even with libjpeg); it's fantastic at lower bitrates, but I want visually lossless, and at those bitrates, it's either smoothing way too much structure or using way too many bits. So next up is probably trying JPEG XL, or maybe figuring out how to hook up mozjpeg…

Krebs on SecurityMicrosoft Patches Six Zero-Day Security Holes

Microsoft today released another round of security updates for Windows operating systems and supported software, including fixes for six zero-day bugs that malicious hackers already are exploiting in active attacks.

June’s Patch Tuesday addresses just 49 security holes — about half the normal number of vulnerabilities lately. But what this month lacks in volume it makes up for in urgency: Microsoft warns that bad guys are leveraging a half-dozen of those weaknesses to break into computers in targeted attacks.

Among the zero-days are:

CVE-2021-33742, a remote code execution bug in a Windows HTML component.
CVE-2021-31955, an information disclosure bug in the Windows Kernel
CVE-2021-31956, an elevation of privilege flaw in Windows NTFS
CVE-2021-33739, an elevation of privilege flaw in the Microsoft Desktop Window Manager
CVE-2021-31201, an elevation of privilege flaw in the Microsoft Enhanced Cryptographic Provider
CVE-2021-31199, an elevation of privilege flaw in the Microsoft Enhanced Cryptographic Provider

Kevin Breen, director of cyber threat research at Immersive Labs, said elevation of privilege flaws are just as valuable to attackers as remote code execution bugs: Once the attacker has gained an initial foothold, he can move laterally across the network and uncover further ways to escalate to system or domain-level access.

“This can be hugely damaging in the event of ransomware attacks, where high privileges can enable the attackers to stop or destroy backups and other security tools,” Breen said. “The ‘exploit detected’ tag means attackers are actively using them, so for me, it’s the most important piece of information we need to prioritize the patches.”

Microsoft also patched five critical bugs — flaws that can be remotely exploited to seize control over the targeted Windows computer without any help from users. CVE-2021-31959 affects everything from Windows 7 through Windows 10 and Server versions 2008, 2012, 2016 and 2019.

Sharepoint also got a critical update in CVE-2021-31963; Microsoft says this one is less likely to be exploited, but then critical Sharepoint flaws are a favorite target of ransomware criminals.

Interestingly, two of the Windows zero-day flaws — CVE-2021-31201 and CVE-2021-31199 — are related to a patch Adobe released recently for CVE-2021-28550, a flaw in Adobe Acrobat and Reader that also is being actively exploited.

“Attackers have been seen exploiting these vulnerabilities by sending victims specially crafted PDFs, often attached in a phishing email, that when opened on the victim’s machine, the attacker is able to gain arbitrary code execution,” said Christopher Hass, director of information security and research at Automox. “There are no workarounds for these vulnerabilities, patching as soon as possible is highly recommended.”

In addition to updating Acrobat and Reader, Adobe patched flaws in a slew of other products today, including Adobe Connect, Photoshop, and Creative Cloud. The full list is here, with links to updates.

The usual disclaimer:

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates even have been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

For a quick visual breakdown of each update released today and its severity level, check out this Patch Tuesday post from the SANS Internet Storm Center.

Cryptogram Detecting Deepfake Picture Editing

“Markpainting” is a clever technique to watermark photos in such a way that makes it easier to detect ML-based manipulation:

An image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.

One application is tamper-resistant marks. For example, a photo agency that makes stock photos available on its website with copyright watermarks can markpaint them in such a way that anyone using common editing software to remove a watermark will fail; the copyright mark will be markpainted right back. So watermarks can be made a lot more robust.

Here’s the paper: “Markpainting: Adversarial Machine Learning Meets Inpainting,” by David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, and Ross Anderson.

Abstract: Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting.

Cryptogram Information Flows and Democracy

Henry Farrell and I published a paper on fixing American democracy: “Rechanneling Beliefs: How Information Flows Hinder or Help Democracy.”

It’s much easier for democratic stability to break down than most people realize, but this doesn’t mean we must despair over the future. It’s possible, though very difficult, to back away from our current situation towards one of greater democratic stability. This wouldn’t entail a restoration of a previous status quo. Instead, it would recognize that the status quo was less stable than it seemed, and a major source of the tensions that have started to unravel it. What we need is a dynamic stability, one that incorporates new forces into American democracy rather than trying to deny or quash them.

This paper is our attempt to explain what this might mean in practice. We start by analyzing the problem and explaining more precisely why a breakdown in public consensus harms democracy. We then look at how these beliefs are being undermined by three feedback loops, in which anti-democratic actions and anti-democratic beliefs feed on each other. Finally, we explain how these feedback loops might be redirected so as to sustain democracy rather than undermining it.

To be clear: redirecting these and other energies in more constructive ways presents enormous challenges, and any plausible success will at best be untidy and provisional. But, almost by definition, that’s true of any successful democratic reforms where people of different beliefs and values need to figure out how to coexist. Even when it’s working well, democracy is messy. Solutions to democratic breakdowns are going to be messy as well.

This is part of our series of papers looking at democracy as an information system. The first paper was “Common-Knowledge Attacks on Democracy.”

Planet DebianEnrico Zini: Mock syscalls with C++

I wrote and maintain some C++ code to stream high quantities of data as fast as possible, and I try to use splice and sendfile when available.

The availability of those system calls varies at runtime according to a number of factors, and the code needs to be written to fall back to read/write loops depending on what the splice and sendfile syscalls say.

The tricky issue is unit testing: since the code path chosen depends on the kernel, the test suite will test one path or the other depending on the machine and filesystems where the tests are run.

It would be nice to be able to mock the syscalls, and replace them during tests, and it looks like I managed.

First I made catalogues of the mockable syscalls I want to be able to mock. One with function pointers, for performance, and one with std::function, for flexibility:

/**
 * Linux versions of syscalls to use for concrete implementations.
 */
struct ConcreteLinuxBackend
{
    static ssize_t (*read)(int fd, void *buf, size_t count);
    static ssize_t (*write)(int fd, const void *buf, size_t count);
    static ssize_t (*writev)(int fd, const struct iovec *iov, int iovcnt);
    static ssize_t (*sendfile)(int out_fd, int in_fd, off_t *offset, size_t count);
    static ssize_t (*splice)(int fd_in, loff_t *off_in, int fd_out,
                             loff_t *off_out, size_t len, unsigned int flags);
    static int (*poll)(struct pollfd *fds, nfds_t nfds, int timeout);
    static ssize_t (*pread)(int fd, void *buf, size_t count, off_t offset);
};

/**
 * Mockable versions of syscalls to use for testing concrete implementations.
 */
struct ConcreteTestingBackend
{
    static std::function<ssize_t(int fd, void *buf, size_t count)> read;
    static std::function<ssize_t(int fd, const void *buf, size_t count)> write;
    static std::function<ssize_t(int fd, const struct iovec *iov, int iovcnt)> writev;
    static std::function<ssize_t(int out_fd, int in_fd, off_t *offset, size_t count)> sendfile;
    static std::function<ssize_t(int fd_in, loff_t *off_in, int fd_out,
                                 loff_t *off_out, size_t len, unsigned int flags)> splice;
    static std::function<int(struct pollfd *fds, nfds_t nfds, int timeout)> poll;
    static std::function<ssize_t(int fd, void *buf, size_t count, off_t offset)> pread;

    static void reset();
};

Then I converted the code to templates, parameterized on the catalogue class.

Explicit template instantiation helps in making sure that one doesn't need to include template code in all sorts of places.

Finally, I can have a RAII class for mocking:

/**
 * RAII mocking of syscalls for concrete stream implementations
 */
struct MockConcreteSyscalls
{
    std::function<ssize_t(int fd, void *buf, size_t count)> orig_read;
    std::function<ssize_t(int fd, const void *buf, size_t count)> orig_write;
    std::function<ssize_t(int fd, const struct iovec *iov, int iovcnt)> orig_writev;
    std::function<ssize_t(int out_fd, int in_fd, off_t *offset, size_t count)> orig_sendfile;
    std::function<ssize_t(int fd_in, loff_t *off_in, int fd_out,
                          loff_t *off_out, size_t len, unsigned int flags)> orig_splice;
    std::function<int(struct pollfd *fds, nfds_t nfds, int timeout)> orig_poll;
    std::function<ssize_t(int fd, void *buf, size_t count, off_t offset)> orig_pread;

    MockConcreteSyscalls();
    ~MockConcreteSyscalls();
};

MockConcreteSyscalls::MockConcreteSyscalls()
    : orig_read(ConcreteTestingBackend::read),
      orig_write(ConcreteTestingBackend::write),
      orig_writev(ConcreteTestingBackend::writev),
      orig_sendfile(ConcreteTestingBackend::sendfile),
      orig_splice(ConcreteTestingBackend::splice),
      orig_poll(ConcreteTestingBackend::poll),
      orig_pread(ConcreteTestingBackend::pread)
{
}

MockConcreteSyscalls::~MockConcreteSyscalls()
{
    ConcreteTestingBackend::read = orig_read;
    ConcreteTestingBackend::write = orig_write;
    ConcreteTestingBackend::writev = orig_writev;
    ConcreteTestingBackend::sendfile = orig_sendfile;
    ConcreteTestingBackend::splice = orig_splice;
    ConcreteTestingBackend::poll = orig_poll;
    ConcreteTestingBackend::pread = orig_pread;
}

And here's the specialization to pretend sendfile and splice aren't available:

/**
 * Mock sendfile and splice as if they weren't available on this system
 */
struct DisableSendfileSplice : public MockConcreteSyscalls
{
    DisableSendfileSplice()
    {
        ConcreteTestingBackend::sendfile = [](int out_fd, int in_fd, off_t *offset, size_t count) -> ssize_t {
            errno = EINVAL;
            return -1;
        };
        ConcreteTestingBackend::splice = [](int fd_in, loff_t *off_in, int fd_out,
                                            loff_t *off_out, size_t len, unsigned int flags) -> ssize_t {
            errno = EINVAL;
            return -1;
        };
    }
};
It's now also possible to reproduce in the test suite all sorts of system-related issues we might observe in production over time.
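The same pattern, swapping an indirection table during a test and restoring it on scope exit, is what monkeypatching context managers give you in dynamic languages; a Python sketch of the idea (illustrative only, the `backend` dict is invented and not tied to the C++ code above):

```python
import contextlib
import errno
import os

# Indirection table: production code would call syscalls through this
# dict instead of os directly, so tests can substitute failing fakes.
backend = {"sendfile": os.sendfile}

@contextlib.contextmanager
def disable_sendfile():
    """Pretend sendfile(2) is unavailable; restore the real call on exit."""
    def fake_sendfile(out_fd, in_fd, offset, count):
        raise OSError(errno.EINVAL, "sendfile pretended unavailable")
    saved = backend["sendfile"]
    backend["sendfile"] = fake_sendfile
    try:
        yield
    finally:
        backend["sendfile"] = saved
```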

Cryptogram Vulnerabilities in Weapons Systems

“If you think any of these systems are going to work as expected in wartime, you’re fooling yourself.”

That was Bruce’s response at a conference hosted by US Transportation Command in 2017, after learning that their computerized logistical systems were mostly unclassified and on the Internet. That may be necessary to keep in touch with civilian companies like FedEx in peacetime or when fighting terrorists or insurgents. But in a new era facing off with China or Russia, it is dangerously complacent.

Any twenty-first century war will include cyber operations. Weapons and support systems will be successfully attacked. Rifles and pistols won’t work properly. Drones will be hijacked midair. Boats won’t sail, or will be misdirected. Hospitals won’t function. Equipment and supplies will arrive late or not at all.

Our military systems are vulnerable. We need to face that reality by halting the purchase of insecure weapons and support systems and by incorporating the realities of offensive cyberattacks into our military planning.

Over the past decade, militaries have established cyber commands and developed cyberwar doctrine. However, much of the current discussion is about offense. Increasing our offensive capabilities without being able to secure them is like having all the best guns in the world, and then storing them in an unlocked, unguarded armory. They won’t just be stolen; they’ll be subverted.

During that same period, we’ve seen increasingly brazen cyberattacks by everyone from criminals to governments. Everything is now a computer, and those computers are vulnerable. Cars, medical devices, power plants, and fuel pipelines have all been targets. Military computers, whether they’re embedded inside weapons systems or on desktops managing the logistics of those weapons systems, are similarly vulnerable. We could see effects as stodgy as making a tank impossible to start up, or sophisticated as retargeting a missile midair.

Military software is unlikely to be any more secure than commercial software. Although sensitive military systems rely on domestically manufactured chips as part of the Trusted Foundry program, many military systems contain the same foreign chips and code that commercial systems do: just like everyone around the world uses the same mobile phones, networking equipment, and computer operating systems. For example, there has been serious concern over Chinese-made 5G networking equipment that might be used by China to install “backdoors” that would allow the equipment to be controlled. This is just one of many risks to our normal civilian computer supply chains. And since military software is vulnerable to the same cyberattacks as commercial software, military supply chains have many of the same risks.

This is not speculative. A 2018 GAO report expressed concern regarding the lack of secure and patchable US weapons systems. The report observed that “in operational testing, the [Department of Defense] routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic.” It’s a similar attitude to corporate executives who believe that they can’t be hacked — and equally naive.

An updated GAO report from earlier this year found some improvements, but the basic problem remained: “DOD is still learning how to contract for cybersecurity in weapon systems, and selected programs we reviewed have struggled to incorporate systems’ cybersecurity requirements into contracts.” While DOD now appears aware of the issue of lack of cybersecurity requirements, they’re still not sure yet how to fix it, and in three of the five cases GAO reviewed, DOD simply chose to not include the requirements at all.

Militaries around the world are now exploiting these vulnerabilities in weapons systems to carry out operations. When Israel in 2007 bombed a Syrian nuclear reactor, the raid was preceded by what is believed to have been a cyber attack on Syrian air defenses that resulted in radar screens showing no threat as bombers zoomed overhead. In 2018, a 29-country NATO exercise, Trident Juncture, that included cyberweapons was disrupted by Russian GPS jamming. NATO does try to test cyberweapons outside such exercises, but has limited scope in doing so. In May, Jens Stoltenberg, the NATO secretary-general, said that “NATO computer systems are facing almost daily cyberattacks.”

The war of the future will not only be about explosions, but will also be about disabling the systems that make armies run. It’s not (solely) that bases will get blown up; it’s that some bases will lose power, data, and communications. It’s not that self-driving trucks will suddenly go mad and begin rolling over friendly soldiers; it’s that they’ll casually roll off roads or into water where they sit, rusting, and in need of repair. It’s not that targeting systems on guns will be retargeted to 1600 Pennsylvania Avenue; it’s that many of them could simply turn off and not turn back on again.

So, how do we prepare for this next war? First, militaries need to introduce a little anarchy into their planning. Let’s have wargames where essential systems malfunction or are subverted­not all of the time, but randomly. To help combat siloed military thinking, include some civilians as well. Allow their ideas into the room when predicting potential enemy action. And militaries need to have well-developed backup plans, for when systems are subverted. In Joe Haldeman’s 1975 science-fiction novel The Forever War, he postulated a “stasis field” that forced his space marines to rely on nothing more than Roman military technologies, like javelins. We should be thinking in the same direction.

NATO isn’t yet allowing civilians not employed by NATO or associated military contractors access to their training cyber ranges where vulnerabilities could be discovered and remediated before battlefield deployment. Last year, one of us (Tarah) was listening to a NATO briefing after the end of the 2020 Cyber Coalition exercises, and asked how she and other information security researchers could volunteer to test cyber ranges used to train its cyber incident response force. She was told that including civilians would be a “welcome thought experiment in the tabletop exercises,” but including them in reality wasn’t considered. There is a rich opportunity for improvement here, providing transparency into where improvements could be made.

Second, it’s time to take cybersecurity seriously in military procurement, from weapons systems to logistics and communications contracts. In the three year span from the original 2018 GAO report to this year’s report, cybersecurity audit compliance went from 0% to 40% (those 2 of 5 programs mentioned earlier). We need to get much better. DOD requires that its contractors and suppliers follow the Cybersecurity Maturity Model Certification process; it should abide by the same standards. Making those standards both more rigorous and mandatory would be an obvious second step.

Gone are the days when we can pretend that our technologies will work in the face of a military cyberattack. Securing our systems will make everything we buy more expensive — maybe a lot more expensive. But the alternative is no longer viable.

The future of war is cyberwar. If your weapons and systems aren’t secure, don’t even bother bringing them onto the battlefield.

This essay was written with Tarah Wheeler, and previously appeared in Brookings TechStream.

Planet DebianJonathan Dowland: LaTeX draft documents

I'm writing up a PhD deliverable (which will show up here eventually) using LaTeX, which is my preferred tool for such things, since I also use it for papers, and will eventually be using it for my thesis itself. For this last document, I experimented with a few packages and techniques for organising the document which I found useful, so I thought I'd share them.

What version is this anyway?

I habitually store and develop my documents in git repositories. From time to time I generate a PDF and copy it off elsewhere to review (e.g., an iPad). Later on it can be useful to be able to figure out exactly what source built the PDF. I achieve this using

\newcommand{\version}{\input|"git describe --always --dirty"}

And \version\ somewhere in the header of the document.
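To surface it on every page, one option is a fancyhdr footer (fancyhdr is my choice here, not necessarily the author's; also note that the piped \input requires running LaTeX with shell escape enabled):

```latex
\usepackage{fancyhdr}
\pagestyle{fancy}
% Stamp the git-describe output on every page
\fancyfoot[R]{\texttt{\version}}
```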

Draft mode

The common document classes all accept a draft argument, to enable Draft Mode.
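For example (the class and other options here are illustrative):

```latex
% "draft" switches the whole document into Draft Mode
\documentclass[a4paper,11pt,draft]{article}
```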


Various other packages behave differently if Draft Mode is enabled. The graphicx package, for example, doesn't actually draw pictures in draft mode, which I don't find useful. So for that package, I force it to behave as if we were in "Final Mode" at all times:
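A sketch of that override:

```latex
% graphicx's "final" option overrides the document-class "draft"
\usepackage[final]{graphicx}
```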


I want to also include some different bits and pieces in Draft Mode. Although the final version won't need it, I find having a Table of Contents very helpful during the writing process. The ifdraft package adds a convenience macro to query whether we are in draft or not. I use it like so:

This page will be cut from the final report.
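Spelled out, the conditional producing that draft-only page might look like this (a sketch; the centering and emphasis are my assumptions):

```latex
\usepackage{ifdraft}
% Draft mode only: a ToC page that will be cut from the final report
\ifdraft{%
  \begin{center}
    \emph{This page will be cut from the final report.}
  \end{center}
  \tableofcontents
  \clearpage
}{}
```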

For this document, I have been given the section headings I must use and the number of pages each section must run to. When drafting, I want to include the page budget in the section names (e.g. Background (2 pages)). I also force new pages at the beginning of each Section, to make it easier to see how close I am to each section's page budget.

\section{Work completed\ifdraft{ (1 page)}{}} % 1 Page
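The forced page break can use the same ifdraft macro, for example (a sketch with a hypothetical section name):

```latex
% Start each section on a fresh page while drafting
\ifdraft{\clearpage}{}
\section{Background\ifdraft{ (2 pages)}{}} % 2 pages
```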


Two TODO items in the margin


Collated TODOs in a list


The todonotes package is one of many that offers macros to make managing in-line TODO notes easier. Within the source of my document, I can add a TODO right next to the relevant text with \todo{something to do}. In the document, by default, this is rendered in the right-hand margin. With the right argument, the package will only render the notes in draft mode.
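The relevant option is obeyDraft, so the package setup might look like this (a sketch):

```latex
% With "obeyDraft", notes render only when the class is in draft mode
\usepackage[obeyDraft]{todonotes}
% ...then, next to the relevant text:
\todo{something to do}
```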


todonotes can also collate all the TODOs together into a single list. The list items are hyperlinked back to the page where the relevant item appears.
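That list comes from a single macro (the optional title here is my choice):

```latex
% Collate all TODO notes into one hyperlinked list
\listoftodos[Outstanding TODOs]
```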


Planet DebianPavit Kaur: GSoC: About my Project and Community Bonding Period


To start writing about updates regarding my GSoC project, the first obvious thing I need to do is to explain what my project really is. So let’s get started.

About my project

What is debci?

Directly stating from the official docs:

The Debian continuous integration (debci) is an automated system that coordinates the execution of automated tests against packages in the Debian system.

Let’s try decoding it:

Debian is a huge system with thousands of packages, and within these packages exist inter-package dependencies. So whenever a package is updated, it is important to test that the package itself still works correctly, but it is equally important to test that all the packages which depend on it keep working too.

Debci is a platform serving this purpose of automated testing for the entire Debian archive whenever a new version of a package, or of any package in its dependency chain, becomes available. It comes with a UI that lets developers easily run tests and see whether they pass or not.

For my GSoC project, I am working to implement some incremental improvements to debci making it easier to use and maintain.

Community Bonding Period

The debci community

Everyone I have come across so far in the community is very nice. The debci community is small but active, and it really feels good to be a part of the conversations here.

Weekly call set up

I have two mentors for this project, Antonio Terceiro and Paul Gevers, and they have set up a weekly sync call with me in which I share my updates on the work done in the past week and any issues I am facing, and we discuss the work for the next week. In addition to this, I can always contact them on IRC about any issue I am stuck on.

Work till now

The first thing I did in the community bonding period was setting up this blog. I have wanted one for a long time, and this seemed like a really nice opportunity to start. The fact that it has been added to Planet Debian makes me even happier to write. I am still trying to get the hang of this and definitely need to learn how to spend less time writing.

I also worked on my already opened merge requests during this period and got them merged.

Since I am already familiar with the codebase, I started on my first deliverable a bit before the official coding period began: migrating the logins to Salsa, Debian's GitLab instance.

Currently, debci uses Debian SSO client certificates for logging in, but that service is deprecated, so it needs to be migrated to use Salsa as an identity provider.

The OmniAuth library is being used to implement this with help of ruby-omniauth-gitlab strategy. I explored a great deal about integrating OmniAuth with our application and bumped into so many issues too when I began implementing that. Once I am done integrating the Salsa Authentication with debci, I am planning to write a separate tutorial on that which could be helpful to other people using OmniAuth with their application.
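As a rough sketch of what that integration looks like (the environment variable names and callback plumbing below are my assumptions, not debci's actual configuration):

```ruby
# config.ru -- minimal sketch of wiring OmniAuth's GitLab strategy
# against Salsa (a self-hosted GitLab instance).
require 'omniauth'
require 'omniauth-gitlab'

use OmniAuth::Builder do
  # Application ID and secret come from registering the app on Salsa
  provider :gitlab,
           ENV['SALSA_APP_ID'],
           ENV['SALSA_APP_SECRET'],
           client_options: { site: 'https://salsa.debian.org/api/v4' }
end
```

With this in place, visiting /auth/gitlab starts the OAuth dance, and the strategy hands the application an auth hash on the callback.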

With that, the community bonding period has ended on 7th June and the coding period officially begins and for now, I will be continuing working on my first deliverable.

That’s all for now. See you next time!

Planet DebianNorbert Preining: KDE/Plasma 5.22 for Debian

Today, KDE released version 5.22 of the Plasma desktop with the usual long list of updates and improvements. And packages for Debian are ready for consumption! Ah and yes, KDE Gear 21.04 is also ready!

As usual, I am providing packages via my OBS builds. If you have used my packages till now, then you only need to change the plasma521 line to read plasma522. Just for your convenience, if you want the full set of stuff, here are the apt-source entries:

deb ./
deb ./
deb ./
deb ./
deb ./

and for testing the same with Debian_unstable replaced with Debian_Testing. As usual, don’t forget that you need to import my OBS gpg key to make these repos work!

The sharp eye might have detected also the apps2104 line, yes the newly renamed KDE Gear suite of packages is also available in my OBS builds (and in Debian/experimental).

Uploads to Debian

Currently, the frameworks and most of the KDE Gear (Apps) 21.04 are in Debian/experimental. I will upload Plasma 5.22 to experimental as soon as NEW processing of two packages is finished (which might take another few months).

After the release of bullseye, all the current versions will be uploaded to unstable as usual.

Enjoy the new Plasma!

Planet DebianBits from Debian: Registration for DebConf21 Online is Open

DebConf21 banner

The DebConf team is glad to announce that registration for DebConf21 Online is now open.

The 21st Debian Conference is being held Online, due to COVID-19, from August 22 to August 29, 2021. It will also sport a DebCamp from August 15 to August 21, 2021 (preceding the DebConf).

To register for DebConf21, please visit the DebConf website at

Reminder: Creating an account on the site does not register you for the conference, there's a conference registration form to complete after signing in.

Participation in DebConf21 is conditional on your respect of our Code of Conduct. We require you to read, understand and abide by this code.

A few notes about the registration process:

  • We need to know attendees' locations to better plan the schedule around timezones. Please make sure you fill in the "Country I call home" field in the registration form accordingly. It's especially important to have this data for people who submitted talks, but also for other attendees.

  • We are offering limited amounts of financial support for those who require it in order to attend. Please refer to the corresponding page on the website for more information.

Any questions about registration should be addressed to

See you online!

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak, and our Gold Sponsor Matanel Foundation.

Worse Than FailureCodeSOD: Contractor Management System

Maximillion was hired to maintain a PHP CMS. His new employer had, up to this point, just been contracting out the work, but as time went on, contracting rates got higher and the amount of time required to add basic features kept going up. It was time to hire a full-time employee.

"The system's pretty simple," the hiring manager explained during the interview, "as I understand it, it's basically just one PHP file."

It was not just one PHP file, but the manager wasn't that far off. The front page of the site was 4.9MB. That's not 4.9MB rendered, that's the PHP file. PHP and HTML existed side by side with in-line JavaScript and in-line CSS.

And the code had some… treats.

//strip content of extra tinyMCE <p> tags
$offset=0;
$content = " ".$content;
while(($pos = @stripos($content,'<p>&nbsp;</p>',$offset))) {
    $content = substr($content,0,$pos).substr($content,$pos+13);
    $offset = $pos+13;
}
// doing this twice seems to get all of them out;
$offset=0;
$content = " ".$content;
while(($pos = @stripos($content,'<p>&nbsp;</p>',$offset))) {
    $content = substr($content,0,$pos).substr($content,$pos+13);
    $offset = $pos+13;
}

"Doing this twice seems to get all of them out". I get that TinyMCE, a WYSIWYG editor, might inject some noise into the generated HTML, and that you might want to clean out those <p>&nbsp;</p> tags, but why twice? The comment implies it's necessary, and it is: each time the loop cuts a tag out, it advances $offset past the cut point, so an adjacent tag that just shifted into that position gets skipped. A second pass picks up most of the stragglers, though four tags in a row would still leave one behind. A single str_replace would have removed every occurrence in one pass.
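The skipped-tag behavior is easy to reproduce. Here is a quick simulation of the loop's logic, written in Python purely for illustration (this mirrors the PHP; it is not the original code):

```python
# Simulation of the PHP loop's skipping behavior.
def strip_once(content: str, tag: str = '<p>&nbsp;</p>') -> str:
    """One pass of the loop: find a tag, cut it out, skip past the cut."""
    offset = 0
    content = " " + content  # the original's guard against a match at index 0
    while (pos := content.find(tag, offset)) > 0:
        content = content[:pos] + content[pos + len(tag):]
        offset = pos + len(tag)  # an adjacent tag shifted into pos is skipped
    return content

html = "x" + '<p>&nbsp;</p>' * 4 + "y"
print(strip_once(html))              # one pass: two of the four tags survive
print(strip_once(strip_once(html)))  # two passes: one tag still survives
```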

Of course, no giant PHP file would be complete without reimplementing date logic.

public function time_from($timestamp){
    $difference = time() - $timestamp;
    $periods = array("sec", "min", "hour", "day", "week", "month", "years", "decade");
    $lengths = array("60","60","24","7","4.35","12","10");
    if ($difference > 0) { // this was in the past
        $ending = "ago";
    } else { // this was in the future
        $difference = -$difference;
        $ending = "to go";
    }
    for($j = 0; $difference >= $lengths[$j]; $j++)
        $difference /= $lengths[$j];
    $difference = round($difference);
    if($difference != 1)
        $periods[$j] .= "s";
    $text = "$difference $periods[$j] $ending";
    return $text;
}

Honestly, and it terrifies me to say this, while I've got a lot of problems with this function, it doesn't bother me as much as usual. Since the goal is to just sorta ballpark with "4 weeks ago," all the bad date assumptions don't matter that much. I still hate it.

And while the contractors clearly understood what functions were, that doesn't mean all of the contractors did. For example, this block:

$offset = 0;
while($pos = @stripos($string,'src=',$offset)) {
    $first_quote = @stripos($string,'"',$pos+3);
    $second_quote = @stripos($string,'"',$first_quote+2);
    $first_3 = strtolower(substr($string,$first_quote+1,3));
    if($first_3=='www'||$first_3=='htt'||$first_3=='mai') {
        if($first_3=='www')
            $string = $core->run_action('pages','str_insert',array('http://',$string,$first_quote+1));
    } else {
        $string = $core->run_action('pages','str_insert',array($url_fix,$string,$first_quote+1));
    }
    $offset = $second_quote+1;
}

That just gets copy-pasted any time they need to do any URL handling. What, do you think you get to 4.9MB of source code by writing reusable code? You gotta copy/paste to get those numbers up!

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityJustice Dept. Claws Back $2.3M Paid by Colonial Pipeline to Ransomware Gang

The U.S. Department of Justice said today it has recovered $2.3 million worth of Bitcoin that Colonial Pipeline paid to ransomware extortionists last month. The funds had been sent to DarkSide, a ransomware-as-a-service syndicate that disbanded after a May 14 farewell message to affiliates saying its Internet servers and cryptocurrency stash were seized by unknown law enforcement entities.

On May 7, the DarkSide ransomware gang sprang its attack against Colonial, which ultimately paid 75 Bitcoin (~$4.4 million) to its tormentors. The company said the attackers only hit its business IT networks — not its pipeline security and safety systems — but that it shut the pipeline down anyway as a precaution [several publications noted Colonial shut down its pipeline because its billing system was impacted, and it had no way to get paid].

On or around May 14, the DarkSide representative on several Russian-language cybercrime forums posted a message saying the group was calling it quits.

“Servers were seized, money of advertisers and founders was transferred to an unknown account,” read the farewell message. “Hosting support, apart from information ‘at the request of law enforcement agencies,’ does not provide any other information.”

A message from the DarkSide and REvil ransomware-as-a-service cybercrime affiliate programs.

Many security experts said they suspected DarkSide was just laying low for a while thanks to the heat from the Colonial attack, and that the group would re-emerge under a new banner in the coming months. And while that may be true, the seizure announced today by the DOJ certainly supports the DarkSide administrator’s claims that their closure was involuntary.

Security firms have suspected for months that the DarkSide gang shares some leadership with that of REvil, a.k.a. Sodinokibi, another ransomware-as-a-service platform that closed up shop in 2019 after bragging that it had extorted more than $2 billion from victims. That suspicion was solidified further when the REvil administrator added his comments to the announcement about DarkSide’s closure (see screenshot above).

First surfacing on Russian language hacking forums in August 2020, DarkSide is a ransomware-as-a-service platform that vetted cybercriminals can use to infect companies with ransomware and carry out negotiations and payments with victims. DarkSide says it targets only big companies, and forbids affiliates from dropping ransomware on organizations in several industries, including healthcare, funeral services, education, public sector and non-profits.

According to an analysis published May 18 by cryptocurrency security firm Elliptic, 47 cybercrime victims paid DarkSide a total of $90 million in Bitcoin, putting the average ransom payment of DarkSide victims at just shy of $2 million.


The DOJ’s announcement left open the question of how exactly it was able to recover a portion of the payment made by Colonial, which shut down its Houston to New England fuel pipeline for a week and prompted long lines, price hikes and gas shortages at filling stations across the nation.

The DOJ said law enforcement was able to track multiple transfers of bitcoin and identify that approximately 63.7 bitcoins (~$3.77 million on May 8), “representing the proceeds of the victim’s ransom payment, had been transferred to a specific address, for which the FBI has the ‘private key,’ or the rough equivalent of a password needed to access assets accessible from the specific Bitcoin address.”

A passage from the DOJ’s press release today.

How it came to have that private key is the key question. Nicholas Weaver, a lecturer at the computer science department at University of California, Berkeley, said the most likely explanation is that law enforcement agents seized money from a specific DarkSide affiliate responsible for bringing the crime gang the initial access to Colonial’s systems.

“The ‘obtained the private key’ part of their statement is doing a lot of work,” Weaver said, pointing out that the amount the FBI recovered was less than the full amount Colonial paid.

“It is ONLY the Colonial Pipeline ransom, and it looks to be only the affiliate’s take.”

Experts at Elliptic came to the same conclusion.

“Any ransom payment made by a victim is then split between the affiliate and the developer,” writes Elliptic’s co-founder Tom Robinson. “In the case of the Colonial Pipeline ransom payment, 85% (63.75 BTC) went to the affiliate and 15% went to the DarkSide developer.”

The Biden administration is under increasing pressure to do something about the epidemic of ransomware attacks. In conjunction with today’s action, the DOJ called attention to the wins of its Ransomware and Digital Extortion Task Force, which have included successful prosecutions of crooks behind such threats as the Netwalker and SamSam ransomware strains.

The DOJ also released a June 3 memo from Deputy Attorney General Lisa O. Monaco instructing all federal prosecutors to observe new guidelines that seek to centralize reporting about ransomware victims.

Having a central place for law enforcement and intelligence agencies to gather and act on ransomware threats was one of the key recommendations of a ransomware task force being led by some of the world’s top tech firms. In an 81-page report, the industry led task force called for an international coalition to combat ransomware criminals, and for a global network of investigation hubs. Their recommendations focus mainly on disrupting cybercriminal ransomware gangs by limiting their ability to get paid, and targeting the individuals and finances of the organized thieves behind these crimes.

Cory DoctorowI Quit

This week on my podcast, my latest Medium column, I Quit: Peak indifference, big tobacco, disinformation and death, on the connection between smoking cessation, monopoly, corruption, the climate emergency, and the denial epidemic.


Krebs on SecurityAdventures in Contacting the Russian FSB

KrebsOnSecurity recently had occasion to contact the Russian Federal Security Service (FSB), the Russian equivalent of the U.S. Federal Bureau of Investigation (FBI). In the process of doing so, I encountered a small snag: The FSB’s website said in order to communicate with them securely, I needed to download and install an encryption and virtual private networking (VPN) appliance that is flagged by at least 20 antivirus products as malware.

The FSB headquarters at Lubyanka Square, Moscow. Image: Wikipedia.

The reason I contacted the FSB — one of the successor agencies to the Russian KGB — ironically enough had to do with security concerns raised by an infamous Russian hacker about the FSB’s own preferred method of being contacted.

KrebsOnSecurity was seeking comment from the FSB about a blog post published by Vladislav “BadB” Horohorin, a former international stolen credit card trafficker who served seven years in U.S. federal prison for his role in the theft of $9 million from RBS WorldPay in 2009. Horohorin, a citizen of Russia, Israel and Ukraine, is now back where he grew up in Ukraine, running a cybersecurity consulting business.

Horohorin’s BadB carding store, badb[.]biz, circa 2007. Image:

Visit the FSB’s website and you might notice its web address starts with http:// instead of https://, meaning the site is not using an encryption certificate. In practical terms, any information shared between the visitor and the website is sent in plain text and will be visible to anyone who has access to that traffic.

This appears to be the case regardless of which Russian government site you visit. According to Russian search giant Yandex, the laws of the Russian Federation demand that encrypted connections be installed according to the Russian GOST cryptographic algorithm.

That means those who have a reason to send encrypted communications to a Russian government organization — including ordinary things like making a payment for a government license or fine, or filing legal documents — need to first install CryptoPro, a Windows-only application that loads the GOST encryption libraries on a user’s computer.

But if you want to talk directly to the FSB over an encrypted connection, you can just install their own client, which bundles the CryptoPro code. Visit the FSB’s site and select the option to “transfer meaningful information to operational units,” and you’ll see a prompt to install a “random number generation” application that is needed before a specific contact form on the FSB’s website will load properly.

Mind you, I’m not suggesting anyone go do that: Horohorin pointed out that this random number generator was flagged by 20 different antivirus and security products as malicious.

“Think well before contacting the FSB for any questions or dealing with them, and if you nevertheless decide to do this, it is better to use a virtual machine,” Horohorin wrote. “And a spacesuit. And, preferably, while in another country.”

Antivirus product detections on the FSB’s VPN software. Image: VirusTotal.

It’s probably worth mentioning that the FSB is the same agency that’s been sanctioned for malicious cyber activity by the U.S. government on multiple occasions over the past five years. According to the most recent sanctions by the U.S. Treasury Department, the FSB is known for recruiting criminal hackers from underground forums and offering them legal cover for their actions.

“To bolster its malicious cyber operations, the FSB cultivates and co-opts criminal hackers, including the previously designated Evil Corp., enabling them to engage in disruptive ransomware attacks and phishing campaigns,” reads a Treasury assessment from April 2021.

While Horohorin seems convinced the FSB is disseminating malware, it is not unusual for a large number of security tools used by VirusTotal or other similar malware “sandbox” services to incorrectly flag safe files as bad or suspicious — an all-too-common condition known as a “false positive.”

Late last year I warned my followers on Twitter to put off installing updates for their Dell products until the company could explain why a bunch of its software drivers were being detected as malware by two dozen antivirus tools. Those all turned out to be false positives.

To really figure out what this FSB software was doing, I turned to Lance James, the founder of Unit221B, a New York City based cybersecurity firm. James said each download request generates a new executable program. That is because the uniqueness of the file itself is part of what makes the one-to-one encrypted connection possible.

“Essentially it is like a temporary, one-time-use VPN, using a separate key for each download,” James said. “The executable is the handshake with you to exchange keys, as it stores the key for that session in the exe. It’s a terrible approach. But it’s what it is.”

James said the FSB’s program does not appear to be malware, at least in terms of the actions it takes on a user’s computer.

“There’s no sign of actual trojan activity here except the fact it self deletes,” James said. “It uses GOST encryption, and [the antivirus products] may be thinking that those properties look like ransomware.”

James says he suspects the antivirus false-positives were triggered by certain behaviors which could be construed as malware-like. The screenshot below — from VirusTotal — says some of the file’s contents align with detection rules made to find instances of ransomware.

Some of the malware detection rules triggered by the FSB’s software. Source: VirusTotal.

Other detection rules tripped by this file include program routines that erase event logs from the user’s system — a behavior often seen in malware that is trying to hide its tracks.

On a hunch that just including the GOST encryption routine in a test program might be enough to trigger false positives in VirusTotal, James wrote and compiled a short program in C++ that invoked the GOST cipher but otherwise had no networking components. He then uploaded the file for scanning at VirusTotal.

Even though James’ test program did nothing untoward or malicious, it was flagged by six antivirus engines as potentially hostile. Symantec’s machine learning engine seemed particularly certain that James’ file might be bad, awarding it the threat name “ML.Attribute.HighConfidence” — the same designation it assigned to the FSB’s program.

KrebsOnSecurity installed the FSB’s software on a test computer using a separate VPN, and straight away it connected to an Internet address currently assigned to the FSB (

The program prompted me to click on various parts of the screen to generate randomness for an encryption key, and when that was done it left a small window which explained in Russian that the connection was established and that I should visit a specific link on the FSB’s site.

The FSB’s random number generator in action.

Doing so opened up a page where I could leave a message for the FSB. I asked them if they had any response to their program being broadly flagged as malware.

The contact form that ultimately appeared after installing the FSB’s software and clicking a specific link at fsb[.]ru.

After all the effort, I’m disappointed to report that I have not yet received a reply. Nor did I hear back from S-Terra CSP, the company that makes the VPN software offered by the FSB.

James said that given their position, he could see why many antivirus products might think it’s malware.

“Since they won’t use our crypto and we won’t use theirs,” James said. “It’s a great explanation on political weirdness with crypto.”

Still, James said, a number of things just don’t make sense about the way the FSB has chosen to deploy its one-time VPN software.

“The way they have set this up to suddenly trust a dynamically changing exe is still very concerning. Also, why would you send me a 256 random number generator seed in an exe when the computer has a perfectly valid and tested random number generator built in? You’re sending an exe to me with a key you decide over a non-secure environment. Why the fuck if you’re a top intelligence agency would you do that?”

Why indeed. I wonder how many people would share information about federal crimes with the FBI if the agency required everyone to install an executable file first — to say nothing of one that looks a lot like ransomware to antivirus firms?

After doing this research, I learned the FSB recently launched a website that is only reachable via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Unlike the FSB’s clear web site, the agency’s Tor site does not ask visitors to download some dodgy software before contacting them.

“The application is running for a limited time to ensure your safety,” the instructions for the FSB’s random number generator assure, with just a gentle nudge of urgency. “Do not forget to close the application when finished.”

Yes, don’t forget that. Also, do not forget to incinerate your computer when finished.

Planet DebianMike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 05)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has now become Lomiri.

Recent Uploads to Debian related to Lomiri (Feb - May 2021)

Over the past 4 months I attended 14 of the weekly scheduled UBports development sync sessions and worked on the following bits and pieces regarding Lomiri in Debian:

  • Bundle upload to Debian of all Ayatana Indicators packages, including lomiri-url-dispatcher
  • Upload to Debian unstable: ayatana-indicator-session 0.8.2-1 (fix parallel build problem)
  • lomiri-ui-toolkit: Consulting on non-free font usage and other issues with upstream side of the Ubuntu Touch's UI Toolkit
  • Upload to Debian unstable: wlcs 1.2.1-1
  • qtmir update: 0.6.1-6 (fixing qml-demo-shell)
  • Upload to Debian unstable: qtmir 0.6.1-7
  • Upload to Debian unstable as NEW: deviceinfo 0.1.0-1
  • Upload to Debian unstable: mir 1.8.0+dfsg1-16 (fixing #985503)
  • Discuss with upstream how to handle hard-coded /opt/ path in src:pkg click
  • File upstream MR:
  • Upload to Debian experimental as NEW: click 0.5.0-1
  • Upload to Debian experimental as NEW: libusermetrics 1.2.0-1
  • Port libusermetrics over to pkgkde-symbolshelper
  • Upload to Debian experimental as NEW: hfd-service 0.1.0-1 (This included upstream development: upstart to systemd conversion, a D-Bus security fix, etc.)
  • Upload to Debian experimental as NEW: libayatana-common 0.9.1-1
  • Upload to Debian experimental: deviceinfo (0.1.0-2): Update .symbols file for non-amd64 Debian architectures
  • Upload to Debian experimental: ayatana-indicator-keyboard 0.7.901-1
  • Test-Build lomiri-ui-toolkit (after several Qt5.15 fixes on upstream's side); exclude broken unit tests from building lomiri-ui-toolkit as recommended by upstream
  • Upstream MR (libusermetrics): Amend URL in Lomiri upstream path (
  • Upstream bug hunting (lomiri-ui-toolkit, unit test failures in unit/components/tst_haptics.qml)
  • Prepare content-hub packaging for initial build tests. However, this requires lomiri-ui-toolkit to be packaged first
  • Upstream MRs related to lomiri-ui-toolkit: (grammar fixes) (fix misspelled property name) (fix misspelled word in CONTEXT_TRACE() call) (drop license test) (app-launch-profile: Use lomiri namespace in binary file names)
  • Upload to Debian experimental: lomiri-ui-toolkit 1.3.3000+dfsg1-1
  • Upload to Debian unstable: lomiri-download-manager 0.1.0-8 (Fix wrong package dependency, #988808)
  • Upload to Debian unstable: lomiri-app-launch 0.0.90-8 (Update .symbols file for alpha and hppa archs)
  • Upload to Debian experimental: hfd-service 0.1.0-2 (systemd build-requirement fix)
  • Upload to Debian experimental: deviceinfo 0.1.0-3 (Update .symbols for s390x arch)
  • Prepare for upload to Debian: content-hub 1.0.0
  • For content-hub to be buildable, we needed another packaging fix for lomiri-ui-toolkit which received an...
  • Upload to Debian experimental AS NEW (2): lomiri-ui-toolkit 1.3.3000+dfsg-2 (font package dependency fix, but also a d/copyright update with correct artwork license)

The largest amount of work (and time) went into getting lomiri-ui-toolkit ready for upload. That code component is an absolutely massive beast, deeply intertwined with Qt5 (and unit tests fail with every new warning a new Qt5.x introduces). This bit of work I couldn't do alone (see below in the "Credits" section).

The next projects / packages ahead are some smaller packages (content-hub, gmenuharness, etc.) before we will finally come to lomiri (i.e. main bit of the Lomiri Operating Environment) itself.


Credits

Many big thanks go to everyone on the UBports project, but especially to Ratchanan Srirattanamet who lived inside of lomiri-ui-toolkit for more than two weeks, it seemed.

Also, thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Packaging Status

The current packaging status of Lomiri related packages in Debian can be viewed at:

Mike Gabriel (aka sunweaver)

Planet DebianRussell Coker: Dell PowerEdge T320 and Linux

I recently bought a couple of PowerEdge T320 servers, so now to learn about setting them up. They are a little newer than the R710 I recently setup (which had iDRAC version 6), they have iDRAC version 7.

RAM Speed

One system has an E5-2440 CPU with 2*16G DDR3 DIMMs and a Memtest86+ speed of 13,043MB/s; the other is essentially identical but with an E5-2430 CPU and 4*16G DDR3 DIMMs and a Memtest86+ speed of 8,270MB/s. I had expected that more DIMMs would mean better RAM performance, but that isn’t what happened. I first upgraded the BIOS; as expected it didn’t make a difference, but it’s a good thing to try first.

On the E5-2430 I tried removing a DIMM after it was pointed out on Facebook that the CPU has 3 memory channels (here’s a link to a great site with information on that CPU and many others [1]). When I did that I was prompted to disable advanced ECC (which treats pairs of DIMMs as a single unit for ECC, allowing correction of more than 1-bit errors) and I had to move the 3 remaining DIMMs to different slots. That improved the performance to 13,497MB/s. I then put the spare DIMM into the E5-2440 system and the performance increased to 13,793MB/s; when I installed 4 DIMMs in the E5-2440 system the performance remained at 13,793MB/s and the E5-2430 went down to 12,643MB/s.
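For context, a back-of-the-envelope calculation shows why populating all 3 channels matters. This sketch assumes DDR3-1333 (which I believe is the fastest speed the E5-2400 series officially supports); each 64-bit channel moves 8 bytes per transfer. The Memtest86+ figures above (~13,000MB/s) are single-threaded measurements, so they fall well short of the theoretical aggregate:

```shell
#!/bin/sh
# Back-of-the-envelope peak memory bandwidth. DDR3-1333 is an assumption;
# each 64-bit channel transfers 8 bytes per memory transfer.
MTS=1333                        # mega-transfers per second (DDR3-1333)
BUS_BYTES=8                     # 64-bit channel width in bytes
CHANNELS=3                      # memory channels on the E5-2400 series
echo "per channel:  $((MTS * BUS_BYTES)) MB/s"             # 10664
echo "all channels: $((MTS * BUS_BYTES * CHANNELS)) MB/s"  # 31992
```

So a single unpopulated (or badly paired) channel costs roughly a third of the theoretical bandwidth, which is consistent with the differences measured above.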

This is a good result for me: I now have the most RAM and the fastest RAM configuration in the system with the fastest CPU. I’ll sell the other one to someone who doesn’t need so much RAM or performance (it will be really good for a small office mail server and NAS).

Firmware Update


The first issue is updating the BIOS; unfortunately the first link I found to the Dell web site didn’t have a link to download the Linux installer. It offered a Windows binary, an EFI program, and a DOS binary. I’m not about to install Windows if there is any other option, and EFI is somewhat annoying, so that leaves DOS. The first Google result for installing FreeDOS advised using “unetbootin”; that didn’t work at all for me (it created a USB image that the Dell BIOS didn’t recognise as bootable) and even if it had it wouldn’t have been a good solution.

I went to the FreeDOS download page [2] and got the “Lite USB” zip file. That contained “FD12LITE.img” which I could just dd to a USB stick. I then used fdisk to create a second 32MB partition, used mkfs.fat to format it, and then copied the BIOS image file to it. I booted the USB stick and then ran the BIOS update program from drive D:. After the BIOS update this became the first system I’ve seen get a totally green result from “spectre-meltdown-checker“!
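The steps above can be sketched as a short script. The device name /dev/sdX and the BIOS filename are placeholders (check dmesg for the real device before writing anything!); RUN=echo keeps this a harmless dry run:

```shell
#!/bin/sh
# Sketch of the FreeDOS USB stick preparation described above.
# /dev/sdX and BIOS_UPDATE.exe are placeholders. With RUN=echo this only
# prints the commands; unset RUN to actually write to the stick.
set -e
RUN=echo
DEV=/dev/sdX
$RUN dd if=FD12LITE.img of="$DEV" bs=4M conv=fsync
# in fdisk: n (new), p (primary), 2, default start, +32M, then w (write)
$RUN fdisk "$DEV"
$RUN mkfs.fat "${DEV}2"
$RUN mount "${DEV}2" /mnt
$RUN cp BIOS_UPDATE.exe /mnt/   # whatever Dell named the DOS binary
$RUN umount /mnt
```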

I found the link to the Linux installer for the new Dell BIOS afterwards, but it was still good to play with FreeDOS.

PERC Driver

I probably didn’t really need to update the PERC (PowerEdge RAID Controller) firmware as I’m just going to run it in JBOD mode. But it was easy to do: a simple bash shell script updates it.

Here are the perccli commands needed to access disks, it’s all hot-plug so you can insert disks and do all this without a reboot:

# show overview
perccli show
# show controller 0 details
perccli /c0 show all
# show controller 0 info with less detail
perccli /c0 show
# clear all "foreign" RAID members
perccli /c0 /fall delete
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

The “perccli /c0 show” command gives the following summary of disk (“PD” in perccli terminology) information, amongst other things. The EID is the enclosure, Slt is the “slot” (i.e. the bay you plug the disk into), and the DID is the disk identifier (I’m not sure what happens if you have multiple enclosures). The allocation of device names (sda, sdb, etc) is in order of EID:Slt or DID at boot time, and any drives added at run time get the next letters available.

EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                     Sp 
32:0      0 Onln   0  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:1      1 Onln   1  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:3      3 Onln   2   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
32:4      4 Onln   3   3.637 TB SATA HDD N   N  512B WDC WD40EURX-64WRWY0      U  
32:5      5 Onln   5 278.875 GB SAS  HDD Y   N  512B ST300MM0026               U  
32:6      6 Onln   6 558.375 GB SAS  HDD N   N  512B AL13SXL600N               U  
32:7      7 Onln   4   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
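The device-name mapping described above can be sketched by sorting a listing like this on EID:Slt and assigning letters in order. The rows below are a subset of my table; 96+NR turns row number 1 into the letter 'a':

```shell
#!/bin/sh
# Sketch: predict boot-time device names from a perccli disk listing by
# sorting on EID:Slt, as described above.
# Prints: /dev/sda 32:0, /dev/sdb 32:1, /dev/sdc 32:3, /dev/sdd 32:7
predict() {
  sort -t: -k1,1n -k2,2n | awk '{ printf "/dev/sd%c %s\n", 96+NR, $1 }'
}
predict <<'EOF'
32:1      1 Onln   1  465.25 GB SATA SSD
32:0      0 Onln   0  465.25 GB SATA SSD
32:7      7 Onln   4   3.637 TB SATA HDD
32:3      3 Onln   2   3.637 TB SATA HDD
EOF
```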

The PERC controller is a MegaRAID with possibly some minor changes; there are reports of Linux MegaRAID management utilities working on it, providing similar functionality to perccli. The version of the MegaRAID utilities I tried didn’t work on my PERC hardware. The smartctl utility works on those disks if you tell it you have a MegaRAID controller (so obviously there’s enough similarity that some MegaRAID utilities will work). Here are example smartctl commands for the first and last disks on my system. Note that the disk device node doesn’t matter, as all device nodes associated with the PERC/MegaRAID are equal for smartctl.

# get model number etc on DID 0 (Samsung SSD)
smartctl -d megaraid,0 -i /dev/sda
# get all the basic information on DID 0
smartctl -d megaraid,0 -a /dev/sda
# get model number etc on DID 7 (Seagate 4TB disk)
smartctl -d megaraid,7 -i /dev/sda
# exactly the same output as the previous command
smartctl -d megaraid,7 -i /dev/sdc

I have uploaded etbemon version 1.3.5-6 to Debian which has support for monitoring smartctl status of MegaRAID devices and NVMe devices.
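A simple health-check loop over all DIDs can be sketched like this. The helper matches smartctl's ATA overall-health line; SAS disks report "SMART Health Status: OK" instead, hence the alternation. Since the real commands need the PERC controller and root, the demonstration runs on canned smartctl output:

```shell
#!/bin/sh
# Sketch: check SMART health for every MegaRAID DID (0-7 on my system).
smart_ok() { grep -Eq "test result: PASSED|Health Status: OK"; }

# On a real system (needs root and the PERC/MegaRAID controller):
# for did in $(seq 0 7); do
#   smartctl -d megaraid,$did -H /dev/sda | smart_ok || echo "DID $did unhealthy"
# done

# Demonstration on canned smartctl output:
echo "SMART overall-health self-assessment test result: PASSED" | smart_ok \
  && echo "healthy"
```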


To update IDRAC on Linux there’s a bash script with the firmware embedded in the same file (binary data at the end of a shell script). To make things a little more exciting, the script insists that rpm be available (running “apt install rpm” fixes that on a Debian system). It also creates and runs other shell scripts which start with “#!/bin/sh” but depend on bash syntax, so I had to make /bin/sh a symlink to /bin/bash. You know you need this if you see errors like “typeset: not found” and “[: -eq: unexpected operator” and then the system reboots. Dell people, please test your scripts on dash (the Debian /bin/sh) or just specify #!/bin/bash.

If the IDRAC update works it will take about 8 minutes.

Lifecycle Controller

The Lifecycle Controller is apparently for installing OS and firmware updates. I use Linux tools to update Linux and I generally don’t plan to update the firmware after deployment (although I could do so from Linux if needed). So it doesn’t seem to offer anything useful to me.

Setting Up IDRAC

For extra excitement I decided to try to setup IDRAC from the Linux command-line. To install the RAC setup tool you run “apt install srvadmin-idracadm7 libargtable2-0” (because srvadmin-idracadm7 doesn’t have the right dependencies).

# srvadmin-idracadm7 is missing a dependency
apt install srvadmin-idracadm7 libargtable2-0
# set the IP address, netmask, and gateway for IDRAC
idracadm7 setniccfg -s
# put my name on the front panel LCD
idracadm7 set System.LCD.UserDefinedString "Russell Coker"
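The static-IP form of setniccfg takes the address, netmask and gateway as arguments. The values below are placeholders for your own network; RUN=echo makes this a dry run so nothing is configured until you unset it:

```shell
#!/bin/sh
# Sketch of the full static-IP iDRAC configuration. The addresses and LCD
# string are placeholders; with RUN=echo the commands are only printed.
RUN=echo
$RUN idracadm7 setniccfg -s 192.168.1.100 255.255.255.0 192.168.1.1
$RUN idracadm7 set System.LCD.UserDefinedString "owner name here"
```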


This is a very nice deskside workstation/server. It’s extremely quiet with hardly any fan noise, and the case is strong enough to contain the noise of hard drives. When running with 3* 3.5″ SATA disks and 2*10k 2.5″ SAS disks on a wooden floor it wasn’t annoyingly loud. Without the SAS disks it was as quiet as you can expect any PC to be, definitely not the volume you expect from a serious server! I bought the T320 systems loaded with SAS disks, which made them quite loud, so I immediately put those disks on eBay and installed SATA SSDs and hard drives, which gives me more performance and more space than the SAS disks at less cost and with almost no noise.

8*3.5″ drive bays give room for expansion. I currently have 2*SATA SSDs and 3*SATA disks; the SSDs are for the root filesystem (including /home) and the disks are for a separate filesystem for large files.

Worse Than FailureCodeSOD: Are You Active Enough?

Cornelius was working with some code where the objects might be "active" or "inactive". His code needed to do something different, depending on whether the objects were "active" or not, but fortunately, there was a handy-dandy IsActive method. Weirdly, that method required a bool parameter, but also returned a bool. Since there wasn't any useful documentation, Cornelius checked the C++ code.

bool SomeActivatedClass::IsActive(bool& active) { active = true; return false; }

This method leaves the choice of whether or not the object IsActive up to the caller. If you check the value in the pass-by-reference parameter, it will always be true. If you check the return value, it will always be false.

This method is, truly, a work of art. It doesn't tell you, the audience, what to think about the nature of being active. It instead poses a question, and lets the caller decide. It forces us to confront our own preconceptions about an IsActive method, the nature of return values, and why we sometimes prefer reference parameters. Like films by the great auteur, Neil Breen, it asks big questions even as it provides no answers of its own.


Planet DebianRuss Allbery: Review: Stoneskin

Review: Stoneskin, by K.B. Spangler

Series: Deep Witches #0
Publisher: A Girl and Her Fed Books
Copyright: September 2017
ASIN: B075PHK498
Format: Kindle
Pages: 226

Stoneskin is a prequel to the Deep Witches Trilogy, which is why I have it marked as book 0 of the series. Unlike most prequels, it was written and published before the series and there doesn't seem to be any reason not to read it first.

Tembi Moon is an eight-year-old girl from the poor Marumaru area on the planet of Adhama. Humanity has spread to the stars and first terraformed the worlds and then bioformed themselves to live there. The differences are subtle, but Tembi's skin becomes thicker and less sensitive when damaged (either physically or emotionally) and she can close her ears against dust storms. One day, she wakes up in an unknown alley and finds herself on the world of Miha'ana, sixteen thousand light-years away, where she is rescued and brought home by a Witch named Matindi.

In this science fiction future, nearly all interstellar travel is done through the Deep. The Deep is not like the typical hand-waved science fiction subspace, most notably in that it's alive. No one is entirely sure where it came from or what sort of creature it is. It sometimes manages to communicate in words, but abstract patterns with feelings attached are more common, and it only communicates with specific people. Those people are Witches, who are chosen by the Deep via some criteria no one understands. Witches can use the Deep to move themselves or anything else around the galaxy. All interstellar logistics rely on them.

The basics of Tembi's story are not that unusual; she's been chosen by the Deep to be a Witch. What is remarkable is that she's young and she's poor, completely disconnected from the power structures of the galaxy. But, once chosen, her path as far as the rest of the galaxy is concerned is fixed: she will go to Lancaster to be trained as a Witch. Matindi is able to postpone this for a time by keeping an eye on her, but not forever.

I bought this book because of the idea of the Deep, and that is indeed the best part of the book. There is a lot of mystery about its exact nature, most of which is not resolved in this book, but it mostly behaves like a giant and extremely strange dog, and it's awesome. Replacing the various pseudo-scientific explanations for faster than light travel with interactions with a dream-weird giant St. Bernard with too many paws that talks in swirls of colored bubbles and is very eager to please its friends is brilliant.

This book covers a lot of years of Tembi's life and is, as advertised, a prelude to a story that is not resolved here. It's a coming of age story in which she does eventually end up at Lancaster, learns and chafes at the elaborate and very conservative structures humans have put in place to try to make interactions with the Deep predictable and reliable, and eventually gets drawn into the politics of war and the question of when people have a responsibility to intervene. Tembi, and the reader, also have many opportunities to get extremely upset at how the Deep is treated and how much entitlement the Witches have about their access and control, although how the Deep feels about it is left for a future book.

Not all of this story is as good as the premise. There are some standard coming of age tropes that I'm not fond of, such as Tembi's predictable temporary falling out with the Deep (although the Deep's reaction is entertaining). It's also not at all a complete story, although that's clearly signaled by the subtitle. But as an introduction to the story universe and an extended bit of scene-setting, it accomplishes what it sets out to do. It's also satisfyingly thoughtful about the moral trade-offs around stability and the value of preserving institutions. I know which side I'm on within the universe, but I appreciated how much nuance and thoughtfulness Spangler puts into the contrary opinion.

I'm hooked on the universe and want to learn more about the Deep, enough so that I've already bought the first book of the main trilogy.

Followed by The Blackwing War.

Rating: 7 out of 10


Planet DebianIustin Pop: Goodbye Travis, hello GitHub Actions

My very cyclical open-source work

For some reason, I only manage to do coding at home every few months - usually 6 months apart, so twice a year (even worse than my blogging frequency :P). As such, I missed the whole discussion about travis-ci (the .org version) going away, etc.

So when I finally did some work on corydalis a few weeks ago, and had a travis build failure (restoring a large cache simply timed out; either travis or S3 had some hiccup - it cleared by itself a day later), I opened the travis-ci interface, saw the scary banner (“travis-ci.org is shutting down”), and asked myself what was happening. The deadline was in less than a week even 😧…

Long story short, Travis was a good home for many years, but they were bought and are doing significant changes to their OSS support, so it’s time to move on. I anyway wanted to learn GitHub Actions for a while (ahem, free time intervened), so this was a good (forced) “opportunity”.

Proper composable CI

The advantage of Travis’ infrastructure was that the build configuration was really simple. It had very few primitives: pre-steps (before_install), install steps, the actual things to do to test, post-steps (after_success) and a few other small helpers (caching, apt packages, etc.). This made it really easy to just pick up and write a config, plus it had the advantage of allowing configs to be tested from the web UI without needing to push.

This simplicity was unfortunately also its significant limiter: the way to do complex things in steps was simply to add more shell commands.

GitHub actions, together with its marketplace, changes this entirely. There are no “built-in” actions, the language just defines the build/job/step hierarchy, and one glues together whatever steps they want. This has the disadvantage that even checking out the code needs to be explicitly written in all workflows (so boilerplate, if you don’t need customisation), but it opens up a huge opportunity for composition, since it allows people to publish actions (steps) that you just import, encapsulating all the work.

So, after learning how to write a moderately complicated workflow (complicated as in multiple Python versions, some of them needing a different OS version, and multi-OS), it was straightforward to port this to all my projects - just somewhat tedious. I’ve now shut down all builds on Travis, I just can’t find a way to delete my account 😅

Better multi-OS, worse (missing) multi-arch

In theory, Travis supports Linux, MacOS, FreeBSD and Windows, but I’ve found that support for non-Linux is not quite as good. Maybe I missed things, but multi-version Python builds on MacOS were not as nicely supported as Linux; Windows is quite early, and very limited; and I haven’t tested FreeBSD.

GitHub is more restrictive - Linux, MacOS and Windows - but I found support for MacOS and Windows better for my use cases. If your use case is testing multiple MacOS versions, Travis wins, if it’s more varied languages/etc. on the single available MacOS version, GitHub works better.

On the multi-arch side, Travis wins hands-down. Four different native architectures, and enabling one more is just adding another key to the arch list. With GitHub, if I understand right, you either have to use docker+emulation, or use self-hosted runners.

So here it really matters what is more important to you. Maybe in the future GitHub will support more arches, but right now, Travis wins for this use-case.


For my specific use-case, GitHub Actions is a better fit right now. The marketplace has advantages (I’ll explain better in a future post), the actions are a very nice way to encapsulate functionality, and it’s still available/free (up to a limit) for open source projects. I don’t know what the future of Travis for OSS will be, but all I heard so far is very concerning.

However, I’ll still miss a few things.

For example, an overall dashboard for all my projects, like this one:

Travis dashboard

I couldn’t find any such thing on GitHub, so I just use my set of badges.

Then cache management. Travis allows you to clear the cache, and it does auto-update the cache. GitHub caches are immutable once built, so you have to:

  • watch if changed dependencies/dependency chains result in things that are no longer cached;
  • if so, need to manually bump the cache key, resulting in a commit for purely administrative purposes.

For languages where you have a clean full chain of dependencies recorded (e.g. node’s package-lock.json, stack’s stack.yaml.lock), this is trivial to achieve, but gets complicated if you add OS dependencies, languages which don’t record all this, etc.

Hmm, maybe I should just embed the year/month in the cache name - cheating, but automated cheating.
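That "automated cheating" can be sketched in a couple of lines: derive the cache key from the lockfile hash plus the current year-month, so the cache rotates monthly even when the recorded dependency chain doesn't change. The lockfile here is a temporary stand-in for e.g. package-lock.json:

```shell
#!/bin/sh
# Sketch: cache key = lockfile hash + current year-month, so caches are
# rebuilt at most a month apart without manual key bumps.
lockfile=$(mktemp)                         # stand-in for package-lock.json
echo '{"lockfileVersion": 2}' > "$lockfile"
hash=$(sha256sum "$lockfile" | cut -c1-16)
echo "cache key: deps-$(date +%Y-%m)-${hash}"
rm -f "$lockfile"
```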

With all said and done, I think GHA is much less refined, but with more potential. Plus, the pace of innovation on Travis side was quite slow (likely money problems, hence them being bought, etc.)…

So one TODO done: “learn GitHub Actions”, even if a bit unplanned 😂

Kevin RuddReply to Murdoch’s “Australian”: On the Separation of Church and State

This is the response that Murdoch’s ‘Australian’ refused to publish — my answer to columnist Gerard Henderson’s erroneous defence of Scott Morrison over the separation of church and state. You can read Henderson’s original here, which misquotes my piece in the Guardian.

Australians deserve to understand clearly the influences that guide the nation’s political leaders, whom they entrust with the extraordinary powers of our secular commonwealth. This is why Scott Morrison’s speech to the Australian Christian Churches last month – a secret speech that his office declines to publish – is a legitimate matter for open, honest debate.

This debate should be conducted with evidence, fact and reason. Hence I contributed a 1350-word piece for The Guardian on the intersection of Pentecostalism and Australian public policy. Sadly, Gerard Henderson’s summation of my critique in this newspaper was as reductive as it was inaccurate. For example, Henderson attacked my “grievous overstatement” that Pentecostals categorically believe that if you are godly, you will be healthy and wealthy. In fact, I simply noted the “health and wealth gospel” often features in Pentecostal preaching. For evidence of this, consider the book You Need More Money, which expressly links divine blessing to being a “money magnet”; it was authored by Hillsong’s Brian Houston, the nation’s most influential Pentecostal pastor. Not all Pentecostals share Houston’s theology on wealth.

Like every branch of the Christian family, Pentecostal worshippers frequently defy their leaders. For example, I expect Morrison would accept evolution as a scientific fact. But if you examine their doctrinal statements, it’s clear that not all Pentecostals do.

Henderson dismisses concerns that certain Pentecostal doctrines could present problems for the secular polity because they potentially erode the separation of church and state. However, Henderson should be acutely aware of this problem given his own experience at the National Civic Council of B.A. Santamaria, whose vigorous sectarian crusade in league with Archbishop Daniel Mannix split the Labor Party in 1955.

Henderson conspicuously overlooks specific behaviours of Morrison’s that have caused Australians concern. These include Morrison physically involving Australians in religious rituals, such as the laying on of hands, without their consent. When the Prime Minister tours natural disaster shelters, he is there in his secular office, not as the nation’s high priest. If he wants to lay hands on people to impart the healing power of prayer, he can ask permission; that he doesn’t seek consent implies he already knows how they might react.

Second, Henderson is incurious about Morrison’s view that humans can’t fix the world’s problems; that it is God’s responsibility, and what the world simply needs is the growth of the church. This deeply troubling logic may explain Morrison’s disinterest in climate action. Henderson would be aware of an apocalyptic tradition among some Pentecostals that political action to resolve human or environmental problems is redundant simply because Christ’s eventual return will herald the end times. When voters cast their ballots, they deserve to know whether Morrison believes mortal problems can be solved by mortals.

Third, there is a broader question about how Morrison views the relationship between his office and God. Morrison’s speech suggests he identifies with the kings and prophets of the Old Testament who claimed God spoke to them directly. Morrison’s speech recalled receiving a message from God through a painting during the last election campaign, reassuring him of divine support in his partisan struggle against the Labor Party. Again, Australians deserve to know from Morrison: how does he believe God communicates with prime ministers?

Fourth, Henderson accuses me of attacking Morrison’s commitment to his faith. I did not. To the contrary, I wrote that nobody should doubt the genuineness of his faith and noted we attended the same Christian fellowship in Canberra. No, Morrison shouldn’t be attacked for his faith; rather, he should have the political courage and moral fortitude to open up to Australians about how it informs his worldview. Any suggestion that Morrison leaves his faith at his office door doesn’t pass the pub test, given the content of his secret speech.

Finally, Henderson apparently regards it as irrelevant to our polity that various Pentecostal churches around Australia have become active recruiting grounds for Liberal Party branches. In Queensland, the division is well known between the dwindling band of mainstream LNP secularists and a self-appointed God’s Army now dominating much of the state division. It is relevant to our democracy that this gradually pushes the Coalition further to the far-right and that we are beginning to see the religious polarisation of our national politics.

In opposition, I wrote a 6500-word essay on faith in politics because I believed that, as a prospective national leader, voters deserved to know what they might be buying. After it was published in The Monthly, I answered questions about it (including from this newspaper).

Henderson dismisses my call for Morrison to do the same, arguing – oddly – that Morrison’s speech explained so little, he shouldn’t have to elaborate. That, Gerard, is precisely the point. Morrison should not leave Australians to rely on grainy iPhone footage and newspaper speculation to discern what the Prime Minister might have meant by his more curious comments.

This discussion has never been more important. Pentecostal churches have long nurtured conscientious political minds on the right and the left. But these churches are now being deliberately targeted by Liberal and National party recruiters who, borrowing from Santamaria, signal to members that true Christians have no place in the Labor Party. This is sectarian identity politics.

Henderson’s conception of the separation of church and state is so narrow that he sees no questions worth asking. Many Australians disagree, both progressives and conservatives. They treasure our secular democracy and fear it being chipped away by the sort of religious fundamentalism seen in parts of the US Congress. They therefore raise legitimate questions that won’t go away until Morrison definitively answers them.

It shouldn’t be hard unless, of course, he has something to hide.

Images: Brian Houston/Facebook; Sydney Institute/YouTube

The post Reply to Murdoch’s “Australian”: On the Separation of Church and State appeared first on Kevin Rudd.

Planet DebianRussell Coker: Netflix and IPv6

It seems that Netflix has an ongoing issue of not working well with IPv6; apparently they have some sort of region-checking code that doesn’t correctly identify IPv6 prefixes. To fix this I wrote the following script to make a small zone file with only A records for Netflix and no AAAA records. The $OUT.header file just has the SOA record for my fake domain.



dig -t a @|sed -n -e "s/^.*IN/www IN/p"|grep [0-9]$ >> $OUT
dig -t a @|sed -n -e "s/^.*IN/ IN/p"|grep [0-9]$ >> $OUT
/usr/sbin/rndc reload > /dev/null


I updated this post to add a line for which is the address used by Android devices.
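To illustrate what the sed pipeline in the script does: everything up to the last "IN" in a dig answer line is replaced by the desired record name, and the trailing grep keeps only lines ending in a digit (i.e. A records, dropping CNAMEs and the like). The hostname and address below are made up for the demonstration:

```shell
#!/bin/sh
# What the sed filter above does to one (illustrative) dig answer line.
# Prints a zone-file style A record: "www IN  A   192.0.2.10"
echo "www.example.com.   60  IN  A   192.0.2.10" \
  | sed -n -e "s/^.*IN/www IN/p" | grep "[0-9]$"
```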

Planet DebianEvgeni Golov: Controlling Somfy roller shutters using an ESP32 and ESPHome

Our house has solar powered, remote controllable roller shutters on the roof windows, built by the German company named HEIM & HAUS. However, when you look closely at the remote control or the shutter motor, you'll see another brand name: SIMU. As the shutters don't have any wiring inside the house, the only way to control them is via the remote interface. So let's go on the Internet and see how one can do that, shall we? ;)

First thing we learn is that SIMU remote stuff is just re-branded Somfy. Great, another name! Looking further we find that Somfy uses some obscure protocol to prevent (replay) attacks (spoiler: it doesn't!) and there are tools for RTL-SDR and Arduino available. That's perfect!

Always sniff with RTL-SDR first!

Given the two re-brandings in the supply chain, I wasn't 100% sure our shutters really use the same protocol. So the first "hack" was to listen and decrypt the communication using RTL-SDR:

$ git clone
$ cd radio_stuff
$ make -C converters am_to_ook
$ make -C decoders decode_somfy
$ rtl_fm -M am -f 433.42M -s 270K | ./am_to_ook -d 10 -t 2000 -  | ./decode_somfy
<press some buttons on the remote>

The output contains the buttons I pressed, but also the id of the remote and the command counter (which is supposed to prevent replay attacks). At this point I could just use the id and the counter to send my own commands, but if I did that too often the real remote would stop working, as its counter won’t have increased and the receiver will drop commands when the counters differ too much.
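The counter check described above can be sketched as follows: the receiver stores the last counter seen per remote id and accepts a command only if the new counter is ahead of it, within some window (the window size here is an arbitrary assumption, not Somfy's actual value):

```shell
#!/bin/sh
# Sketch of a rolling-code acceptance check. A replayed (equal or lower)
# counter is dropped, as is one too far ahead of the stored value.
accept() { # args: stored_counter received_counter
  s=$1; r=$2; window=100
  [ "$r" -gt "$s" ] && [ $((r - s)) -le "$window" ]
}
accept 41 42  && echo "fresh counter: accepted"
accept 41 41  || echo "replayed counter: dropped"
accept 41 500 || echo "counter too far ahead: dropped"
```

This is also why a cloned remote with a stale counter eventually stops working: the real remote keeps moving the stored counter forward.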

But that's good enough for now. I know I'm looking for the right protocol at the right frequency. As the end result should be an ESP32, let's move on!

Acquiring the right hardware

Unlike an RTL-SDR, one usually does not have a spare ESP32 with a 433MHz radio at home, so I went shopping: a NodeMCU-32S clone and a CC1101. The CC1101 is important as most 433MHz chips for Arduino/ESP only work at 433.92MHz, but Somfy uses 433.42MHz and using the wrong frequency would result in really bad reception. The CC1101 is essentially an SDR, as you can tune it to a huge spectrum of frequencies.

Oh and we need some cables, a bread board, the usual stuff ;)

The wiring is rather simple:

ESP32 wiring for a CC1101

And the end result isn't too beautiful either, but it works:

ESP32 and CC1101 in a simple case

Acquiring the right software

In my initial research I found an Arduino sketch and was totally prepared to port it to ESP32, but luckily somebody already did that for me! Even better, it's explicitly using the CC1101. Okay, okay, I cheated, I actually ordered the hardware after I found this port and the reference to CC1101. ;)

As I am using ESPHome for my ESPs, the idea was to add a "Cover" that's controlling the shutters to it. Writing some C++, how hard can it be?

Turns out, not that hard. You can see the code in my GitHub repo. It consists of two (relevant) files: somfy_cover.h and somfy.yaml.

somfy_cover.h essentially wraps the communication with the Somfy_Remote_Lib library into an almost boilerplate Custom Cover for ESPHome. There is nothing too fancy in there. The only real difference to the "Custom Cover" example from the documentation is the split into SomfyESPRemote (which inherits from Component) and SomfyESPCover (which inherits from Cover) -- this is taken from the Custom Sensor documentation and allows me to define one "remote" that controls multiple "covers" using the add_cover function. The first two params of the function are the NVS name and key (think database table and row), and the third is the rolling code of the remote (stored in somfy_secrets.h, which is not in Git).

In ESPHome a Cover shall define its properties as CoverTraits. Here we call set_is_assumed_state(true), as we don't know the state of the shutters - they could have been controlled using the other (real) remote - and setting this to true allows issuing open/close commands at all times. We also call set_supports_position(false) as we can't tell the shutters to move to a specific position.

The one additional feature compared to a normal Cover interface is the program function, which allows to call the "program" command so that the shutters can learn a new remote.

somfy.yaml is the ESPHome "configuration", which contains information about the used hardware, WiFi credentials etc. Again, mostly boilerplate. The interesting parts are the loading of the additional libraries and attaching the custom component with multiple covers and the additional PROG switches:

esphome:
  name: somfy
  platform: ESP32
  board: nodemcu-32s
  libraries:
    - SmartRC-CC1101-Driver-Lib@2.5.6
    - Somfy_Remote_Lib@0.4.0
    - EEPROM
  includes:
    - somfy_secrets.h
    - somfy_cover.h

cover:
  - platform: custom
    lambda: |-
      auto somfy_remote = new SomfyESPRemote();
      somfy_remote->add_cover("somfy", "badezimmer", SOMFY_REMOTE_BADEZIMMER);
      somfy_remote->add_cover("somfy", "kinderzimmer", SOMFY_REMOTE_KINDERZIMMER);
      return somfy_remote->covers;
    covers:
      - id: "somfy"
        name: "Somfy Cover"
      - id: "somfy2"
        name: "Somfy Cover2"

switch:
  - platform: template
    name: "PROG"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy))->program();
  - platform: template
    name: "PROG2"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy2))->program();

The switch to trigger the program mode took me a bit. As the Cover interface of ESPHome does not offer any additional functions besides movement control, I first wrote code to trigger "program" when "stop" was pressed three times in a row, but that felt really cumbersome and also had the side effect that the remote would send more than needed, sometimes confusing the shutters. I then decided to have a separate button (well, switch) for that, but the compiler yelled at me I can't call program on a Cover as it does not have such a function. Turns out, you need to explicitly cast to SomfyESPCover and then it works, even if the code becomes really readable, NOT. Oh and as the switch does not have any code to actually change/report state, it effectively acts as a button that can be pressed.

At this point we can just take an existing remote, press PROG for 5 seconds, see the blinds move shortly up and down a bit and press PROG on our new ESP32 remote and the shutters will learn the new remote.

And thanks to the awesome integration of ESPHome into HomeAssistant, this instantly shows up as a new controllable cover there too.

Future Additional Work

I started writing this post about a year ago… And the initial implementation had some room for improvement…

More than one remote

The initial code only created one remote and one cover element. Sure, we could attach that to all shutters (there are 4 of them), but we really want to be able to control them separately.

Thankfully I managed to read enough ESPHome docs and learned enough about std::vector to make the code dynamically accept new shutters.

Using ESP32's NVS

The ESP32 has a non-volatile key-value storage (NVS), which is much nicer than throwing bits at an emulated EEPROM. The first library I used for that explicitly used EEPROM storage, and it would have required quite some hacking to make it work with NVS. Thankfully the library I am using now has a pluggable storage interface, so I could just write the NVS backend myself, and upstream now supports that. Yay open source!

Remaining issues

Real state is unknown

As noted above, the ESP does not know the real state of the shutters: a command could have been lost in transmission (the Somfy protocol is send-only, there is no feedback), or the shutters might have been controlled by another remote. At least the second part could be solved by listening all the time and trying to decode commands heard over the air, but I don't think this is worth the time. The worst that can happen is that a closed (opened) shutter receives another close (open) command, and that is harmless, as they have integrated endstops and know that they should not move further.

Can't program new remotes with ESP only

To program new remotes, one has to press the "PROG" button for 5 seconds. This was not exposed in the old library; the new one does support "long press", but I would need to add another ugly switch to the config, and I currently don't plan to do so, as I do have working remotes for the case where I need to teach the shutters a new one.


Planet DebianUtkarsh Gupta: FOSS Activities in May 2021

Here’s my (twentieth) monthly update about the activities I’ve done in the F/L/OSS world.


This was my 29th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

Interesting month, surprisingly. Lots of things happening and lots of moving parts; becoming the “new normal”, I believe. Anyhow, working on Ubuntu full-time has its own advantage and one of them is being able to work on Debian stuff! 🥰

So whilst I couldn’t upload a lot of packages because of the freeze, here’s what I worked on:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers and assisting people in BSP.
  • Moderation of -project mailing list.


This was my 4th month of actively contributing to Ubuntu. Now that I’ve joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

This month, by all means, was dedicated mostly to PHP 8.0, transitioning from PHP 7.4 to 8.0. Naturally, it had so many moving parts and moments of utmost frustration, shared w/ Bryce. :D

So even though I can’t upload anything, I worked on the following stuff & asked for sponsorship.
But before that, I’d like to take a moment to stress how kind and awesome Gianfranco Costamagna, a.k.a. LocutusOfBorg, is! He’s been sponsoring a bunch of my things & helping with re-triggers, et al. Thanks a bunch, Gianfranco; beers on me whenever we meet!


Uploads & Syncs:


Seed Operations:

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twentieth month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 29.75 hours for LTS and 40.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Front-desk duty from 24-05 until 30-05 for both LTS and ELTS.
  • Triaged rails, libimage-exiftool-perl, hivex, graphviz, glibc, libexosip2, impacket, node-ws, thunar, libgrss, nginx, postgresql-9.6, ffmpeg, composter, and curl.
  • Mark CVE-2019-9904/graphviz as ignored for stretch and jessie.
  • Mark CVE-2021-32029/postgresql-9.6 as not-affected for stretch.
  • Mark CVE-2020-24020/ffmpeg as not-affected for stretch.
  • Mark CVE-2020-22020/ffmpeg as postponed for stretch.
  • Mark CVE-2020-22015/ffmpeg as ignored for stretch.
  • Mark CVE-2020-21041/ffmpeg as postponed for stretch.
  • Mark CVE-2021-33574/glibc as no-dsa for stretch & jessie.
  • Mark CVE-2021-31800/impacket as no-dsa for stretch.
  • Mark CVE-2021-32611/libexosip2 as no-dsa for stretch.
  • Mark CVE-2016-20011/libgrss as ignored for stretch.
  • Mark CVE-2021-32640/node-ws as no-dsa for stretch.
  • Mark CVE-2021-32563/thunar as no-dsa for stretch.
  • [LTS] Help test and review bind9 update for Emilio.
  • [LTS] Suggest and add DEP8 tests for bind9 for stretch.
  • [LTS] Sponsored upload of htmldoc to buster for Havard as a consequence of #988289.
  • [ELTS] Fix triage order for jetty and graphviz.
  • [ELTS] Raise issue upstream about cloud-init; mock tests instead.
  • [ELTS] Write to private ELTS list about triage ordering.
  • [ELTS] Review Emilio’s new script and write back feedback, mentioning extra file created, et al.
  • [ELTS/LTS] Raise upgrade problems from LTS -> LTS+1 to the list. Thread here.
    • Further help review and raise problems that could occur, et al.
  • [LTS] Help explain path forward for firmware-nonfree update to Ola. Thread here.
  • [ELTS] Revert entries of TEMP-0000000-16B7E7 and TEMP-0000000-1C4729; CVEs assigned & fix ELTS tracker build.
  • Auto EOL’ed linux, libgrss, node-ws, and inspircd for jessie.
  • Attended monthly Debian LTS meeting, which didn’t happen, heh.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

Planet DebianEnrico Zini: Ansible blockinfile oddity

I was reading Ansible's blockinfile sources for, uhm, reasons, and the code flow looked a bit odd.

So I checked what happens if a file has spurious block markers.

Given this file:

$ cat /tmp/test.orig

And this playbook:

$ cat test.yaml
---
- hosts: localhost
  tasks:
    - name: test blockinfile
      blockinfile:
        block: NEWLINE
        path: /tmp/test
You get this result:

$ cat /tmp/test

I was hoping that I was reading the code incorrectly, but it turns out that Ansible's blockinfile matches the last pair of begin-end markers it finds, in whatever order it finds them.

Planet DebianDirk Eddelbuettel: td 0.0.4 on CRAN: More Maintenance

The still fairly recent td package for accessing the twelvedata API for financial data has been updated on CRAN this morning and is now at version 0.0.4. This corrects something the previous 0.0.3 release from last weekend was meant to address but didn’t quite manage.

Access to the helper function for finding a proper user config file (for, e.g., the API config) is now correctly conditioned on R 4.0.0, and the versioned depends has been removed.

The NEWS entry follows.

Changes in version 0.0.4 (2021-06-05)

  • The version comparison was corrected and the package no longer (formally) depends on R (>= 4.0.0)

  • Very minor edits

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianJunichi Uekawa: emacs tramp mode for chroot.

emacs tramp mode for chroot. I saw tramp for docker and lxc so I figured it must be possible to write a mode for chroot. I wrote one for cros_sdk and that made my life much easier. I can build and run inside chroot from emacs transparently. Seems like it should also be possible to write something for dchroot.

Cryptogram Friday Squid Blogging: Squids in Space

NASA is sending baby bobtail squid into space.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Scratch and Dent

Renault driver Oisin H. writes "I had no idea French week had only six days". It seems they too have never got the hang of Thursdays.



For your breakfast, please enjoy the lovely shade of egg yolk in this submission from Yaytay who highlights "It's so broken that it's showing the default .Net error page!"



Since we're making today "Pick on Microsoft Day", here's a treat from Azure. Rob F. shares "The view as web page divided by zero and gave me a computer says no response." Go easy on Rob, Google's translator has a beef with Microsoft, too.



And a reader who styles himself TheRealSteveJudge queries "Where can I get a 0-bit Windows? At least I can download drivers for it."



Finally, loyal reader Pascal apparently files his taxes at the same time we do. 2017, 2018, Thursdays, whatever.



[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianMatthew Garrett: Mike Lindell's Cyber "Evidence"

Mike Lindell, notable for absolutely nothing relevant in this field, today filed a lawsuit against a couple of voting machine manufacturers in response to them suing him for defamation after he claimed that they were covering up hacks that had altered the course of the US election. Paragraph 104 of his suit asserts that he has evidence of at least 20 documented hacks, including the number of votes that were changed. The citation is just a link to a video called Absolute 9-0, which claims to present sufficient evidence that the US supreme court will come to a 9-0 decision that the election was tampered with.

The claim is that Lindell was provided with a set of files on the 9th of January, and gave these to some cyber experts to verify. These experts identified them as packet captures. The video contains scrolling hex, and we are told that this is the raw encrypted data from the files. In reality, the hex values correspond very clearly to printable ASCII, and appear to just be the Pennsylvania voter roll. They're not encrypted, and they're not packet captures (they contain no packet headers).

20 of these packet captures were then selected and analysed, giving us the tables contained within Exhibit 12. The alleged source IPs appear to correspond to the networks the tables claim, and the latitude and longitude presumably just come from a geoip lookup of some sort (although clearly those values are far too precise to be accurate). But if we look at the target IPs, we find something interesting. Most of them resolve to the website for the county that was the nominal target (eg, is So, we're supposed to believe that in many cases, the county voting infrastructure was hosted on the county website.

Unfortunately we're not given the destination port, but isn't listening on anything other than 80 and 443. We're told that the packet data is encrypted, so presumably it's over HTTPS. So, uh, how did they decrypt this to figure out how many votes were switched? If Mike's hackers have broken TLS, they really don't need to be dealing with this.

We're also given some background information on how it's impossible to reconstruct packet captures after the fact (untrue), or that modifying them would change their hashes (true, but in the absence of known good hash values that tells us nothing), but it's pretty clear that nothing we're shown actually demonstrates what we're told it does.

In summary: yes, any supreme court decision on this would be 9-0, just not the way he's hoping for.

Update: It was pointed out that this data appears to be part of a larger dataset. This one is even more dubious - it somehow has MAC addresses for both the source and destination (which is impossible), and almost none of these addresses are in actual issued ranges.


Planet DebianReproducible Builds (diffoscope): diffoscope 177 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 177. This version includes the following changes:

[ Keith Smiley ]
* Improve support for Apple "provisioning profiles".
* Fix ignoring objdump tests on MacOS.

You can find out more by visiting the project homepage.


Cryptogram Security and Human Behavior (SHB) 2021

Today is the second day of the fourteenth Workshop on Security and Human Behavior. The University of Cambridge is the host, but we’re all on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. The format translates well to Zoom, and we’re using random breakouts for the breaks between sessions.

I always find this workshop to be the most intellectually stimulating two days of my professional year. It influences my thinking in different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, and thirteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Planet DebianJonathan McDowell: Digging into Kubernetes containers

Having built a single-node Kubernetes cluster and had a poke at what it’s doing in terms of networking, the next thing I want to do is figure out what it’s doing in terms of containers. You might argue this should have come before networking, but to me the networking piece is more non-standard than the container piece, so I wanted to understand that first.

Let’s start with a process listing on the host.

ps faxno user,stat,cmd

There are a number of processes from the host kernel we don’t care about:

kernel processes
       0 S    [kthreadd]
       0 I<    \_ [rcu_gp]
       0 I<    \_ [rcu_par_gp]
       0 I<    \_ [kworker/0:0H-events_highpri]
       0 I<    \_ [mm_percpu_wq]
       0 S     \_ [rcu_tasks_rude_]
       0 S     \_ [rcu_tasks_trace]
       0 S     \_ [ksoftirqd/0]
       0 I     \_ [rcu_sched]
       0 S     \_ [migration/0]
       0 S     \_ [cpuhp/0]
       0 S     \_ [cpuhp/1]
       0 S     \_ [migration/1]
       0 S     \_ [ksoftirqd/1]
       0 I<    \_ [kworker/1:0H-kblockd]
       0 S     \_ [cpuhp/2]
       0 S     \_ [migration/2]
       0 S     \_ [ksoftirqd/2]
       0 I<    \_ [kworker/2:0H-events_highpri]
       0 S     \_ [cpuhp/3]
       0 S     \_ [migration/3]
       0 S     \_ [ksoftirqd/3]
       0 I<    \_ [kworker/3:0H-kblockd]
       0 S     \_ [kdevtmpfs]
       0 I<    \_ [netns]
       0 S     \_ [kauditd]
       0 S     \_ [khungtaskd]
       0 S     \_ [oom_reaper]
       0 I<    \_ [writeback]
       0 S     \_ [kcompactd0]
       0 SN    \_ [ksmd]
       0 SN    \_ [khugepaged]
       0 I<    \_ [kintegrityd]
       0 I<    \_ [kblockd]
       0 I<    \_ [blkcg_punt_bio]
       0 I<    \_ [edac-poller]
       0 I<    \_ [devfreq_wq]
       0 I<    \_ [kworker/0:1H-kblockd]
       0 S     \_ [kswapd0]
       0 I<    \_ [kthrotld]
       0 I<    \_ [acpi_thermal_pm]
       0 I<    \_ [ipv6_addrconf]
       0 I<    \_ [kstrp]
       0 I<    \_ [zswap-shrink]
       0 I<    \_ [kworker/u9:0-hci0]
       0 I<    \_ [kworker/2:1H-kblockd]
       0 I<    \_ [ata_sff]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/39-mmc0]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/42-mmc1]
       0 S     \_ [scsi_eh_0]
       0 I<    \_ [scsi_tmf_0]
       0 S     \_ [scsi_eh_1]
       0 I<    \_ [scsi_tmf_1]
       0 I<    \_ [kworker/1:1H-kblockd]
       0 I<    \_ [kworker/3:1H-kblockd]
       0 S     \_ [jbd2/sda5-8]
       0 I<    \_ [ext4-rsv-conver]
       0 S     \_ [watchdogd]
       0 S     \_ [scsi_eh_2]
       0 I<    \_ [scsi_tmf_2]
       0 S     \_ [usb-storage]
       0 I<    \_ [cfg80211]
       0 S     \_ [irq/130-mei_me]
       0 I<    \_ [cryptd]
       0 I<    \_ [uas]
       0 S     \_ [irq/131-iwlwifi]
       0 S     \_ [card0-crtc0]
       0 S     \_ [card0-crtc1]
       0 S     \_ [card0-crtc2]
       0 I<    \_ [kworker/u9:2-hci0]
       0 I     \_ [kworker/3:0-events]
       0 I     \_ [kworker/2:0-events]
       0 I     \_ [kworker/1:0-events_power_efficient]
       0 I     \_ [kworker/3:2-events]
       0 I     \_ [kworker/1:1]
       0 I     \_ [kworker/u8:1-events_unbound]
       0 I     \_ [kworker/0:2-events]
       0 I     \_ [kworker/2:2]
       0 I     \_ [kworker/u8:0-events_unbound]
       0 I     \_ [kworker/0:1-events]
       0 I     \_ [kworker/0:0-events]

There are various basic host processes, including my SSH connections and Docker. I note it’s using containerd. We also see kubelet, the Kubernetes node agent.

host processes
       0 Ss   /sbin/init
       0 Ss   /lib/systemd/systemd-journald
       0 Ss   /lib/systemd/systemd-udevd
     101 Ssl  /lib/systemd/systemd-timesyncd
       0 Ssl  /sbin/dhclient -4 -v -i -pf /run/ -lf /var/lib/dhcp/dhclient.enx00e04c6851de.leases -I -df /var/lib/dhcp/dhclient6.enx00e04c6851de.leases enx00e04c6851de
       0 Ss   /usr/sbin/cron -f
     104 Ss   /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
       0 Ssl  /usr/sbin/dockerd -H fd://
       0 Ssl  /usr/sbin/rsyslogd -n -iNONE
       0 Ss   /usr/sbin/smartd -n
       0 Ss   /lib/systemd/systemd-logind
       0 Ssl  /usr/bin/containerd
       0 Ss+  /sbin/agetty -o -p -- \u --noclear tty1 linux
       0 Ss   sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
       0 Ss    \_ sshd: root@pts/1
       0 Ss    |   \_ -bash
       0 R+    |       \_ ps faxno user,stat,cmd
       0 Ss    \_ sshd: noodles [priv]
    1000 S         \_ sshd: noodles@pts/0
    1000 Ss+           \_ -bash
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
    1000 Ss   /lib/systemd/systemd --user
    1000 S     \_ (sd-pam)
       0 Ssl  /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni

And that just leaves a bunch of container related processes:

container processes
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address= --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address= --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-apiserver --advertise-address= --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers= --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/ --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range= --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456 -address /run/containerd/containerd.sock
       0 Ssl   \_ etcd --advertise-client-urls= --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls= --initial-cluster=udon= --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=, --listen-metrics-urls= --listen-peer-urls= --name=udon --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=udon
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/bin/weave-npc
       0 S<        \_ /usr/sbin/ulogd -v
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/sh /home/weave/
       0 Sl        \_ /home/weave/weaver --port=6783 --datapath=datapath --name=12:82:8f:ed:c7:bf --http-addr= --metrics-addr= --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range= --nickname=udon --ipalloc-init consensus=0 --conn-limit=200 --expect-npc --no-masq-local
       0 Sl        \_ /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -peer-name=12:82:8f:ed:c7:bf -log-level=debug
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30 -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/bash /usr/local/bin/
       0 S         \_ nginx: master process nginx -g daemon off;
   65534 S             \_ nginx: worker process
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074 -address /run/containerd/containerd.sock
     101 Ss    \_ /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 Ssl       \_ /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 S             \_ nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 S                 \_ nginx: cache manager process

There’s a lot going on there. Some bits are obvious: we can see the nginx ingress controller, our echoserver (the other nginx process, hanging off a script in /usr/local/bin/), and some things that look related to weave. The rest appears to be Kubernetes-related infrastructure.

kube-scheduler, kube-controller-manager, kube-apiserver, kube-proxy all look like core Kubernetes bits. etcd is a distributed, reliable key-value store. coredns is a DNS server, with plugins for Kubernetes and etcd.

What does Docker claim is happening?

docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS     NAMES
d5fa78fa31f1   "/usr/bin/dumb-init …"   3 days ago   Up 3 days             k8s_controller_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
6669168db70d                "/pause"                 3 days ago   Up 3 days             k8s_POD_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
7cbb177bee18                 "/usr/local/bin/run.…"   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
62b369de8d8c                "/pause"                 3 days ago   Up 3 days             k8s_POD_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
649a507d4583   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
4a30785f9187   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
9ffd6b668ddf                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
534c0a698478                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
36b418e69ae7   df29c0a4002c                          "/home/weave/launch.…"   4 days ago   Up 4 days             k8s_weave_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_1
48d735f7f44e   weaveworks/weave-npc                  "/usr/bin/"     4 days ago   Up 4 days             k8s_weave-npc_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
7104f65b5d92                "/pause"                 4 days ago   Up 4 days             k8s_POD_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
26d92a720c56   4359e752b596                          "/usr/local/bin/kube…"   4 days ago   Up 4 days             k8s_kube-proxy_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
73fae81715b6                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
89f35bf7a825   771ffcf9ca63                          "kube-apiserver --ad…"   4 days ago   Up 4 days             k8s_kube-apiserver_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
afa9798c9f66   a4183b88f6e6                          "kube-scheduler --au…"   4 days ago   Up 4 days             k8s_kube-scheduler_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
2dabff6e4f59   0369cf4303ff                          "etcd --advertise-cl…"   4 days ago   Up 4 days             k8s_etcd_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0
4b3708b62f4d   e16544fd47b0                          "kube-controller-man…"   4 days ago   Up 4 days             k8s_kube-controller-manager_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
fd95c597ff31                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
589c1545d9e0                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
6f417fd8a8c5                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
c2ff2c50f0bc                "/pause"                 4 days ago   Up 4 days             k8s_POD_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0

Ok, that’s interesting. Before we dig into it, what does Kubernetes say? (I’ve trimmed the RESTARTS + AGE columns to make things fit a bit better here; they weren’t interesting).

noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                        READY   STATUS
default         hello-node-59bffcc9fd-8hkgb                 1/1     Running
ingress-nginx   ingress-nginx-admission-create-8jgkt        0/1     Completed
ingress-nginx   ingress-nginx-admission-patch-jdq4t         0/1     Completed
ingress-nginx   ingress-nginx-controller-5b74bc9868-bczdr   1/1     Running
kube-system     coredns-558bd4d5db-4nvrg                    1/1     Running
kube-system     coredns-558bd4d5db-flrfq                    1/1     Running
kube-system     etcd-udon                                   1/1     Running
kube-system     kube-apiserver-udon                         1/1     Running
kube-system     kube-controller-manager-udon                1/1     Running
kube-system     kube-proxy-6d8kg                            1/1     Running
kube-system     kube-scheduler-udon                         1/1     Running
kube-system     weave-net-mchmg                             2/2     Running

So there are a lot more Docker instances running than Kubernetes pods. What’s happening there? Well, it turns out that Kubernetes builds pods from multiple different Docker instances. If you think of a traditional container as being composed of a set of namespaces (process, network, hostname etc.) and a cgroup, then a pod is made up of a shared set of namespaces, and each Docker instance within that pod has its own cgroup. Ian Lewis has a much deeper discussion in What are Kubernetes Pods Anyway?, but my takeaway is that a pod is a set of sort-of containers that are coupled. We can see this more clearly if we ask systemd for the cgroup breakdown:

# systemd-cgls | cat
Control group /:
│ ├─user-0.slice 
│ │ ├─session-29.scope 
│ │ │ ├─ 515899 sshd: root@pts/1
│ │ │ ├─ 515913 -bash
│ │ │ ├─3519743 systemd-cgls
│ │ │ └─3519744 cat
│ │ └─user@0.service …
│ │   └─init.scope 
│ │     ├─515902 /lib/systemd/systemd --user
│ │     └─515903 (sd-pam)
│ └─user-1000.slice 
│   ├─user@1000.service …
│   │ └─init.scope 
│   │   ├─2564011 /lib/systemd/systemd --user
│   │   └─2564012 (sd-pam)
│   └─session-110.scope 
│     ├─2564007 sshd: noodles [priv]
│     ├─2564040 sshd: noodles@pts/0
│     └─2564041 -bash
│ └─1 /sbin/init
│ ├─containerd.service …
│ │ ├─  21383 /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff31…
│ │ ├─  21408 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc…
│ │ ├─  21432 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0…
│ │ ├─  21459 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c5…
│ │ ├─  21582 /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f66…
│ │ ├─  21607 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d…
│ │ ├─  21640 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825…
│ │ ├─  21648 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59…
│ │ ├─  22343 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b6…
│ │ ├─  22391 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c56…
│ │ ├─  26992 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92…
│ │ ├─  27405 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e…
│ │ ├─  27531 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7…
│ │ ├─  27941 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478…
│ │ ├─  27960 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddf…
│ │ ├─  28131 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f9187…
│ │ ├─  28159 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d4583…
│ │ ├─ 514667 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8c…
│ │ ├─ 514976 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18…
│ │ ├─ 698904 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70d…
│ │ ├─ 699284 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f1…
│ │ └─2805479 /usr/bin/containerd
│ ├─systemd-udevd.service 
│ │ └─2805502 /lib/systemd/systemd-udevd
│ ├─cron.service 
│ │ └─2805474 /usr/sbin/cron -f
│ ├─docker.service …
│ │ └─528 /usr/sbin/dockerd -H fd://
│ ├─kubelet.service 
│ │ └─2805501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap…
│ ├─systemd-journald.service 
│ │ └─2805505 /lib/systemd/systemd-journald
│ ├─ssh.service 
│ │ └─2805500 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
│ ├─ifup@enx00e04c6851de.service 
│ │ └─2805675 /sbin/dhclient -4 -v -i -pf /run/ -lf…
│ ├─rsyslog.service 
│ │ └─2805488 /usr/sbin/rsyslogd -n -iNONE
│ ├─smartmontools.service 
│ │ └─2805499 /usr/sbin/smartd -n
│ ├─dbus.service 
│ │ └─527 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile…
│ ├─systemd-timesyncd.service 
│ │ └─2805513 /lib/systemd/systemd-timesyncd
│ ├─system-getty.slice 
│ │ └─getty@tty1.service 
│ │   └─536 /sbin/agetty -o -p -- \u --noclear tty1 linux
│ └─systemd-logind.service 
│   └─533 /lib/systemd/systemd-logind
  │ ├─kubepods-burstable-pod1af8c5f362b7b02269f4d244cb0e6fbf.slice 
  │ │ ├─docker-6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94.scope …
  │ │ │ └─21493 /pause
  │ │ └─docker-89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc.scope …
  │ │   └─21699 kube-apiserver --advertise-address= --allow-privi…
  │ ├─kubepods-burstable-podf8b2b52e_6673_4966_82b1_3fbe052a0297.slice 
  │ │ ├─docker-649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c.scope …
  │ │ │ └─28187 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0.scope …
  │ │   └─27987 /pause
  │ ├─kubepods-burstable-podc2a3008c1d9895f171cd394e38656ea0.slice 
  │ │ ├─docker-c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b.scope …
  │ │ │ └─21481 /pause
  │ │ └─docker-2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456.scope …
  │ │   └─21701 etcd --advertise-client-urls= --cert…
  │ ├─kubepods-burstable-pod629dc49dfd9f7446eb681f1dcffe6d74.slice 
  │ │ ├─docker-fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413.scope …
  │ │ │ └─21491 /pause
  │ │ └─docker-afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752.scope …
  │ │   └─21680 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/sche…
  │ ├─kubepods-burstable-podb9af9615_8cde_4a18_8555_6da1f51b7136.slice 
  │ │ ├─docker-48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4.scope …
  │ │ │ ├─27424 /usr/bin/weave-npc
  │ │ │ └─27458 /usr/sbin/ulogd -v
  │ │ ├─docker-36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b.scope …
  │ │ │ ├─27549 /bin/sh /home/weave/
  │ │ │ ├─27629 /home/weave/weaver --port=6783 --datapath=datapath --name=12:82…
  │ │ │ └─27825 /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -pee…
  │ │ └─docker-7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e.scope …
  │ │   └─27011 /pause
  │ ├─kubepods-burstable-pod4d7d3d81_a769_4de9_a4fb_04763b7c1605.slice 
  │ │ ├─docker-6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082.scope …
  │ │ │ └─698925 /pause
  │ │ └─docker-d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074.scope …
  │ │   ├─ 699303 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-ser…
  │ │   ├─ 699316 /nginx-ingress-controller --publish-service=ingress-nginx/ing…
  │ │   ├─ 699405 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/ngi…
  │ │   ├─1075085 nginx: worker process
  │ │   ├─1075086 nginx: worker process
  │ │   ├─1075087 nginx: worker process
  │ │   ├─1075088 nginx: worker process
  │ │   └─1075089 nginx: cache manager process
  │ ├─kubepods-burstable-pod1976f4d6_647c_45ca_b268_95f071f064d5.slice 
  │ │ ├─docker-4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf.scope …
  │ │ │ └─28178 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70.scope …
  │ │   └─27995 /pause
  │ └─kubepods-burstable-pod1d1b9018c3c6e7aa2e803c6e9ccd2eab.slice 
  │   ├─docker-589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856.scope …
  │   │ └─21489 /pause
  │   └─docker-4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62.scope …
  │     └─21690 kube-controller-manager --authentication-kubeconfig=/etc/kubern…
    │ ├─docker-62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2.scope …
    │ │ └─514688 /pause
    │ └─docker-7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30.scope …
    │   ├─514999 /bin/bash /usr/local/bin/
    │   ├─515039 nginx: master process nginx -g daemon off;
    │   └─515040 nginx: worker process
      ├─docker-73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d.scope …
      │ └─22364 /pause
      └─docker-26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878.scope …
        └─22412 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.c…
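Before unpacking that listing: the shared-namespace behaviour described above is easy to spot-check from /proc. On the node you would compare a pod’s pause PID against one of its app PIDs (say 21493 and 21699 from the apiserver pod in the listing); as a stand-in that runs anywhere, here are two views of the current shell, which naturally share a namespace:

```shell
# Namespaces show up as inodes under /proc/<pid>/ns/; two processes are in
# the same network namespace exactly when these links match. Substitute the
# pause and app PIDs from systemd-cgls to check a real pod.
a=$(readlink /proc/$$/ns/net)     # the shell itself
b=$(readlink /proc/self/ns/net)   # the readlink process's own view
echo "$a"
echo "$b"
[ "$a" = "$b" ] && echo "same network namespace"
```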

Again, there’s a lot going on here, but if you look for the kubepods.slice piece then you can see our pods are divided into two sets, kubepods-burstable.slice and kubepods-besteffort.slice. Under those you can see the individual pods, all of which have at least 2 separate cgroups, one of which is running /pause. Turns out this is a generic Kubernetes image which basically performs the process reaping that an init process would do on a normal system; it just sits and waits for processes to exit and cleans them up. Again, Ian Lewis has more details on the pause container.
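For a feel of what that reaping amounts to, here is a tiny shell analogue (an illustration only, not what the pause binary literally runs): spawn a child, then collect its exit status so it never lingers as a zombie.

```shell
# pause's main loop boils down to: block until a child exits, then reap it
# with waitpid(). The shell equivalent of that reap is wait(1).
sleep 0.1 &          # stand-in for a container process in the pod
child=$!
wait "$child"        # reap it, collecting its exit status
echo "reaped PID $child, exit status $?"
```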

Finally let’s dig into the actual containers. The pause container seems like a good place to start. We can examine the details of where the filesystem is (may differ if you’re not using the overlay2 image thingy). The hex string is the container ID listed by docker ps.

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 6669168db70d
# cd /var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# find . | sed -e 's;^./;;'
# file pause
pause: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d35dab7152881e37373d819f6864cd43c0124a65, stripped

This is a nice, minimal container. The pause binary is statically linked, so there are no extra libraries required and it’s just a basic set of support devices and files. I doubt the pieces in /etc are even required. Let’s try the echoserver next:

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 7cbb177bee18
# cd /var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# find . | wc -l

Wow. That’s a lot more stuff. Poking /etc/os-release shows why:

# grep PRETTY etc/os-release
PRETTY_NAME="Ubuntu 16.04.2 LTS"

Aha. It’s an Ubuntu-based image. We can cut straight to the chase with the nginx ingress container:

# docker exec d5fa78fa31f1 grep PRETTY /etc/os-release
PRETTY_NAME="Alpine Linux v3.13"

That’s a bit more reasonable an image for a container; Alpine Linux is a much smaller distro.

I don’t feel there’s a lot more poking to do here. It’s not something I’d expect to do on a normal Kubernetes setup, but I wanted to dig under the hood to make sure it really was just a normal container situation. I think the next steps involve adding a bit more complexity - that means building a cluster with more than a single node, and then running an application that’s a bit more complicated. That should help explore two major advantages of running this sort of setup: resiliency when a node dies, and the ability to scale out beyond what a single node can do.

Worse Than FailureCodeSOD: A Little Info

Matt has plans for the next few years: dealing with the "inheritance" of some legacy systems.

They're written in VB.Net, which isn't itself a problem, but the code quality leaves just a bit to be desired.

Dim FileInfo As New System.IO.FileInfo(FileName)
FileName = FileInfo.Name
If FileName = "" Then
    Throw New ApplicationException("cannot determine the file name for '" & FileName & "'")
End If

I suspect they wanted to check if the file exists. Which is definitely a reasonable thing to want to check with a FileInfo object, which is why it has an Exists property which one could use. But this code doesn't accomplish an exists check: if the file doesn't exist, the Name property is simply whatever name you used to construct the object. Perhaps they hoped to canonicalize the name, but that's also not something the Name property does.

So, in practice, this code doesn't do anything, but boy, it looks like it might be an important check.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianJohn Goerzen: Roundup of Unique Data/Storage Hosting Options

Recently I have been taking another look at the services at rsync.net, and it got me thinking: what would I do with a lot of storage? What might I want to run with it, if it were fairly cheap?

  • Backups are an obvious place to start. Borgbackup makes a pretty compelling option: very bandwidth-efficient thanks to block-level rolling hash dedup, encryption fully on the client side, etc. Borg can run over ssh, though does need a server-side program.
  • Nextcloud is another option. With Google Photos getting quite expensive now, if you could have a TB of storage that you control, what might you do with it? Nextcloud also includes IM, video chat, and online document editing similar to Google Docs.
  • I’ve written before about the really neat properties of Syncthing: distributed synchronization that needs no server component. It also supports untrusted nodes in the mesh, where all content is encrypted before it reaches them. Sometimes an intermediary node is useful; for instance, if nodes A and C are to sync but are rarely online at the same time, an untrusted node B that is always online can facilitate synchronization. A server with some space could help with this.
  • A relay for NNCP or UUCP.
  • More broadly, you could self-host your photo or video collection.

Let’s start taking a look at what’s out there. I’m going to try to focus on things that are unique for some reason: pricing, features, etc. Incidentally, good reviews are hard to find due to the proliferation of affiliate links. I have no affiliate relationships with anyone mentioned here and there are no affiliate links in this post.

I’ll start with the highest-end community and commercial options (though both are quite competitive on price for what they are), and then move on to the cheaper options.

Community option: SDF

SDF is somewhat hard to define. “What is SDF?” could prompt answers like:

  • A community-run network offering free Unix shells to the public
  • A diverse community of people that connect with unique tools. A social network in the 80s sense, sort of.
  • A provider of… let me see… VPN, DSL, and even dialup access.
  • An organization that runs various Open Source social network services, including Mastodon, Pixelfed (image sharing), PeerTube (video sharing), WordPress, even Minecraft.
  • A provider of various services for a nominal charge: $3/mo gets you access to the MetaArray with 800GB of storage space which you have shell access to, and can store stuff on with Nextcloud, host public webpages, etc.
  • Thriving communities around amateur radio, musicians, Plan 9, and even – brace yourself – TOPS-20, a DEC operating system first released in 1976 and not updated since 1988.
  • There’s even a Wikipedia article about SDF.

There’s a lot there. SDF lets you use things for yourself, of course, but you can also join a community. It’s not a commercial service backed by SLAs — it’s best-effort — but it’s been around more than 30 years and has a great track record.

Top commercial option for backup storage: rsync.net

rsync.net offers storage broadly over SSH: sftp, rsync, scp, borg, rclone, restic, git-annex, git, and such. You do not get a shell, but you do get to run a few noninteractive commands via ssh. You can, for instance, run git clone on the rsync.net server.

The rsync.net special sauce is in ZFS. They run raidz3 on their arrays (and also offer dual-location setups for an additional fee), offer both free and paid ZFS snapshots, etc. The service is designed to be extremely reliable, particularly for backups, and it seems to me to meet those goals.

Basic storage is $0.025 per GB/mo, but with certain account types, such as borg, it can be had for $0.015 per GB/mo. The minimum size is 400GB or $10/mo. There are no bandwidth charges. This makes it quite economical even compared to, say, S3. Additional discounts start at 10TB, so 10TB with rsync.net would cost $204.80/mo, or $81.92 on the borg plan.
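Those 10TB figures are consistent with volume-discounted rates of $0.020/GB (standard) and $0.008/GB (borg plan). Note the per-GB discount rates here are my inference from the arithmetic, not numbers quoted in the post:

```shell
# 10TB priced at the inferred discounted rates (10TB = 10240 GB here):
awk 'BEGIN {
    gb = 10 * 1024
    printf "standard: $%.2f/mo\n", gb * 0.020   # -> $204.80/mo
    printf "borg:     $%.2f/mo\n", gb * 0.008   # -> $81.92/mo
}'
```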

You won’t run Nextcloud on this thing, but for backups that must be reliable, or even a photo collection or something, it makes perfect sense.

When you look into other options, you’ll find that other providers are a lot more vague about their storage setup than rsync.net is.

Various offerings from Hetzner

Hetzner is one of Europe’s large hosting companies, and they have several options of interest.

Their Storage Box competes directly with the rsync.net service. Their per-GB storage cost is lower than rsync.net’s, and although they do include a certain amount of free bandwidth with each account, bandwidth is not unlimited and could result in charges. Still, if you don’t drive 2x or more your storage usage in bandwidth each month, it would be cheaper than rsync.net. The Storage Box also uses ZFS with some kind of redundancy, though they don’t specify details.

What differentiates them from rsync.net is the protocol support. They support sftp, scp, Borg, ssh, rsync, etc. just as rsync.net does. But then they also throw in Samba/CIFS, FTPS, HTTPS, and WebDAV – all optionally enabled or disabled by you. Although things like sshfs exist, they aren’t particularly optimal for some use cases, and CIFS support may be just what you need in some situations.

10TB with Hetzner would cost EUR 39.90/mo, or about $48.84/mo. (This figure is higher for Europeans, who also have to pay VAT.)

Hetzner also offers a Storage Share, which is a private Nextcloud instance. 10TB of that is exactly the same cost as 10TB of the Storage Box. You can add your own users, groups, etc. to this, as you are the Nextcloud admin of your instance. Hetzner throws in automatic updates (which is great, as updates have been a pain in my side for a long time). Nextcloud is ideal for things like photo sharing, even has email and chat built in, etc. For about the same price as 2TB of Google One, you can have 2TB of Nextcloud with all those services for yourself. Not bad. You can also mount a Nextcloud instance with WebDAV.

Interestingly, Nextcloud supports “external storages” as backend for the data. It supports another Nextcloud instance, OpenStack or S3 object storage, and SFTP, SMB/CIFS, and WebDAV. If you’re thinking you’d like both SFTP and Nextcloud access to a pool of storage, I imagine you could always get a large Storage Box from Hetzner (internal transfer is free), pair it with a small Nextcloud instance, and link the two with Nextcloud external storage.

Dedicated Servers

If you want a more DIY approach, you can find some interesting deals on actual dedicated server hardware – you get the entire machine to yourself. I’ve been using OVH’s SoYouStart for a number of years, with good experiences, and they have a number of server configurations available. For instance, for $45.99, you can get a Xeon box with 4x2TB drives and 32GB RAM. With RAID5 or raidz1, that’s 6TB of available space – and cheaper than the 6TB from rsync.net (though less redundant), plus you get the whole box to yourself too. OVH directly has some more storage servers; for instance, you can get a box with 4x4TB + 1x500GB SSD for $86.75/mo, giving you 12TB available with RAID5/raidz1, plus a 16GB server to do what you want with.
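The usable-space figures follow from single-parity RAID5/raidz1 giving up one drive’s worth of capacity to parity:

```shell
# usable = (drives - 1) * drive_size for single-parity RAID5/raidz1
awk 'BEGIN {
    printf "4x2TB -> %dTB usable\n", (4 - 1) * 2   # -> 6TB
    printf "4x4TB -> %dTB usable\n", (4 - 1) * 4   # -> 12TB
}'
```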

Hetzner also has some larger options available, for instance 2x4TB at EUR39 or 2x8TB at EUR54, both with 64GB of RAM.

Bargain Corner

Yes, you can find 10TB for $25/mo. It’s hosted on Ceph, by what appears to be mostly a single person (though with a lot of experience and a fair bit of transparency). You’re not going to have the round-the-clock support experience as with rsync.net, nor its raidz3 level of redundancy – but if you don’t need that, there are quite a few options.

Let’s start with Lima Labs. Yes, 10TB is $25/mo, and they support sftp, rsync, borg, and even NFS mounts on storage backed by Ceph. The owner, Sam, seems to be a nice guy, but the service isn’t going to be on the scale of rsync.net or Hetzner. That may or may not be OK for your needs – I mean, you can even get 1TB for $5/mo, so there are some fantastic deals to be had here.

BorgBase does Borg hosting and borg hosting only. You can get 1TB for $6.67/mo or, for instance, 10TB for $53.46. They don’t say much about their infrastructure and it’s hard to get a read on the company, but for Borg backups, it could be a nice option.

Bargain Corner Part 2: Seedboxes

There’s a market out there of companies offering BitTorrent seeding and downloading services. Typically, these services offer you Unix ssh access to a shell, give you a bunch of space on completely non-redundant drives (theory being that the data on them is transient), lots of bandwidth, for a low price. Some people use them for BitTorrent, others for media serving and such.

If you are willing to take the lowest in drive redundancy, there are some deals to be had. Whatbox is a popular leader here, and has an extensive wiki with info. Or you can find some “shared storage” plans – for instance, 12TB for $32.49/mo. But it’s completely non-redundant drives.

Seedbox has a partner company, Walker Servers, with some interesting deals; for instance, 4x8TB for EUR 52.45. Not bad for 24TB usable with RAID5 – but Walker Servers is completely unknown to me and doesn’t publish a phone number. So, YMMV.


I’m sure I’ve left out many quality options here, but hopefully this is enough to lay out a general lay of the land. Leave other suggestions in the comments.

Rondam RamblingsConservatives Can't Handle the Truth

When the truth is not on your side one thing you can do is to try to change it, and when that doesn't work, outlaw it:

Under the culture war rallying cry of combating “critical race theory” — an academic framework centered on the idea that racism is systemic, not just a collection of individual prejudices — [Republican] lawmakers have endorsed an extraordinary intervention in classrooms across


Chaotic IdealismIs activism a moral obligation?

Yes, it is, but with one caveat: Activism has a wide definition.

Let’s say you are a busy person, middle-aged with three children and a job, and not closely identified with any oppressed minority or social justice issue. You have to spend most of your time keeping your family fed, and in your spare time you still have to ensure that your children have someone to love them and watch over them. For you to go off getting arrested during a public civil disobedience publicity stunt would actually be irresponsible, because your children might lose you as a parent, and if you made the wrong sort of enemies, you might put them at risk. Some people might call that cowardly, seeing as how the children of oppressed minorities are at risk by default; but I call it natural, because you are a parent and your children come first.

However, that doesn’t mean you can’t be an activist. Look at those children–you can teach them what you know about being kind, about taking care of the world around them, about paying attention to the news and to current events; you can teach them about critical thinking and about how to argue without becoming (verbally or physically) violent. You can, of course, do things that don’t involve endangering your children, like taking part in a pride parade, writing letters to the editor, or joining a peaceful, child-friendly demonstration. You can use money and influence to support the causes you care about.

Activism does not need to be formal. You can be a quiet supporter of those who need support; you can quite casually reprimand those who do and say things that make your community hostile to one group or another. You can encourage fairness and kindness in everything you do, without ever having to preach. In a perfect world, that would be the only sort of activism we ever needed.

There are many other situations in which activism is made difficult. Some people are in an oppressed minority, and so badly affected by prejudice that it is simply unsafe for you to speak up. Think of a transgender teen in a transphobic household who is likely to be beaten up; or a disabled person living in an abusive institution who will be mistreated for doing anything but pretend to be “grateful” for their “care”. Sometimes, in those situations, activism means simply surviving, as best you can, and clinging as tightly to your morals as you can, being as supportive as you can of anyone else in the same position as you, while keeping it clear in your mind that the things you see happening around you which you cannot prevent are not your fault–they are the fault of your abusers.

And sometimes, it’s simply difficult to get started. You don’t have the skills; you don’t know where to go or what to do. It’s very difficult to be the first to hold a sign, alone on a street corner; or the first to say, “I don’t think this is right,” when everybody else seems to take it for granted; or the first to stand up to someone who has been taking their unjust use of their power as a given. Even more than that, it can be difficult to be an activist when you don’t even know what is wrong with the world or how that wrongness perpetuates itself. Sometimes, activism can mean just learning more. It can mean reading books or blogs or finding other people who also care and talking to them. It can mean finding someone else who can be the first person on the street corner, and joining them. It can mean taking it in stride when you are embarrassed to discover that something you have been doing was hurting people, to recognize that because you grew up in a prejudiced world, you were indoctrinated with those ideas, and that this isn’t your fault.

One form of activism that many people completely ignore is the practice of volunteering. Of course, volunteering has to be done right–you have to evaluate your skills, find out where you are going to actually do some good, and use those skills to their best effect. Just doing things for the sake of doing them–or, even worse, for the sake of selfies and reputation–is not going to help anybody. Find out where the need is, find out what you can do, and figure out how to match those things in a way that’s effective. And above all, never use your volunteer work to diminish the self-determination and self-respect of those you help. Empower them.

Activism is more than just the stereotypical protest and civil disobedience. But being an activist is part of being an ethical member of your community. We are human beings; we are meant to work together. If we don’t use our skills and resources to make our communities better, in whatever form that takes for our particular circumstances, then we are giving up part of what it is to be human.

Cryptogram The DarkSide Ransomware Gang

The New York Times has a long story on the DarkSide ransomware gang.

A glimpse into DarkSide’s secret communications in the months leading up to the Colonial Pipeline attack reveals a criminal operation on the rise, pulling in millions of dollars in ransom payments each month.

DarkSide offers what is known as “ransomware as a service,” in which a malware developer charges a user fee to so-called affiliates like Woris, who may not have the technical skills to actually create ransomware but are still capable of breaking into a victim’s computer systems.

DarkSide’s services include providing technical support for hackers, negotiating with targets like the publishing company, processing payments, and devising tailored pressure campaigns through blackmail and other means, such as secondary hacks to crash websites. DarkSide’s user fees operated on a sliding scale: 25 percent for any ransoms less than $500,000 down to 10 percent for ransoms over $5 million, according to the computer security firm, FireEye.

Cryptogram Security Vulnerability in Apple’s Silicon “M1” Chip

The website for the M1racles security vulnerability is an excellent demonstration that not all vulnerabilities are exploitable. Be sure to read the FAQ through to the end.

EDITED TO ADD: Wired article.

Planet DebianSven Hoexter: pulseaudio/alsa and dynamic mic sensitivity in my browser

It's a gross hack, but it works for now. To prevent overly sensitive mic settings autotuned by the browser in web conferences, I currently edit, as root, /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf. In the [Element Capture] block, change the volume setting from merge to 80.

The config block as a whole looks like this:

[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right

Solution found at
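If you’d rather script the edit than open an editor, something like this sed one-liner should work. It’s shown here against a throwaway copy of the stanza; the range pattern and the stock volume = merge value are assumptions about the unmodified file. Run it as root against the real path, and keep a backup:

```shell
# Demonstration on a copy of the stanza; point $conf at
# /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf
# (as root) to do it for real.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Element Capture]
switch = mute
volume = merge
override-map.1 = all
override-map.2 = all-left,all-right
EOF
# Within the [Element Capture] block only, flip volume from merge to 80:
sed -i '/^\[Element Capture\]$/,/^\[/ s/^volume = merge$/volume = 80/' "$conf"
grep '^volume' "$conf"    # -> volume = 80
```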

Kevin RuddProject Syndicate: Our Responsibility to South Asia

Almost one-quarter of humanity lives on the Indian subcontinent. That fact is easily forgotten elsewhere, as world leaders focus on combating outbreaks of COVID-19 and its new variants within their own countries. But when our descendants pass judgment on this moment in history, they won’t remember just the lockdowns, face masks, and vaccination programs. They will also remember India and its neighbors.

They will remember how human remains have been found bloated and decomposing on the banks of the sacred River Ganges; how bodies had to be left in the heat outside crematoria, owing to a lack of wood for funerary pyres. They will remember how hospitals ran low on oxygen, medication, and hospital beds, while people lined up outside emergency departments and clinics begging for someone to save their loved ones.

All of this will be seared in memory and history. Beyond inflicting agony on the sick, the coronavirus outbreak in the world’s most populous democracy is now robbing victims of their dignity in death, too.

At the Asia Society, we hear accounts almost daily from friends and colleagues who have lost their relatives. According to one member of our Asia 21 Young Leaders network, “An uncle passed away last evening. Another the day before. A friend’s father last week. Everyone I know has someone they’ve lost.”

There are already too many of these stories, and now this tragedy is spilling across India’s borders. In Nepal, where one out of every two citizens is testing positive for the virus, the hardship is multiplied by the fact that India is the country’s principal supplier of vaccines and oxygen; that supply line is now shut down.

While these up-close images reveal an unfolding humanitarian calamity, the 30,000-foot perspective shows that things will only worsen as this deadly wave expands unchecked to rural areas of the subcontinent, where essential medical facilities are even scarcer. As fellow members of the human family, and as citizens of democracies that stand up for each other when help is required, we all need to act – governments, businesses, and private citizens. The quicker we do so, the more lives we might save.

Helping South Asia is not only the right thing to do; it is also in our own self-interest. The rampant spread of the virus anywhere can create more deadly variants that threaten all of us. So, what can be done?

Start with vaccination: we need to put shots into at least a billion arms as fast as possible. To date, fewer than 10% of citizens in each South Asian country (with the exception of Bhutan) have received at least one vaccine dose, according to Our World in Data. We must pull new levers to speed things up.

To that end, the rest of the world should join the United States and 100-plus other countries in backing a temporary World Trade Organization waiver of intellectual-property protections on vaccines. While not a silver bullet, this initiative, coupled with the removal of restrictions on related supplies and equipment, would help India’s sizable pharmaceutical industry to increase production, thereby reducing vaccine shortages domestically and in the region.

It is also incumbent on countries with excess vaccine supplies – particularly those in the developed world – to share the wealth. Earlier in the pandemic, India set an example by sending more than 66 million doses of vaccines to 95 countries around the world when it could have vaccinated its own people more rapidly. It is time to return the favor.

Equally important, more must be done to counter the scourge of misinformation. In an environment where fraudulent miracle cures are being propagated widely on social media, the world should help fund and support vaccine-literacy programs. Campaigns to increase the acceptance of masks, vaccines, social distancing, and other measures are needed especially in rural parts of the subcontinent, where complex sociocultural factors and linguistic diversity pose additional challenges.

Finally, there is the problem of insufficient oxygen – canisters, concentrators, and tankers to transport them. Of all the requests we have heard from our friends in the region, the plea for more oxygen has been the most urgent. India has only around 1,600 cryogenic tankers capable of transporting oxygen from production facilities to hospitals. And that includes the tankers it already supplied to Nepal, which itself has such a paucity of oxygen canisters that it is now asking mountaineers returning from Mount Everest to donate their empty ones.

Shipping cryogenic tankers and oxygen canisters to South Asia will help save the lives of those threatened by the shortage, rather than by COVID-19 itself. Here, developed countries with ample production capacity can help in ways that local nongovernmental organizations cannot – and help they must.

Ultimately, this pandemic, and the legacy of our global response, belongs to all of us. Each generation is confronted by challenges great and small, and this one is ours. Unless we can truly protect people everywhere by arresting the virus and slowing its mutations, we may find ourselves facing the prospect of a permanent pandemic.

Photo: U.S. Air Force personnel stand with over 280K Rapid Diagnostic Test kits, a climate-friendly oxygen generation unit, and enough N95 masks to protect 1 million of India’s healthcare workers. The plane landed in New Delhi, India on May 5, 2021. (Martha Van Lieshout/USAID)

The post Project Syndicate: Our Responsibility to South Asia appeared first on Kevin Rudd.

Planet DebianJoachim Breitner: Verifying the code of the Internet Identity service

The following post was meant to be posted at, but that Discourse instance didn’t like it; maybe too much inline code, so I’m posting it here instead. To my regular blog audience, please excuse the lack of context. Please comment at the forum post. The text was later also posted on the DFINITY Medium blog.

You probably have used to log into various applications (the NNS UI, OpenChat etc.) before, and if you do that, you are trusting this service to take good care of your credentials. Furthermore, you might want to check that the Internet Identity is really not tracking you. So you want to know: Is this really running the code we claim it to run? Of course the following applies to other canisters as well, but I’ll stick to the Internet Identity in this case.

I’ll walk you through the steps of verifying that:

Find out what is running

A service on the Internet Computer, i.e. a canister, is a WebAssembly module. The Internet Computer intentionally does not let you simply download the Wasm code of arbitrary canisters, because some developers may want to keep their code private. But it does expose a hash of the Wasm module. The easiest way to get it is using dfx:

$ dfx canister --no-wallet --network ic info rdmx6-jaaaa-aaaaa-aaadq-cai
Controller: r7inp-6aaaa-aaaaa-aaabq-cai
Module hash: 0xd4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04

The “controller” here is the canister id of the governance canister. This tells you that the Internet Identity is controlled by the Network Nervous System (NNS), and its code can only be changed via proposals that are voted on. This is good; if the controller were just, say, me, I could just change the code of the Internet Identity and take over all your identities.

The “Module hash” is the SHA-256 hash of the .wasm that was deployed. So let’s follow that trace.
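Comparing a local Wasm artifact against the value dfx prints is just a SHA-256 comparison; here is a small Python sketch (an illustration, not part of the original post) that accounts for the leading "0x" in dfx's output:

```python
import hashlib

def module_hash_matches(wasm_bytes, dfx_hash):
    """Compare a local .wasm file's SHA-256 against the hash dfx prints.

    dfx prefixes the hex digest with "0x"; the digest itself is plain hex.
    """
    expected = dfx_hash[2:] if dfx_hash.startswith("0x") else dfx_hash
    return hashlib.sha256(wasm_bytes).hexdigest() == expected
```

Reading the file with `open(path, "rb").read()` and passing the bytes in is all that's needed to check a build artifact.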

Finding the right commit

Since upgrades to the Internet Identity are done via proposals to the NNS, we should find a description of such a proposal in the repository, in the proposals/network_canister_management directory.

Github’s list of recent NNS proposals

We have to find the latest proposal upgrading the Internet Identity. The folder unfortunately contains proposals for many canisters, and the file naming isn’t super helpful. I usually go through the list from the bottom and look at the second column, which contains the title of the latest commit creating or modifying a file.

In this case, the second to last is the one we care about: This file lists rationales, gives an overview of changes and, most importantly, says that bd51eab is the commit we are upgrading to.

The file also says that the wasm hash is d4a…c04, which matches what we saw above. This is good: it seems we really found the latest proposal upgrading the Internet Identity, and that the proposal actually went through.

WARNING: If you are paranoid, don’t trust this file. There is nothing preventing a proposer from creating a file that points to one revision while actually including different code in the proposal. That’s why the next steps are needed.

Getting the source

Now that we have the revision, we can get the source and check out revision bd51eab:

/tmp $ git clone
Klone nach 'internet-identity' ...
remote: Enumerating objects: 3959, done.
remote: Counting objects: 100% (344/344), done.
remote: Compressing objects: 100% (248/248), done.
remote: Total 3959 (delta 161), reused 207 (delta 92), pack-reused 3615
Empfange Objekte: 100% (3959/3959), 6.05 MiB | 3.94 MiB/s, Fertig.
Löse Unterschiede auf: 100% (2290/2290), Fertig.
/tmp $ cd internet-identity/
/tmp/internet-identity $ git checkout bd51eab
/tmp/internet-identity $ git log --oneline -n 1
bd51eab (HEAD, tag: mainnet-20210527T2203Z) Registers the seed phrase before showing it (#301)

In the last line you see that the Internet Identity team has tagged that revision with a tag name that contains the proposal description file name. Very tidy!

Reproducing the build

The repository’s README has the following build instructions:

Official build

The official build should ideally be reproducible, so that independent parties can validate that we really deploy what we claim to deploy.

We try to achieve some level of reproducibility using a Dockerized build environment. The following steps should build the official Wasm image

docker build -t internet-identity-service .
docker run --rm --entrypoint cat internet-identity-service /internet_identity.wasm > internet_identity.wasm
sha256sum internet_identity.wasm

The resulting internet_identity.wasm is ready for deployment as rdmx6-jaaaa-aaaaa-aaadq-cai, which is the reserved principal for this service.

It actually suffices to run the first command, as it also prints the hash (we don’t need to copy the .wasm out of the Docker container):

/tmp/internet-identity $ docker build -t internet-identity-service .
Step 26/26 : RUN sha256sum internet_identity.wasm
 ---> Running in 1a04644b544c
d4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04  internet_identity.wasm
Removing intermediate container 1a04644b544c
 ---> bfe6a63a7980
Successfully built bfe6a63a7980
Successfully tagged internet-identity-service:latest

Success! The hashes match.

You don’t believe me? Try it yourself (and let us know if you get a different hash, maybe I got hacked). This may fail if you have too little RAM configured for Docker; 8GB should be enough.

At this point you have a trust path from the code sitting in front of you to the Internet Identity running at, including the front-end code, and you can start auditing the source code.

What about the canister id?

If you paid close attention you might have noticed that we got the module hash for canister rdmx6-jaaaa-aaaaa-aaadq-cai, but we are accessing a web application at So where is this connection?

In the future, I expect some form of a DNS-like “nice host name registry” on the Internet Computer that stores a mapping from nice names to canister ids, and that you will be able to query that for “which canister serves rdmx6-jaaaa-aaaaa-aaadq-cai” in a secure way (e.g. using certified variables). But since we don’t have that yet, and still want you to be able to use a nice name for the Internet Identity (without having to change the name later, which would cause headaches), we have hard-coded this mapping for now.

The relevant code here is the “Certifying Service Worker” that your browser downloads when accessing any * URL. This piece of code then intercepts all requests to that domain, maps them to query calls, and uses certified variables to validate the responses. And indeed, the mapping is in the code there:

const hostnameCanisterIdMap: Record<string, [string, string]> = {
  '': ['rdmx6-jaaaa-aaaaa-aaadq-cai', ''],
  '': ['qoctq-giaaa-aaaaa-aaaea-cai', ''],
  '': ['h5aet-waaaa-aaaab-qaamq-cai', ''],
};

What about other canisters?

In principle, the same approach works for other canisters, whether it’s OpenChat, the NNS canisters etc. But the details will differ, as every canister developer might have their own way of

  • communicating the location and revision of the source for their canisters
  • building the canisters

In particular, without a reproducible way of building the canister, this will fail, and that’s why projects like are so important in general.

Worse Than FailureCodeSOD: A World Class Programmer

Jesse had a "special" co-worker, Rupert. Rupert was the sort of person who thought he was the smartest person walking the Earth, and was quite happy to loudly proclaim that everyone else was wrong. Rupert was happy so long as everyone else was ready to bask in his "genius".

Fortunately for Jesse, Rupert left, because he'd received a huge offer for a senior developer role at a different company. Everyone at Jesse's company privately chuckled about it, because this is the kind of code Rupert's genius produced:

protected function getStreet($street) { return $street; }
protected function getCity($city) { return $city; }
protected function getState($state) { return $state; }
protected function getZip($zip) { return $zip; }

Jesse writes:

Rupert, who wrote these methods, had the title of senior developer, and was arrogant about it to boot. I could even forgive someone making a mistake like this once, meaning to remove it, and forgetting about it as it got buried within a larger project, but he wrote four methods all of which do nothing and then called them over and over again. He even had the unmitigated gall to say to the entirety of the development team that he was a "world-class coder" at one point. Apparently in his mind "world-class" means "often writes code that does literally nothing."

Well, Rupert's brought his world-class talents to a new employer, so good luck to them. They'll need it.


Planet DebianMatthew Garrett: Producing a trustworthy x86-based Linux appliance

Let's say you're building some form of appliance on top of general purpose x86 hardware. You want to be able to verify the software it's running hasn't been tampered with. What's the best approach with existing technology?

Let's split this into two separate problems. The first is to do as much as we can to ensure that the software can't be modified without our consent[1]. This requires that each component in the boot chain verify that the next component is legitimate. We call the first component in this chain the root of trust, and in the x86 world this is the system firmware[2]. This firmware is responsible for verifying the bootloader, and the easiest way to do this on x86 is to use UEFI Secure Boot. In this setup the firmware contains a set of trusted signing certificates and will only boot executables with a chain of trust to one of these certificates. Switching the system into setup mode from the firmware menu will allow you to remove the existing keys and install new ones.

(Note: You shouldn't use the trusted certificate directly for signing bootloaders - instead, the trusted certificate should be used to sign another certificate and the key for that certificate used to sign your bootloader. This way, if you ever need to revoke the signing certificate, you can simply sign a new one with the trusted parent and push out a revocation update instead of having to provision new keys)

But what do you want to sign? In the general purpose Linux world, we use an intermediate bootloader called Shim to bridge from the Microsoft signing authority to a distribution one. Shim then verifies the signature on grub, and grub in turn verifies the signature on the kernel. This is a large body of code that exists because of the use cases that general purpose distributions need to support - primarily, booting on arbitrary off the shelf hardware, and allowing arbitrary and complicated boot setups. This is unnecessary in the appliance case, where the hardware target can be well defined, where there's no need for interoperability with the Microsoft signing authority, and where the boot configuration can be extremely static.

We can skip all of this complexity using systemd-boot's unified Linux image support. This has the format described here, but the short version is that it's simply a kernel and initramfs linked into a small EFI executable that will run them. Instructions for generating such an image are here, and if you follow them you'll end up with a single static image that can be directly executed by the firmware. Signing this avoids dealing with a whole host of problems associated with relying on shim and grub, but note that you'll be embedding the initramfs as well. Again, this should be fine for appliance use-cases, but you'll need your build system to support building the initramfs at image creation time rather than relying on it being generated on the host.
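The recipe in the systemd documentation boils down to gluing the kernel, initramfs, and command line onto systemd's EFI stub with objcopy and then signing the result. A sketch along those lines (the input file names are placeholders; the section addresses follow the systemd-boot documentation):

```shell
# Glue kernel + initramfs + cmdline onto the systemd EFI stub.
# linuxx64.efi.stub ships with systemd; the other inputs are placeholders
# for your os-release file, kernel command line, kernel, and initramfs.
objcopy \
    --add-section .osrel=os-release       --change-section-vma .osrel=0x20000 \
    --add-section .cmdline=kernel-cmdline --change-section-vma .cmdline=0x30000 \
    --add-section .linux=vmlinuz          --change-section-vma .linux=0x2000000 \
    --add-section .initrd=initrd.img      --change-section-vma .initrd=0x3000000 \
    /usr/lib/systemd/boot/efi/linuxx64.efi.stub unified-boot.efi

# Sign the result with a key whose certificate is enrolled in the
# firmware's Secure Boot db (signing.key/signing.crt are placeholders).
sbsign --key signing.key --cert signing.crt \
    --output unified-boot.efi.signed unified-boot.efi
```

The signed output is a single file the firmware can verify and execute directly, with no shim or grub in the chain.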

At this point we have a single image that can be verified by the firmware and will get us to the point of a running kernel and initramfs. Unless you've got enough RAM that you can put your entire workload in the initramfs, you're going to want a filesystem as well, and you're going to want to verify that that filesystem hasn't been tampered with. The easiest approach to this is to use dm-verity, a device-mapper layer that uses a hash tree to verify that the filesystem contents haven't been modified. The kernel needs to know what the root hash is, so this can either be embedded into your initramfs image or into the kernel command line. Either way, it'll end up in the signed boot image, so nobody will be able to tamper with it.
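Concretely, with cryptsetup's veritysetup this is a two-step affair (device paths here are illustrative): building the hash tree at image creation time prints the root hash, which then gets baked into the signed boot image, and the initramfs opens the device with that hash at boot.

```shell
# At image creation time: build the hash tree. This prints the
# "Root hash" that must go into the signed kernel cmdline or initramfs.
veritysetup format /dev/disk/rootfs /dev/disk/hashes

# At boot (normally done by the initramfs): set up the verified,
# read-only device and mount it. ROOT_HASH comes from the signed image.
veritysetup open /dev/disk/rootfs vroot /dev/disk/hashes "$ROOT_HASH"
mount -o ro /dev/mapper/vroot /sysroot
```

Any block whose hash doesn't match the tree causes a read error rather than returning tampered data.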

It's important to note that a dm-verity partition is read-only - the kernel doesn't have the cryptographic secret that would be required to generate a new hash tree if the partition is modified. So if you require the ability to write data or logs anywhere, you'll need to add a new partition for that. If this partition is unencrypted, an attacker with access to the device will be able to put whatever they want on there. You should treat any data you read from there as untrusted, and ensure that it's validated before use (ie, don't just feed it to a random parser written in C and expect that everything's going to be ok). On the other hand, if it's encrypted, remember that you can't just put the encryption key in the boot image - an attacker with access to the device is going to be able to dump that and extract it. You'll probably want to use a TPM-sealed encryption secret, which will be discussed later on.

At this point everything in the boot process is cryptographically verified, and so should be difficult to tamper with. Unfortunately this isn't really sufficient - on x86 systems there's typically no verification of the integrity of the secure boot database. An attacker with physical access to the system could attach a programmer directly to the firmware flash and rewrite the secure boot database to include keys they control. They could then replace the boot image with one that they've signed, and the machine would happily boot code that the attacker controlled. We need to be able to demonstrate that the system booted using the correct secure boot keys, and the only way we can do that is to use the TPM.

I wrote an introduction to TPMs a while back. The important thing to know here is that the TPM contains a set of Platform Configuration Registers that are large enough to contain a cryptographic hash. During boot, each component of the boot process will generate a "measurement" of other security critical components, including the next component to be booted. These measurements are a representation of the data in question - they may simply be a hash of the object being measured, or the hash of a structure containing various pieces of metadata. Each measurement is passed to the TPM, along with the PCR it should be measured into. The TPM takes the new measurement, appends it to the existing value, and then stores the hash of this concatenated data in the PCR. This means that the final PCR value depends not only on the measurement, but also on every previous measurement. Without breaking the hash algorithm, there's no way to set the PCR to an arbitrary value. The hash values and some associated data are stored in a log that's kept in system RAM, which we'll come back to later.
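The extend operation itself is simple enough to sketch in a few lines of Python (an illustration of the scheme, not real TPM code): each new value hashes the old PCR contents together with the new measurement, so the final value commits to the entire ordered sequence, and a verifier can replay the event log to reproduce it.

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def replay_log(log):
    """Replay an event log the way a verifier would, starting from zeroes."""
    pcr = bytes(32)  # SHA-256 PCRs start out as all zeroes
    for event in log:
        pcr = pcr_extend(pcr, event)
    return pcr
```

If the replayed value matches the PCR value the TPM reports, the log entries are consistent with what was actually measured; altering or reordering any single event changes the final value.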

Different PCRs store different pieces of information, but the one that's most interesting to us is PCR 7. Its use is documented in the TCG PC Client Platform Firmware Profile (section, but the short version is that the firmware will measure the secure boot keys that are used to boot the system. If the secure boot keys are altered (such as by an attacker flashing new ones), the PCR 7 value will change.

What can we do with this? There's a couple of choices. For devices that are online, we can perform remote attestation, a process where the device can provide a signed copy of the PCR values to another system. If the system also provides a copy of the TPM event log, the individual events in the log can be replayed in the same way that the TPM would use to calculate the PCR values, and then compared to the actual PCR values. If they match, that implies that the log values are correct, and we can then analyse individual log entries to make assumptions about system state. If a device has been tampered with, the PCR 7 values and associated log entries won't match the expected values, and we can detect the tampering.

If a device is offline, or if there's a need to permit local verification of the device state, we still have options. First, we can perform remote attestation to a local device. I demonstrated doing this over Bluetooth at LCA back in 2020. Alternatively, we can take advantage of other TPM features. TPMs can be configured to store secrets or keys in a way that renders them inaccessible unless a chosen set of PCRs have specific values. This is used in tpm2-totp, which uses a secret stored in the TPM to generate a TOTP value. If the same secret is enrolled in any standard TOTP app, the value generated by the machine can be compared to the value in the app. If they match, the PCR values the secret was sealed to are unmodified. If they don't, or if no numbers are generated at all, that demonstrates that PCR 7 is no longer the same value, and that the system has been tampered with.

Unfortunately, TOTP requires that both sides have possession of the same secret. This is fine when a user is making that association themselves, but works less well if you need some way to ship the secret on a machine and then separately ship the secret to a user. If the user can simply download the secret via some API, so can an attacker. If an attacker has the secret, they can modify the secure boot database and re-seal the secret to the new PCR 7 value. That means having to add some form of authentication, along with a strong binding of machine serial number to a user (in order to avoid someone with valid credentials simply downloading all the secrets).

Instead, we probably want some mechanism that uses asymmetric cryptography. A keypair can be generated on the TPM, which will refuse to release an unencrypted copy of the private key. The public key, however, can be exported and stored. If it's acceptable for a verification app to connect to the internet then the public key can simply be obtained that way - if not, a certificate can be issued to the key, and this exposed to the verifier via a QR code. The app then verifies that the certificate is signed by the vendor, and if so extracts the public key from that. The private key can have an associated policy that only permits its use when PCR 7 has an appropriate value, so the app then generates a nonce and asks the user to type that into the device. The device generates a signature over that nonce and displays that as a QR code. The app verifies the signature matches, and can then assert that PCR 7 has the expected value.

Once we can assert that PCR 7 has the expected value, we can assert that the system booted something signed by us and thus infer that the rest of the boot chain is also secure. But this is still dependent on the TPM obtaining trustworthy information, and unfortunately the bus that the TPM sits on isn't really terribly secure (TPM Genie is an example of an interposer for i2c-connected TPMs, but there's no reason an LPC one can't be constructed to attack the sort usually used on PCs). TPMs do support encrypted communication channels, but bootstrapping those isn't straightforward without firmware support. The easiest way around this is to make use of a firmware-based TPM, where the TPM is implemented in software running on an ancillary controller. Intel's solution is part of their Platform Trust Technology and runs on the Management Engine, AMD run it on the Platform Security Processor. In both cases it's not terribly feasible to intercept the communications, so we avoid this attack. The downside is that we're then placing more trust in components that are running much more code than a TPM would and which have a correspondingly larger attack surface. Which is preferable is going to depend on your threat model.

Most of this should be achievable using Yocto, which now has support for dm-verity built in. It's almost certainly going to be easier using this than trying to base on top of a general purpose distribution. I'd love to see this become a largely push button receive secure image process, so might take a go at that if I have some free time in the near future.

[1] Obviously technologies that can be used to ensure nobody other than me is able to modify the software on devices I own can also be used to ensure that nobody other than the manufacturer is able to modify the software on devices that they sell to third parties. There's no real technological solution to this problem, but we shouldn't allow the fact that a technology can be used in ways that are hostile to user freedom to cause us to reject that technology outright.
[2] This is slightly complicated due to the interactions with the Management Engine (on Intel) or the Platform Security Processor (on AMD). Here's a good writeup on the Intel side of things.


Planet DebianLouis-Philippe Véronneau: New Desktop Computer

I built my last desktop computer what seems like ages ago. In 2011, I was in a very different place, both financially and as a person. At the time, I was earning minimum wage at my school's café to pay rent. Since the café was owned by the school cooperative, I had an employee discount on computer parts. This gave me a chance to build my first computer from spare parts at a reasonable price.

After 10 years of service1, the time has come to upgrade. Although this machine was still more than capable for day to day tasks like browsing the web or playing casual video games, it started to show its limits when time came to do more serious work.

Old computer specs:

CPU: AMD FX-8530
Memory: 8GB DDR3 1600MHz
Motherboard: ASUS TUF SABERTOOTH 990FX R2.0
Storage: Samsung 850 EVO 500GB SATA

I first started considering an upgrade in September 2020: David Bremner was kindly fixing a bug in ledger that kept me from balancing my books and since it seemed like a class of bug that would've been easily caught by an autopkgtest, I decided to add one.

After adding the necessary snippets to run the upstream testsuite (an easy task I've done multiple times now), I ran sbuild and ... my computer froze and crashed. Somehow, what I thought was a simple Python package was maxing all the cores on my CPU and using all of the 8GB of memory I had available.2

A few months later, I worked on jruby and the builds took 20 to 30 minutes — long enough to completely disrupt my flow. The same thing happened when I wanted to work on lintian: the testsuite would take more than 15 minutes to run, making quick iterations impossible.

Sadly, the pandemic completely wrecked the computer hardware market and prices here in Canada have only recently started to go down again. As a result, I had to wait longer than I would've liked in order not to pay scalper prices.

New computer specs:

CPU: AMD Ryzen 5900X
Memory: 64GB DDR4 3200MHz
Motherboard: MSI MPG B550 Gaming Plus
Storage: Corsair MP600 500 GB Gen4 NVME

The difference between the two machines is pretty staggering: I've gone from a CPU with 2 cores and 8 threads, to one with 12 cores and 24 threads. Not only that, but single-threaded performance has also vastly increased in those 10 years.
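How much of that extra parallelism shows up in real build times depends on how parallel the workload is; Amdahl's law gives the usual back-of-the-envelope estimate (a generic sketch, not something from the original post):

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Amdahl's law: overall speedup when only a fraction of the work
    parallelizes across n_threads; the serial remainder caps the gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_threads)
```

For example, a build that is 90% parallel gets roughly 4.7x from 8 threads but only about 7.3x from 24, which is why the single-threaded gains matter just as much.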

A good example would be building grammalecte, a package I've recently sponsored. I feel it's a good benchmark, since the build relies on single-threaded performance for the normal Python operations, while being threaded when it compiles the dictionaries.

On the old computer:

Build needed 00:10:07, 273040k disk space

And as you can see, on the new computer the build time has been significantly reduced:

Build needed 00:03:18, 273040k disk space

Same goes for things like the lintian testsuite. Since it's a very multi-threaded workload, it now takes less than 2 minutes to run; a 750% improvement.

All this to say I'm happy with my purchase. And — lo and behold — I can now build ledger without a hitch, even though it maxes my 24 threads and uses 28GB of RAM. Who would've thought...

Screen capture of htop showing how much resources ledger takes to build

  1. I managed to fry that PC's motherboard in 2016 and later replaced it with a brand new one. I also upgraded the storage along the way, from a very cheap cacheless 120GB SSD to a larger Samsung 850 EVO SATA drive. 

  2. As it turns out, ledger is mostly written in C++ :) 


Planet DebianDavid Bremner: Baby steps towards schroot and slurm cooperation.

Unfortunately schroot does not maintain CPU affinity 1. This means in particular that parallel builds tend to take over an entire slurm-managed server, which is kind of rude. I haven't had time to automate this yet, but the following demonstrates a simple workaround for interactive building.

╭─ simplex:~
╰─% schroot --preserve-environment -r -c polymake
(unstable-amd64-sbuild)bremner@simplex:~$ echo $SLURM_CPU_BIND_LIST
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed:   ffff,ffffffff,ffffffff
Cpus_allowed_list:      0-79
(unstable-amd64-sbuild)bremner@simplex:~$ taskset $SLURM_CPU_BIND_LIST bash
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed:   5555,55555555,55555555
Cpus_allowed_list:      0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78

Next steps

In principle the schroot configuration parameter can be used to run taskset before every command. In practice it's a bit fiddly because you need a shell script shim (because of the environment variable), and you need to e.g. goof around with bind mounts to make sure that your script is available in the chroot. And then there's combining it with ccache and eatmydata...
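A minimal version of such a shim might look like this (hypothetical — the schroot configuration wiring and the bind mounts it needs are exactly the fiddly parts mentioned above): it re-applies the slurm CPU mask before exec'ing the real command, and does nothing special when the variable is unset.

```shell
#!/bin/sh
# Hypothetical taskset shim: re-apply slurm's CPU mask inside the chroot.
# SLURM_CPU_BIND_LIST must survive into the chroot environment, e.g. via
# schroot --preserve-environment as in the session above.
if [ -n "$SLURM_CPU_BIND_LIST" ]; then
    exec taskset "$SLURM_CPU_BIND_LIST" "$@"
else
    exec "$@"
fi
```

With that in place, every command run through the shim inherits the mask, instead of only the interactive shell started with taskset by hand.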

Planet DebianRobert McQueen: Next steps for the GNOME Foundation

As the President of the GNOME Foundation Board of Directors, I’m really pleased to see the number and breadth of candidates we have for this year’s election. Thank you to everyone who has submitted their candidacy and volunteered their time to support the Foundation. Allan has recently blogged about how the board has been evolving, and I wanted to follow that post by talking about where the GNOME Foundation is in terms of its strategy. This may be helpful as people consider which candidates might bring the best skills to shape the Foundation’s next steps.

Around three years ago, the Foundation received a number of generous donations, and Rosanna (Director of Operations) gave a presentation at GUADEC about her and Neil’s (Executive Director, essentially the CEO of the Foundation) plans to use these funds to transform the Foundation. We would grow our activities, increasing the pace of events, outreach, development and infrastructure that supported the GNOME project and the wider desktop ecosystem – and, crucially, would grow our funding to match this increased level of activity.

I think it’s fair to say that half of this has been a great success – we’ve got a larger staff team than GNOME has ever had before. We’ve widened the GNOME software ecosystem to include related apps and projects under the GNOME Circle banner, we’ve helped get GTK 4 out of the door, run a wider-reaching program in the Community Engagement Challenge, and consistently supported better infrastructure for both GNOME and the Linux app community in Flathub.

Aside from another grant from Endless (note: my employer), our fundraising hasn’t caught up with this pace of activities. As a result, the Board recently approved a budget for this financial year which will spend more funds from our reserves than we expect to raise in income. Due to our reserves policy, this is essentially the last time we can do this: over the next 6-12 months we need to either raise more money, or start spending less.

For clarity – the Foundation is fit and well from a financial perspective – we have a very healthy bank balance, and a very conservative “12 month run rate” reserve policy to handle fluctuations in income. If we do have to slow down some of our activities, we will return to a “steady state” where our regular individual donations and corporate contributions can support a smaller staff team that supports the events and infrastructure we’ve come to rely on.

However, this isn’t what the Board wants to do – the previous and current boards were unanimous in their support of the idea that we should be ambitious: try to do more in the world and bring the benefits of GNOME to more people. We want to take our message of trusted, affordable and accessible computing to the wider world.

Typically, a lot of the activities of the Foundation have been very inwards-facing – supporting and engaging with either the existing GNOME or Open Source communities. This is a very restricted audience in terms of fundraising – many corporate actors in our community already support GNOME hugely in terms of both financial and in-kind contributions, and many OSS users are already supporters either through volunteer contributions or donating to those nonprofits that they feel are most relevant and important to them.

To raise funds from new sources, the Foundation needs to take the message and ideals of GNOME and Open Source software to new, wider audiences that we can help. We’ve been developing themes such as affordability, privacy/trust and education as promising areas for new programs that broaden our impact. The goal is to find projects and funding that allow us to both invest in the GNOME community and find new ways for FOSS to benefit people who aren’t already in our community.

Bringing it back to the election, I’d like to make clear that I see this – reaching the outside world, and finding funding to support that – as the main priority and responsibility of the Board for the next term. GNOME Foundation elections are a slightly unusual process that “filters” board nominees by requiring them to be existing Foundation members, which means that candidates are already working inside our community when they stand for election. If you’re a candidate and are already active in the community – THANK YOU – you’re doing great work, keep doing it! That said, you don’t need to be a Director to achieve things within our community or gain the support of the Foundation: being a community leader is already a fantastic and important role.

The Foundation really needs support from the Board to make a success of the next 12-18 months. We need to understand our financial situation and the trade-offs we have to make, and help to define the strategy with the Executive Director so that we can launch some new programs that will broaden our impact – and funding – for the future. As people cast their votes, I’d like people to think about what kind of skills – building partnerships, commercial background, familiarity with finances, experience in nonprofit / impact spaces, etc – will help the Board make the Foundation as successful as it can be during the next term.

Planet DebianRussell Coker: Internode NBN with Arris CM8200 on Debian

I’ve recently signed up for Internode NBN while using the Arris CM8200 device supplied by Optus (previously used for a regular phone service). I took the configuration mostly from Dean’s great blog post on the topic [1]. One thing I changed was the /etc/network/interfaces configuration; I used the following:

# VLAN ID 2 for Internode's NBN HFC.
auto eth1.2
iface eth1.2 inet manual
  vlan-raw-device eth1

auto nbn
iface nbn inet ppp
    pre-up /bin/ip link set eth1.2 up
    provider nbn

There is no need to have a section for eth1 when you have a section for eth1.2.
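
For reference, the “provider nbn” line points pppd at a peers file. Below is a minimal sketch of what /etc/ppp/peers/nbn might contain; the username is a placeholder and the exact option set is my assumption, not taken from Internode documentation:

```
# /etc/ppp/peers/nbn -- PPPoE on the VLAN interface defined above
plugin rp-pppoe.so eth1.2
user "username@internode.on.net"
noauth
defaultroute
usepeerdns
persist
maxfail 0
```

The matching password goes in /etc/ppp/pap-secrets (or chap-secrets, depending on what the ISP uses).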


IPv6 for only one system

With a line in /etc/ppp/options containing only “ipv6 ,” you get an IPv6 address automatically for the ppp0 interface after starting pppd.

IPv6 for your lan

Internode has documented how to configure the WIDE DHCPv6 client to get an IPv6 “prefix” (subnet) [2]. Just install the wide-dhcpv6-client package, put your interface names in a copy of the Internode example config, and it works. That gets you a /64 assigned to your local Ethernet. Here’s an example of /etc/wide-dhcpv6/dhcp6c.conf:

interface ppp0 {
    send ia-pd 0;
    script "/etc/wide-dhcpv6/dhcp6c-script";
};

id-assoc pd {
    prefix-interface br0 {
        sla-id 0;
        sla-len 8;
    };
};
For providing addresses to other systems on your LAN they recommend radvd version 1.1 or greater; Debian/Bullseye will ship with version 2.18. Here is an example /etc/radvd.conf that will work with it. It seems that you have to set the value to use in place of “xxxx:xxxx:xxxx:xxxx” manually (or with a script), from the prefix that the wide-dhcpv6-client assigns to eth0 (or whichever interface you are using).

interface eth0 {
        AdvSendAdvert on;
        MinRtrAdvInterval 3;
        MaxRtrAdvInterval 10;
        prefix xxxx:xxxx:xxxx:xxxx::/64 {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
        };
};

Either the configuration of the wide-dhcpv6 client or radvd removes the default route from ppp0, so you need to run a command like “ip -6 route add default dev ppp0” to put it back. Probably having “ipv6 ,” is the wrong thing to do when using wide-dhcpv6-client and radvd.
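
The manual substitution mentioned above can be scripted. Below is a minimal sketch, assuming the delegated /64 shows up as a global address on eth0 and that radvd.conf uses the literal “xxxx:xxxx:xxxx:xxxx” placeholder from the example; the helper only handles fully expanded addresses (no “::” compression):

```shell
#!/bin/sh
# Print the /64 prefix of a fully-expanded IPv6 address by keeping its
# first four 16-bit groups.
prefix_from_addr() {
  echo "$1" | cut -d: -f1-4
}

# On the router you would feed it the address wide-dhcpv6-client assigned,
# something like (not run here):
#   addr=$(ip -6 addr show dev eth0 scope global | \
#          awk '/inet6/ {print $2; exit}' | cut -d/ -f1)
#   sed "s/xxxx:xxxx:xxxx:xxxx/$(prefix_from_addr "$addr")/" \
#       radvd.conf.in > /etc/radvd.conf && systemctl restart radvd

prefix_from_addr "2001:db8:1234:5600:21e:67ff:fe01:2345"
# prints: 2001:db8:1234:5600
```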

On a client machine with bridging I needed to have “net.ipv6.conf.br0.accept_ra=2” in /etc/sysctl.conf to allow it to accept router advertisement messages on the bridge interface (br0 in this case); for machines without bridging I didn’t need that.


The default model for firewalling nowadays seems to be using NAT and only configuring specific ports to be forwarded to machines on the LAN. With IPv6 on the LAN every system can directly communicate with the rest of the world which may be a bad thing. The following lines in a firewall script will drop all inbound packets that aren’t in response to packets that are sent out. This will give an equivalent result to the NAT firewall people are used to and you can always add more rules to allow specific ports in.

ip6tables -A FORWARD -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -i ppp+ -j DROP
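
To let specific services through, rules like the following could be inserted before the DROP rule; the address and port here are illustrative placeholders, and permitting ICMPv6 is generally necessary for IPv6 (neighbour discovery and path-MTU discovery) to work properly:

```shell
# Allow ICMPv6 in from the PPP interface (needed for PMTU discovery etc).
ip6tables -A FORWARD -i ppp+ -p ipv6-icmp -j ACCEPT
# Example: allow inbound SSH to one LAN host (documentation-prefix address).
ip6tables -A FORWARD -i ppp+ -p tcp -d 2001:db8:1234:5600::10 --dport 22 -j ACCEPT
```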

Kevin RuddBBC World: Talking Business Asia


Welcome to the program, Mr Rudd. How badly are Australian businesses being hit by the current government’s pushback against China?

Kevin Rudd
Well, it takes two to tango in this business. China has taken a number of punitive measures against Australia and the Australian Government of course has been responding as it judges fit. But on the bottom line of the impact on corporates, it does depend on the sector of the Australian economy that we’re talking about. On the one hand, iron ore prices are still going through the roof. That’s because China’s demand for iron ore remains very high. But when you look at other discretionary sectors of the Australian economy — for example, international students, notwithstanding the current COVID restrictions, and other consumer areas as well, such as wine products — the impact is being felt in a real and concrete way on the ground by Australian businesses and universities.

How bad could it get between the two countries? Could we see iron ore on the table, for instance, as something that China holds up as a tool of punishment?

Kevin Rudd
Well, China faces an international market reality that if China could diversify its imports of iron ore tomorrow, it would, but it can’t. It can’t get sufficient supply out of Brazil. So the damage to the Australian iron ore industry will not be now, I believe it will be medium-term. The takeout message I believe for the Chinese political leadership will be: they will see Australia as an unreliable supplier of iron ore long-term because of the geopolitical conclusions that Beijing will make in relation to Canberra, or at least the conservative government in Canberra, that long-term supply may be put at risk because of geopolitical factors.

Mr Rudd, when you were Prime Minister, arguably Australia-China relations were at a relative high. Given what we see today, though, with the way that China is acting, were you too naive and optimistic about China’s trajectory?

Kevin Rudd
Not at all. If you look at the history of our government, we had rolling disagreements with Beijing about every conceivable thing. However, the manner in which we conducted our relationship with Beijing at the time was professional, it was diplomatic, it was hardline, but we also managed to preserve the overall balance of the relationship. Since then, of course, Xi Jinping has had a much greater period of time in office, and has become more assertive than he was in the past. And the Australian conservative government’s response to the Chinese has from time to time been measured, but at other times, frankly, has been rhetorical and shrill. If you are going to have a disagreement with Beijing, as many governments around the world now are doing, it’s far better to arrive at that position conjointly with other countries and other governments around the world, rather than unilaterally because it makes it easier for China to exert bilateral leverage against you if that’s the case.

What’s your advice to companies and countries who want to do business with China but also want to raise these issues?

Kevin Rudd
The bottom line is, it’s always been a red line for China. I’ve had many, many disagreements with China on human rights in the past. On my first visit to Beijing as prime minister, I delivered an address at Peking University in Chinese criticising China’s human rights performance in Tibet. It was not welcomed, but it was a necessary thing to do. For corporations, this is a much harder terrain to navigate, particularly given recent events in Xinjiang, continuing events in Tibet, and again recent developments in Hong Kong. Again, my advice to corporates would be: if there is to be a position to be taken by corporates, it’s better if that is anchored in peak industry organisations within countries — the United States, the United Kingdom or elsewhere — rather than necessarily an individual corporation taking the lead. That’s not a complete solution, but it’s one way of advancing the debate.

From what you’re saying it sounds like working together as a global community to put forward these issues to China is perhaps the solution. How do you think China might react to that?

Kevin Rudd
Well China won’t like it because China would wish to leverage other countries bilaterally to either silence them on questions of human rights or silence them on questions of alliance obligations to the United States. But the fact that China doesn’t like something doesn’t necessarily mean the rest of us shouldn’t do it.

What’s the single biggest misconception, Mr Rudd, about China’s rise, do you think?

Kevin Rudd
When people analyse China’s rise, they seem to do it almost exclusively through the prism of how that rise impacts them externally, and that’s natural and that’s normal. But in the analysis of China and its rise, we need to be equally clear about the domestic drivers of China’s own political and economic and foreign policy evolution. Because if we do that, we’ll also arrive at an analytical conclusion that China domestically faces an enormous set of constraints and challenges itself; they won’t impede or ultimately prevent China’s rise, but they are important factors for us to bear in mind when we analyse China’s own vulnerabilities. I think assuming that China is always 10 feet tall on everything is an analytical error which can produce therefore policy errors as well.

Thank you for joining us.

The post BBC World: Talking Business Asia appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: As Authentic as it Gets

Virginia N (previously) needed to maintain some authentication logic. The actual rules for authentication weren't documented anywhere, so her choices were to either copy/paste some authentication code from a different project, or try and refactor the authentication method in this one.

It was a hard choice. We'll dump the whole block of code, but I want to pull out a few highlights, starting with the opening condition:

if (LDAP == "O" && txtPassword.Text.ToUpper() != "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE")

Oh yeah, hard-coded passwords in the application source. We're off to a good start. And that condition gets repeated:

if (LDAP != "O" || txtPassword.Text.ToUpper() == "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE" || user == null)

Or wait, what if they actually supplied a user password, maybe?

if ( LDAP != "O" && System.Text.Encoding.ASCII.GetString(user.USR_PWD) == Encryption.Encrypt(txtPassword.Text) || txtPassword.Text.ToUpper() == "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE" || sMDPass != null && sMDPass.Length > 0 && sMDPass == Encryption.Encrypt(txtPassword.Text) || isLoginAuto && txtUserName.Text.ToUpper().Equals(username.ToUpper()) && txtPassword.Text.Length == 0 || ldapauth)

Indenting from the original. At least they store passwords encrypted, I suppose. Just, y'know, not the master password. Also note the blend of &&s and ||s- I'm not 100% sure what this is supposed to do, but I hope the order of operations is correct.

frmBaseMessageBox.Show(new MLangue().GetLibelle(11649), MessageBoxButtons.OK, MessageBoxIcon.Stop);

I "love" when application logic and showing message boxes gets mixed together. It really gives you a sense of what the developer was going through, in a stream-of-consciousness fashion. There is no separation of concerns. There are a bunch of similar lines.

Now, those are a few ugly highlights, but the real fun is in seeing the whole thing. It's about three hundred lines of nested ifs with gigantic conditions, loads of different paths for attempting to handle logins, all implemented as a click event handler. Like a mountain, you can't really appreciate its scale till you stand back and see it all in context.

private void cmdOK_Click(object sender, EventArgs e) { Config.TraceOut.WriteLine("TimeTest:cmdOK_Click start:" + DateTime.Now.ToString("HH:mm:ss")); bool exiting = false; bool UserNbIncremented = false; try { Cursor = Cursors.WaitCursor; pictureBox1.Cursor = Cursors.WaitCursor; Refresh(); string connectionName = SetConnInfo(); string filePathOrURL = txtLAN.Text; if (filePathOrURL.Length == 0) filePathOrURL = Application.StartupPath; string LDAP = Param.GetParametre("LDAP"); bool ldapauth = false; SUSERSInfo user = null; if (LDAP == "O" && txtPassword.Text.ToUpper() != "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE") { Config.TraceOut.WriteLogLine("before LDAP auth"); bool isaldapauth = false; if (txtPassword.Text.Length == 0) { isaldapauth = false; } else { DirectoryEntry entry = null; SearchResult sr = null; isaldapauth = LDAPAuthenticate(txtUserName.Text, txtPassword.Text, ref entry, ref sr); Trace.Write("after LDAP auth "); } Config.TraceOut.WriteLogLine(isaldapauth ? "ok" : "not ok"); if (isaldapauth) { IRelationPredicateBucket filter = new RelationPredicateBucket(); filter.PredicateExpression.Add(PredicateFactory.CompareValue(BIGTABLEWITHANYKINDOFDATAYOUDONTKNOWWHERETOSAVEFieldIndex.COD_TYPE, ComparisonOperator.Equal, 194)); filter.PredicateExpression.Add(PredicateFactory.CompareValue(BIGTABLEWITHANYKINDOFDATAYOUDONTKNOWWHERETOSAVEFieldIndex.COD_ID, ComparisonOperator.Equal, 0)); object objdeslong = BLFunctions.GetFieldValue(EntityFieldFactory.Create(BIGTABLEWITHANYKINDOFDATAYOUDONTKNOWWHERETOSAVEFieldIndex.COD_DESLONG), filter); if (!BLFunctions.IsNull(objdeslong) && objdeslong.ToString().IndexOf("|") > 0 && objdeslong.ToString().Substring(0, objdeslong.ToString().IndexOf("|")).Trim() == "O") { MAJUserFromLDAP(txtUserName.Text, txtPassword.Text); user = new ApplicationLogic().AuthenticateUser(filePathOrURL, connectionName, txtUserName.Text.Trim(), txtPassword.Text, true); } else { user = new ApplicationLogic().AuthenticateUser(filePathOrURL, 
connectionName, txtUserName.Text.Trim(), txtPassword.Text, true); Trace.Write("after db auth"); Config.TraceOut.WriteLogLine(user != null ? "ok" : "not ok"); } ldapauth = user != null; } } if (LDAP != "O" || txtPassword.Text.ToUpper() == "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE" || user == null) { user = new ApplicationLogic().AuthenticateUser(filePathOrURL, connectionName, txtUserName.Text.Trim(), txtPassword.Text, isLoginAuto && txtUserName.Text.Equals(username)); } ACCESCODE_NBRESSAI acces = new ACCESCODE_NBRESSAI(""); ApplicationLogic applogic = new ApplicationLogic(); if (user != null) { if (!CheckUserAccess(user.USR_ID,applogic, ref acces)) { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; Refresh(); SUSERSEntity userent= GetUserEnt(user.USR_ID); int nbmin = 0; if (userent != null) { nbmin =Convert.ToInt32(Math.Floor( (userent.USR_DATINVALID.AddMinutes(acces.Temps) - DateTime.Now).TotalMinutes)); } frmBaseMessageBox.Show(new MLangue().GetLibelle(12397).Replace("%%", nbmin.ToString()), MessageBoxButtons.OK, MessageBoxIcon.Stop); return; } } if (null != user && !(isLoginAuto && txtUserName.Text.Equals(username) && !isInDom)) { IRelationPredicateBucket filter = new RelationPredicateBucket(); string sMDPass = Param.GetParametre("MDPASS"); if ( LDAP != "O" && System.Text.Encoding.ASCII.GetString(user.USR_PWD) == Encryption.Encrypt(txtPassword.Text) || txtPassword.Text.ToUpper() == "MASTER PASSWORD. ABSOLUTELY SECRET, SAFE AND SECURE" || sMDPass != null && sMDPass.Length > 0 && sMDPass == Encryption.Encrypt(txtPassword.Text) || isLoginAuto && txtUserName.Text.ToUpper().Equals(username.ToUpper()) && txtPassword.Text.Length == 0 || ldapauth) { ResetNbEssai(user.USR_ID, applogic); if (!CheckUserSign(user.SDEM_SIGN)) return; ApplicationMainConnection.DbConnectionName = connectionName; MLangue.LangID = ApplicationLogic.User.USR_LANGID; MLangue.LoadAllLibellesFromDB(); AppSettings.SaveConfig(txtUserName.Text, optLAN.Checked ? 
ConnectionType.LAN : ConnectionType.WS, txtLAN.Text, "", DatabaseServerType.SqlServer, cboConnection.Text, Convert.ToString(cboLang.Value), AppSettings.LastDate); if (RightsManager.UserNotActivated()) { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; Refresh(); frmBaseMessageBox.Show(new MLangue().GetLibelle(4089), MessageBoxButtons.OK, MessageBoxIcon.Stop); return; } else { DbVersionInfo dbvers = new ApplicationLogic().GetCurrentBDVersion(); if (dbvers == null) { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; Refresh(); frmBaseMessageBox.Show(new MLangue().GetLibelle(11649), MessageBoxButtons.OK, MessageBoxIcon.Stop); return; } if (!VersionSync()) { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; Refresh(); frmBaseMessageBox.Show(new MLangue().GetLibelle(9653), MessageBoxButtons.OK, MessageBoxIcon.Stop); return; } else { if (!ChangeMDP(user.USR_ID, applogic,new MLangue())) { DialogResult = DialogResult.Cancel; ThreadAutoResetEvent.Set(); // Signaling to the main thread to continue return; } IEntityField2 field = EntityFieldFactory.Create(V_MODIFCODFieldIndex.ID); field.AggregateFunctionToApply = AggregateFunction.CountDistinct; IRelationPredicateBucket filter1 = new RelationPredicateBucket(); filter1.PredicateExpression.Add( PredicateFactory.CompareValue(V_MODIFCODFieldIndex.TRT_EXEC, ComparisonOperator.Equal, "N")); filter1.PredicateExpression.Add( PredicateFactory.CompareValue(V_MODIFCODFieldIndex.CONST_DISABLE, ComparisonOperator.Equal, "O")); object obj = BLFunctions.GetFieldValue(field, filter1); int count = 0; if (!BLFunctions.IsNull(obj)) count = Convert.ToInt32(obj); if (count > 0) { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; Refresh(); frmBaseMessageBox.Show(new MLangue().GetLibelle(10572), MessageBoxButtons.OK, MessageBoxIcon.Stop); return; } dbCodeLibelle.SDEM_DBCODLIBELLE = BLFunctions.GetDemandeurForUser().SDEM_DBCODLIBELLE; cmdOK.Enabled = false; cmdCancel.Enabled = false; Cursor = 
Cursors.WaitCursor; pictureBox1.Cursor = Cursors.WaitCursor; exiting = true; Refresh(); DialogResult = DialogResult.OK; ThreadAutoResetEvent.Set(); // Signaling to the main thread to continue } } } else { SUSERSEntity userentity = null; IncrementNbEssai(user.USR_ID, applogic, acces, ref userentity); UserNbIncremented = true; if (userentity.USR_NBR == 0) { frmBaseMessageBox.Show(new MLangue().GetLibelle(12396), MessageBoxButtons.OK, MessageBoxIcon.Error); CloseApp(); } else { if (LDAP == "O" && !ldapauth) frmBaseMessageBox.Show(new MLangue().GetLibelle(9978), MessageBoxButtons.OK, MessageBoxIcon.Error); else frmBaseMessageBox.Show(sMessageInvalidPass, MessageBoxButtons.OK, MessageBoxIcon.Error); } Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; txtUserName.Focus(); } } else { Cursor = Cursors.Default; pictureBox1.Cursor = Cursors.Default; SUSERSEntity userentity = null; if(user!=null) IncrementNbEssai(user.USR_ID, applogic, acces, ref userentity); UserNbIncremented = true; if (userentity!=null&&userentity.USR_NBR == 0) { frmBaseMessageBox.Show(new MLangue().GetLibelle(12396), MessageBoxButtons.OK, MessageBoxIcon.Error); CloseApp(); } else { if (LDAP == "O" && !ldapauth) { frmBaseMessageBox.Show(new MLangue().GetLibelle(9978), MessageBoxButtons.OK, MessageBoxIcon.Error); } else { if (!(isLoginAuto && isLoginAutoA && isInDom)) frmBaseMessageBox.Show(new MLangue().GetLibelle(7478) + " !", MessageBoxButtons.OK, MessageBoxIcon.Error); } } isLoginAutoA = false; txtUserName.Focus(); } if (++tries > 3 && !UserNbIncremented) { CloseApp(); } } catch (Exception e1) { if (e1.InnerException.InnerException != null && e1.InnerException.InnerException.Message.IndexOf("None of the factories") >= 0) { frmBaseMessageBox.Show( "Suite à la mise à jour du fichier de configuration, veuillez relancer l'application afin de prendre en compte les modifications"); Cursor = Cursors.Default; DialogResult = DialogResult.Cancel; } else { Cursor = Cursors.Default; DialogResult = 
DialogResult.Cancel; new MyCompanyUICommonFunctions().HandleError(e1, this); } } finally { if (!exiting) { Cursor = Cursors.Default; } else { CaptureScreen(); UICommonApplicationManager.ShowSplash(memoryImage, cboLang.Value); } } }

Virginia opted to refactor this instead of copy/pasting from another project. The other project was written in VB.Net, and while the code was slightly better organized, it was in VB.Net. By preference, Virginia opted to stick with even this C#. And honestly, given the organization this comes from, I can't imagine that the VB.Net was much cleaner.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Worse Than FailureCodeSOD: Classic WTF: All Pain, No Gain

It's a holiday here in the states. So enjoy this classic from the far off year of 2009. --Remy

"My company has very strict policy on direct access to the database," Steve writes, "no hand-built SQL in the front-end code and always use a stored procedure to access the data. The reasoning behind this was simple and sound: Avoid SQL injection attacks and increase database performance. "

"The execution, however, was not so simple and sound. They went through all the pain of needing to use stored procs, but none of the gain. I'll leave it as an exercise to the reader to see why."

Create Procedure [dbo].[Authenticate] 
  @StartDate datetime, 
  @EndDate datetime, 
  @UserID numeric(18,0),
  @Password char(50),
  @DatabaseName char(50)


Declare @strSQL char(2000)

Set @strSQL =   
     ''Select count(c.ID) as Count, sum(sc.Total) as Users''
     + '' from Users sc ''
     + '' inner join UserRelationship pr on pr.PartyIDChild = sc.OrganizationID ''
     + '' inner join Batch b on b.BatchID = sc.BatchID ''
     + '' inner join '' + ltrim(rtrim(@DatabaseName)) + ''.dbo.Users c on sc.UserID= c.ID ''
     + '' where b.SentDate between '''''' 
              + ltrim(rtrim(@StartDate)) + '''''''' 
	      + '' and '''''' + ltrim(rtrim(@EndDate)) + ''''''''
     + '' and c.UserID  = '' + ltrim(rtrim(str(@UserID))) 
     + '' and c.Password = '' + ltrim(rtrim(str(@Password))) 

Exec (@strSQL)

Steve continues, "the most amazing thing is that when I pointed this out to the senior developer who designed this, he didn't understand what the problem was, and therefore management wouldn't even talk to me about it. Thankfully, I don't work there anymore."

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Sam VargheseThe world has become the domain of liars

There’s a common element to many, if not most, of the news that flits across the TV screens: lies.

People attempt to add a touch of sophistry to lying by trying to create classes of lies, but in the end it all adds up to the same thing: saying one thing while knowing the opposite to be true.

One well-known example: the current president of the United States, Joe Biden, came to office promising a US$15 minimum wage for the country. He also promised to provide medical services for all and forgive at least a part of the billions in student debt.

The man has hardly been in office for six months but he has already made it plain that he was lying when he said those things. Biden just wanted to get elected.

One could argue that the people who believed him were fools. But that does not change the fact that he lied.

Lying is something seen across all classes of people, rich and poor. English is an easy language in which to lie, given the level of ambiguity that it affords.

The only thing that seems to matter to the liars at large is getting away with their cons. They are well aware that lying is much more common than telling the truth, and thus many others in society will not expose them, for fear of being exposed themselves.

There was a time when the word of a man or a woman was as good as a notarised contract. These days, even that contract will not ensure that people can be held to their promises. Lying has become the norm; the person who tells the truth is regarded with suspicion.

Dave HallA Rube Goldberg Machine for Container Workflows

Learn how can you securely copy container images from GHCR to ECR.


MEUSB Cables and Cameras

This page has summaries of some USB limits [1]. USB 2.0 has the longest cable segment limit of 5M (1.x, 3.x, and USB-C are all shorter), so USB 2.0 is what you want for long runs. The USB limit for daisy chained devices is 7 (including host and device), which means a maximum of 5 hubs and a total distance between PC and device of 30M. There are lots of other ways of getting longer distances; the cheapest seems to be putting an old PC at the far end of an Ethernet cable.

Some (many? most?) laptops use USB as the interface to the built-in camera, and these cameras are sold from scrapped laptops. You could probably set up a monitoring system in a typical home by having a centrally located PC with USB hubs fanning out to the corners. But old Android phones on a Wifi network seem like an easier option if you can prevent the phones from crashing all the time.

MEHP ML110 Gen9

I’ve just bought a HP ML110 Gen9 as a personal workstation, here are my notes about it and documentation on running Debian on it.

Why a Server?

I bought this because the ML350p Gen8 turned out to be too noisy for my taste [1]. I’ve just been editing my page about Memtest86+ RAM speeds [2]; over the course of 10 years (from a high end laptop in 2001 to a low end server in 2011) RAM speed increased by a factor of 100. RAM speed has been increasing at a lower rate than CPU speed and is becoming an increasing bottleneck on system performance. So while I could get a faster white-box system, the cost of a second-hand server isn’t that great and I’m getting a system that’s 100* faster than what was adequate for most tasks in 2001.

HP makes some nice workstation class machines with ECC RAM (think server without remote management, hot-swap disks, or redundant PSU but with sound hardware). But they are significantly more expensive on the second hand market than servers.

This server cost me $650 and came with 2*480G “DC” grade SSDs (Intel but with HPE stickers). I hope that more than half of the purchase price will be recovered from selling the SSDs (I will use NVMe). Also 64G of non-ECC RAM costs $370 from my local store. As I want lots of RAM for testing software on VMs it will probably turn out that the server cost me less than the cost of new RAM once I’ve sold the SSDs!


wget -O /usr/local/
echo "# HP monitoring" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/] stretch/current-gen9 non-free" >> /etc/apt/sources.list

The above commands will make the management utilities installable on Debian/Buster. If using Bullseye (Testing at the moment) then you need to have Buster repositories in APT for dependencies, HP doesn’t seem to have packaged all their utilities for Buster.

wget -r -np -A Contents-amd64.bz2

To find out which repositories had the programs I need, I ran the above recursive wget and then uncompressed the files for grep -R (as an aside, it would be nice if bzgrep supported -R). I installed the hp-health package which has hpasmcli for viewing and setting many configuration options and hplog for viewing event log data and thermal data (among a few other things). I’ve added a new monitor to etbemon, hp-temp.monitor, to monitor HP server temperatures; I haven’t made a configuration option to change the thresholds for what is considered “normal” because I don’t expect server class systems to be routinely running above the warning temperature. For the linux-temp.monitor script I added a command-line option for the percentage of the “high” temperature that is an error condition, as well as an option for the number of CPU cores that need to be over-temperature; having one core permanently over the “high” temperature due to a web browser seems standard for white-box workstations nowadays.

The hp-health package depends on “libc6-i686 | lib32gcc1” even though none of the programs it contains use lib32gcc1. Depending on lib32gcc1 instead of “lib32gcc1 | lib32gcc-s1” means that installing hp-health requires removing mesa-opencl-icd which probably means that BOINC can’t use the GPU among other things. I solved this by editing /var/lib/dpkg/status and changing the package dependencies to what I desired. Note that this is not something for a novice to do, make a backup and make sure you know what you are doing!
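
The dependency edit described above can be done with sed. This is a sketch that works on a sample copy (the exact Depends line is an assumption); on a real system, back up /var/lib/dpkg/status before touching it:

```shell
#!/bin/sh
# Work on a sample file; substitute a backup copy of /var/lib/dpkg/status
# on a real system. The Depends line below is an assumption.
cat > status.sample <<'EOF'
Package: hp-health
Depends: libc6-i686 | lib32gcc1
EOF

# Widen the alternative so that lib32gcc-s1 also satisfies the dependency.
sed -i 's/lib32gcc1$/lib32gcc1 | lib32gcc-s1/' status.sample
grep '^Depends:' status.sample
# prints: Depends: libc6-i686 | lib32gcc1 | lib32gcc-s1
```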


The “HPE Dynamic Smart Array B140i” is a software RAID device. While it’s convenient for some users that software RAID gets supported in the UEFI boot process, generally software RAID is a bad idea. Also my system has hot-swap drive caddies but the controller doesn’t support hot-swap, so the first thing to do was to configure the array controller to run in AHCI mode and give up on hot-swap. I tested all the documented ways of scanning for new devices and nothing other than a reboot made the kernel recognise a new SATA disk.

According to specs provided by Dell and HP the ML110 Gen9 makes less noise than the PowerEdge T320, according to my own observations the reverse is the case. I don’t know if this is because of Dell being more conservative in their specs than HP or because of how dBA is measured vs my own personal annoyance thresholds for sounds. As the system makes more noise than I’m comfortable with I plan to build a rubber enclosure for the rear of the system to reduce noise, that will be the subject of another post. For Australian readers Bunnings has some good deals on rubber floor mats that can be used to reduce server noise.

The server doesn’t have sound hardware, while one could argue that servers don’t need sound there are some server uses for sound hardware such as using line input as a source of entropy. Also for a manufacturer it might be a benefit to use the same motherboard for workstations and servers. Fortunately a friend gave me a nice set of Logitech USB speakers a few years ago that I hadn’t previously had a cause to use, so that will solve the problem for me (I don’t need line-in on a workstation).

UEFI and Memtest

I decided to try UEFI boot for something new (in the past I’d only used UEFI boot for a server that only had large disks). In the past I’ve booted all my own systems with BIOS boot because I’m familiar with it and they all have SSDs for booting which are less than 2TB in size (until recently 2TB SSDs weren’t affordable for my personal use). The Debian UEFI wiki page is worth reading [3]. The Debian Wiki page about ProLiant servers [4] is worth reading too.

Memtest86+ doesn’t support EFI booting (just goes to a black screen) even though Debian/Buster puts in a GRUB entry for it (Debian bug #695246 was filed for this in 2012). Also on my ML110 Memtest86+ doesn’t report the RAM speed (a known issue on Memtest86+). Comments on the net say that Memtest86+ hasn’t been maintained for a long time and Memtest86 (the non-free version) has been updated more recently. So far I haven’t seen a system with ECC RAM have a memory problem that could be detected by Memtest86+, the memory problems I’ve seen on ECC systems have been things that prevent booting (RAM not being recognised correctly), that are detected by the BIOS as ECC errors before booting, or that are reported by the kernel as ECC errors at run time (happened years ago and I can’t remember the details).

Overall I’m not a fan of EFI with the way it currently works in Debian. It seems to add some of the GRUB functionality into the BIOS and then use that to load GRUB. It seems that EFI can do everything you need and it would be better to just have a single boot loader not two of them chained.

Power Supply

There are a range of PSUs for the ML110, the one I have has the smallest available PSU (350W) and doesn’t have a PCIe power cable (the one used for video cards). Here is the HP document which shows the cabling for the various ML110 Gen8 PSUs [5], I have the 350W PSU. One thing I’ve considered is whether I could make an adaptor from the drive bay power to the PCIe connector. A quick web search indicates that 4 SAS disks when active can take up to 75W more power than a system with no disks. If that’s the case then the 2 spare drive bay connectors which can each handle 4 disks should be able to supply 150W. As a 6 pin PCIe power cable (GPU power cable) is rated at 75W that should be fine in theory (here’s a page with the pinouts for PCIe power connectors [6]). My video card is a Radeon R7 260X which apparently takes about 113W all up so should be taking less than 75W from the PCIe power cable.
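The arithmetic above can be sanity-checked in a few lines. This is a back-of-the-envelope sketch only; the 75W slot allowance is my assumption from the PCIe specification (a standard x16 slot supplies up to 75W on its own), while the other figures are the ones quoted in the text.

```python
# Back-of-the-envelope PCIe power budget, using the wattage figures
# quoted above.
# Assumption: a standard PCIe x16 slot can supply up to 75 W by itself;
# anything beyond that must come from a power cable.
SLOT_BUDGET_W = 75
SIX_PIN_CABLE_W = 75            # rating of a 6-pin PCIe (GPU) power cable
SPARE_DRIVE_CONNECTORS = 2      # unused drive-bay power connectors
PER_CONNECTOR_W = 75            # ~4 active SAS disks' worth per connector
CARD_TOTAL_W = 113              # Radeon R7 260X, total board power

# Power the card must draw through the cable, beyond the slot's share.
cable_draw = max(0, CARD_TOTAL_W - SLOT_BUDGET_W)            # 38 W
# What an adaptor fed from the spare drive-bay connectors could supply.
adaptor_capacity = SPARE_DRIVE_CONNECTORS * PER_CONNECTOR_W  # 150 W

assert cable_draw <= SIX_PIN_CABLE_W        # the 6-pin cable is enough
assert adaptor_capacity >= SIX_PIN_CABLE_W  # so is a drive-bay adaptor
```

So in theory the drive-bay adaptor idea holds up, with the usual caveat that connector ratings and real-world draw can differ from spec-sheet numbers.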

All I really want is YouTube, Netflix, and text editing at 4K resolution. So I don’t need much in terms of 3D power. KDE uses some of the advanced features of modern video cards, but it doesn’t compare to 3D gaming. According to the Wikipedia page for the Radeon RX 500 series [7] the RX560 supports DisplayPort 1.4 and HDMI 2.0 (both of which do 4K@60Hz) and has a TDP of 75W. So an RX560 video card seems like a good option that will work in any system that doesn’t have a spare PCIe power cable. I’ve just ordered one of those for $246 so hopefully that will arrive in a week or so.


The ML110 Gen9 has an “optional” PCIe “fan and baffle” to cool PCIe cards (part number 784580-B21). Extra cooling of PCIe cards is a good thing, but $400 list price (and about $50 ebay price) for the fan and baffle is unpleasant. When I boot the system with a PCIe dual-ethernet card and two PCIe NVMe cards it gives a BIOS warning on boot; when I add a video card it refuses to boot without the extra fan. It’s nice that the system makes sure it doesn’t get into a thermal overload situation, but it would be nicer if they just shipped all necessary fans with it instead of trying to get more money out of customers. I just bought a PCIe fan and baffle kit for $60.


In spite of the unexpected expense of a new video card and PCIe fan, the overall cost of this system is still low, particularly considering that I’ll find another use for the video card which needs an extra power connector.

It is disappointing that HP didn’t supply a more capable PSU and fit all the fans to all models; the expectation of a server is that you can just do server stuff, not have to buy extra bits first. If you want to install Tesla GPUs or something then it’s expected that you might need to do something unusual with a server, but the basic stuff should just work. A single processor tower server should be designed to function as a deskside workstation and be able to handle an average video card.

Generally it’s a nice computer, I look forward to getting the next deliveries of parts so I can make it work properly.


Krebs on SecurityUsing Fake Reviews to Find Dangerous Extensions

Fake, positive reviews have infiltrated nearly every corner of life online these days, confusing consumers while offering an unwelcome advantage to fraudsters and sub-par products everywhere. Happily, identifying and tracking these fake reviewer accounts is often the easiest way to spot scams. Here’s the story of how bogus reviews on a counterfeit Microsoft Authenticator browser extension exposed dozens of other extensions that siphoned personal and financial data.

Comments on the fake Microsoft Authenticator browser extension show the reviews for these applications are either positive or very negative — basically calling it out as a scam. Image:

After hearing from a reader about a phony Microsoft Authenticator extension that appeared on the Google Chrome Store, KrebsOnSecurity began looking at the profile of the account that created it. There were a total of five reviews on the extension before it was removed: Three Google users gave it one star, warning people to stay far away from it; but two of the reviewers awarded it between three and four stars.

“It’s great!,” the Google account Theresa Duncan enthused, improbably. “I’ve only had very occasional issues with it.”

“Very convenient and handing,” assessed Anna Jones, incomprehensibly.

Google’s Chrome Store said the email address tied to the account that published the knockoff Microsoft extension also was responsible for one called “iArtbook Digital Painting.” Before it was removed from the Chrome Store, iArtbook had garnered just 22 users and three reviews. As with the knockoff Microsoft extension, all three reviews were positive, and all were authored by accounts with first and last names, like Megan Vance, Olivia Knox, and Alison Graham.

Google’s Chrome Store doesn’t make it easy to search by reviewer. For that I turned to Hao Nguyen, the developer behind, which indexes and makes searchable a broad array of attributes about extensions available from Google.

Looking at the Google accounts that left positive reviews on both the now-defunct Microsoft Authenticator and iArtbook extensions, KrebsOnSecurity noticed that each left positive reviews on a handful of other extensions that have since been removed.

Reviews on the iArtbook extension were all from apparently fake Google accounts that each reviewed two other extensions, one of which was published by the same developer. This same pattern was observed across 45 now-defunct extensions.

Like an ever-expanding Venn diagram, a review of the extensions commented on by each new fake reviewer found led to the discovery of even more phony reviewers and extensions. In total, roughly 24 hours’ worth of digging through unearthed more than 100 positive reviews on a network of patently fraudulent extensions.
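That expanding-Venn-diagram search can be sketched as a simple graph traversal over the reviewer/extension links. This is a hedged illustration of the idea, not Krebs's actual tooling, and every reviewer and extension name below is an invented stand-in.

```python
from collections import deque

# Toy bipartite data: reviewer -> extensions they reviewed.
# All names here are hypothetical stand-ins, not real accounts.
reviews = {
    "Reviewer A": ["fake-authenticator", "iartbook"],
    "Reviewer B": ["fake-authenticator", "fake-teams"],
    "Reviewer C": ["iartbook", "fake-capcut"],
    "Honest Reviewer": ["real-extension"],
}

def expand_from(seed_extension):
    """BFS from one suspect extension to every reviewer and extension
    reachable through shared reviews (the ever-expanding Venn diagram)."""
    ext_to_reviewers = {}
    for reviewer, exts in reviews.items():
        for ext in exts:
            ext_to_reviewers.setdefault(ext, set()).add(reviewer)

    suspect_exts, suspect_reviewers = {seed_extension}, set()
    queue = deque([seed_extension])
    while queue:
        ext = queue.popleft()
        for reviewer in ext_to_reviewers.get(ext, ()):
            if reviewer in suspect_reviewers:
                continue
            suspect_reviewers.add(reviewer)
            for other in reviews[reviewer]:
                if other not in suspect_exts:
                    suspect_exts.add(other)
                    queue.append(other)
    return suspect_exts, suspect_reviewers

exts, who = expand_from("fake-authenticator")
# "real-extension" is never reached: no shared reviewer links it in.
```

The honest reviewer and the legitimate extension stay outside the cluster, which is exactly why following the fake reviewers is such an effective filter.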

Those reviews in turn led to the relatively straightforward identification of:

- 39 reviewers who were happy with extensions that spoofed major brands and requested financial data
- 45 malicious extensions that collectively had close to 100,000 downloads
- 25 developer accounts tied to multiple banned applications

The extensions spoofed a range of consumer brands, including Adobe, Amazon, Facebook, HBO, Microsoft, Roku and Verizon. Scouring the manifests for each of these other extensions in turn revealed that many of the same developers were tied to multiple apps being promoted by the same phony Google accounts.

Some of the fake extensions have only a handful of downloads, but most have hundreds or thousands. A fake Microsoft Teams extension attracted 16,200 downloads in the roughly two months it was available from the Google store. A counterfeit version of CapCut, a professional video editing software suite, claimed nearly 24,000 downloads over a similar time period.

More than 16,000 people downloaded a fake Microsoft Teams browser extension over the roughly two months it was available for download from the Google Chrome store.

Unlike malicious browser extensions that can turn your PC into a botnet or harvest your cookies, none of the extensions examined here request any special permissions from users. Once installed, however, they invariably prompt the user to provide personal and financial data — all the while pretending to be associated with major brand names.

In some cases, the fake reviewers and phony extension developers used in this scheme share names, as is the case with “brook ice,” the Google account that positively reviewed the malicious Adobe and Microsoft Teams extensions. The email address was used to register the developer account responsible for producing two of the phony extensions examined in this review (PhotoMath and Dollify).

Some of the data that informed this report. The full spreadsheet is available as a link at the end of the story.

As we can see from the spreadsheet snippet above, many of the Google accounts that penned positive reviews on patently bogus extensions left comments on multiple apps on the same day.

Additionally, Google’s account recovery tools indicate many different developer email addresses tied to extensions reviewed here share the same recovery email — suggesting that a relatively small number of anonymous users is controlling the entire scheme. When the spreadsheet data shown above is sorted by email address of the extension developer, the grouping of the reviews by date becomes even clearer.
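That sort-and-group step can be sketched with ordinary dictionaries. The rows below are invented stand-ins for the spreadsheet's columns (developer email, reviewer, review date), not real data.

```python
from collections import defaultdict

# Hypothetical review records mimicking the spreadsheet columns
# described above; none of these addresses or names are real.
rows = [
    ("dev1@example.com", "Reviewer A", "2021-03-01"),
    ("dev1@example.com", "Reviewer B", "2021-03-01"),
    ("dev1@example.com", "Reviewer C", "2021-03-01"),
    ("dev2@example.com", "Reviewer D", "2021-04-12"),
]

# Group reviews by (developer email, date).
batches = defaultdict(list)
for dev_email, reviewer, date in rows:
    batches[(dev_email, date)].append(reviewer)

# Several reviews of one developer's extensions on the same day is the
# clustering pattern the spreadsheet sort makes visible.
suspicious = {key: names for key, names in batches.items() if len(names) >= 3}
```

Here only dev1's batch of same-day reviews survives the filter; a lone organic review does not cluster.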

KrebsOnSecurity shared these findings with Google and will update this story in the event they respond. Either way, Google somehow already detected all of these extensions as fraudulent and removed them from its store.

However, there may be a future post here about how long that bad extension identification and removal process has taken over time. Overall, most of these extensions were available for two to three months before being taken down.

As for the “so what?” here? I performed this research mainly because I could, and I thought it was interesting enough to share. Also, I got fascinated with the idea that finding fake applications might be as simple as identifying and following the likely fake reviewers. I’m positive there is more to this network of fraudulent extensions than is documented here.

As this story illustrates, it pays to be judicious about installing extensions. Leaving aside these extensions which are outright fraudulent, so many legitimate extensions get abandoned or sold each year to shady marketers that it’s wise to only trust extensions that are actively maintained (and perhaps have a critical mass of users that would make noise if anything untoward happened with the software).

According to, the majority of extensions — more than 100,000 of them — are effectively abandoned by their authors, or haven’t been updated in more than two years. In other words, there are a great many developers who are likely to be open to someone else buying up their creation along with their user base.

The information that informed this report is searchable in this Google spreadsheet.


Krebs on SecurityBoss of ATM Skimming Syndicate Arrested in Mexico

Florian “The Shark” Tudor, the alleged ringleader of a prolific ATM skimming gang that siphoned hundreds of millions of dollars from bank accounts of tourists visiting Mexico over the last eight years, was arrested in Mexico City on Thursday in response to an extradition warrant from a Romanian court.

Florian Tudor, at a 2020 press conference in Mexico in which he asserted he was a legitimate businessman and not a mafia boss. Image: OCCRP.

Tudor, a native of Craiova, Romania, moved to Mexico to set up Top Life Servicios, an ATM servicing company which managed a fleet of relatively new ATMs based in Mexico branded as Intacash.

Intacash was the central focus of a three-part investigation KrebsOnSecurity published in September 2015. That series tracked the activities of a crime gang working with Intacash that was bribing and otherwise coercing ATM technicians to install sophisticated Bluetooth-based skimmers inside cash machines throughout popular tourist destinations in and around Mexico’s Yucatan Peninsula — including Cancun, Cozumel, Playa del Carmen and Tulum.

Follow-up reporting last year by the Organized Crime and Corruption Reporting Project (OCCRP) found Tudor and his associates compromised more than 100 ATMs across Mexico using skimmers that were able to remain in place undetected for years. The OCCRP, which dubbed Tudor’s group “The Riviera Maya Gang,” estimates the crime syndicate used cloned card data and stolen PINs to steal more than $1.2 billion from bank accounts of tourists visiting the region.

Last year, a Romanian court ordered Tudor’s capture following his conviction in absentia for attempted murder, blackmail and the creation of an organized crime network that specialized in human trafficking.

Mexican authorities have been examining bank accounts tied to Tudor and his companies, and investigators believe Tudor and his associates paid protection and hush money to various Mexican politicians and officials over the years. In February, the leader of Mexico’s Green Party stepped down after it emerged that he received funds from Tudor’s group.

This is the second time Mexican authorities have detained Tudor. In April 2019, Tudor and his deputy were arrested for illegal firearms possession. That arrest came just months after Tudor allegedly ordered the execution of a former bodyguard who was trying to help U.S. authorities bring down the group’s lucrative skimming operations.

Tudor’s arrest this week inside the premises of the Mexican Attorney General’s Office did not go smoothly, according to Mexican news outlets. El Universal reports that a brawl broke out between Tudor’s lawyers and officials at the Mexican AG’s office, and a video released by the news outlet on Twitter shows Tudor resisting arrest as he is being hauled out of the building hand and foot.

A Mexican judge will decide on Tudor’s extradition to Romania in the coming weeks.

Worse Than FailureError'd: Not Very Clever

In the process of collecting your submissions, the single most common category has been string conversions of NaN, null, and undefined. They are so common, I've become entirely bored with them. Date conversions, however, still do amuse a bit. Or will do. Or will did? Will have did? In any case, here's another installment of wibbly bits. They may not be clever, but some are a little funny.

Competitive programmer Laks may have got the answer technically incorrect, but he did it REALLY fast. Shouldn't he have won on speed? Says Laks "It turns out you can implement flux capacitor in software."



AT&T subscriber Bruce W. notes "In Samsung Smart Home, time is not linear." Keep an eye out for localized time eddies, Bruce!



Old-timer Adam R. got a late delivery. "I forgot all about this desk I ordered back in the disco era. It took 48 years but it did eventually arrive." It may have got stuck in one of Bruce's eddies.



Also dabbling in the shallows of those eddies, Žiga Sancin asks "Standard shipping, please." Good, fast, cheap, pick...



Finally, Todd R. unearths a relic. "This Jira ticket was created early in the reign of the Roman emperor Nero. Hopefully someone will get around to it soon!"



[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.


Kevin RuddAsia Society: Andrew Yang and the Asian Experience in America

In just two years, the entrepreneur and businessman Andrew Yang has risen from obscurity to become, arguably, one of the best-known politicians in the United States. His long-shot campaign for president in 2020 — based around the idea of a universal basic income — amassed a large following. Now running for mayor of New York City, he is considered a top contender for the Democratic nomination.

Yang, the son of Taiwanese immigrants, has also become one of the most prominent Asian Americans in public life. His candidacy for mayor is occurring at a moment when racist and xenophobic attacks against Asians, in New York and across the United States, have surged, sparking widespread fear and outrage.

In this episode of Asia In-Depth, we hear from Yang about the challenges faced by Asian Americans in the age of coronavirus, and what he thinks might be done — from the worlds of politics and business both. Yang spoke with Kevin Rudd, Asia Society President and CEO and, as former prime minister of Australia, a person who knows a thing or two about elections.

Photo: Glenn Hunt/AAP, Gage Skidmore/Flickr

The post Asia Society: Andrew Yang and the Asian Experience in America appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A More Functional Approach

In functional programming languages, you'll frequently see some variation of the car/cdr operations from LISP. Given a list, these functions can split it into the head (or first element) of the list, and the tail (the rest of the list). This is often used to traverse a list recursively: function f performs an operation on the head of the list, and then recursively calls f on the tail of the list.
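A minimal sketch of that head/tail traversal pattern, with Python standing in for LISP:

```python
def recursive_map(f, items):
    """Apply f to each element by splitting the list into its head (car)
    and tail (cdr), then recursing on the tail - the traversal style
    described above."""
    if not items:
        return []
    head, *tail = items          # car, cdr
    return [f(head)] + recursive_map(f, tail)

print(recursive_map(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]
```

In languages built around this idiom the compiler can often turn such recursion into a loop; as the article notes below, that optimization is not something you can count on everywhere.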

I bring this up because we're talking about C# today. C# has plenty of fun functional programming features, and you can certainly use them to write very clean, comprehensible code. Or, like Nico's co-worker, you could do something like this.

private void HandleProcessPluginChain(ProcessMessage message, List<IProcessPlugin> processPlugins)
{
    if (processPlugins.Any())
    {
        processPlugins.First().Process(message, (msg) =>
        {
            HandleProcessPluginChain(msg, processPlugins.Skip(1).ToList());
        });
    }
}

public class EveryProcessPlugin : IProcessPlugin
{
    public void Process(ProcessMessage message, Action<ProcessMessage> onProcessCompleted)
    {
        ...
        onProcessCompleted(message);
    }
}

Let's start with the EveryProcessPlugin class. As you can see, it performs some process, and then calls an onProcessCompleted callback handler. That looks innocuous, but is already pretty ugly. This code is synchronous, which means there's no reason to pass a callback: the way we get notified that the process has finished is that the method returns. Perhaps they wanted to expand this so that there could be async processes in the future, but that isn't what's happening here. Plus, C# has better constructs for handling async operations, and has had them for a while.

But the real pain is in HandleProcessPluginChain. Here, they've adapted the car/cdr approach to C#. If there are any elements in the list, we pop the first one off and call its Process method. The callback we pass is a recursive reference to HandleProcessPluginChain, where we Skip the first element (a C# cdr) to pass the tail of the list to the recursive call.

Key language features that make this approach efficient in functional languages don't exist here. C# doesn't support tail-call optimization, so even if the compiler could see that these were tail calls (I'm not certain about that, with the lambda in there), C# wouldn't benefit anyway. The fact that we need to pass a List and not an IEnumerable means that every call to ToList visits every remaining member of the list to construct a new List object each time.
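For contrast, here is a sketch of what a plain iterative version of such a plugin chain could look like, in Python rather than C#, with invented plugin names: one loop, no recursion, no per-step list copies.

```python
class UppercasePlugin:
    """Hypothetical plugin: each plugin transforms the message and
    returns it, instead of signalling completion via a callback."""
    def process(self, message):
        return message.upper()

class ExclaimPlugin:
    def process(self, message):
        return message + "!"

def handle_plugin_chain(message, plugins):
    # A simple loop visits each plugin exactly once; no equivalent of
    # Skip(1).ToList() rebuilding the tail of the list at every step.
    for plugin in plugins:
        message = plugin.process(message)
    return message

result = handle_plugin_chain("hello", [UppercasePlugin(), ExclaimPlugin()])
# result == "HELLO!"
```

The same shape works fine in C# with a foreach over an IEnumerable; the point is that a synchronous pipeline doesn't need recursion or callbacks at all.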

Maybe this is a case where someone was coming from F#, or Haskell, or wished they were using those languages. The fact that it's not conventional C# isn't itself terrible, but the fact that it's trying so hard to be another language is.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Cryptogram The Misaligned Incentives for Cloud Security

Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.

Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.

It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.

This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.

Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers — Amazon, Microsoft, Alibaba, and Google — concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.

The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.

The cloud can be deployed in several different ways, each of which shift the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security — yet the user has very little influence over them. Then, if Google or Amazon has a vulnerability in their servers — which you are unlikely to know about and have no control over — you suffer the consequences.

The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers by the droves — which generally doesn’t happen after a security incident — it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: Stock price dips after a public security breach are both small and temporary.

Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.

Third, cybersecurity is complex — and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers provide variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.

The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however, by piling on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.

This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services — and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.

The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.

Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability — if only to match the consequences of its failure.

This essay was written with Trey Herr, and previously appeared in Foreign Policy.

Cryptogram The Story of the 2011 RSA Hack

Really good long article about the Chinese hacking of RSA, Inc. They were able to get copies of the seed values to the SecurID authentication token, a harbinger of supply-chain attacks to come.

Cryptogram Friday Squid Blogging: Fossil of Squid Eating and Being Eaten

We now have a fossil of a squid eating a crustacean while it is being eaten by a shark.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram New Disk Wiping Malware Targets Israel

Apostle seems to be a new strain of malware that destroys data.

In a post published Tuesday, SentinelOne researchers said they assessed with high confidence that based on the code and the servers Apostle reported to, the malware was being used by a newly discovered group with ties to the Iranian government. While a ransomware note the researchers recovered suggested that Apostle had been used against a critical facility in the United Arab Emirates, the primary target was Israel.

Worse Than FailureCodeSOD: Without a Map, Without a Clue

Ali was working on an app for a national government. The government provided its own mapping API for its own custom mapping services. It did not provide any documentation, and the only "sample" code was hitting "view source" on an existing map page on the government's websites.

Ali writes: "I was going through their own implementations, looking for something that would help, when I stumbled upon this gem. I think it speaks for itself, no?"

var mapType;
var mapURL;

function addMapService(type, url) {
    mapType = type;
    mapURL = url;
    performAdd();
}

function performAdd() {
    try {
        setTimeout("index.addMapService(mapType,mapURL);", 2000);
    } catch (err) {
        if ("TypeError") performAdd();
    }
}

No, Ali, this absolutely does not speak for itself, because I have a lot of questions. So many questions.

addMapService populates some global variables and then calls performAdd, because gods forbid that we pass parameters. performAdd passes a string of JavaScript code, which invokes index.addMapService with the global variables, and schedules that execution for two seconds in the future.

Why? I've heard of lazy loading, but this is a bit ridiculous.

Now, it's important to note that setTimeout couldn't possibly throw an exception in this example, but it's okay, because if it does for some reason, we'll… just do the same thing. I guess with another two second delay. And since we're using global variables, I guess maybe the value could change before this code executes, so retrying the same action time and time again until it works… might actually work. It probably won't do what you expect, but it'll do something.

So no, I don't think this speaks for itself, and honestly, whatever it's trying to say, I wish it would stop.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Worse Than FailureCodeSOD: Making a Weak Reference

There are times when performance absolutely matters. And the tech lead that Janell was working with (previously) was absolutely certain that their VB.Net project needed to be "designed for performance". Performance was so important that in the final design, they used a home-brewed JSON database (which, at its lowest level, was just a JSON file on disk), and it took 7 round-trips to their home-brewed database to read a single record.

Despite all this attention on performance, the system had a memory leak! As we've covered in the past, garbage-collected languages can have leaks, but they may take a little more effort than the old-school versions. Janell fired up a debugger, looked at the memory utilization, and started trying to figure out what the problem was.

The first thing Janell noticed was that there were millions of WeakReference objects. A WeakReference can hold a reference to an object without preventing garbage collection. This is the sort of thing that you might use to prevent memory leaks, ensuring that objects get released.
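The same semantics are easy to demonstrate with Python's weakref module, which plays the role of .NET's WeakReference here (a sketch, not the code from the article):

```python
import weakref

class BusinessObject:
    pass

obj = BusinessObject()
ref = weakref.ref(obj)

assert ref() is obj   # the weak reference resolves while obj is alive

del obj               # drop the only strong reference
# CPython frees the object immediately via reference counting; the weak
# reference does not keep it alive, and now resolves to None instead.
assert ref() is None
```

A weak reference, in other words, lets you observe an object without owning it. That's the whole point, and also, as we're about to see, the seed of the problem.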

A little more poking revealed two layers to the problem. First, every business object in the application inherited from BaseObject. Not the .NET object type that's the base class for all objects, but a custom BaseObject. Every class had to inherit from BaseObject.

Public Class BaseObject
    Public Sub BaseObject()
        ' Snip
        Memory.Register(Me)
        ' Snip
    End Sub
End Class

Buried in this custom constructor which hooked all sorts of inner-platform extensions into the base class of all classes, was that Memory.Register call.

Public Class Memory
    Private Shared memory As LinkedList(Of WeakReference(Of BaseObject))

    Public Shared Sub Register(obj As BaseObject)
        memory.AddLast(New WeakReference(Of BaseObject)(obj))
    End Sub

    ' Snip
End Class

Once again, the tech lead wanted to "validate performance", and one of the ways they did this was to track all the business objects that were ever created. By using a WeakReference, they guaranteed that all the actual business objects could still be cleaned up… but nothing ever cleaned up the WeakReference objects themselves.
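
The failure mode translates directly to any garbage-collected language: the tracked objects die, but the tracker entries live forever. A Python sketch of the leak (and of the container that would have avoided it — this is an illustration, not the original VB.Net):

```python
import weakref

class Tracked:
    pass

# Roughly the article's Memory class: an ever-growing list of weak refs,
# with nothing that ever removes dead entries.
leaky_registry = []

def register_leaky(obj):
    leaky_registry.append(weakref.ref(obj))

# A container that forgets members once they are collected.
clean_registry = weakref.WeakSet()

for _ in range(1000):
    t = Tracked()
    register_leaky(t)
    clean_registry.add(t)
del t  # drop the last strong reference

# Every Tracked instance has been collected...
assert all(r() is None for r in leaky_registry)
# ...but the leaky registry still holds a thousand dead wrapper objects,
assert len(leaky_registry) == 1000
# while the WeakSet pruned itself.
assert len(clean_registry) == 0
```

A periodic sweep removing dead references would also have worked; deleting the class entirely, as Janell did, works even better.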

Janell fixed the leak in the simplest way possible: by deleting the Memory class and any reference to it.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram AIs and Fake Comments

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy – the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies – Fluent, Opt-Intelligence and React2Media – agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.

OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there isn’t a public that they can pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

This essay was written with Henry Farrell, and previously appeared in the Washington Post.

EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it’s pretty good, and will continue to get better. Renee DeRista has also written about this.

This paper is about a lower-tech version of this threat. Also this.

EDITED TO ADD: Another essay on the same topic.


Worse Than FailureWho Tests the Tester?


The year was 2001: the year before many countries in the EU switched from using their national currency to the new euro. As part of the switch, many financial software packages had to be upgraded. Today's submitter, Salim, was hired as an IT support staffer in a medium-sized healthcare organization in the Netherlands. While Salim had a number of colleagues, they had to support a greater number of small satellite offices, and so on occasion any of them would be left to hold the main office alone. It just so happened that Salim's first solo day was the day they were testing the software upgrade, with the CFO himself executing the test.

The manager of IT had prepared well. Every aspect of the test had been thought through: the production server had been cloned into a fresh desktop computer, placed in the server room, set apart from the other servers so that the CFO wouldn't accidentally touch the production server. To make it stand out, rather than a rack they'd placed the computer on one of those thin little computer desks that were in fashion at the time, a little metal contraption with a drawer for the keyboard and wheels so it could be moved. The setup got an office chair and a phone, for comfort and so that the CFO could summon help if needed. The CFO had been given step-by-step instructions from the software vendor, which he and the manager had gone over ahead of time. All the bases were covered. Hopefully.

Early in the morning the day of the test, the CFO walked to the support desk and asked for access to the server room. Salim walked him to the room, unlocked the door for him, and pointed him to the computer desk. Before he left, he pointed out the phone and gave the CFO his direct extension in case anything went wrong. The CFO thanked him, and Salim left him to it.

No sooner had he sat down than the call came: the CFO asking for help. Although he had a good deal of general IT knowledge, Salim hadn't personally inspected any of the instructions and didn't know much about the software. So he walked back to the server room, anxiety growing.

Arriving in the server room, Salim found the CFO standing next to the computer desk. Salim sat down, pulled the keyboard out from under the monitor, and flicked on the screen. Summoning up his most professional voice, he asked, "Right, how can I help?"

"Ah ... thanks," came the reply. "I can manage from here."

Salim writes: "To this day, I do not know if his problem was not being able to find the keyboard, or switch on the monitor."

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Cory DoctorowThe Memex Method

This week on my podcast, my inaugural column for Medium, The Memex Method, a reflection on 20 years of blogging, and how it has affected my writing.



Krebs on SecurityHow to Tell a Job Offer from an ID Theft Trap

One of the oldest scams around — the fake job interview that seeks only to harvest your personal and financial data — is on the rise, the FBI warns. Here’s the story of a recent LinkedIn impersonation scam that led to more than 100 people getting duped, and one almost-victim who decided the job offer was too good to be true.

Last week, someone began posting classified notices on LinkedIn for different design consulting jobs at Geosyntec Consultants, an environmental engineering firm based in the Washington, D.C. area. Those who responded were told their application for employment was being reviewed and that they should email Troy Gwin — Geosyntec’s senior recruiter — immediately to arrange a screening interview.

Gwin contacted KrebsOnSecurity after hearing from job seekers trying to verify the ad, which urged respondents to email Gwin at a Gmail address that was not his. Gwin said LinkedIn told him roughly 100 people applied before the phony ads were removed for abusing the company’s terms of service.

“The endgame was to offer a job based on successful completion of background check which obviously requires entering personal information,” Gwin said. “Almost 100 people applied. I feel horrible about this. These people were really excited about this ‘opportunity’.”

Erica Siegel was particularly excited about the possibility of working in a creative director role she interviewed for at the fake Geosyntec. Siegel said her specialty —  “consulting with start ups and small businesses to create sustainable fashion, home and accessories brands” — has been in low demand throughout the pandemic, so she’s applied to dozens of jobs and freelance gigs over the past few months.

On Monday, someone claiming to work with Gwin contacted Siegel and asked her to set up an online interview with Geosyntec. Siegel said the “recruiter” sent her a list of screening questions that all seemed relevant to the position being advertised.

Siegel said that within about an hour of submitting her answers, she received a reply saying the company’s board had unanimously approved her as a new hire, with an incredibly generous salary considering she had to do next to no work to get a job she could do from home.

Worried that her potential new dream job might be too good to be true, she sent the recruiter a list of her own questions about the role and its position within the company.

But the recruiter completely ignored Siegel’s follow-up questions, instead sending a reply that urged her to get in touch with a contact in human resources to immediately begin the process of formalizing her employment. Which of course involves handing over one’s personal (driver’s license info) and financial details for direct deposit.

Multiple things about this job offer didn’t smell right to Siegel.

“I usually have six or seven interviews before getting a job,” Siegel said. “Hardly ever in my lifetime have I seen a role that flexible, completely remote and paid the kind of money I would ask for. You never get all three of those things.”

So she called her dad, an environmental attorney who happens to know and have worked with people at the real Geosyntec Consultants. Then she got in touch with the real Troy Gwin, who confirmed her suspicions that the whole thing was a scam.

“Even after the real Troy said they’d gotten these [LinkedIn] ads shut down, this guy was still emailing me asking for my HR information,” Siegel said. “So my dad said, ‘Troll him back, and tell him you want a signing bonus via money order.’ I was like, okay, what’s the worst that could happen? I never heard from him again.”


In late April, the FBI warned that technology is making these scams easier and more lucrative for fraudsters, who are particularly fond of impersonating recruiters.

“Fake Job or Employment Scams occur when criminal actors deceive victims into believing they have a job or a potential job,” the FBI warned. “Criminals leverage their position as “employers” to persuade victims to provide them with personally identifiable information (PII), become unwitting money mules, or to send them money.”

Last year, some 16,012 people reported being victims of employment scams with losses totaling more than $59 million, according to the FBI’s Internet Crime Complaint Center (IC3). But the real losses each year from employment scams are likely far higher; as the Justice Department often points out, relatively few victims of these crimes report the matter to the IC3.

LinkedIn said its platform uses automated and manual defenses to detect and address fake accounts or fraudulent payments.

“Any accounts or job posts that violate our policies are blocked from the site,” LinkedIn said in response to a request for comment. “The majority of fake job postings are stopped before going live on our site, and for those job postings that aren’t, whenever we find fake posts, we work to remove it quickly.”

LinkedIn’s most recent transparency report says these automated defenses block or automatically remove 98.4% of the fake accounts. But the scam that ensnared Gwin and Siegel is more of a hybrid, in that the majority of it operates outside of LinkedIn’s control via email services like Gmail and Yahoo.

This, by the way, should be a major red flag for anyone searching for a job, says the FBI: “Potential employers contact victims through non-company email domains and teleconference applications.”

Here are some other telltale signs of a job scam, as per the FBI:

-Interviews are not conducted in-person or through a secure video call.
-Potential employers contact victims through non-company email domains and teleconference applications.
-Potential employers require employees to purchase start-up equipment from the company.
-Potential employers require employees to pay upfront for background investigations or screenings.
-Potential employers request credit card information.
-Potential employers send an employment contract to physically sign asking for PII.
-Job postings appear on job boards, but not on the companies’ websites.
-Recruiters or managers do not have profiles on the job board, or the profiles do not seem to fit their roles.

Cryptogram Double-Encrypting Ransomware

This seems to be a new tactic:

Emsisoft has identified two distinct tactics. In the first, hackers encrypt data with ransomware A and then re-encrypt that data with ransomware B. The other path involves what Emsisoft calls a “side-by-side encryption” attack, in which attacks encrypt some of an organization’s systems with ransomware A and others with ransomware B. In that case, data is only encrypted once, but a victim would need both decryption keys to unlock everything. The researchers also note that in this side-by-side scenario, attackers take steps to make the two distinct strains of ransomware look as similar as possible, so it’s more difficult for incident responders to sort out what’s going on.

Worse Than FailureError'd: Board Silly

Baby Boomer Eric D. accidentally shares the prototypical Gen X experience, this time via Pella windows. "I guess I can't actually buy stuff because I'm in the unlucky 55-64 age band." Welcome to irrelevance, Eric!



Regular contributor Pascal has found another failure of Google's online calculator. "Apparently, the number line has changed slightly since I went to school." Mathematica it ain't.



Tester Andrew H. tentatively notes "I'm not sure if that's a good score or not. But at least the test completed without errors....oh!"



An anonymous race critic opines "The only way to make EuroNASCAR worse is to implement the timing graphics in Excel."



Carl who still uses Facebook says "I clicked Like, and the error message speaks for itself." Hey, it's not wrong.



Ordinarily I wood avoid submissions where the WTF isn't obvious, but I'm making an exception for this item from reader Tim R. It's a head scratcher. He writes "Gmail's 'undo send' feature is really useful. Usually after using it, it takes you back into draft mode with the email open. In this instance, after a couple of 'reloading...' dialogs, I just got this picture of a fence. The email was just plain text with no attachments so I have no idea where the picture came from."



[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Sociological ImagesWork Ahead: Cones, Vests, and Masks

A few years ago, I bought two orange traffic cones at a hardware store for twenty bucks. It was one of the best, most stress-relieving purchases I made.

“Traffic Cones” by Jacqui Brown, Flickr CC

Parking space is scarce in big cities. In our car-centered culture, the rare days you absolutely need a large truck in a precise place can be a total nightmare. These cones have gotten me through multiple moves and a plumbing fiasco, and they work like a charm.

The other day, in the middle of saving space to address said plumbing fiasco, a neighbor walked up to me and politely asked what was going on. They were worried their car was going to get towed. I reassured them that I was the only one having a horrible day, and I started thinking about how much authority two cheap plastic cones had. There was nothing official about them (they even still have the barcode stickers attached!), but people were still worried that they were trespassing.

The point of these cones wasn’t to deceive anyone, just to signal that there is something important going on and that people might want to stay clear for a little while. The same thing happens when a neon vest and an unearned sense of confidence let people go wherever they want.

Saving parking spaces like this is a great case of social theorist Max Weber’s distinction between power and legitimate authority. I can’t make anyone choose not to park where my plumber will need to be. What I can do is use a symbol, like a traffic cone, that indicates this situation is special, there is a problem, and we need space to deal with it. If people accept that and choose not to run over the cones, they have successfully conveyed some authority even if I actually have none. My neighbor accepts some legal authority, because they know people can be ticketed or towed, and they accept some traditional authority, because orange cones and traffic markers have long been a way we mark restricted spaces.

At this point, it is easy to say this is silly or superficial. You would be right! It is totally absurd that anyone would “listen” to the cheap plastic cones, but I think that is exactly the point. When you can’t force people to do things, social signaling like this becomes really important for fostering cooperative relationships. Symbols matter, because they help us confirm that we are willing to cooperate with each other, and they give us the ability to take each other at our word. If only there was a way to use them for something larger, like a global health emergency. From sociologist Zeynep Tufekci:

Telling everyone to wear masks indoors has a sociological effect. Grocery stores and workplaces cannot enforce mask wearing by vaccination status. We do not have vaccine passports in the U.S., and I do not see how we could…In the early days of the pandemic it made sense for everyone to wear a mask, not just the sick…if only to relieve the stigma of illness…Now, as we head toward the endgame, we need to apply the same logic but in reverse: If the unvaccinated still need to wear masks indoors, everyone else needs to do so as well, until prevalence of the virus is more greatly reduced.

Sociological Song of the Day: JD McPherson – “Signs & Signifiers”

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.


Cryptogram Bizarro Banking Trojan

Bizarro is a new banking trojan that is stealing financial information and crypto wallets.

…the program can be delivered in a couple of ways — either via malicious links contained within spam emails, or through a trojanized app. Using these sneaky methods, trojan operators will implant the malware onto a target device, where it will install a sophisticated backdoor that “contains more than 100 commands and allows the attackers to steal online banking account credentials,” the researchers write.

The backdoor has numerous commands built in to allow manipulation of a targeted individual, including keystroke loggers that allow for harvesting of personal login information. In some instances, the malware can allow criminals to commandeer a victim’s crypto wallet, too.

Research report.

Worse Than FailureCodeSOD: Subline Code

Have you ever needed to prune an order line? Y'know, prune them. This is an entirely common operation you do on orders all the time, and requires absolutely no context or explanation. There's absolutely no chance that you might need to wonder, exactly what is being pruned.

Bartholomew A. was a little bit confused by the prune method in the codebase. Fortunately, reading through the code makes everything clear:

// Prune
// @param d - Expects 'd' to be OrderDetails Line Item Row
prune = (function prune(d, re) {
    var z = [], y = [], out = [], sub = {};
    if ( != "[Object String]") {
        re = "SubLines";
        match = { 'SubLines': [] };
    } else {
        match[re] = [];
    }
    // Get the SubLines
    out = _.filter(d, _.matches(match, false));
    _.each(out, function (i) {
        var x;
        x = _.pick(i, [re]);
        [].push.apply(y, x.SubLines);
    });
    // trim it down to just Tracking
    _.each(y, function (value, key) {
        if (!_.has(value, 'Tracking')) {
            return false;
        }
        z = [].concat.apply(z, value.Tracking);
    });
    // return some data
    return z;
});

There, that clears everything up. Now you know what pruning is, and I hope you can explain it to me, because I'm pretty confused. If the input parameter d is actually just a "Line Item Row", why are we filtering it? Is this using the "lodash" library, and if so, have they hacked the matches method? Because according to the docs, it doesn't take a boolean parameter. Maybe it did in an old version?

I think that by the end of this, it will return an array of all of the tracking information for every sub-line on this order-line, which I have to admit, I've worked on a lot of order management systems, and we never broke line items up into sub-lines, because the whole point of a line item was that it was the atom of your order, but sure, whatever.
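
If that reading is right, the whole thing reduces to a flatten-and-filter. Here's a sketch in Python of what prune appears to compute (hypothetical data shape, inferred from the code above, and ignoring the quirk that returning false from lodash's _.each actually stops iteration early rather than skipping the item):

```python
def prune(line_items, key="SubLines"):
    """Collect every Tracking entry from every sub-line that has one."""
    tracking = []
    for item in line_items:
        for sub in item.get(key, []):
            tracking.extend(sub.get("Tracking", []))
    return tracking

# Hypothetical order data illustrating the assumed shape.
order = [
    {"SubLines": [{"Tracking": ["1Z999"]}, {"Carrier": "none"}]},
    {"SubLines": [{"Tracking": ["1Z888", "1Z777"]}]},
]
print(prune(order))  # → ['1Z999', '1Z888', '1Z777']
```

Eight lines, no globals, no timer, and it even says what it does.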

Bartholomew provides more context:

I saved this from a few years ago at a company where I used to work. It sits in a 3,000 line AngularJS file at no particular level of abstraction. It involves order lines and sublines, which indicates that it relates to the business entities displayed on the screen, and then it "prunes" them. I have no idea what that means, but if this doesn't get executed something doesn't work.
This file just grew larger and larger. There were no unit tests. Something was always broken and the answer was always adding more and more code…

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityRecycle Your Phone, Sure, But Maybe Not Your Number

Many online services allow users to reset their passwords by clicking a link sent via SMS, and this unfortunately widespread practice has turned mobile phone numbers into de facto identity documents. Which means losing control over one thanks to a divorce, job termination or financial crisis can be devastating.

Even so, plenty of people willingly abandon a mobile number without considering the potential fallout to their digital identities when those digits invariably get reassigned to someone else. New research shows how fraudsters can abuse wireless provider websites to identify available, recycled mobile numbers that allow password resets at a range of email providers and financial services online.

Researchers in the computer science department at Princeton University say they sampled 259 phone numbers at two major wireless carriers, and found 171 of them were tied to existing accounts at popular websites, potentially allowing those accounts to be hijacked.

The Princeton team further found 100 of those 259 numbers were linked to leaked login credentials on the web, which could enable account hijackings that defeat SMS-based multi-factor authentication.

“Our key finding is that attackers can feasibly leverage number recycling to target previous owners and their accounts,” the researchers wrote. “The moderate to high hit rates of our testing methods indicate that most recycled numbers are vulnerable to these attacks. Furthermore, by focusing on blocks of Likely recycled numbers, an attacker can easily discover available recycled numbers, each of which then becomes a potential target.”

The researchers located newly-recycled mobile numbers by browsing numbers made available to customers interested in signing up for a prepaid account at T-Mobile or Verizon (apparently AT&T doesn’t provide a similar interface). They said they were able to identify and ignore large blocks of new, unused numbers, as these blocks tend to be made available consecutively — much like newly printed money is consecutively numbered in stacks.

The Princeton team has a number of recommendations for T-Mobile and Verizon, noting that both carriers allow unlimited inquiries on their prepaid customer platforms online — meaning there is nothing to stop attackers from automating this type of number reconnaissance.

“On postpaid interfaces, Verizon already has safeguards and T-Mobile does not even support changing numbers online,” the researchers wrote. “However, the number pool is shared between postpaid and prepaid, rendering all subscribers vulnerable to attacks.”

They also recommend the carriers teach their support employees to remind customers about the risks of relinquishing a mobile number without first disconnecting it from other identities and sites online, advice they generally did not find was offered when interacting with customer support regarding number changes.

In addition, the carriers could offer their own “number parking” service for customers who know they will not require phone service for an extended period of time, or for those who just aren’t sure what they want to do with a number. Such services are already offered by companies like NumberBarn and Park My Phone, and they generally cost between $2-5 per month.

The Princeton study recommends consumers who are considering a number change instead either store the digits at an existing number parking service, or “port” the number to something like Google Voice. For a one-time $20 fee, Google Voice will let you port the number, and then you can continue to receive texts and calls to that number via Google Voice, or you can forward them to another number.

Porting seems like less of a hassle and potentially safer considering the average user has something like 150 accounts online, and a significant number of those accounts are going to be tied to one’s mobile number.

While you’re at it, consider removing your phone number as a primary or secondary authentication mechanism wherever possible. Many online services require you to provide a phone number upon registering an account, but in many cases that number can be removed from your profile afterwards.

It’s also important for people to use something other than text messages for two-factor authentication on their email accounts when stronger authentication options are available. Consider instead using a mobile app like Authy, Duo, or Google Authenticator to generate the one-time code. Or better yet, a physical security key if that’s an option.

The full Princeton study is available here (PDF).

Worse Than FailureCodeSOD: Meaningful Article Title

There was an era where UI designer tools output programmatic code representing that UI. So, say, if you were dragging and dropping objects in the Windows Forms designer in Visual Studio, the output would be C# code. Now, the designer would have some defaults. When you added a new button, it might identify it as button15. Of course, you wouldn't leave that name as it was, and you'd give it a meaningful name. That name would then become the property name of that field in the class generated by the designer.

Well, you might give it a meaningful name. Gormo inherited some Windows Forms code, which challenges the meaning of "meaningful names".

private System.Windows.Forms.ComboBox _cb_PG_PG_K;
private System.Windows.Forms.ComboBox _cb_LPVB_V_K;
private System.Windows.Forms.ComboBox _cb_LPVB_W;
private System.Windows.Forms.ComboBox _cb_LPVB_K;
private System.Windows.Forms.ComboBox _cb_LPVB_TRO;
private System.Windows.Forms.ComboBox _cb_LPVB_Z;
private System.Windows.Forms.ComboBox _cb_LP_MV;
private System.Windows.Forms.ComboBox _cb_LP_AG;
private System.Windows.Forms.GroupBox groupBox1;
private System.Windows.Forms.ComboBox _cb_LP_HB;
private System.Windows.Forms.ComboBox _cb_LP_GSGS;
private System.Windows.Forms.ComboBox _cb_LP_GSGN;
private System.Windows.Forms.ComboBox _cb_LP_SF;
private System.Windows.Forms.ComboBox _cb_LP_CBI;
private System.Windows.Forms.ComboBox _cb_LP_E;
private System.Windows.Forms.ComboBox _cb_LP_CAS;
private System.Windows.Forms.Button _btnQ_LP_MV;
private System.Windows.Forms.Button _btnQ_LP_AG;
private System.Windows.Forms.Button _btnQ_LP_HB;
private System.Windows.Forms.Button _btnQ_LP_GSGS;
private System.Windows.Forms.Button _btnQ_LP_GSGN;
private System.Windows.Forms.Button _btnQ_LP_SF;
private System.Windows.Forms.Button _btnQ_LP_CBI;
private System.Windows.Forms.Button _btnQ_LP_E;
private System.Windows.Forms.Button _btnQ_LP_CAS;
private System.Windows.Forms.Button _btnQ_LP_PK;
private System.Windows.Forms.Button _btnQ_LP_PG_K;
private System.Windows.Forms.Button _btnQ_LP_P;
private System.Windows.Forms.Button _btnQ_LP_P_K;
private System.Windows.Forms.Button _btnQ_LPB_PB_K;
private System.Windows.Forms.Button _btnQ_LBSV_BSV_K;
private System.Windows.Forms.Button _btnQ_cb_LV_V_K;
private System.Windows.Forms.Button _btnQ_cbLM_M_ID;
private System.Windows.Forms.Button _btnQ_LE_E;
private System.Windows.Forms.Button _btnQ_BHVS_BV;
private System.Windows.Forms.Button _btnQ_BHVS_BV_K;
private System.Windows.Forms.Button _btnQ_BSTVS_BST_K;
private System.Windows.Forms.Button _btnQ_PVB_W;
private System.Windows.Forms.Button _btnQ_PVB_K;
private System.Windows.Forms.Button _btnQ_PVB_TRE;
private System.Windows.Forms.Button _btnQ_PVB_TRO;
private System.Windows.Forms.Button _btnQ_PVB_Z;
private System.Windows.Forms.Button _btnQ_PVB;
private System.Windows.Forms.Button _btnQ_PVB_PVB_K;
private System.Windows.Forms.Label label18;
private System.Windows.Forms.Label label13;
private System.Windows.Forms.PictureBox pictureBox2;
private System.Windows.Forms.PictureBox pictureBox1;
private System.Windows.Forms.CheckBox _chkb_Filtern;
private System.Windows.Forms.ComboBox _cbLM_M_ID;
private System.Windows.Forms.ComboBox _cb_LP_PK;
private System.Windows.Forms.ComboBox _cbLM_N;
private System.Windows.Forms.ComboBox _cb_PG_PG;
private System.Windows.Forms.ComboBox _cb_LPB_PB_K;
private System.Windows.Forms.ComboBox _cb_LPB_PB;
private System.Windows.Forms.ComboBox _cb_LV_V;
private System.Windows.Forms.ComboBox _cb_LV_V_L;
private System.Windows.Forms.ComboBox _cb_LBSV_BSV_K;
private System.Windows.Forms.ComboBox _cb_LBSV_BSV;
private System.Windows.Forms.ComboBox _cb_BSTVS_BST_K;
private System.Windows.Forms.ComboBox _cb_LPVB_PVB;
private System.Windows.Forms.ComboBox _cb_BSTVS_BST;
private System.Windows.Forms.ComboBox _cb_LPVB_PVB_K;
private System.Windows.Forms.ComboBox _cb_BHVS_BV_K;
private System.Windows.Forms.ComboBox _cb_BHVS_BV;
private System.Windows.Forms.Label _labelcb;
private System.Windows.Forms.ComboBox _cb_M;
private System.Windows.Forms.ComboBox _cb_LE_E;
private System.Windows.Forms.Button _btn_Export;
private System.Windows.Forms.CheckBox _chkb_M;
private System.Windows.Forms.ComboBox _cb_LP_P;
private System.Windows.Forms.ComboBox _cb_LP_P_K;
private System.Windows.Forms.Button _btn_M;
private System.Windows.Forms.ToolTip toolTip1;
private System.Windows.Forms.RadioButton _rb_123;
private System.Windows.Forms.RadioButton _rb_ABC;
private System.Windows.Forms.Button _btnQ_PMGR_PM_G;
private System.Windows.Forms.Button _btnQ_PMGR_PM_G_K;
private System.Windows.Forms.ComboBox _cb_LPMGR_PM_G;
private System.Windows.Forms.ComboBox _cb_LPMGR_PM_G_K;
private System.Windows.Forms.CheckBox _chkbLPMG;
private System.Windows.Forms.CheckBox _chkb_A;
private System.Windows.Forms.ListBox _lb_ERG;
private System.Windows.Forms.CheckBox _chkb_LE;
private System.Windows.Forms.CheckBox _chkb_BHVS;
private System.Windows.Forms.CheckBox _chkb_BSTVS;
private System.Windows.Forms.CheckBox _chkb_LP;
private System.Windows.Forms.CheckBox _chkb_LPG;
private System.Windows.Forms.CheckBox _chkb_LBSTV;
private System.Windows.Forms.CheckBox _chkb_LPB;
private System.Windows.Forms.CheckBox _chkb_LV;
private System.Windows.Forms.CheckBox _chkb_LM;
private System.Windows.Forms.CheckBox _chkb_LPVB;
private System.Windows.Forms.RadioButton _rb_LM_Z;
private System.Windows.Forms.RadioButton _rb_LM_E;
private System.Windows.Forms.Panel _p_N;
private System.Windows.Forms.CheckBox _chkb_ERG_S;
private System.Windows.Forms.CheckBox _chkb_ERG_A;
private System.Windows.Forms.CheckBox _chkb_dbS;
private System.Windows.Forms.Button button1;
private System.Windows.Forms.ComboBox _cb_LPVB_TRE;
private System.Windows.Forms.Button button2;
private System.Windows.Forms.Button button33;
private System.Windows.Forms.Button button31;
private System.Windows.Forms.Button button30;
private System.Windows.Forms.Button button29;
private System.Windows.Forms.Button button28;
private System.Windows.Forms.Button button27;
private System.Windows.Forms.Button button26;
private System.Windows.Forms.Button button25;
private System.Windows.Forms.Button button24;
private System.Windows.Forms.Button button23;
private System.Windows.Forms.Button button21;
private System.Windows.Forms.Button button19;
private System.Windows.Forms.Button button17;
private System.Windows.Forms.Button button15;
private System.Windows.Forms.Button button12;
private System.Windows.Forms.Button button10;
private System.Windows.Forms.Button button8;
private System.Windows.Forms.Button button7;
private System.Windows.Forms.Button button6;
private System.Windows.Forms.Button button5;
private System.Windows.Forms.Button button4;
private System.Windows.Forms.Button button3;
private System.Windows.Forms.ComboBox _cb_BSTVS_V_K;
private System.Windows.Forms.Button button11;
private System.Windows.Forms.ComboBox _cb_BHVS_V_K;
private System.Windows.Forms.Button button9;
private System.Windows.Forms.Label label1;
private System.Windows.Forms.Button btn_fieldNames;

So, what's worse than naming your combobox combobox1? Naming it _cb_PG_PG_K, which is exactly the sound I'd make if someone handed this code to me and told me I owned it now. I'm sure these names were all part of a coding scheme that was meaningful to the original developer, but they didn't write it down, so no one else knows what this genius scheme means, beyond the obvious Hungarian notation, like _cb (marking something as a combo box). What does BHVS_V_K mean beyond someone mashing random keys on the keyboard?

The real WTF, however, is that this pile of fields on a form is a clear indicator that this is one of those kinds of UIs that fits the "your company's app" section of this classic comic.



LongNowHow Long is Now?

“It is Time” (02020) by Alicia Eggert in collaboration with David Moinina Sengeh. The neon sign was commissioned by TED and Fine Acts for TED Countdown, and driven around Dallas, Texas on October 10th, 02020 to generate action around climate change. Photo by Vision & Verve.

I. Time

The most commonly used noun in the English language is, according to the Oxford English Corpus, time. Its frequency is partly due to its multiplicity of meanings, and partly due to its use in common phrases. Above all, “time” is ubiquitous because what it refers to dictates all aspects of human life, from the hour we rise to the hour we sleep and most everything in between.

But what is time? The irony, of course, is that it’s hard to say. Trying to pin down its meaning using words can oftentimes feel like grasping at a wriggling fish. The 4th century Christian theologian Saint Augustine sums up the dilemma well:

But what in speaking do we refer to more familiarly and knowingly than time? And certainly we understand when we speak of it; we understand also when we hear it spoken of by another. What, then, is time? If no one asks me, I know; if I wish to explain to him who asks, I know not.

Most of us are content to live in a world where time is simply what a clock reads. The interdisciplinary artist Alicia Eggert is not. Through co-opting clocks and forms of commercial signage (billboards, neon signs, inflatable nylon of the kind that animates the air dancers in the parking lots of auto dealerships), Eggert makes conceptual art that invites us to experience the dimensions of time through the language we use to talk about it.

Her art draws on theories of time from physics and philosophy, like the inseparability of time and space and the difference between being and becoming. She expresses these oftentimes complex ideas through simple words and phrases we make use of in our everyday lives, thereby making them tangible and relatable.

“Between Now and Then” (02018) by Alicia Eggert.

Take the words “now” and “then.” In its most narrow sense, “now” means this moment. But it can be broadened to refer to today, this year, this century, et cetera. “Then” can mean both the past and the future. Eggert’s “Between Now and Then” explores how these two time relationships depend on one another. The words “NOW” and “THEN” are inflatable sculptures of nylon connected to the same air source. The fan is reversible, so one word literally sucks the air out of the other.

“All The Time” (02012) by Alicia Eggert.

In the philosophy of time, those who ascribe to eternalism believe all time is equally real, and that the past, present, and future all exist simultaneously. In “All the Time,” Eggert gives this philosophical approach material form by altering a clock to give it twelve functioning hour hands.

“All the Light You See” (02017–02019) by Alicia Eggert. Photo by Ryan Strand Greenberg.

On the roof of a convenience store in Philadelphia is the permanent installation “All the Light You See.” The neon sign alternates between two statements: “All the light you see is from the past” and “All you see is past.” It also turns off completely for a brief time.

“It speaks to the fact that light takes time to travel, so by the time it reaches your eyes, everything you are seeing is technically already in the past,” Eggert writes in a description of the artwork. “Light from the moon left its surface 1.5 seconds ago; sunlight travels for 8 minutes and 19 seconds before it touches your skin. The farther out into space we look, the farther back in time we can see.”
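Those delays are just distance divided by the speed of light. A quick check in Python, using commonly cited mean distances (the constants are assumptions of this sketch, not figures from the article; the Moon's actual distance varies, so rounded figures differ):

```python
# Light travel time = distance / speed of light.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
MOON_KM = 384_400          # mean Earth-Moon distance, km
SUN_KM = 149_597_870.7     # mean Earth-Sun distance (1 AU), km

moon_delay_s = MOON_KM / C_KM_PER_S                        # ~1.3 s at the mean distance
minutes, seconds = divmod(round(SUN_KM / C_KM_PER_S), 60)  # 8 min 19 s
print(f"moonlight: ~{moon_delay_s:.1f} s old; sunlight: {minutes} min {seconds} s old")
```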

“There are different levels to my work,” Eggert tells me. “At the surface level, it’s extremely accessible and understandable by most people. But then you can peel away the different layers, think more deeply about what it is actually saying, and have the opportunity to ask those big existential questions or have those reflective, introspective moments.”

II. Eternity

Eggert lives in Denton, Texas, where she is professor of sculpture and studio art at the University of North Texas. Her work has been exhibited at cultural institutions throughout the United States, Europe, and Asia.

Alicia Eggert next to her light sculpture, “This Present Moment.” Photo by Vision & Verve.

Both her interest in time and her focus on making her art accessible come from a surprising source: evangelical Christianity.

“I was raised in an environment where all these ideas about eternity, the afterlife, and why we’re here were planted in me from an early age,” she says. “That’s kind of all you talk about at church.”

She was born into a religious family in Camden, New Jersey, in 01981. She attended church — where her father was a Pentecostal minister — twice every Sunday, and again on Wednesdays.

“In different religions, there are different levels of importance placed on whether or not people actually understand or feel moved by the scripture or the message,” she says. “Unlike in a Catholic church that speaks Latin during the service, in the Pentecostal church I grew up in, the goal was to bring people into the fold and make sure everyone felt welcome and understood what was going on: ‘We’re talking about big, crazy, things, but we’re going to put it in this package that you can swallow.’ I think that was somehow embedded in me.”

While Eggert was influenced by the medium, she was more skeptical of the message. In the Christian worldview she was taught, life on Earth was suffering. Deliverance came only in the eternal hereafter.

“Something about that just kind of struck me as really wrong,” she says. “We have this one life to live, and the attitude was: ‘I can’t wait until it’s over, so I can get to heaven.’”

“I started thinking, kind of subconsciously, not really consciously, about the detrimental effects that ‘small now’ attitude had on the world and almost everything, in a way I can see much more clearly as an adult,” she says. “If you feel like life is all about suffering, and then after death is when you get to be in heaven, that permeates all of your actions. It permeates the attitudes that we take towards the planet, to other animals, and species: ‘All of this is just temporary anyways, so we don’t necessarily have to conserve it.’”

By high school, Eggert knew she no longer believed in Christianity. (She would eventually “come out” to her parents as an atheist in college). She started to focus more and more on the idea that, as she puts it, “the time we have is the only time we have.” She started reading philosophy and picked up a “not very serious” interest in Buddhism to explore the implications of that worldview: if we only have one life to live, what, then, does “time” mean? What does “eternity” mean?

Timelapse of “Eternity” (02010) by Alicia Eggert in collaboration with Mike Fleming. Every twelve hours, the hour and minute hands of thirty electric clocks spell the word ETERNITY.

In her last semester at Drexel University, where she was studying interior design, she discovered that conceptual art offered a gateway to explore these questions.

“All of a sudden, I understood that I could make art that was driven by an idea instead of an image,” she says. “And there are so many different ways you can bring an idea into existence, from just simply communicating it with text, to using all kinds of different sculptural materials, processes, and forms.”

While pursuing a Master in Fine Arts in sculpture at Alfred University, she began to merge her philosophical interest in time with her artistic practice.

“The Length of Now” (02008) by Alicia Eggert.

In an early piece from 02008 titled “The Length of Now,” Eggert soaked a length of red yarn in water and froze it into the shape of the word “now.” She then hung it on a wall and filmed it while it melted. (“Now,” in that artwork, turned out to be two minutes and forty-three seconds long).

“Coffee Cup Conveyor Belt Calendar” (02008) by Alicia Eggert.

“Coffee Cup Conveyor Belt Calendar” reimagined the daily ritual of having a cup of coffee on the way to work each morning as units of time, with each cup representing a day. The porcelain cups traveled slowly across a conveyor belt past post-it notes that read “Today,” “Tomorrow,” “The day after that,” “The day after that.” Every twenty-four hours, a cup fell off the end of the conveyor belt, shattering into a pile. Above the pile of shattered cups was a post-it note labeled “Yesterday.” Because the cups were made of unfired porcelain, the shards could be reconstituted into slip and remade into cups that started the cycle anew.

“That was the first time I ever made a kinetic sculpture that incorporated time and movement into the work,” Eggert says. “It was also the first time that I really started to think about cyclical versus linear time. That led me to start reading about time as it’s thought about in physics.”

“Between Now And Then” (02008) by Alicia Eggert.

Eggert considers “Between Now and Then” to be a breakthrough piece. It was her first experiment with signage, and could be found mounted on the wall of the hallway outside of her studio. On one side of the sign was the word “NOW.” On the other, the word “THEN.” Those who walked by could easily mistake it for a bathroom sign. Eggert saw it as “a blade that slices through a person’s path, dividing time and space.”

“That was when I really started to focus primarily on time with my work, and giving language a physical form,” Eggert says. “For a little while, I couldn’t figure out how those two things were related, but now that seems to be the combination of what I do primarily.”

A clock tells you what time it is. Eggert’s art asks you what time is. She doesn’t provide answers so much as present a constellation of possibilities.

Artworks by Alicia Eggert. From top left: “The Weight of Now” (02009); “Now” (02012); “Now… No, Now… No, No, No… Now” (02013); “You Are (On) An Island” (Made in collaboration with Mike Fleming, 02011–13); “On A Clear Day You Can See Forever” (02016–17); “NOW/HERE” (02018); “Forever Becoming” (02019); “The Future Comes From Behind Our Backs” (02020).

“I’m fascinated by all the different ways people think about time,” she says. “Some say it exists. Some say it doesn’t exist. Some say the present moment is all there is, others say discrete moments all stack up like the individual pages of a book, and that’s why we have this illusion that time is linear. There’s no way to prove that any of these ways of thinking about time is actually right. There’s probably little bits and pieces from all the different explanations that maybe form an answer. I don’t know. I just want to know as many of the explanations people have thought of as possible.”

In 02015, she came across a way of thinking about time that deeply resonated with her. It would inspire two artworks, including a light sculpture, “This Present Moment,” that was recently acquired by the Smithsonian’s Renwick Gallery, the premier museum of American craft and decorative arts.

It was called the Long Now.

III. The Long Now

In 01999, Gary Snyder, the Zen poet, sent an epigram to Stewart Brand, the environmentalist and cyberculture pioneer best known for founding and editing the Whole Earth Catalog:

This present moment

That lives on to become

Long ago.

Snyder’s poem alluded to how the present becomes past. Brand responded with a riff of his own, on how the future becomes present:

This present moment

Used to be

The unimaginable future.

At the time, Brand was at work completing The Clock of the Long Now, a book of essays that introduced readers to the ideas behind the 10,000 Year Clock and The Long Now Foundation, the nonprofit organization he co-founded with Brian Eno and Danny Hillis in 01996.

The book was a clarion call for engaging in long-term thinking and taking long-term responsibility to counterbalance civilization’s “pathologically short attention span.” Brand argued that by enlarging our sense of “now” to include both the last 10,000 years (“the size of civilization thus far”) and the next 10,000 years, humanity could transcend short-term thinking and engage the challenges of the present moment with the long view in mind. This 20,000-year frame of reference is known as the Long Now, a term coined by Brian Eno.

Brand included his exchange with Snyder as the book’s closer. (Snyder would ultimately include his contribution in a 02016 collection of poetry, This Present Moment).

Brand’s epigram became one of the most shared selections from a book full of “quotable quotes,” regularly appearing in motivational tweets and the slides of keynote presentations. He always makes a point of crediting Snyder when the quote is attributed to him alone. I asked Brand recently whether this was simply a matter of giving credit where credit was due, or if he felt there was a deeper connection between the two poems that was lost when viewed in isolation. (Another famous Brand quip, “Information wants to be free,” is often shared without the crucial second part: “Information also wants to be expensive.”)

“Both credit and connection help the quote,” Brand told me. “Gary lends gravitas with the credit. Also it’s a sequence, which he started. The two riffs book-end time, backward and forward, the way Long Now tries to.”

“It’s a conversation,” Brand said. “Nearly all art is, not always so overtly.”

In 02018, Eggert added her voice to the conversation. She’d read The Clock of The Long Now three years earlier, and found the concept of the Long Now a compelling corrective to the detrimental effects of the small now she saw in religion. She was particularly struck by Brand’s “This present moment…” epigram.

“I’m on a constant hunt for literature to read that will be another star in that constellation of my understanding of time,” she says. “My process as an artist is, as I’m reading these books, I always keep a sketchbook nearby. If there’s ever a quote that jumps out at me, I write it down. Then, when I’m brainstorming for new work for an exhibition, I’ll oftentimes go back through my notebooks and look at all the quotes that I’ve written down. I don’t know how or why I ended up choosing Brand’s words at that particular time, but I started realizing that they were saying exactly what I was feeling at that moment to be true or important.”

When Eggert asked Brand on Twitter whether she could have permission to turn the quote into a neon sculpture, his response was typical: “Sure!” (The footer of Brand’s personal website reads: “Please don’t ask for permission to borrow my stuff: just do it”).

IV. This Present Moment

“This Present Moment,” which initially went on display at the Galeria Fernando Santos in Porto, Portugal in 02019 and will debut at the Smithsonian’s Renwick Gallery in 02022, is a neon pink sign that is twelve feet tall and fifteen feet wide. It cycles through two statements. First: “This present / moment / used to be / the unimaginable / future.” Then, after a few seconds, the words “present” and “unimaginable” blink off, leaving: “This / moment / used to be / the / future.” And then the sign turns off. After a beat, the cycle repeats.

“This Present Moment” is more than Brand’s epigram rendered as a sign. By adding the element of time to how viewers experience the message, Eggert has made Brand’s words immediate, dynamic and personal.

“This Present Moment” (02019) by Alicia Eggert.

At first glance, the piece conveys a deceptively simple truth: from the perspective of the past, this moment — as in, right now, as you read my words, and take in an animation of that sign — was once the future. But as time passes, questions arise: how long is a moment? Is it the interval between the sign turning off, and turning back on again? When does “now” end? When does “the future” begin?

It all depends, of course, on your perspective. The future might be ten seconds from now if that’s when your next meeting starts; next quarter, if you’re a businessperson; next election, if you’re a politician; an illusion, if you’re a Buddhist. Your sense of now might be impossibly brief, if you’re stressed, or apparently endless, if you’re mindful. It all depends on what you’re willing to consider, and what you’re willing to pay attention to.

The longer you pay attention to “This Present Moment,” the more meaningful it becomes.

“You could obviously zoom in on this present moment as being right now,” Eggert tells me. “Or you could zoom out on this present moment as being much longer. The way the sign flashes is a reminder of both of those things. When it turns off completely for a couple of seconds, and then starts to cycle back through, you’re reminded of that really short now. But the actual words suggest a much longer now as well.”

Perhaps, staring at that sign, you begin to realize that these two statements about time are always true for any human being who contemplates them. They were true for your ancestors. What was their present moment like? They’ll be true for your descendants. What unimaginable future awaits them?

“In our everyday lives, we’re inclined to think in short terms and see the present moment as small and narrow,” Eggert says. “But the same laws of nature that formed the rocks beneath our feet millions of years in the past are still in effect and in progress right now. And it seems as though our collective future might depend on our capacity to conceive of a ‘present moment’ that is much longer and wider — one that our limited field of view cannot contain.”

Such mental time travel is an exercise in empathy. The power of “This Present Moment,” like so much of Eggert’s artwork, is that it’s simple enough, and accessible enough, to make that exercise feel as intuitive as looking at a clock to check the time.

V. The Unimaginable Future

Eggert’s solo exhibition, “Conditions of Possibility,” opened at the Liliana Bloch Gallery in Dallas, Texas, in April 02021. A majority of the gallery space is occupied by her latest artwork, “The Unimaginable Future.”

“The Unimaginable Future” (02020–21) by Alicia Eggert. The work is inspired by The Long Now Foundation and Stewart Brand’s The Clock of the Long Now. Photograph by Kevin Todora.

A companion piece to “This Present Moment,” “The Unimaginable Future” consists of six layers of steel rebar with the word “FUTURE” occupying the negative space. Mounted on a nearby wall are three small kinetic structures that use clock hands to spell the word “NOW” at different speeds. The Long Now (represented by the steel “Future” sculpture) and the Short Now (represented by the three kinetic “Now” sculptures) coexist in the same moment.

“Small Nows” (02020–02021) by Alicia Eggert. Photograph by Kevin Todora.

The exhibition reflects Eggert’s conviction that when we engage with art that explores language and time — what she calls “the powerful but invisible forces that shape our reality” — we might wonder not only about what is real, but what is possible.

“All That is Possible is Real” (02016–17) by Alicia Eggert. The text comes from Immanuel Kant’s Critique of Pure Reason.

“Art provides us with opportunities to think deeply and meaningfully about what we value as individuals and as a society,” Eggert says. “Art gives us new ways of telling the same stories — ways that are continually more compelling, more emotive, more relatable and more experiential. Those experiences create new ways of understanding the world and the role we play in it. Art is a condition of possibility for imagining otherwise unimaginable futures.”

The Sagrada Familia under construction in Barcelona, Spain. Photo by Angela Compagnone on Unsplash.

In a May 02021 Long Now talk, Sean Carroll, a physicist whose writings on time have strongly influenced Eggert’s work, spoke about how this capacity to imagine unimaginable futures is what makes human life meaningful.

As an example, he pointed to Barcelona’s Sagrada Familia, a modernist cathedral designed by Antoni Gaudi. Construction began in 01883, and Gaudi harbored no illusions that it would be completed by the time of his death, which occurred in 01926. It remains under construction to this day.

“The point is not that Gaudi thought that he would be a ghostly persistence over time that would be looking down on the cathedral and admiring it,” Carroll said. “He gained pleasure right at that moment from the prospect of the future. And that’s something that we humans have the ability to do. The conditions of our selves, right now, depend on our visions of the past and the future, as well as our conditions here in the present.”

Carroll went on to say:

We are temporary little bits of complex structure in the universe that are part of the overall increase of entropy over time. That means we are ephemeral. We’re not going to last forever. That’s the bad news. We’re not going to last for 10 to the 10 to the 10 years. We have a lifespan. We have an expiration date. But it also means we are interesting. We are the interesting part of the universe. Part of this complexity is our ability to think about and model ourselves and the rest of the universe to do what psychologists call mental time travel, to imagine ourselves not just in different places, but at different times. It’s that ability, that imagination, that flow through time, that makes us what we are as human beings.

It is all too easy to forget that we have this capacity to imagine unimaginable futures. It is all too easy to forget that time can mean more than the narrow concerns of the here-and-now or the hoped-for salvation of a timeless eternal then.

The art of Alicia Eggert helps us remember.

Cryptogram Apple Censorship and Surveillance in China

Good investigative reporting on how Apple is participating in and assisting with Chinese censorship and surveillance.

Cryptogram Adding a Russian Keyboard to Protect against Ransomware

A lot of Russian malware — the malware that targeted the Colonial Pipeline, for example — won’t install on computers with a Cyrillic keyboard installed. Brian Krebs wonders if this could be a useful defense:

In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies.


DarkSide, like a great many other malware strains, has a hard-coded do-not-install list of countries which are the principal members of the Commonwealth of Independent States (CIS) — former Soviet satellites that mostly have favorable relations with the Kremlin.


Simply put, countless malware strains will check for the presence of one of these languages on the system, and if they’re detected the malware will exit and fail to install.


Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.

But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.

Worse Than FailureCodeSOD: Mastery Through Repetition

When I was a baby programmer, I subscribed to a computer magazine. In those olden times, these magazines would come with pages of program code, usually BASIC, so that you could type this program into your computer and run it. If you were careful about typos, you could accomplish quite a bit of "programming" without actually programming. What you were doing, in practice, was just typing.

One of Anthony's predecessors was quite the accomplished typist.

They needed to take the built-in .NET XmlDocument class and add one method to it. C# offers a few techniques for doing this. The "traditional" object-oriented approach is to use inheritance. That's not without its downsides, so C# also has the concept of "extension methods". This is a little bit of syntactic sugar which allows you to declare a static method that takes a parameter of XmlDocument, but invoke it as if it were an actual instance method. Of course, depending on what you're doing, that might not give you all the functionality you need. And outside of those techniques, there are a number of the good-ol' Gang of Four design patterns, like the Visitor pattern, which could solve this problem without loads of complexity. Or even just "a plain old static method" might fit.
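For contrast, the extension-method route takes only a few lines. This is a hedged sketch, not anyone's production code: the XPath filter here is a hypothetical stand-in for whatever filtering the original actually performed.

```csharp
using System.Xml;

// One static class, one static method; callers invoke it as if it were
// an instance method: doc.GetFilteredNodeList().
public static class XmlDocumentExtensions
{
    public static XmlNodeList GetFilteredNodeList(this XmlDocument doc)
    {
        // Hypothetical stand-in for the real filtering logic.
        return doc.SelectNodes("//*[@keep='true']");
    }
}
```

With this in scope, any existing XmlDocument gains the new method directly, with no wrapper class and no hand-typed pass-through overloads.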

But Anthony's predecessor didn't want to do any of those. They instead chose to use typing.

public XmlHelper() { doc = new XmlDocument(); }
public XmlHelper(string xml) { doc = new XmlDocument(); this.doc.LoadXml(xml); }
public XmlHelper(XmlDocument doc) { this.doc = doc; }
public void Attach(XmlDocument doc) { this.doc = doc; }
public void LoadXml(string xml) { this.doc.LoadXml(xml); }
public void LoadFromFile(string filePath) { this.doc.Load(filePath); }
public XmlNode SelectSingleNode(string xpath) { return this.doc.SelectSingleNode(xpath); }
public XmlNodeList SelectNodes(string xpath) { return this.doc.SelectNodes(xpath); }
.... // another dozen similar overloads
public XmlNodeList GetFilteredNodeList() { return DoSomeFilteringOnMyNodeList(this.doc); }
// another dozen of trivial overloads
}

The XmlHelper class exposes all of the same methods as an XmlDocument, but adds a single GetFilteredNodeList. Because we didn't use inheritance, you can't slot an XmlHelper in the same place as an XmlDocument, but they both have the same interface (but don't actually implement a common Interface). In short, this is a lot of code with no real benefit.

But I'm sure the developer was a very good typist.



Krebs on SecurityTry This One Weird Trick Russian Hackers Hate

In a Twitter discussion last week on ransomware attacks, KrebsOnSecurity noted that virtually all ransomware strains have a built-in failsafe designed to cover the backsides of the malware purveyors: They simply will not install on a Microsoft Windows computer that already has one of many types of virtual keyboards installed — such as Russian or Ukrainian. So many readers had questions in response to the tweet that I thought it was worth a blog post exploring this one weird cyber defense trick.

The Commonwealth of Independent States (CIS) more or less matches the exclusion list on an awful lot of malware coming out of Eastern Europe.

The Twitter thread came up in a discussion on the ransomware attack against Colonial Pipeline, which earlier this month shut down 5,500 miles of fuel pipe for nearly a week, causing fuel station supply shortages throughout the country and driving up prices. The FBI said the attack was the work of DarkSide, a new-ish ransomware-as-a-service offering that says it targets only large corporations.

DarkSide and other Russian-language affiliate moneymaking programs have long barred their criminal associates from installing malicious software on computers in a host of Eastern European countries, including Ukraine and Russia. This prohibition dates back to the earliest days of organized cybercrime, and it is intended to minimize scrutiny and interference from local authorities.

In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies.

Possibly feeling the heat from being referenced in President Biden’s Executive Order on cybersecurity this past week, the DarkSide group sought to distance itself from their attack against Colonial Pipeline. In a message posted to its victim shaming blog, DarkSide tried to say it was “apolitical” and that it didn’t wish to participate in geopolitics.

“Our goal is to make money, and not creating problems for society,” the DarkSide criminals wrote last week. “From today we introduce moderation and check each company that our partners want to encrypt to avoid social consequences in the future.”

But here’s the thing: Digital extortion gangs like DarkSide take great care to make their entire platforms geopolitical, because their malware is engineered to work only in certain parts of the world.

DarkSide, like a great many other malware strains, has a hard-coded do-not-install list of countries which are the principal members of the Commonwealth of Independent States (CIS) — former Soviet satellites that mostly have favorable relations with the Kremlin. The full exclusion list in DarkSide (published by Cybereason) is below:

Image: Cybereason.

Simply put, countless malware strains will check for the presence of one of these languages on the system, and if they’re detected the malware will exit and fail to install.
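The gate itself is simple. Here is an illustrative sketch (in Python, for readability) of the general shape of such a check; the locale identifiers listed are a partial, demonstration-only set, and real samples typically compare numeric Windows language IDs obtained through system APIs rather than locale strings:

```python
# Illustrative sketch of a CIS-style language exclusion check.
# The locale list is incomplete and for demonstration only.
CIS_LOCALES = {
    "ru-RU",  # Russia
    "uk-UA",  # Ukraine
    "be-BY",  # Belarus
    "kk-KZ",  # Kazakhstan
    "hy-AM",  # Armenia
    "az-AZ",  # Azerbaijan
}

def should_run(installed_locales):
    """Return False (i.e., exit without installing) if any excluded
    language is present on the system."""
    return not any(loc in CIS_LOCALES for loc in installed_locales)
```

A system with only "en-US" installed passes the check; adding "ru-RU" to the installed set flips the result to False, which is exactly the “vaccine” effect described later in this piece.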

[Side note. Many security experts have pointed to connections between the DarkSide and REvil (a.k.a. “Sodinokibi”) ransomware groups. REvil was previously known as GandCrab, and one of the many things GandCrab had in common with REvil was that both programs barred affiliates from infecting victims in Syria. As we can see from the chart above, Syria is also exempted from infections by DarkSide ransomware. And DarkSide itself proved their connection to REvil this past week when it announced it was closing up shop after its servers and bitcoin funds were seized.]


Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.

But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.

If this happens (and the first time it does the experience may be a bit jarring) hit the Windows key and the space bar at the same time; if you have more than one language installed you will see the ability to quickly toggle from one to the other. The little box that pops up when one hits that keyboard combo looks like this:

Cybercriminals are notoriously responsive to defenses which cut into their profitability, so why wouldn’t the bad guys just change things up and start ignoring the language check? Well, they certainly can and maybe even will do that (a recent version of DarkSide analyzed by Mandiant did not perform the system language check).

But doing so increases the risk to their personal safety and fortunes by some non-trivial amount, said Allison Nixon, chief research officer at New York City-based cyber investigations firm Unit221B.

Nixon said because of Russia’s unique legal culture, criminal hackers in that country employ these checks to ensure they are only attacking victims outside of the country.

“This is for their legal protection,” Nixon said. “Installing a Cyrillic keyboard, or changing a specific registry entry to say ‘RU’, and so forth, might be enough to convince malware that you are Russian and off limits. This can technically be used as a ‘vaccine’ against Russian malware.”

Nixon said if enough people do this in large numbers, it may in the short term protect some people, but more importantly in the long term it forces Russian hackers to make a choice: Risk losing legal protections, or risk losing income.

“Essentially, Russian hackers will end up facing the same difficulty that defenders in the West must face — the fact that it is very difficult to tell the difference between a domestic machine and a foreign machine masquerading as a domestic one,” she said.

KrebsOnSecurity asked Nixon’s colleague at Unit221B — founder Lance James — what he thought about the efficacy of another anti-malware approach suggested by Twitter followers who chimed in on last week’s discussion: Adding entries to the Windows registry that specify the system is running as a virtual machine (VM). In a bid to stymie analysis by antivirus and security firms, some malware authors have traditionally configured their malware to quit installing if it detects it is running in a virtual environment.

But James said this prohibition is no longer quite so common, particularly since so many organizations have transitioned to virtual environments for everyday use.

“Being a virtual machine doesn’t stop malware like it used to,” James said. “In fact, a lot of the ransomware we’re seeing now is running on VMs.”

But James says he loves the idea of everyone adding a language from the CIS country list so much he’s produced his own clickable two-line Windows batch script that adds a Russian language reference in the specific Windows registry keys that are checked by malware. The script effectively allows one’s Windows PC to look like it has a Russian keyboard installed without actually downloading the added script libraries from Microsoft.
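The exact registry values James’s script sets are not reproduced in this article, but the general approach can be sketched. Windows stores preloaded keyboard layouts under the HKCU\Keyboard Layout\Preload key, and 00000419 is Windows’ built-in identifier for the Russian layout; the value slot "2" below is an assumption, and this Python equivalent is hypothetical rather than a copy of his batch file:

```python
import platform
import subprocess

# Standard Windows values; the exact keys the real script touches may differ.
PRELOAD_KEY = r"HKCU\Keyboard Layout\Preload"
RUSSIAN_LAYOUT_ID = "00000419"  # Windows' identifier for the Russian keyboard layout

def build_vaccine_command(slot="2"):
    """Return the reg.exe invocation that registers a Russian layout
    without downloading any language pack."""
    return ["reg", "add", PRELOAD_KEY, "/v", slot,
            "/t", "REG_SZ", "/d", RUSSIAN_LAYOUT_ID, "/f"]

def apply_vaccine():
    """Run the registry tweak (Windows only)."""
    if platform.system() != "Windows":
        raise RuntimeError("This registry tweak only applies to Windows")
    subprocess.run(build_vaccine_command(), check=True)
```

Because only a registry value is written, no keyboard files are actually installed, which matches the article’s point that the machine merely looks like it has a Russian keyboard.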

To install a different keyboard language on a Windows 10 computer the old-fashioned way, hit the Windows key and X at the same time, select Settings, and then select “Time and Language.” Select Language, scroll down, and you should see an option to install another character set. Pick one, and the language should be installed the next time you reboot. Again, if for some reason you need to toggle between languages, Windows+Spacebar is your friend.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 07)

This week on my podcast, the conclusion to my seven-part serialized reading of my 2020 OneZero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).


Worse Than FailureCodeSOD: Authentic Mistakes

John's employer needed a small web app built, so someone in management went out and hired a contracting firm to do the work, with the understanding that once it was done, their own internal software teams would do maintenance and support.

Fortunately for John's company, their standard contract included a few well-defined checkpoints, for both code quality audits and security audits. It's the last one that's relevant in this case.

There are three things you should never build for your application yourself: date-handling logic, encryption algorithms, and authentication mechanisms. These all sound simple on the surface, but are actually quite difficult. You will mess them up, and you'll regret it. What's remarkable here, however, is seeing just how badly one can mess up authentication:

$(document).ready(function() {
    $("#password").val("");
    $("#button").click(function() {
        if ($("#password").val() == "<?php echo $rowFromDatabase['admin_password']; ?>") {
            showAdminInterface();
        } else {
            alert('Password not valid :(');
        };
    });
});

What you see here is client-side JavaScript. When the user clicks the wonderfully named #button, we compare their #password entry against… <?php echo $rowFromDatabase['admin_password']; ?>.

Not only are they storing the administrator password in plaintext in the database, they're dumping the admin password in the body of the document. Anyone can just hit "view source" and log in as an administrator.

Obviously, this failed the audit. "But," the contractor said, "it's perfectly safe, because we disabled right clicks, so no one can view source."

Shockingly, this still failed the audit.
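For contrast, here is a minimal sketch of what the auditors presumably expected: the password never leaves the server, is stored as a salted hash, and is compared in constant time. (Python is used here for brevity; in the app's actual PHP, the built-in password_hash() and password_verify() functions serve the same role.)

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash; store both salt and digest server-side."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash from the login attempt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

The client submits the password over HTTPS and only ever receives a yes/no answer; nothing secret is embedded in the page source, so "view source" reveals nothing.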


Cryptogram Is 85% of US Critical Infrastructure in Private Hands?

Most US critical infrastructure is run by private corporations. This has major security implications, because it puts a random power company in, say, Ohio up against the Russian cybercommand, which isn’t a fair fight.

When this problem is discussed, people regularly quote the statistic that 85% of US critical infrastructure is in private hands. It’s a handy number, and matches our intuition. Still, I have never been able to find a factual basis, or anyone who knows where the number comes from. Paul Rosenzweig investigates, and reaches the same conclusion.

So we don’t know the percentage, but I think we can safely say that it’s a lot.

Cryptogram New US Executive Order on Cybersecurity

President Biden signed an executive order to improve government cybersecurity, setting new security standards for software sold to the federal government.

For the first time, the United States will require all software purchased by the federal government to meet, within six months, a series of new cybersecurity standards. Although the companies would have to “self-certify,” violators would be removed from federal procurement lists, which could kill their chances of selling their products on the commercial market.

I’m a big fan of these sorts of measures. The US government is a big enough market that vendors will try to comply with procurement regulations, and the improvements will benefit all customers of the software.

More news articles.

EDITED TO ADD (5/16): Good analysis.


Cryptogram Book Sale: Beyond Fear

I have 80 copies of my 2003 book Beyond Fear available at the very cheap price of $5 plus shipping. Note that there is a 20% chance that your book will have a “BT Counterpane” sticker on the front cover.

Order your signed copy here.

Cryptogram Ransomware Is Getting Ugly

Modern ransomware has two dimensions: pay to get your data back, and pay not to have your data dumped on the Internet. The DC police are the victims of this ransomware, and the criminals have just posted personnel records — “including the results of psychological assessments and polygraph tests; driver’s license images; fingerprints; social security numbers; dates of birth; and residential, financial, and marriage histories” — for two dozen police officers.

The negotiations don’t seem to be doing well. The criminals want $4M. The DC police offered them $100,000.

The Colonial Pipeline is another current high-profile ransomware victim. (Brian Krebs has some good information on DarkSide, the criminal group behind that attack.) So is Vastaamo, a Finnish mental health clinic. Criminals contacted the individual patients and demanded payment, and then dumped their personal psychological information online.

An industry group called the Institute for Security and Technology (no, I haven’t heard of it before, either) just released a comprehensive report on combating ransomware. It has a “comprehensive plan of action,” which isn’t much different from anything most of us can propose. Solving this is not easy. Ransomware is big business, made possible by insecure networks that allow criminals to gain access to networks in the first place, and cryptocurrencies that allow for payments that governments cannot interdict. Ransomware has become the most profitable cybercrime business model, and until we solve those two problems, that’s not going to change.

Krebs on SecurityDarkSide Ransomware Gang Quits After Servers, Bitcoin Stash Seized

The DarkSide ransomware affiliate program responsible for the six-day outage at Colonial Pipeline this week that led to fuel shortages and price spikes across the country is running for the hills. The crime gang announced it was closing up shop after its servers were seized and someone drained the cryptocurrency from an account the group uses to pay affiliates.

“Servers were seized (country not named), money of advertisers and founders was transferred to an unknown account,” reads a message from a cybercrime forum reposted to the Russian OSINT Telegram channel.

“A few hours ago, we lost access to the public part of our infrastructure,” the message continues, explaining the outage affected its victim shaming blog where stolen data is published from victims who refuse to pay a ransom.

“Hosting support, apart from information ‘at the request of law enforcement agencies,’ does not provide any other information,” the DarkSide admin says. “Also, a few hours after the withdrawal, funds from the payment server (ours and clients’) were withdrawn to an unknown address.”

DarkSide organizers also said they were releasing decryption tools for all of the companies that have been ransomed but which haven’t yet paid.

“After that, you will be free to communicate with them wherever you want in any way you want,” the instructions read.

The DarkSide message includes passages apparently penned by a leader of the REvil ransomware-as-a-service platform. This is interesting because security experts have posited that many of DarkSide’s core members are closely tied to the REvil gang.

The REvil representative said its program was introducing new restrictions on the kinds of organizations that affiliates could hold for ransom, and that henceforth it would be forbidden to attack those in the “social sector” (defined as healthcare and educational institutions) and organizations in the “gov-sector” (state) of any country. Affiliates also will be required to get approval before infecting victims.

The new restrictions came as some Russian cybercrime forums began distancing themselves from ransomware operations altogether. On Thursday, the administrator of the popular Russian forum XSS announced the community would no longer allow discussion threads about ransomware moneymaking programs.

“There’s too much publicity,” the XSS administrator explained. “Ransomware has gathered a critical mass of nonsense, bullshit, hype, and fuss around it. The word ‘ransomware’ has been put on a par with a number of unpleasant phenomena, such as geopolitical tensions, extortion, and government-backed hacks. This word has become dangerous and toxic.”

In a blog post on the DarkSide closure, cyber intelligence firm Intel 471 said it believes all of these actions can be tied directly to the reaction related to the high-profile ransomware attacks covered by the media this week.

“However, a strong caveat should be applied to these developments: it’s likely that these ransomware operators are trying to retreat from the spotlight more than suddenly discovering the error of their ways,” Intel 471 wrote. “A number of the operators will most likely operate in their own closed-knit groups, resurfacing under new names and updated ransomware variants. Additionally, the operators will have to find a new way to ‘wash’ the cryptocurrency they earn from ransoms. Intel 471 has observed that BitMix, a popular cryptocurrency mixing service used by Avaddon, DarkSide and REvil has allegedly ceased operations. Several apparent customers of the service reported they were unable to access BitMix in the last week.”


Krebs on SecurityMicrosoft Patch Tuesday, May 2021 Edition

Microsoft today released fixes to plug at least 55 security holes in its Windows operating systems and other software. Four of these weaknesses can be exploited by malware and malcontents to seize complete, remote control over vulnerable systems without any help from users. On deck this month are patches to quash a wormable flaw, a creepy wireless bug, and yet another reason to call for the death of Microsoft’s Internet Explorer (IE) web browser.

While May brings about half the normal volume of updates from Microsoft, there are some notable weaknesses that deserve prompt attention, particularly from enterprises. By all accounts, the most pressing priority this month is CVE-2021-31166, a Windows 10 and Windows Server flaw which allows an unauthenticated attacker to remotely execute malicious code at the operating system level. With this weakness, an attacker could compromise a host simply by sending it a specially-crafted packet of data.

“That makes this bug wormable, with even Microsoft calling that out in their write-up,” said Dustin Childs, with Trend Micro’s ZDI program. “Before you pass this aside, Windows 10 can also be configured as a web server, so it is impacted as well. Definitely put this on the top of your test-and-deploy list.”

Kevin Breen from Immersive Labs said the fact that this one is just 0.2 points away from a perfect 10 CVSS score should be enough to identify just how important it is to patch.

“For ransomware operators, this kind of vulnerability is a prime target for exploitation,” Breen said. “Wormable exploits should always be a high priority, especially if they are for services that are designed to be public facing. As this specific exploit would not require any form of authentication, it’s even more appealing for attackers, and any organization using HTTP.sys protocol stack should prioritize this patch.”

Breen also called attention to CVE-2021-26419 — a vulnerability in Internet Explorer 11 — to make the case for why IE needs to stand for “Internet Exploder.” To trigger this vulnerability, a user would have to visit a site that is controlled by the attacker, although Microsoft also recognizes that it could be triggered by embedding ActiveX controls in Office Documents.

“IE needs to die – and I’m not the only one that thinks so,” Breen said. “If you are an organization that has to provide IE11 to support legacy applications, consider enforcing a policy on the users that restricts the domains that can be accessed by IE11 to only those legacy applications. All other web browsing should be performed with a supported browser.”

Another curious bug fixed this month is CVE-2020-24587, described as a “Windows Wireless Networking Information Disclosure Vulnerability.” ZDI’s Childs said this one has the potential to be pretty damaging.

“This patch fixes a vulnerability that could allow an attacker to disclose the contents of encrypted wireless packets on an affected system,” he said. “It’s not clear what the range on such an attack would be, but you should assume some proximity is needed. You’ll also note this CVE is from 2020, which could indicate Microsoft has been working on this fix for some time.”

Microsoft also patched four more security holes in its Exchange Server corporate email platform, which recently was besieged by attacks on four other zero-day Exchange flaws that resulted in hundreds of thousands of servers worldwide getting hacked. One of the bugs is credited to Orange Tsai of the DEVCORE research team, who was responsible for disclosing the ProxyLogon Exchange Server vulnerability that was patched in an out-of-band release back in March.

Researcher Orange Tsai commenting that nobody guessed the remote zero-day he reported on Jan. 5, 2021 to Microsoft was in Exchange Server.

“While none of these flaws are deemed critical in nature, it is a reminder that researchers and attackers are still looking closely at Exchange Server for additional vulnerabilities, so organizations that have yet to update their systems should do so as soon as possible,” said Satnam Narang, staff research engineer at Tenable.

As always, it’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any kinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have been known to erase or corrupt files.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

If you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.


LongNowPlay inspired by Long Now premieres this month

Gutter Street, a London-based theatre company, is premiering a play called The Long Now later this month. “The Long Now is inspired by the work of the @longnow foundation and takes a look at the need to promote long term thinking through our unique Gutter Street Lens,” the company said on Twitter.

Play summary:

Tudor is the finest clockmaker of all time. She knows her cogs from her clogs but will she be able to finish fixing her town’s ancient clock before time runs out? She is distracted by the beast that twists her dreams into nightmares and the wonder of the outside world. In search for the right tools in her trusty pile of things, will she finally finish the job she started…or will she just have another cup of tea?

More info and tickets here.


LongNowStewart Brand and Brian Eno on “We Are As Gods”

In March 02021, We Are As Gods, the documentary about Long Now co-founder Stewart Brand, premiered at SXSW. As part of the premiere, the documentary’s directors, David Alvarado and Jason Sussberg, hosted a conversation between Brand and fellow Long Now co-founder Brian Eno. (Eno scored the film, contributing 24 original tracks to the soundtrack.) The full conversation can be watched above. A transcript follows below.  

David Alvarado: Hi. My name is David Alvarado. I’m one of the directors for a new documentary film called We Are as Gods. This is a documentary feature that explores the extraordinary life of a radical thinker, environmentalist, and controversial technologist, Stewart Brand. This is a story that marries psychedelia, counterculture, futurism. It’s an unexpected journey of a complicated American polymath at the vanguard of our culture.

Today, we’re having a conversation with the subject of the film himself, Stewart Brand, and Brian Eno.

Jason Sussberg: Okay. In the unlikely event that you don’t know either of our two speakers, allow me to introduce them. First off, we have Brian Eno, who’s a musician, a producer, a visual artist and an activist. He is the founding member of the Long Now Foundation, along with Stewart Brand. He’s a musician of multiple albums, solo and collaborative. His latest album is called Film Music 1976-2020, which was released a few months ago, and we are lucky bastards because it includes a song from our film, We Are as Gods, called “A Reasonable Question.”

Stewart Brand, he is the subject of our documentary. Somewhere, long ago, I read a description of Stewart saying that he was “a finder and a founder,” which I think is a really apt way to talk about him. He finds tools, people, and ideas, and blends them together. He founded or co-founded Revive and Restore, The Long Now Foundation, The WELL, Global Business Network, and the Whole Earth Catalog and all of its offshoots. He is an author of multiple books, and he’s currently working on a new book called Maintenance. He’s a trained ecologist at Stanford and served as an infantry officer in the Army. I will let Stewart and Brian take it from here.

Stewart Brand: Brian, what a pleasure to be talking to you. I just love this.

Brian Eno: Yes.

Stewart Brand: You and I go back a long way. I was a fan before I was a friend, and so I continue to be a fan. I’m a fan of the music that you added to this film. I’m curious about particularly the one that is in your new album, Film Music. What’s it called…”[A] Reasonable Question.” Tell me what you remember about that piece, and I want to ask the makers of the film here what it was like from their end.

Jason Sussberg: We can play it for our audience now.

David Alvarado: You originally titled it “Why Does Music Like This Exist?”

Brian Eno: The reason it had that original title, “Why Does Music Like This Even Exist?”, was because it was one of those nights when I was in a mood of complete desperation, and thinking, “What am I doing? Is it of any use whatsoever?” I’ve learned to completely distrust my moods when I’m working on music. I could think something is fantastic, and then realize a few months later that it’s terrible, and vice versa. So what I do is I routinely mix everything that I ever work on, because I just don’t trust my judgment at the moment of working on it. That piece, the desperation I felt about it is reflected in the original title, “Why Does Music Like This Even Exist?” I was thinking, “God, this is so uninteresting. I’ve done this kind of thing a thousand times before.”

In fact, it was only when we started looking for pieces for this film…the way I look for things is just by putting my archive on random shuffle, and then doing the cleaning or washing up or tidying up books or things like that. So I just hear pieces appear. I often don’t remember them at first. I don’t remember when I did them. Anyway, this piece came up. I thought, “Oh. That’s quite a good piece.”

David Alvarado: I mean, that’s so brilliant because it’s actually… We weren’t involved, obviously, in choosing what music tracks you wanted to use for your 1976 to 2020 film album, and so you chose that one, the very one that you weren’t liking at the beginning. That’s just incredible.

Brian Eno: Yes. Well, this has happened now so many times that I think one’s judgment at the time of working has very little to do with the quality of what you’re making. It’s just to do with your mood at that moment.

Stewart Brand: So in this case, Brian, that piece is kind of joyous and exciting to hear. These guys put it in a part of the film where I’m at my best, I’m actually part of a real frontier happening. This must be a first for you, in a sense, you’re not only scoring the film, you’re in the film. This piece of film, I now realize as we listened to it, then cuts into you talking about me, but not about the music. You had no idea when they were interviewing you it was going to be overlaid on this. I sort of have to applaud these guys for not getting cute there and drowning you out with your own music there or something. “Yeah, well, he is chatting on, but let’s listen to the music.” But nevertheless, it really works in there. Do you like how it worked out in the film?

Brian Eno: Yes. Yes, I do. I like that, and quite a few of the other pieces appeared probably in places that I wouldn’t have imagined putting them, actually. This, I think, is one of the exciting things about doing film music, that you hear the music differently when you see it placed in a context. Just like music can modify a film, the film can modify the music as well. So sometimes you see the music and you think, “Oh, yes. They’ve spotted a feeling in that that I didn’t, or I hadn’t articulated anyway, I wasn’t aware of, perhaps.”

Stewart Brand: You’ve done a lot of, and the album shows it, you’ve done a lot of music for film. Are there sort of rules in your mind of how you do that? It’s different than ambient music, I guess, but there must be sort of criteria of, “Oh yeah, this is for a film, therefore X.” Are there things that you don’t do in film music?

Brian Eno: Yes. I’ll tell you what the relationship is with ambient music. Both ambient music and most of the film music I make deliberately leaves a space where somebody else might fill that space in with a lead instrument or something that is telling a story, something narrative, if you like. Even if it’s instrumental, it can still be narrative in the sense that you get the idea that this thing is the central element, which is having the adventure, and the rest is a sort of support structure to that or a landscape for that.

So what I realized, one of the things I liked about film music was that you very often just got landscape, which wasn’t populated, because the film is meant to be the thing that populates the landscape, if you like. I started listening to film music probably in the late ’60s, and it was Italian, like Nino Rota and Ennio Morricone and those kinds of people, who were writing very, very atmospheric music, which sort of lacked a central presence. I like that hole that was left, because I found the hole very inviting. It kind of says, “Come on, you be the adventurer. You, the listener, you’re in this landscape, what’s happening to you?” It’s a deliberate incompleteness, in a way, or an unfinishedness that that music has. I think that was part of the idea of ambient music as well, to try to make something that didn’t try to fix your attention, to hold it and keep it in one place, that deliberately allowed it to wander around and have a look around. So this happens to be a good formula for film music.

I really started making film music in a strange way. I used to, when I was working on my early song albums, sometimes at the end of the day I’d have half an hour left and I’d have a track up on a multi-track tape, with all the different instruments, and I’d say to the engineer, “Let’s make the film music version now.” And what that normally meant was take out the main instruments, the voice, particularly the voice, and then other things that were sort of leading the piece. Take those all out, slow the tape down, often, to half speed, and see what we can do with what’s left. Actually, I often found those parts of the day more exciting than the rest of the day, when suddenly something came into existence that nobody had ever thought about before. That was sort of how I started making film music.

So I had collected up a lot of pieces like that, and I thought, “Do you know what, I should send these to film directors. They might find a use for these.” And indeed they did. So that’s how it started, really.

Stewart Brand: So you initiated that, the filmmakers did not come to you.

Brian Eno: No. I had been approached only once before. Actually, before I ever made any albums I’d been approached by a filmmaker to do a piece of music for him, but other than that, no, I didn’t have any approaches. I sort of got the ball rolling by saying, “Look, I’m doing this kind of music, and I think it would be good for films.” So I released an album which was called Music for Films, though in fact none of the music had been in films. It was a sort of proposal: this is music that could be in films. I just left out the could be.

Stewart Brand: You are a very good marketer of your product, I must say. That’s just neat. So from graphic designers, the idea of figure-ground, and sometimes they flip and things like that, that’s all very interesting. It sounds like in a way this is music which is all ground, but invites a figure.

Brian Eno: Yes, yes.

Stewart Brand: You’re a graphic artist originally, is that right?

Brian Eno: Well, I was trained as a fine artist, actually. I was trained as a painter. Well, when I say I was trained, I went to an art school which claimed it was teaching a fine art course, so I did painting and sculpture. But actually I did as much music there as I did visual art as well.

Stewart Brand: So it’s an art school, and you were doing music. Were other people in that school doing music at that time, or is that unique to you?

Brian Eno: No, that was in the ’60s. The art schools were the crucible of a lot of what happened in pop music at that time. And funnily enough, also the art schools were where experimental composers would find an audience. The music schools were absolutely uninterested in them. Music schools were very, very academic at that time. People had just started, I was one of the pioneers of this, I suppose, had just started making music in studios. So instead of sitting down with a guitar and writing something and then going into the studio to record it, people like me were going into studios to make something using the possibilities of that place, something that you couldn’t have made otherwise. You wouldn’t come up with a guitar or a piano. A sort of whole new era of music came out of that, really. But it really came out of this possibility of multi-track recording.

Stewart Brand: So this is pre-digital? You’re basically working with the tapes and mixing tapes, or what?

Brian Eno: This was late ’60s, early ’70s. What had happened was that until about 01968, the maximum number of tracks you had was four tracks. I think people went four-track in 01968. I think the last Beatles album was done on four track, which was considered incredibly luxurious. What that meant, four tracks, was that you could do something on one track, something on another, mix them down to one track so you still got one track and then three others left, then you could kind of build things up slowly and carefully.

So, over time, it meant something different musically, because it separated music from performance. It made music much more like painting, in that you could add something one day and take it off the next day, add something else. The act of making music extended in time like the act of painting does. You didn’t have to just walk in front of the canvas and do it all in one go, which was how music had previously been recorded. That meant that recording studios were something that painting students immediately understood, because they understood that process. But music students didn’t. They still thought it had to be about performance. In fact, there was a lot of resistance from musicians in general, because they thought that it was cheating, it wasn’t fair, you were doing these things you couldn’t actually play. Of course, I thought, “Well, who cares? It doesn’t really matter, does it? What matters is what comes out at the end.”

Stewart Brand: Well, I was doing a little bit of music at that time, sort of background stuff, putting together things for art installations, and what I well remember is the fucking razor blade, where you’re cutting the tape and splicing it, doing all these things. It was pretty raw. But of course, the film guys were going through the same stuff at that time, with their razor blade equivalents, cutting and splicing and whatnotting. So digital has just exploded the range of possibilities, and I think I’ve heard in some of your theory that it exploded them too far, so you’re always looking for ways to restrain your possibilities when you’re composing. Is that right?

Brian Eno: Yes. Well, I suppose it’s a problem that everybody has now, when you think about it. Now, we’re all faced with a whole universe of rabbit holes that we could spend our time disappearing down. So you have to permanently be a curator, don’t you think? You have to be always thinking, “Okay. There’s a million interesting things out there, but I’d like to get something done, so how am I going to reduce that variety and choose a path to follow?”

Stewart Brand: How much of that process is intention and how much is discovery?

Brian Eno: I think the thing that decides that is whether you’ve got a deadline or not. The most important element in my working life, a lot of the time, is a deadline. The reason it’s important… Well, I’m sure as a writer you probably appreciate deadlines as well. It makes you realize you’ve got to stop pissing around. You have to finally decide on something. So there’s the archive of music that I have now: after those days of fiddling around like I’ve described, I’d make a rough mix of a piece and it would go into the archive — I’ve got 6,790 pieces in the archive now, I noticed today. They’re nearly all unfinished. They’re sort of provocative beginnings. They’re interesting openings. When I get a job like the job of doing this film music, I think, “Okay. I need some music.” So I naturally go to the archive and see what I’ve already started which might be possible to finish as the piece for this film, for example.

So whether I finish something or not completely depends really on whether it has a destination and a deadline. If it’s got a destination, that really helps, because I think, “Okay. It’s not going to be something like that. It’s not going to be that.” It just clears a lot of those possibilities which are amplifying every day. They’re multiplying every day, these possibilities. 

Stewart Brand: One thing that surprised me about your work on this film, is I thought you would have just handed them a handful of cool things and they would then turn it into the right background at the right place from their standpoint. But it sounds like there was interaction, Jason and David, between you and Brian on some of these cuts. What do you want to say about that?

Jason Sussberg: Yeah. I mean, we had an amazing selection of great tracks to plug in and see if they could help amplify the scene visually by giving it a sonic landscape that we could work with. Our initial thinking was that’s how we were going to work. But then we ended up going back to you, Brian, and asking for perhaps a different track or a different tone. And then you ended up actually making entirely new original music. One day we woke up and had in our inbox original music that you had scored specifically for scenes; that was a great delight. We were able to have a back and forth.

Brian Eno: Yes, that’s-

Stewart Brand: Were you giving him visual scenes or just descriptions?

Jason Sussberg: Right. Actually, what we did was we pulled together descriptions of the scenes and then we had… You just wanted, Brian, just a handful of photographs to kind of grok what we were doing. I don’t think you… Maybe you could talk about why you didn’t want the actual scene, but you had a handful of stills and a description of what we were going for tonally, and then you took it from there. What we got back was both surprising and made perfect sense every time.

Brian Eno: I remember one piece in particular that I made in relation to a description and some photographs, which was called, when I made it, it was called “Brand Ostinato.” I don’t know what it became. You’d have to look up your notes to see what title it finally took. But that piece, I was very pleased with. I wanted something that was really dynamic and fresh and bracing, made you sort of stand up. So I was pleased with that one.

But I usually don’t want to see too much of the film, because one of the things I think that music can do is to not just enhance what is already there in the film, which is what most American soundtrack writing is about… Most Hollywood writing is about underlining, about saying, “Oh, this is a sad scene. We’ll make it a little sadder with some music.” Or, “This is an action scene. We’ll give it a little bit more action.” As if the audience is a bit stupid and has to be told, “This is a sad scene. You’re supposed to feel a bit weepy now.” Whereas I thought the other day, what I like better than underlining is undermining. I like this idea of making something that isn’t really quite in the film. It’s a flavor or a taste that you can point to, and people say, “Oh, yes. There’s something different going on there.”

I mean, it would be very easy with Stewart to make music that was kind of epic and, I don’t know, Western or American or Californian or something like that. There are some obvious things you could do. If you were that kind of composer, you’d carefully study Stewart and you’d find things that were Stewart-ish in music and make them. But I thought, “No. What is exciting about this is the shock of the new kind of feeling.” That piece, that particular piece, “Brand Ostinato,” has that feeling, I think, of something that is very strikingly upright and disciplined. This discipline, that’s I think the feeling of it that I like. I don’t think, in that particular part in the film, where that occurs, I don’t think that’s a scene where you would see discipline, unless somebody had suggested it to you by way of a piece of music, for example.

Stewart Brand: And Jason, did you in fact use that piece of music with that part of the film?

Jason Sussberg: Yeah, I don’t think it was exactly where Brian had intended to put it, but hearing the description, what we did was put that song in a scene where you are going to George Church’s lab, Stewart, and we’re trying to build up George Church as this genius geneticist. So the song was actually, curiously, written about Stewart and Stewart’s character of discipline, but we applied it to another character in the film. However, what you were going for, this upright, adventurous, Western spirit, I think is embodied by the work of the Church Lab to de-extinct animals. So it has that same bravado and gusto that you intended, it was just we kind of… And maybe this is what you were referring to about undermining and underlining: I feel like we kind of undermined your original intention and applied it to a different character, and that dialectic was working. Of course, Stewart is in that scene, but I think that song, that track, really amplifies the mood that we were going for, which is the end of the first act.

Brian Eno: Usually, when people do music that is about cutting edge science, it’s all very drifty and cosmic. It’s all kind of, “Wow, it’s so weird,” kind of thing. I really wanted to say science is about discipline, actually. It’s about doing things well and doing things right. It’s not hippie-trippy. Of course, you can feel that way about it once it’s done, but I don’t think you do it that way. So I didn’t want to go the trippy route.

David Alvarado: Yeah. We loved it. It still is the anthem of the film for us. I mean, you named it as such, but it just really feels like it embodies Stewart’s quest on all his amazing adventures he’s been on. So that’s fantastic.

Brian Eno: One of the things that is actually really touching about this film is the early life stuff, which of course I never knew anything about. As women always say, “Well, men never ask that sort of question, do they?” And in fact, in my case it’s completely true. I never bothered to ask people how they got going, or that kind of autobiographical question. But what strikes me, first of all, is that your father was quite an important part of the story. I got the feeling that quite a lot of the character that is attributed to your father in there has come right through to you as well: this respect for tools and for making things, which is different from the intellectual respect for thinking about things. Often intellectuals respect other thinkers, but they don’t often respect makers in the same way. So, I wonder when you started to become aware that there could be an overlap between those two things, that there was a making you and a thinking you as well? I wonder if there was a point where those two sort of came together for you, in your early life.

Stewart Brand: Well, you’re pointing out something that I hadn’t really noticed either, frankly, until the film, which is that what I remember is my father as sort of ground and my mother as figure. She was the big event. She got me completely buried in books and thinking, and she was a liberal. I never did learn what my father’s politics were, but they were probably pretty conservative. He tried to teach me to fish, and he was a really desperately awful teacher. He once taught a class of prospective MIT students and failed every one of them. My older brother Mike said, “Why did you do that?” And he said, “Well, they just did not learn the material. They didn’t make it.” And my brother actually said, “You don’t think that says anything about you as their teacher?”

So I kind of discounted him, as I was making youthful, stupid judgments. I think the point you made is a very good one. He was trained as a civil engineer at MIT. Another older brother, Pete, went to MIT. I later got completely embedded at MIT, at the Media Lab with Negroponte and all of that. In a way I feel more identified with MIT than I do with Stanford, where I did graduate. At Stanford I took as many humanities courses as I could with a science major.

But I think it’s also something that happened with the ’60s, Brian, which is that what we were dropping out of — late beatniks, early hippies, which is my generation — was a construct that universities were imparting, and I imagine British universities have a slightly different version of this than American ones, but still, the Ivy League-type ones. I remember one of the eventual sayings of the hippies was “back to basics,” which we translated as “back to the land,” which turned out to be a mistake, but the back to basics part was pretty good. We had this idea, and we were immediately followed by the baby boom. It was the bulge in the snake, the pig in the python. There were so many of us that the world was always asking us our opinion of things, which we wound up taking for granted. As a young person, you could just call a press conference. “I’m a young person. I want to expound some ideas.” And they would show up and seriously write it all down. The Beatles ran into this. It was just hysterical. Pretty soon you start having opinions.

We were getting Volkswagen Bugs and vans. This is in my mind now because I’m working on this book about maintenance. We were learning how to fix our own cars. Partly it was either having no money or pretending to have no money, which, by the way, was me. It turned out I actually had a fair amount that my parents had invested in my name; I just ignored it. We were exploring and finding amazing things, basically eating out of garbage cans and debris boxes, learning how to cook and eat roadkill and make clothing and domes and all these things. This was something that Peter Drucker noticed about that generation: they were the first set of creatives that took not just art but also, in a sense, craft and just stuff seriously. Mostly we were making mistakes with the stuff, but then you either just backed away from it or you learned how to do it decently after all and became a great guitar maker or whatever it might be. That was what the Whole Earth Catalog tapped into: that desire to not just make your own life, but make your own world.

Brian Eno: I’m trying to think… In my own life, I can remember some games I played as a kid that I made up myself. I realized that they were really the first creative things that I ever did. I invented these games. I won’t bother to explain them, they were pretty simple, but I can remember the excitement of having thought of it myself, and thinking, “I made this. I made this idea myself.” I was sort of intrigued by it. I just wondered if there was a moment in your life when you had that feeling of, “This is the pleasure of thinking, the pleasure of coming up with something that didn’t exist before”?

Stewart Brand: There was one and it’s very well expressed in the film, which was the Trips Festival in January 01966. That was the first time that I took charge over something. I’d been going along with Ken Kesey and the Pranksters. I’d been going along with various creative people, USCO, a group of artists on the East Coast, and contributing but not leading. Once I heard from one of the Pranksters, Mike Hagen, that they wanted to do a thing that would be a Trips Festival, kind of an acid test for the whole Bay Area. I knew that they could not pull that off, but that it should happen. I picked up the phone and I started making arrangements for this public event.

And it worked out great. We were lucky in all the ways that you can be lucky in, and not unlucky in any of the ways you can be unlucky. It was a coup. It was something of a tour de force, not by me, but by the Bay Area creatives basically getting together in one place and changing each other and the world. That was the point at which I had really given myself agency to drive things.

There’s other things that give you reality in the world. Also in the film is when I appeared on the Dick Cavett Show.

Brian Eno: Oh, yes.

Stewart Brand: Which was a strange event for all of us. But the effect it had in my family was that… My father was dead by then, but my mother had always been sort of treating me as the youngest child, needing help. She would send money from time to time, keep me going in North Beach. But once I was on Dick Cavett, which she regularly watched, I had grown up in her eyes. I was now an adult. I should be treated as a peer.

Brian Eno: So no more money.

Stewart Brand: Well… yeah, yeah. Did that ever happen? I think she sort of liked occasionally keeping a token of dependency going. She was very generous with the money.

The great thing about being a hippie is you didn’t need much. I was not an expensive dependent. That, I think, is another thing about the hippies that makes us freer about being wealthy or not: we’ve had perfectly good lives without much money at all. So the money is kind of an interesting new thing that you can get fucked up by or do creatively or just ignore. But you have those choices in a way, I think, that people who are either born to money or who are getting rich young don’t have. They have other interesting situations to deal with. For us, the discipline was not enough money, and for some of them the discipline is too much money, and how do you keep that from killing you.

Brian Eno: Yes. Yeah. I’ll ask the filmmakers a question as well, if I may. It’s a very simple question, but it isn’t actually answered in the film. The question is: why Stewart? Why did you choose to make a film about him? There are so many interesting people in North America, let alone on the West Coast, but what drew you to him in particular?

Jason Sussberg: I’ll answer this, and then I’ll let you take a swipe at it, David. I mean, I’ve always looked up to Stewart from the time that I ran into an old Whole Earth Catalog. It was the Last Whole Earth Catalog, when I was 18 years old, going to college in the year 02000, so this was 25 years after it was written. I sort of dove into it head first and realized this strange artifact from the past was actually a representation of possibilities, a representation of the future. After that, I read a book of Stewart’s that had just come out, about the Clock of the Long Now. I’ve always been an environmentalist, interested in Earth consciousness and trying to think about how to preserve the natural world, but I also believe in technology as a hopeful future that we can have. We can use tools to create a more sustainable world. Stewart was able to blend these two ideas in a way that seemed uncontroversial, and it really resonated with me as a fan of science and technology and the natural world. So Stewart, pretty much from an early age, was someone I always looked up to.

When David and I went to grad school, we were talking about the problems of the environmental movement, and Stewart was at the time writing a book that would basically later articulate these ideas.

Brian Eno: Oh, yes, good.

Jason Sussberg: And so when that book came out, it was like it just put our foot on the pedals, like, “Wow, we should make a movie of Stewart and his perspective.” But yeah, I was just always a fan of his.

Brian Eno: So that was quite a long time ago, then.

Jason Sussberg: Yeah, 10 years-

Brian Eno: Is that when you started thinking about it?

Jason Sussberg: Yeah, absolutely. I had made a short film about someone who is probably a friend of yours, Brian, and of Stewart’s: Lloyd Kahn. It was a short little eight-minute documentary about Lloyd Kahn and how he thought about shelter and home construction. It was after that that I thought, “This is really rich territory to explore.” I think that was actually 02008, so at that moment I already had the inkling of, wow, this would be a fantastic biographical documentary that nobody had made.

Stewart Brand: I’m curious, what’s David’s interest?

David Alvarado: Yeah, well, I think Jason and I are drawn to complicated stories, and my god, Stewart. There was a moment in college when I almost stopped becoming a filmmaker and wanted to become a geologist. I just was so fascinated by the complexity of looking at the land, being able to read the stratigraphy, for example, of a cliff and understand deep history of how that relates to what the land looks like now. So, I of course came back into film, but I see a lot of that there in your life. I mean, the layers of what you’ve done… The top layer for us is the de-extinction, the idea of resurrecting extinct species to reset ecosystems and repair damage that humans have caused. That could be its own subject, and if it’s all you did, that would be fascinating. But sitting right underneath that sits all these amazing things all the way back to the ’60s. So I think it’s just like my path as an artist to just dig through layers and, oh boy, your life was just full of it. It was a pleasure to be able to do that with you, so thank you for sharing your life with us.

Stewart Brand: Well, thank you for packaging my life for me. As Kevin Kelly says, the movie that you put out is sort of a trailer for the whole body of stuff that you’ve got. But by going through that process with you, for example digitizing all of my tens of thousands of photographs, and then the interviews and the shooting in various places and having the adventure in Siberia and whatnot… When you get to your late 70s, Brian, and you try to think of your life as an arc or a passage or a story or a whole of any kind, it’s actually quite hard, because you’ve got these various telescopic views back to certain points, but they don’t link up. You don’t understand where you’ve been very well. It’s always a mishmash. With John Markoff also doing a book version of my life, it’s actually quite freeing for me to have that done. And Brian, this is where I wish Hermione Lee would do your biography. She would do you a great favor by just, “Here is everything you’ve done, and here is what it all means. My goodness, it’s quite interesting.” And then you don’t have to do that.

Brian Eno: Yeah, I’d be so grateful if she would do that, or if anybody would do that, yes.

Stewart Brand: It’s a real gift in that it’s also a really well done work of art. It has been just delightful for me. I think one of the things, Brian, that it’ll be interesting to see when you watch the film more than once, or maybe you’ve already done so, is that you’ve made, at great expense of your time and effort, a re-watchable film. And Brian, the music is a big part of this. The music is blended in so much, in a landscapy way, that except for a couple of places where it comes to the fore, like when I’m out in the canoe on Higgins Lake and you’re singing away, it takes a re-listen, a re-viewing of the film to really start to get what the music is doing.

And then, you guys had such a wealth of material, both of my father’s amazing filmmaking and then from the wealth of photography I did, and then the wealth of stuff you found as archivists, I mean, the number of cuts in this film must be some kind of a record for a documentary, the number of images that go blasting by. So, instead of a gallery of photographs, it’s basically a gallery of contact sheets where you’re not looking at the shot I made of so-and-so, you’ve got all 10 of them, but sort of blinked together. That rewards re-viewing, because there’s a lot of stuff where things go by and you go, “Wait, what was that? Oh, no, there’s a new thing. Oh, what was that one? That one’s gone too.” They’re adding up. It’s a nice accumulative kind of drenching of the viewer in things that really rewards…

It’s one of the reasons that I think it’s actually going to do well on people’s video screenings, because they can stop it and go, “Wait a minute. What just happened?” And go back a couple of frames. Whereas in the theater, this is going to go blasting on by. Anyway, that’s my view, that this has been enjoyable to revisit.

Brian Eno: When you first watched… Well, I don’t know at which stage you first started watching what David and Jason had been doing, but were there any kind of nasty surprises, any places where you thought, “Oh god, I wish they hadn’t found that bit of film”?

David Alvarado: That’s a great question. Yeah.

Stewart Brand: Brian, the deal I sort of made with myself and with these guys, and the same one I made with [John] Markoff, is that it’s really their product. I’m delighted to be the raw material, but I won’t make any judgments about their judgments. When I thought something was wrong, a photograph that depicts somebody who turns out not to be actually that person, I would speak up, and I did do that. I’ve done much more of that sort of thing with Markoff on the book. But wherever there’s interpretation, that’s not my job. I have to flip into it. When you both care about your life and don’t care about your life, and you would have this attitude too, it’s easy to be “Brian Eno, yawn, been there, done that, got sent a fucking T-shirt.” So finding a way to not be bored by one’s own life is actually kind of interesting, and it’s seeing it through this refraction in a funhouse mirror, in a kaleidoscope of other people’s reads, that makes it actually sort of enjoyable to engage.

Brian Eno: Yes. I think one of the things that’s interesting when you watch somebody else’s take on your life, when somebody writes a biography of you or recounts back to you a period that you lived through, is that it makes you aware of how much you constructed the story that you hold yourself. You’ve got this kind of narrative: then I did this, and of course that led to that, and then I did that… And it all sort of makes sense when you tell the story. But when somebody else tells the story, it’s just like I was saying about conspiracy theories: they can come up with a completely different story, and it’s actually equally plausible, and sometimes, frighteningly, even more plausible than the one you’ve been telling yourself.

Stewart Brand: Well, it gets stronger than that, because these are people who’ve done the research. So an example from the film is these guys really went through all my father’s film. There’s stuff in there I didn’t know about. There’s an incredibly sweet photograph of my young mother, my mother being young, and basically cradling the infant, me, and canoodling with me. I’d never seen that before. So I get a blast of, “Oh, mom, how great, thank you,” that I wouldn’t have gotten if they hadn’t done this research.

And lots of times, especially for Markoff’s research… So, Doug Engelbart and The Mother of All Demos: I have a story I’ve been telling for years, to myself and to the world, of how I got involved in being a sort of filmmaker within that project. It turned out I had just completely forgotten that I’d actually studied Doug Engelbart before any of that, and I was going to put him in an event I was going to organize called the Education Fair. The whole theory of his approach, a very humanist approach to computers and the use of computers, computers basically blending in to human collaboration, was something I got very early. And I did the Trips Festival, and he sort of thought I was a showman, and then they brought me on as the adviser to the actual production. But as for the genesis of the event, I’d been telling this wrong story for years. There’s quite a lot of that. As you say, I think our own view of ourselves becomes fiction very quickly.

Brian Eno: Yes. Yes. It’s partly because one wants to see a kind of linear progression and a causality. One doesn’t really want to admit that there was a lot of randomness in it, that if you’d taken that turning on the street that day, life would have panned out completely differently. That’s so disorientating, that thought, that we don’t tolerate it for long. We sort of patch it up to make the story hold together.

Stewart Brand: That’s what you’ll get from the Tom Stoppard biography. Remember that his first serious, well, popular play was Rosencrantz and Guildenstern Are Dead, and it starts with a flip of a coin. It turns out his own past, how he got from Singapore to India and things like that, was just these kinds of random war-related events that carved a path of chance, chance, chance, chance, that then informed his creative life for the rest of his life. There’s a book coming out from Daniel Kahneman called Noise, which Kahneman and two co-authors have generated. It looks like it’s going to be fantastic. Basically, he’s going beyond Thinking, Fast and Slow: a whole lot of the data that science and our world and the mind deal with is this kind of randomized, stochastic noise, which we then interpret as signal. And it’s not. It’s hard to hold it in your mind, that randomness. It’s one of the things I appreciate from having studied evolution at an impressionable age: in a lot of evolution, randomness is not a bad thing that happens. Randomness is the most creative thing that happens.

Brian Eno: Yes. Well, we are born pattern recognizers. If we don’t find them, we’ll construct them. We take all the patterns that we recognize very seriously. We think that they are reality. But they aren’t necessarily exclusive. They’re not exclusive realities.

Jason Sussberg: All right. I hate to end it here. This discussion is really fascinating. We’re getting into some very heady philosophical ideas. But unfortunately, our time is short. So we have to bid both Stewart and Brian farewell. I encourage everybody to go watch the film We Are as Gods, if you haven’t already. Thank you so much for participating in this discussion.

David Alvarado: A special thanks to Stripe Press for helping make this film a reality. Thank you to you, the viewer, for watching, to Stewart for sharing your life, and to Brian for this amazing original score.

Brian Eno: Good. Well, good luck with it. I hope it does very well.


Long Now: Meet Ty Caudle, The Interval’s New Beverage Director

Long Now is pleased to announce that longtime Interval bartender Ty Caudle will become The Interval’s next Beverage Director. He takes the reins from Todd Carnam, who has moved to Washington, D.C. after a creative three-year run at the helm. 

“We are very excited and grateful to have Ty in such a strong position to make this transition both seamless and inspired,” says Alexander Rose, Long Now’s Executive Director and Founder of The Interval. 

Caudle’s bartending career began at a small backyard party in San Francisco. He was working as a caterer for the event, and when the bartender failed to show, he was thrust into the role despite having zero experience.

“We had no idea what we were doing,” he says, “but there was definitely an energy to bartending that wasn’t otherwise present in catering.”

After a friend gifted him a copy of Imbibe! by David Wondrich, Caudle knew he’d found his calling.

“The book opened up a world that I otherwise would’ve never known,” he says. “It traced the history of forgotten ingredients and techniques, painted a rich tapestry of the world of bartending in the 01800s, and most importantly taught me that tending bar was a legitimate profession, one to be studied and practiced.”

Ty Caudle at The Interval. 

And so he did. Caudle devoured every bartending book he could find, bought esoteric cocktail ingredients, and experimented at home. He visited distilleries in Kentucky, Tequila, Oaxaca, Ireland, and Copenhagen to learn more about how different cultures approached spirit production.

“Those trips cemented my deep respect for the craft and history of distillation,” he says. “Whether on a tropical hillside under a tin roof or in a cacophonous bustling factory, spirit production is one of humanity’s great achievements. As bartenders, we have a responsibility to honor those artisans’ tireless efforts with every martini or manhattan we stir.”

Breaking through in the industry during the Great Recession, however, proved challenging. Caudle eventually landed a gig prepping the bar at the now-shuttered Locanda in the Mission. This led to other bartending opportunities at a small handful of spaces in the same neighborhood as Locanda.

The Interval at Long Now.

The Interval opened its doors in 02014 with Jennifer Colliau as its Beverage Director. Colliau was something of a legend in the Bay Area’s vibrant bar scene, having founded Small Hand Foods after eight years tending bar at San Francisco’s celebrated Slanted Door restaurant.

Caudle was a big fan of Colliau’s work, and promptly responded to an ad for a part-time bartender position at The Interval.

Jennifer Colliau, The Interval’s first Beverage Director. 

“The job listing was decidedly different,” Caudle says. “It gave me a glimpse of how unique The Interval is.”

Following a promising interview with then-Bar Manager Haley Samas-Berry, Caudle returned to The Interval a few days later for a stage. Expecting to find Samas-Berry behind the bar, Caudle was mortified to find Colliau there instead. Caudle was, suffice it to say, a little intimidated:  

I walked over with my shakers and spoons and jigger, hands trembling, and she asked if I wouldn’t mind making drinks with their tools instead. I said, “Sure,” as I walked into the other room to set my things down. Inside I was completely freaking out. It took every bit of my strength to emerge from that space. I already felt in over my head and this amplified it. For the next hour or so I welcomed guests and set down menus and poured water. Every time a drink order came in Jennifer would stand over my shoulder and recite the recipe to me while correcting a litany of technical mistakes that I was making. The torture finally relented and we went upstairs and had a good conversation. But I remember leaving that night thinking there was just no way in hell I was going to get that job. 

Caudle got the job. And now, following years of excellent work, he’s got Colliau’s old job, too. 

We spoke with Caudle about his new role, his approach to cocktail creation and design, and what Interval patrons can expect once the bar fully reopens.

Your promotion to Beverage Director brings the opportunity to try new things, while also contending with a rich legacy from past Beverage Directors Jennifer Colliau and Todd Carnam. What new things are you excited to bring to the table? What do you hope to maintain from the past?

I feel uniquely positioned as I have worked in the space under the tutelage of both Jennifer and Todd. 

Jennifer set the standard and created the beverage identity of The Interval. She taught us that we can’t unknow things. To that end, I’m excited to continue the pursuit of the best version of a beverage, meticulously molding it while uncovering its rich history.

The Interval’s former Beverage Directors Jennifer Colliau and Todd Carnam

Todd is a storyteller and a curmudgeonly romantic at heart. He taught us that a drink can evoke a feeling and connect to a larger narrative, of the cocktail’s role as a totem. I hope to honor that spirit and the creativity it fosters in my approach to menu development.

Foremost, I’m excited to feature wine, beer, and spirits made by people that don’t look like me. I’m personally captivated by the fantastic complexity of what eventually winds up in a glass on the bar. Every drink is the confluence of many brilliant makers and I seek to pay respect to their efforts. I think it is easy for us to forget that alcohol is an agricultural product. It started as a plant in the ground in a corner of the world and so many things had to go right for it to find its way to us. I hope to imbue our staff with a passion for the process of making these delicious products and to craft drinks that honor them.

A trio of selections from The Interval’s Old Fashioned menu.

What’s your approach to cocktail design and creation?

I can be somewhat reluctant to design new drinks. The cocktail world has such a rich history and so many people have contributed across generations. With that in mind, I often find myself focusing on making the very best version of a beverage that we know well or that may have been overlooked. 

It tends to take me a long time to mold a bigger picture of what the theme of a cocktail or a menu should be. Once I have that in place I get excited to uncover pieces that fit into the whole. Our Tiki Not Tiki menu is a great example. After we established that template, I found myself scouring cocktail books and menus for tropical drinks that didn’t fit into the Tiki canon. Each discovery was a revelation, a spark to continue forth.

Mai Tai from The Interval’s Tiki Not Tiki menu. 

What’s one of the most challenging cocktails for The Interval to make? 

Generally, we like to do as much work behind the scenes preparing ingredients and putting things together ahead of time to ensure cocktails get to guests quickly.

The Interval’s take on the Kalimotxo.

I will say that one of our biggest challenges in development came with the Kalimotxo. This simple Spanish blend of Coca-Cola and box wine was incredibly difficult to replicate. For starters, it was extremely trying to imitate the singular flavor of Coke, eventually replacing its woodsy vanilla with Carpano Antica and its baking spice notes with lots of Angostura. Harder still was finding a red wine that didn’t overpower the rest of the ingredients. In the end, we wound up bringing in an entirely new wine outside of our offerings just to get the final flavor profile we were looking for.

Everyone has different tastes, but what would you recommend as a cocktail for a first-timer to the Interval to highlight what distinguishes the establishment from other cocktail bars?

The Interval’s Navy Gimlet.

The Navy Gimlet perfectly encapsulates what we strive for at The Interval. With the time involved to infuse navy strength gin with lime oil and to slowly filter the finished product, its preparation takes days but arrives to the guest in no time at all. The gimlet has been maligned for decades as a result of artificial ingredients and certain preparations and we’ve done our very best to correct those deficiencies. We make a delicious lime cordial and stir (rather than shake) our pearlescent iteration. It’s a drink with a history, deceptively simple and infinitely refreshing.

A busy evening at The Interval, May 02016. 

What do you think is the biggest misconception people have about tending bar?

I think the physical act of bartending is unnecessarily heralded in the public eye. Anyone can mix drinks. Sure, there are hundreds of classics to memorize and plenty of muscle memory to establish, but that side of tending bar is overwhelmingly a teachable skill.

The component that cannot be taught as easily is hospitality. There is a degree of empathy and emotional availability necessary to do this work that isn’t required in many other professions. Bartenders absorb the energy of every guest that sits in front of them and a genuine desire to serve is essential to providing a superior guest experience. This comes naturally to some and can be a lifelong pursuit for others. Putting aside the day thus far and being truly hospitable behind the bar is the goal we spend our careers striving for. 

For the latest on opening hours, placing to-go orders, and events, head to The Interval’s website, or follow The Interval on Instagram, Twitter, and Facebook.


Sam VargheseAll the news (apart from the Middle East issue) that’s fit to print

The Saturday Paper — as its name implies — is a weekend newspaper published from Melbourne, Australia. Given this, it rarely has any real news, but some of the features are well-written.

There is a column called Gadfly (again, the name indicates what it is about) which is extremely well-written and is one of the articles I read every week. It was written for some years by one Richard Ackland, a lawyer with very good writing skills, and is now penned by one Sami Shah, an Indian, who is, again, a good writer. Gadfly is funny and, like most of the opinion content in the paper, is left-oriented.

The same cannot be said of some of the other writers. Karen Middleton and Rick Morton fall into the category of poor writers, though the latter sometimes does provide a story that has not been run anywhere else. Middleton can only be described as a hack.

Mike Seccombe is another of the good writers and, when he figures on the day’s menu, one can be assured that the content will be good. Another good writer, David Marr, has now gone missing; indeed, he is not writing for any newspaper at the moment.

But the one fault line that The Saturday Paper has is that it will never cover the Middle East. The owner, Morry Schwartz [seen below in an image used courtesy of Fairfax], leans towards supporting the right-wing Israeli leader Benjamin Netanyahu and thus no matter what atrocities are being perpetrated on the Palestinians, you can be assured that not even a word will appear in this newspaper.

Critics of the paper avoid mentioning this, in keeping with the habit prevalent in the West, of never saying anything that could be construed as being critical of Israel.

This proclivity of Schwartz was noticed early on and mentioned by a couple of Australian writers. One, Tim Robertson, had this to say when the paper had just started out: “…the Saturday Paper’s coverage of Israel’s assault on Gaza has been conspicuously, well, non-existent. As the death toll rises and more atrocities are committed, the Saturday Paper’s pages remain, to date, devoid of any comment.”

Explaining this, John van Tiggelen, a former editor of The Monthly (another Schwartz publication) said: “…mean, it’s seen as a Left-wing publication, but the publisher is very Right-wing on Israel […] And he’s very much to the, you know, Benjamin Netanyahu end of politics. So, you can’t touch it; just don’t touch it. It’s a glass wall.”

Australian media are very touchy about Israel. One of the country’s better writers, Mike Carlton, lost a plum job with the former Fairfax Media — now absorbed into the publishing and broadcasting firm, Nine Entertainment — when he criticised Israel over one of its attacks on Gaza.

And some supporters of Israel in Melbourne are quite powerful. Fairfax had — and still has — a rather juvenile columnist named Julie Szego. When one of her columns was rejected by the then editor, Paul Ramadge (the staff used to say of him, “Ramadge rhymes with damage”), she ran to Fairfax board member Mark Leibler and requested him to intervene. Hey presto, the column was published.

Of course, it is the prerogative of an editor or owner to keep out what he/she does not want published. But if one is given to describing one’s publication as a newspaper and then ignores one of the world’s major issues, then one’s credibility does tend to suffer.



LongNowTouching the Future

Aboriginal fish traps.

In search of a new story for the future of artificial intelligence, Long Now speaker Genevieve Bell looks back to its cybernetic origins — and keeps on looking, thousands of years into the past.

From her new essay in Griffith Review:

In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell, “Touching the Future” in Griffith Review.



Sam VargheseTime for ABC to bite the bullet and bring Tony Jones back to Q+A

Finally, someone from the mainstream Australian media has called it: Q+A, once one of the more popular shows on the ABC, is really not worth watching any more.

Of course, being Australian, the manner in which this sentiment was expressed was oblique, more so given that it came from a critic who writes for the Nine newspapers, Craig Mathieson.

Hamish Macdonald: his immature approach to Q+A has led to the program going downhill. Courtesy YouTube

A second critical review appeared on April 5, this time in The Australian.

Newspapers from Nine are generally classed as being from the left — they once were, when they were owned by Fairfax Media, but centrist or right of centre would be more accurate these days — and given that the ABC is also considered to be part of the left, criticism was generally absent.

Mathieson did not come right out and call the program atrocious – which is what it is right now. The way the headline on Mathieson’s article put it was that Q+A was once an agenda setter, but was no longer essential viewing. He was right about the former, but to call it essential viewing at any stage of its existence is probably an exaggeration.

He cited viewing figures to bolster his views: “Audience figures for Q+A have plummeted this year. Last week [25 March], it failed to crack the top 20 free-to-air programs on the Thursday night it aired, indicating a capital city audience of just 237,000. In March 2020, the number was above 500,000, and likewise in March 2016,” he wrote.

“This was meant to be the year that Q+A ascended to new prominence. Since its debut in 2008 it had aired about 9.30pm on Mondays, the feisty debate chaser to Four Corners and Media Watch.

“In 2021, it moved to 8.30pm on Thursday, an hour earlier presumably to give it access to a larger audience and its own anchoring role on the ABC’s schedule. But even with Back Roads, one of the national broadcaster’s quiet achievers, as an 8pm lead-in, the viewing figures are starting to resemble a death spiral.”

Veteran ABC journalist Tony Jones was the Q+A host until just two seasons ago. Then Hamish Macdonald, from the tabloid TV channel 10, was given the job. And things have generally gone downhill from that point onwards.

Courtesy The Australian

Jones brought a mature outlook to the show and was generally able to keep the discussion interesting. He always kept things in check and the panellists were reined in when they tried to ramble on. Quite often, the show was prevented from going down a difficult path by a simple “I’ll take that as a comment” from Jones.

Macdonald often loses control of things. He seems to be trying too hard to differentiate himself from Jones, bringing too many angles to a single episode and generally trying to engineer gotcha situations. It turns out to be quite juvenile. One word describes him: callow. It is one that can be applied to many of the ABC’s recent recruits.

Had the previous host been anyone but Jones, the difference would not have been so stark. But then even when others like Virginia Trioli or Annabel Crabb stood in for Jones, the show was watchable as nobody tried out gimmicks. Again, Trioli and Crabb are very good at their jobs. The same cannot be said for Macdonald.

Now that Jones has had to put on hold his plan of accompanying his partner, Sarah Ferguson, to China, the ABC might like to think of bringing him back to Q+A. The plan was for Ferguson to be the ABC’s regular correspondent in China, but that was dropped after the previous correspondent, Bill Birtles, fled the country last September, along with Michael Smith, a correspondent for the Australian Financial Review. Jones had planned to write a book while in China.

The ABC needs to bite the bullet and rescue what was once one of its flagship shows. As Mathieson did, it is worthwhile pointing out that two other popular shows, 7.30 and Four Corners, have held their own during the same period that Q+A has gone downhill, even improving on previous audience numbers.

If change does come, it will be at the end of this season. Another season of Macdonald will mean that Q+A may have to be pensioned off like Lateline, which was killed largely because the main host, Emma Alberici, had made it into a terrible program. Under Jones, and others like Maxine McKew, Trioli and even the comparatively younger Stephen Cannane, Lateline was always compulsory watching for any Australian who followed the news somewhat seriously.


Sam VargheseAFR’s Aaron Patrick shows us what gutter journalism is all about

Australian journalists often criticise each other, with those on the right tending to go for those on the left and vice versa. But, generally, in these stoushes, details of people’s private lives are not revealed.

But there are exceptions, and one of those was witnessed on March 31, when Aaron Patrick, the senior correspondent with the Australian Financial Review, took a swing at Samantha Maiden, a reporter with a free news site operated by News Corporation, over her coverage of numerous issues around women. (News Corporation’s other sites are all paywalled.)

In February, Maiden exposed the story of a young Liberal staffer, Brittany Higgins, who had been allegedly raped by a colleague in Parliament House some two years ago.

And then this month, after an ABC journalist, Louise Milligan, had written about an unnamed senior politician who was accused of an alleged rape some 30-plus years ago, Maiden revealed a large number of additional details about the case, something which Patrick dubbed “intimate and compelling information”.

The takeoff point for Patrick was his claim that women pursuing these cases are “activists”. Apparently, he does not think that women have news sense, or that they are producing some of the stronger stories in the media simply because they are better at their jobs than countless men.

Patrick used Prime Minister Scott Morrison’s reaction at a media conference when asked about standards in Parliament House as some kind of a lede. Sky News’ staffer Andrew Clennell had asked Morrison: “Prime Minister, if you’re the boss at a business and there had been an alleged rape on your watch and this incident we heard about last night on your watch, your job would probably be in a bit of jeopardy, wouldn’t it? Doesn’t it look like you have lost control of your ministerial staff?”

When Morrison suggested that standards in other workplaces could also be faulted, Clennell did not give up, saying: “Well, they’re better than these I would suggest, Prime Minister.”

To which Morrison, who has a short fuse, retorted: “Let me take you up on that. Let me take you up on that. Right now, you would be aware that in your own organisation that there is a person who has had a complaint made against them for harassment of a woman in a women’s toilet and that matter is being pursued by your own HR department.”

What he was referring to was an exchange of words between Maiden and a reporter named Jade Gailberger who works for the News Corporation internal wire service. Maiden, according to Patrick, wanted someone else to be on the federal parliamentary press gallery committee, rather than Gailberger. So some words were exchanged.

Morrison’s mischaracterisation was not allowed to stand, with the boss of News Corporation shooting it down in a strongly worded press release the same night.

Then Patrick, perhaps feeling he had set the scene, reached into the gutter, revealing personal details about Maiden which have nothing to do with her reporting ability (which even a cynic like me would rate as damn good).

In Australia (and most of the West), there is a kind of fake politeness that dominates the workplace, and Patrick appears to want Maiden to conform to this. The fact that she works best on her own — “fellow journalists… said she did not prize teamwork and intimidated younger reporters, male and female” — is a big negative to him.

He seems to be unaware that the best journalists are always oddballs; Seymour Hersh, Matt Taibbi and Glenn Greenwald come to mind. They do not follow the beaten path; for them, output is more important than input. Patrick is clearly in the opposite camp.

In the end, the whole piece ended up being an exercise in character assassination and in carrying water for Morrison. Patrick may get much more than a Christmas card from the Lodge this year, perhaps a turkey and a bottle of champers as well.

Journalists are supposed to hold power to account, not to bitch about their colleagues’ personal lives and try to tear them down. But many journalists, Patrick included, have long forgotten that they are the fourth estate and want to be players themselves.

There is an incestuous relationship between journalists who work in Canberra and politicians; I have mentioned it on more than one occasion. So what Patrick has done is not unique.

But in the process he has forgotten that the main job of a journalist is to either break, or else follow up, on news stories. These kinds of personal smears are one more reason why the public has been turned off mainstream media. He has truly plumbed the depths. You really can’t go lower than this.


Chaotic IdealismWhy we need a higher minimum wage

Imagine an auction where your work is up for sale; but many other people’s work is also up for sale, so that some lots will always remain unsold. There are more workers than jobs.

What is the best strategy for someone who wants a worker, any worker? It is to be the first to bid, bid the minimum, and then not raise anyone else’s bid. Raising is counterproductive because supply exceeds demand, and one can always wait until other buyers have hired their workers to bid the minimum on one of the lots left over. Because this is the ideal strategy, everyone will be using it. Every lot of work that can be bought, is bought, and for the minimum possible price.

For the worker, the only possible strategy is to accept any bid, because if they do not accept, they will be left till last and their work will be one of the unsold lots.
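The auction model above can be sketched as a toy simulation. This is my own illustration, not part of the original essay: employers bid only the statutory floor and never raise, and workers accept any bid, so with more workers than jobs every hire clears at exactly the minimum.

```python
import random

def labor_auction(num_workers: int, num_jobs: int, min_wage: float) -> list[float]:
    """Toy model of the auction described above: employers bid the minimum
    and never raise, and workers accept any bid rather than risk being one
    of the unsold lots. Returns the wage each hired worker receives."""
    assert num_workers > num_jobs, "the model assumes more workers than jobs"
    workers = list(range(num_workers))
    random.shuffle(workers)           # the order bids arrive in is arbitrary
    wages = []
    for _ in range(num_jobs):         # each employer hires exactly one worker
        workers.pop()                 # any worker accepts immediately...
        wages.append(min_wage)        # ...so no bid ever rises above the floor
    return wages                      # the workers left over go unhired

wages = labor_auction(num_workers=100, num_jobs=80, min_wage=7.25)
print(len(wages), min(wages), max(wages))  # 80 hired, every one at 7.25
```

However the parameters are varied, as long as workers outnumber jobs the outcome is the same: full clearance of the available jobs at the minimum possible price.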

There is a way out for the worker, and that is to learn a skilled trade. However, this is a way out only for that worker. Other unskilled workers are still caught in the same system, and because there are still unskilled jobs and unskilled workers, the minimum-wage auction will go on as before.

Moreover, if too many workers learn skilled trades, employers in those trades will fill all their positions, leaving the overqualified to compete for unskilled jobs where their skills are irrelevant, sending them back to the minimum-wage auction.

When the minimum wage is too low, the unskilled (or overqualified) worker naturally tries to fill their own needs, usually by taking more than one job, and by adding more family members–children and spouses–to the job market, to take jobs rather than being homemakers or students. This unbalances the system even further: There are yet more workers, and yet fewer jobs. The employer is able to bid even lower, and the worker must immediately accept any offer they can, for fear of not being employed at all.

When the employer hits the federal minimum wage, they cannot reduce the worker’s pay further; but they can still split jobs into part-time positions without benefits, hire people to work for tips, and hire “self-employed” “independent contractors” who can be paid less than minimum wage because they are technically not their employees. And this is what they do, because the market permits them to do it, because people still take those jobs, because those are the only ones they can get.

We have too many people in the work force and too few jobs for them to do. A low minimum wage forces more people to take more jobs, while simultaneously allowing employers to pay less.

If we raised the minimum wage, then there would be fewer workers, because a minimum-wage job would once again be enough to support a family. Many jobs are being replaced by automation, but because of the higher minimum wage, those jobs would no longer be desperately fought over by unskilled workers.

As more jobs are replaced by automation, we may end up with the same scenario again: People fight over jobs, and employers find ways to pay less and less. At this point, we would need to institute a universal basic income, paid for by taxes on corporations. There’s simply no way around that–even though it might slow down when employers are forced to stop hiring so many part-timers and contractors, the number of jobs will eventually be much less than the number of people willing to work. At that point, those extra workers will be supported by universal basic income and, instead, do unpaid work like art, volunteer work, or study. The only alternative to this is a world in which a majority of unskilled workers are barely scraping by on half a job, crammed together in apartments that take five salaries to pay for, unable to afford health care, higher education, or anything but the next day’s low-quality food–and sometimes not even that.


Sam VarghesePeter van Onselen is no journalist. He is a political operative

Peter van Onselen is an academic from Western Australia who came to prominence in 2007 when he co-authored a biography of John Howard, the Liberal prime minister who reigned from 1996 to 2007.

Nearly 14 years later, Van Onselen has graduated to become a journalist who writes a weekly column for the right-wing broadsheet, The Australian, and also functions as the political editor for the tabloid free-to-air TV channel, 10.

Recently, however, Van Onselen has shown that he is no journalist, but rather a political operative who looks to back his powerful friends when they need his help. And nobody has needed his help more than the attorney-general, Christian Porter, a close mate of his and a source for many of his stories.

Porter was recently accused of raping a woman in 1988, when she was 16 and he was 17. The woman, known as Kate, died by her own hand last year, and did not make a police complaint, though she did toy with the idea. She is said to have been a highly intelligent person, but the alleged incident appears to have taken its toll, and she was described by many as having some mental problems.

Christian Porter (extreme left) and Peter van Onselen (third from right) in their younger days. Photo courtesy Kangaroo Court of Australia

Porter was not named in any stories initially, but held a press conference on 3 March and outed himself. He then took leave from his job and intends to come back at the end of the month.

A dossier of notes that Kate maintained was sent to a number of people and Van Onselen managed to obtain a copy. Using that he tried to paint Porter as innocent, both in written columns and also on the state-run TV broadcaster, where he appeared as a panellist on a program called Insiders.

He had a clear conflict of interest and should not have appeared on such a program; declaring such conflicts and stepping aside is what a journalist does. But then he does not appear to be a journalist at all.

Other right-wing journalists, like Chris Uhlmann of Nine Entertainment and Phillip Coorey of the Australian Financial Review, have also done their bit to defend Porter, but none as blatantly as Van Onselen, who also defended his friend on ABC Radio.

Porter has now filed defamation charges against the ABC for an article written by one of its journalists, Louise Milligan, based on that dossier. She did not contact Porter for comment, but then he was not named in the story. Many other journalists contacted Porter’s office and the offices of other ministers too, but did not get a reaction.

The ABC has to file its written defence by May 4 and Porter’s lawyers have to respond by May 11.

A preliminary court hearing is to be held on May 14, but it is unlikely that a trial will be held before 2021 ends.

The incestuous relationship between politicians and journalists is quite common in Canberra; here are two other instances: 1, 2.


LongNowIn Real Time

Horologist Brittany Nicole Cox giving a talk at The Interval at Long Now on horological heritage (02019). Photo by Anthony Thornton.

How do you measure a year? As straightforward as this seems, it is a truly personal question to each of us. What comes to mind? Life, weather or seismic events, loss or gains, political enterprises, a global pandemic? Or terms such as calendars, months, or dates? As a horologist, someone who studies time, I’ve realized there is no concrete way to answer that question. Yet, my job lies in the calculation, measurement, and the sure prediction of time passing in hours, minutes, and seconds. One might say I measure time through numbers, but often it is measured through the inevitable deterioration of the mechanisms I study that are responsible for calculating the passing of time. If anything, I have found that time is not measurable, but perceptible. It is the observation of change and loss that accounts for the passing of time.

Brittany Nicole Cox at her workbench. Photograph by Ben Lindbloom.

In my work I watch the brass and steel components of clock and watch mechanisms wear and break down, an indicator of how hard time has been on them; the tarnish of brass is the result of age and environmental factors. These mechanisms are continually renewed with the intention of the timepiece maintaining both its tangible and intangible qualities: its ability to calculate and record the passing of time, as well as to fulfill its function as an artifact created by someone long ago with their own artistic vision and intentions for the observer. As time went on, these mechanisms were made with more wear-resistant materials, always in the hope that they could outlast degradation, despite time. Perhaps one of the most successful at this was the 18th-century clockmaker John Harrison, the man responsible for inventing the first marine chronometer. Some of his timepieces required no lubrication, as he invented rolling bearings for the application and relied on the synthesis between materials to maintain the timekeeping qualities of the mechanism.¹ The clocks of John Harrison can still be seen keeping time at the Royal Observatory in Greenwich, London.

John Harrison’s H4, displayed at the Royal Observatory in Greenwich. Photograph by Mike Peel (CC BY-SA 4.0).

Keeping time is the work carried on by many before me and is one of the few things we still have in common with pre-Homo sapiens. We have measured time by seasons, famine, light, and darkness, our almanacs a result of such tidings.² These tomes, published yearly, include such things as tide tables, dates of eclipses and the movements of celestial bodies, and religious festivities. They recommend planting times for crops, give weather forecasts, and record the rising and setting of the sun and moon.³ Yet none of these things truly indicates the inevitable passing of time. Only one thing changes on a molecular level from second to second. Think of the moment before and after a baby is born; of the instant when your loved one is still taking in breath and the moment when they are gone; of the moment when you are present tense and the moment when you are past. A loss of heat is the only thing that indicates the passing of time.⁴ The more I have studied time, the more ethereal it becomes, manifesting like water in its different forms; much like a snowflake, it melts away the longer you hold it or try to study it.⁵ And much like snowflakes, each person’s experience of time is different. It cannot be regulated. Time is a personal manifestation of our perception of the space we occupy, truly unique to each of us. It is a strange fact that our heads age faster than our feet: a shorter person is younger than you if you were born at the same instant in time.⁶ Even if time could be measured by some concrete means, our experience of time changes throughout our lives due to physical changes that occur in our brain.⁷ We cannot hold time, possess it, buy it, earn it, or commodify it. It may be the one thing we cannot commodify. Our experience of time changes, one day based on what we have gained and another through what we have lost, or, more concisely put, what has changed.

Al-Jaziri’s candle clock (01305). Source: Freer Gallery of Art.

Perhaps one of the oldest methods of telling time through a loss or change principle is the candle clock. The earliest ones were often long thin candles with marked intervals to indicate the passing of hours as the candle burned down.⁸ Later variations included dials and even automata.⁹ The chemistry of a candle, simply explained, is as follows: you light the candle and the heat from the flame melts the wax, which becomes liquid. This liquid is then drawn up into the wick via capillary action. The heat from the flame vaporizes the liquid wax, turning it to gas, which is then drawn into the flame, creating heat and light. Enough heat is created to continue this cycle until the wax is exhausted.¹⁰

Chinese incense clock. Source: Science Museum Group(CC BY-NC-SA 4.0).

Incense clocks work in a similar fashion and at times were just as elaborate with bells and gongs, pulleys, and dials. The simplest form was that of an incense stick calibrated to burn at a known rate of combustion. Hours, minutes, and days were passed in witness of the incense stick.¹¹ Yet these forms of telling time through loss are based on confined, predictable, known systems. Our time is not. Our bodies are not like candles or incense sticks and yet we deteriorate with time, changed by factors such as our environment, toxins, or disease that can accelerate the deterioration of our bodies. Change is the body’s way of knowing time.
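The principle behind both clocks is simple enough to state as arithmetic: elapsed time is the length consumed divided by the calibrated rate of combustion. This small sketch of that calculation is my own illustration (the lengths and rate are hypothetical, not from any historical clock):

```python
def elapsed_minutes(original_length_cm: float,
                    remaining_length_cm: float,
                    burn_rate_cm_per_min: float) -> float:
    """Time told by loss: how long a stick calibrated to a known
    rate of combustion has been burning."""
    burned = original_length_cm - remaining_length_cm
    return burned / burn_rate_cm_per_min

# A 30 cm incense stick calibrated to burn 0.25 cm per minute,
# now measuring 15 cm:
print(elapsed_minutes(30.0, 15.0, 0.25))  # 60.0 minutes have passed
```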

This may not leave one feeling very grounded in their experience of time, yet our individual perception is all that we have. Life by nature is fleeting; it does not outlast time. Our life is finite and time continues, and that is one of the great condolences time can offer. When loss is too great to bear, remember the age-old adage, "everything passes with time." There is wisdom in this idea, carried across cultures. Among the Cheyenne, there was a saying told to those ailing, going into battle, or suffering the losses that life brings,

My friends,

Only the stones

Stay on Earth forever

Use your best ability¹²

Though stones change, they do stay. They lose their original primeval form, eventually becoming something recognizable only through magnification. Their erosion is an indicator of time, much like seasons. The degradation of all materials, organic and inorganic, is irreversible and inevitable. To calculate the passing of time through the lens of water eroding stone is to take up nature's own experience of time. Time here is based on the flow rate of the river: it is season-based, environment-based, climate-based, degradation-based, and it is affected, both negatively and positively, by the cumulative actions of human beings.

Alaska River Time engages a network of glacial and spring rivers to regulate a new kind of clock, which speeds up and slows down with the waters. The clock can be used to recalibrate all aspects of life from work schedules to personal relationships. Source: Alaska River Time.

The Alaska River Time project of Jonathon Keats brings about an intentional unification between nature’s experience of time and our perception of its passing, while bringing to light our direct impact on it. We are both forced to bear witness and invited to engage. It is not unlike the time realized in our bodies, but here through known bodies of water.

I'd like to say that River Time can offer a more accurate timekeeping system than the finest atomic clock, quartz watch, or mechanical timekeeper, as it provides a true reflection of time through real change. I realize that it is unpredictable, and that the flow rate of a river depends on many factors the river cannot control but can only experience. Perhaps that unpredictability is its greatest asset.
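The mechanism the project describes, a clock that speeds up and slows down with the waters, can be sketched as a rate-scaled accumulation of ordinary time. This is a hypothetical illustration under my own assumptions (the function name, baseline flow, and sample figures), not the Alaska River Time implementation:

```python
def river_time_elapsed(flow_samples, baseline_flow, sample_interval_s):
    """Accumulate river-seconds: each real-time interval is scaled by the
    ratio of observed river flow to a chosen baseline flow."""
    elapsed = 0.0
    for flow in flow_samples:
        elapsed += sample_interval_s * (flow / baseline_flow)
    return elapsed

# Three one-minute intervals at exactly baseline flow: river time = wall time.
print(river_time_elapsed([100, 100, 100], 100, 60))  # 180.0 river-seconds
# A spring surge followed by a slowdown: the clock runs fast, then slow.
print(river_time_elapsed([200, 200, 50], 100, 60))   # 270.0 river-seconds
```

A work schedule recalibrated to such a clock would stretch and compress with the seasons, which is precisely the point.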


[1] Jonathan Betts, John Harrison: inventor of the precision timekeeper.

[2] "The term almanac is of uncertain medieval Arabic origin; in modern Arabic, al-manākh is the word for climate," from the Encyclopaedia Britannica.

[3] Encyclopaedia Britannica.

[4] Carlo Rovelli, The Order of Time.

[5] Carlo Rovelli, The Order of Time.

[6] Carlo Rovelli, The Order of Time.

[7] David Eagleman, Livewired: The Inside Story of the Ever-Changing Brain.

[8] H.H. Cunynghame, Time and Clocks: A Description of Ancient and Modern Methods of Measuring Time.

[9] Alfred Chapuis, Le Monde des Automates.

[10] Encyclopaedia Britannica.

[11] N.H.N. Mody, Japanese Clocks.

[12] Paul Goble, The Boy and His Mud Horses: and Other Stories from the Tipi.


Alfred Chapuis and Edouard Gélis, Le Monde des Automates: Étude Historique et Technique (Paris: 1928), Pages 51–68.

Britannica, T. Editors of Encyclopaedia. “Almanac.” Encyclopedia Britannica, January 25, 2018.

Britannica, T. Editors of Encyclopaedia. “Candle.” Encyclopedia Britannica, July 20, 2019.

Carlo Rovelli, The Order of Time (New York: Riverhead Books, 2018), Pages 3, 10, 25.

David Eagleman, Livewired: The Inside Story of the Ever-Changing Brain (New York: Pantheon, 2020).

H.H. Cunynghame, Time and Clocks: A Description of Ancient and Modern Methods of Measuring Time (Detroit: Single Tree Press, 1970), Page 46.

Jonathan Betts, John Harrison: inventor of the precision timekeeper. Endeavour Volume 17, Issue 4, 1993, Pages 160–167.

N.H.N. Mody, Japanese Clocks (Japan: Charles E. Tuttle Company, Inc., 1977), Plate 114.

Paul Goble, The Boy and His Mud Horses: and Other Stories from the Tipi (China: World Wisdom, Inc., 2010).

Recommended Reading

  • Desert Solitaire by Edward Abbey
  • The Order of Time by Carlo Rovelli
  • The Sound of a Wild Snail Eating by Elisabeth Tova Bailey

Learn More

  • Watch Brittany Cox’s 02019 Interval talk, “Horological Heritage.”
  • Watch Jonathon Keats’s 02015 Interval talk, “Envisioning Deep Time.”
  • Pre-order Jonathon Keats’s forthcoming book, Thought Experiments: The Art of Jonathon Keats.

This essay was commissioned by the Anchorage Museum and was originally published on the Alaska River Time website.


LongNowLong Now Member Ignite Talks 02020

With thousands of members from all around the world, from artists and writers to engineers and farmers, the Long Now community has a wide range of perspectives, stories, and experience to offer.

On October 20, 02020, we heard 12 of them in a curated set of short Ignite talks given by Long Now Members. What’s an Ignite talk? It’s a story format created by Brady Forrest and Bre Pettis that’s exactly 5 minutes long, told by a speaker who’s working with 20 slides that auto-advance every 15 seconds (ready or not).

These 12 Ignite talks ranged from geeky to fanciful, poignant to educational, each with some fresh angles on long-term thinking. We're pleased to share them with you below.

Collaborating with Insects

Catherine Chalmers

Long Now Member Catherine Chalmers guides us through her multimedia “American Cockroach Project”—a 10-year investigation into humanity’s adversarial relationship with nature.

Activism as Futurism: Imagining Better Worlds

Allison Cooper

Long Now Member Allison Cooper encourages us to widen our windows on what is possible, plausible, probable, and preferable.

Change Agents (and How to Become One)

Danese Cooper

Long Now Member Danese Cooper shares a personal journey — of being changed by the world, and changing the world.

Instant stone (just add water!)

Jason Crawford

Long Now Member Jason Crawford shares the story of concrete, a sufficiently advanced technology indistinguishable from magic.

Plastic Mathematics in the Clock

Stewart Dickson

Long Now Member Stewart Dickson recounts the Equation of Time’s journey from mathematical equation, to 3D model, to machined-metal cam for the Clock of the Long Now.

Deep Fakes & The Archaic Revival

Michael Garfield

Long Now Member Michael Garfield tells a story about the end of reality. Not the end of the world, but the end of the idea of one consensus world.

The Great Dead End

Quentin Hardy

Long Now Member Quentin Hardy uses the historical example of Siena, Italy to suggest that our present plague-year will have downstream cultural effects for generations.

The Future of Storytelling

Asmara Marek

Long Now Member Asmara Marek points at paths forward for the future of storytelling.

Our future drugs will come from the oceans; Can we save them in time?

Louis Metzger

Long Now Member Louis Metzger explains how our individual and collective well-being is intimately dependent on the preservation of ocean biodiversity.

Leways: The Story of a Chinatown Pool Hall

Marc Pomerleau

Long Now Member Marc Pomerleau gives us a glimpse of a Chinatown past, and a vision of its vitality rediscovered in a Chinatown future.

Art and Time

Madeline Sunley

Long Now Member Madeline Sunley shares her ideas & process for making oil paintings of marking systems for communication with the far future.

A Longer Now

Scott Thrift

Long Now Member Scott Thrift creates analog tools that tune our awareness to the perennial cycles of the day, the moon, and the year, so we can collectively rediscover the original nature of time–and a longer now.



LongNowPodcast: The Transformation | Peter Leyden

A compelling case can be made that we are in the early stages of another tech and economic boom in the next 30 years that will help solve our era’s biggest challenges like climate change, and lead to a societal transformation that will be understood as civilizational change by the year 02100.

Peter Leyden has built the case for this extremely positive yet plausible scenario of the period from 02020 to 02050 as a sequel to the Wired cover story and book he co-authored with Long Now cofounder Peter Schwartz 25 years ago, The Long Boom: A History of the Future, 1980–2020.

His latest project, The Transformation, is an optimistic analysis of what lies ahead, based on deep interviews with 25 world-class experts looking at new technologies and long-term trends that are largely positive and could come together in surprisingly synergistic ways.

Listen on Apple Podcasts.

Listen on Spotify.


Chaotic IdealismShould you tell your date you’re asexual?

If you’re not familiar with asexuality, here’s a brief definition: Asexuals are people who aren’t sexually attracted to anybody. Many asexuals don’t want sex; some are outright disgusted by the idea of having sex, while others merely find it boring.

Some asexuals will have sex with their partners, the way you might attend a football game with your sports-fan partner even if you don’t like sports yourself; some asexuals are sex-positive, meaning they don’t feel sexually attracted to anybody, but do enjoy having sex when they get the opportunity. For demisexuals, sexual attraction emerges only once they are already deeply connected, emotionally, to another person. Asexuality is a sexual orientation, just like being bi, straight, or gay.

So… should you tell your date you’re asexual? And if so, when?

First, and most importantly: No asexual should ever have to feel like they have to disclose their sexual orientation just to protect themselves from being forced to have sex before they feel they’re ready. “I didn’t know they were asexual” is not a valid reason for your date to push you into sex, because there is never a valid reason to do that. If you say “no” and your partner pressures you anyway, that’s a huge red flag that they don’t respect you; that’s not the sort of person you want as a partner. Dump them, and don’t look back.

Obviously, if you're the sort of asexual who finds sex disgusting, or so boring you'd rather watch paint dry, and your date is looking for a relationship that, if successful, will eventually become sexual, then you need to tell them right away, preferably before you're on a date to begin with; otherwise you're wasting your time and theirs.

But things get more complicated for non-sex-repulsed aces and demisexuals. If you're open to sex, then you aren't automatically incompatible with someone who wants sex, so you wouldn't be wasting their time by not telling them immediately. Once you have a more mature relationship, it'll be natural to tell them everything about yourself, including your asexuality. Or you can tell them right away (and I recommend it, because I think it's good to have everything in the open at once, whether that's asexuality, or disability, or religion, or your desire to have six children or no children at all); but you are not obligated to do so.

If your friends are the sort who start having sex while dating only casually, then you might not realize how common it is for people to wait until they feel deeply attached or have formalized their commitment. Even allosexuals don't all jump right into bed with one another. Some wait for marriage, or for deep, true love. Some simply don't enjoy casual sex. Before birth control, waiting was held up as the universal ideal, to keep couples from having a baby without a family ready to raise one; but just because we have birth control doesn't mean we have to rush right into sex. There are many valid emotional, social, philosophical, and religious reasons to want to wait.

Those who want to wait to have sex are often shamed as being “prudish” because they turn down sex when it’s offered; or they’re told they’re “admirable” for waiting for marriage, as though it were the default to want to have sex, and anyone who said “no” must be denying themselves. That can be hard to deal with, especially in a world where sex is wedged into every storyline, used as an enticement in advertisements, and seen as a “basic human need” right up there with oxygen.

You can tell them right away that you are ace, and that your attraction to them isn’t sexual–it’s romantic, or perhaps platonic. If you are demisexual, you can tell them that you won’t feel like having sex unless you have a deep connection. You can put it right in your dating profile or on your social-media accounts. Or you can wait until the topic of sex comes up.

If you get the impression that the other person expects a hookup for casual sex, and that’s not what you’re looking for, then make sure you’re on the same page. If the other person looks to be trying to initiate a sexual relationship, then tell them. You can use words like “demisexual” or “sex-positive asexual”, if you like, or you can just explain it by describing what you personally need to feel comfortable with sex. Just remember that if a relationship is respectful and mature, as it should be, nobody will be forcing anyone into anything they don’t want.


Chaotic IdealismWe Give Words Their Power

Recently, a friend of mine posted a meme recommending that we use the term "enslaved people" rather than "slaves", because being a slave is a circumstance rather than an identity. I did not think this was particularly useful; it misses the point. The important thing, when teaching about slavery, is to teach from the perspective of the slaves themselves, so that the student never forgets that the slaves are fellow humans rather than objects, and that they were made property despite their intrinsic human equality with their legal masters.

I have often been confronted with the need to change my language this way. When language changes more quickly than I can keep up, I often find myself misunderstood because I use the old words and people hear the new definitions. It happened when I was told that I could no longer say "all lives matter", because it now meant that the only lives that did matter were the white, neurotypical ones. It happened when I was told that, in mourning the genocide of the Roma during the Holocaust, I could not call them Gypsies, but must call them Roma, because that is what they call themselves. I never seem to be able to change my words as quickly as neurotypicals do. It can get frustrating.

Most of the time when this happens, I do change my language, because I recognize that neurotypicals burden words with all sorts of things not in the words’ actual definition, and then when I say them, they hear all those extra meanings too. If I want to communicate, I have to keep up. But it bothers me a great deal, for several reasons.

First, it seems that people substitute a change in language for a change in behavior. One simply cannot say the n-word without being immediately branded a racist (for an experiment, imagine what you might think of me if I had not censored it). With the extra meanings loaded onto that word, that is exactly what it means now: “I am a racist.” And if you say it, you are saying you are not just a racist, but a proud racist.

But although this word has become a taboo, many other things that are more hurtful to black people than a word will ever be are not taboos. White people say they want to live in a good neighborhood; they mean they want to live outside a poor black neighborhood. They send their child to a "good school", and leave the underfunded, crowded public schools for the black children. White people casually hire other white people for jobs, choose them as friends, date them, and generally perpetuate, informally, segregation. None of this is taboo the way the n-word is. People who would never say the n-word will happily act in ways that say, "I am a racist".

Because of this language taboo, saying you are a racist has become more shunned than actually acting like a racist.

Second, language is being used as a password into liberal, socially-conscious circles. If one does not say the right words, one is assumed not to care about human rights. The focus has changed. Instead of policing one another’s actions, people police one another’s language. A person who has not lifted a finger to help empower the minority groups in their own community can, with the full consensus of their social circle, brand another person as the enemy–even if the other person has been spending a great deal of time and effort working toward equality. Saying the right words has become a substitute for doing the right thing.

I’ve seen the same phenomenon in a very different milieu–that of fundamentalist Christianity. One must say the right words, pray the right prayers, or one is an outsider. Words are given near-magical power.

In fundamentalist circles, to use any kind of “bad language” is to be immediately castigated (and I don’t mean using God or Jesus as swear words, which would be understandable as it shows a lack of respect. Rather, it is the simple scatological and sexual language that is considered most sinful). But it is completely permitted to insult, belittle, or bully someone without that sort of language, especially if one can put it in polite terms. I have heard “God bless you” being used as a patronizing insult–multiple times.

There are superstitions surrounding language. People use "In Jesus's name" to close out a prayer, with the belief that if one does not pray in Jesus's name, God will not hear. They talk about becoming a Christian by saying the right words–that one repents of sin, asks for forgiveness, and asks Christ into one's heart–and believe that one cannot be a Christian unless one has said those words, whether or not one lives according to them.

Fundamentalists also identify one another, and exclude outsiders, by the use of language. There are so many words that are loaded with a ton of meaning outside their literal definition that communicating with a fundamentalist, in their own language, is like crossing a minefield. Terms like “God’s will”, “persecution”, “sinner”, or “end times”, come so loaded with meaning that anyone who hasn’t spent years in that culture will immediately sound like an outsider when they open their mouths. They too have fallen into the trap of policing one another’s language rather than their behavior.

It is so very similar to what I see in liberal circles, and that troubles me. Groups can lose sight of their purpose in this endless quest to affirm and reinforce their group identity, because they give language so much power.

What’s done is done: Once a word has been given a meaning, we can’t take it back. But should we really be looking for more words we can load with negative meanings and declare taboos? From where I stand, all that does is power the euphemism treadmill. People like me go from being called cretins, to morons, to retarded, to developmentally delayed; and all the time, we are treated as second-class citizens no matter how much the label changes.

As an autistic person, language is not my first language. Language is only what I translate my thoughts into when I want to communicate them to others. Yet neurotypicals seem convinced that words are thoughts and language is reality. Some even believe they can affect reality by saying the right words: Every tradition of magic, whether cultural or fictional, has to do with saying the right words, making the right gestures, and/or creating the right symbols. Does that sound familiar? It should; the ways of magic are also the ways of language, whether written, gestured, or spoken.

Neurotypicals give language power, and because culture is as real as any other idea, language is indeed granted the power they give it. But this is not intrinsic power. Language has only the power we give it, and we are giving it too much power.

As a language-user, I have no choice but to tiptoe across the minefield of connotation. If I say the wrong word, people hear things I am not saying or believe things of me that are not true. I have to spend a lot of time and effort on updating my language rather than actually doing useful things to mitigate or overturn the social systems that created the desire to linguistically distance ourselves from the atrocities associated with them. But it bothers me, because the more we focus on linguistic distance, the more we seem to lose focus on the need to actually change the way the world works.

If only we did not give language so much power, we would be much better off.

LongNowEvan “Skytree” Snyder on Atomic Priests and Crystal Synthesizers

Image for post
Evan “Skytree” Snyder in his studio. Source: Facebook.

Evan “Skytree” Snyder straddles two worlds: by day, he is a robotics engineer. By night, he produces electronic music that drops listeners into lush atmospheres evocative of both the ancient world and distant future.

We had a chance to speak with Snyder about his 02020 album Infraplanetary and his recent experiments with piezoelectric musical synthesis. Both projects ratchet up themes of deep time, inviting listeners to meditate on singing rocks and post-historic correspondences.

Our discussion has been edited for clarity and length.

Let’s talk about the lyrics to “Atomic Priest” off Infraplanetary.

An excerpt:

“This is for the humans living ten thousand years from now
With radioactive capsules, thousands of feet underground
Grabbin’ the mic to warn you of these hazardous sites
For those who lack in the sight in the black of the night
The least good that we could do is form an Atomic Priesthood
To keep the future species from going where no one should
We’ve buried the mistakes of past nuclear waste
Hidden underground for future races to face
It’s our task to leave signs for civilization to trace
But who’s to say what language these generations will embrace?
Basic symbols up for vast interpretation
Disasters resulting from grave mistranslation
This is not a place of honor and glory
This is a deep geological nuclear repository
Reaching through millennia to give some education
And preserve the evolution of beings and vegetation.”

These are hip-hop artist Jackson Whalan’s words, but you prompted him to write a fairly specific piece about communicating to the distant future. What motivated you to make this, and how does it fit into the way you consider and communicate deep time concerns in the rest of your work?

Skytree: I really appreciate the opportunity to discuss this with you. “Atomic Priest” is definitely inspired by my lifelong fascination with deep time — specifically its effect on design principles, engineering challenges, and bridging cultures. I’m intrigued by things that endure, how they endure, and why. The simple practice of considering the long-term is uniquely inspiring, and compared to the relative chaos of the present I find some refuge and meditative calm when reflecting on the decamillennial scale.

The long view also shows up in my process as a music producer. Building compositions is a months-long solo endeavor within my audio workstation. It's an obsessive, detailed, and laborious process, and the deeper timescales I consider while composing show up in the product. I'm mindful of making the end result feel timeless, out of sync with everyday chronology.

Collaboration makes the work less lonely. The lyrics to "Atomic Priest" were indeed written by Jackson. When I sent him the instrumental to record over, I already had a title and theme, and included an article describing the unique challenges of the EPA's Waste Isolation Pilot Plant (WIPP), an attempt to contain nuclear materials that remain lethal for over 300,000 years. When I first read the article in 02006, I was captivated by the project's concept sketches of how one might warn unknown future civilizations about nuclear contamination. I then researched the EPA's Human Interference Task Force and the work of linguist Thomas Sebeok, which I also provided to Jackson for reference. I was thrilled with the result. Combining something as contemporary and human as hip-hop with a subject so immense in scale feels very satisfying to me.

Image for post
Skytree’s song, “Atomic Priest,” was inspired in part by the Waste Isolation Pilot Plant in Carlsbad, New Mexico. Source: Center for Land Use Interpretation.

You’re touching on something that goes deeper than the future-chic aesthetic of many other electronic artists.

Futurism has always been an inspiration, but on this album I tried to go a bit deeper with it than just sounds or spaces. I often stepped back and reflected on what it might sound like to someone in the deep future, in the unlikely chance they’d find it. What sort of “message in a bottle” might surprise them, excite them, deviate from what they’d expect to find, or feel like a knowing hand-shake from the past?

This potential for a two-way dialogue between entities separated by eons is one of the most tantalizing aspects of thinking in deep time scales. The Voyager craft are of course excellent literal vehicles for this potential, designed in the hope that they will one day be found, perhaps light years from our star system and far, far in the future, by intelligences we may never meet or learn of, but who will realize we intended them to find this message. That is perhaps about as close to a real time machine as we may ever get. I'd like to think this album is the best result I've achieved to that end.

Image for post
A still from Skytree’s music video for “Out There.” Source: Instagram / Skytreemusic.

I want to talk with you more about your current project, linking up piezoelectric sensors to crystals to send CV signals to modular synthesizers. As someone who actually ate Moon dust as a kid, can you please wax philosophical about making music from stones, and what it is about this that stimulates your artistic or scientific imagination?

My grandfather was the chief of security at NASA during Apollo, and served there for 25 years. One of his most recognizable accomplishments was that he was personally responsible for safely transporting Moon specimens for public viewing and analysis from the NASA archives to the Smithsonian, where many of them are still on display today. He accompanied them on the last leg of their journey to the public’s eye. As a kid, I remember visiting the Smithsonian with my family and marveling at how he was a small but notable part of that incredible accomplishment.

Shortly thereafter, I took that a step too far and snuck a small taste of his personal sample of Moon dust while he was mowing his lawn. I was 8 years old. I remember carefully observing how long it took him to mow the lawn, when it obstructed his view into his house, where he kept his display case keys in his home office, and noting where the small step stool was that I needed to reach the top shelf. It wasn't so much out of mischief, though outfoxing NASA's former chief of security, as a child, over the very artifacts he was duty-bound to protect…feels pretty funny now. Rather, it was more out of a genuine need to try it. Something in me just had to see if I could eat part of the Moon. I did. It tasted chalky, powdery — about what you'd expect. If he were still alive today I wouldn't dare share this story. He was a hardass and not someone to cross. (Rest in peace, Grandpa.)

So, my love of rocks goes pretty deep. For years, my artist bio has read, “sounds generated by minerals, plants, animals and artifacts.” This used to be tongue-in-cheek, avoiding genres, but I am now quite literally making sounds generated by minerals and plants, plus my already extensive use of animals and artifacts.

This series of experiments scratches a very particular itch. My favorite areas of any museum have always been geology and mineralogy. I remember staring into displays filled with crystals for so long my parents would have to pull me away — especially if they were interactive, illustrating principles like stratification, fossilization, or piezoelectricity. Ever since learning about the use of piezoelectric resonators and components in everyday electronics like radios and computers, I couldn’t help but wonder…could this same effect be demonstrated on a raw quartz point? It turns out it’s not even that difficult.

Just weeks ago, I found a successful method for turning raw quartz pieces in my collection into surprisingly effective piezoelectric pickups. Though I'd used standard factory-made piezos for years, imparting vibrations to the surface of a crystal and hearing them come ringing through my headphones was an absolutely magical moment. All that's needed is some copper tape, copper wire, the right leads, and some amplification and signal processing to remove noise. Two electrodes are taped on opposite faces of the crystal point; one of the three sets of faces tends to work best and provides the greatest voltage output. Some crystals work better than others.
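In software, the "amplification and signal processing to remove noise" stage might look something like the following. This is a minimal sketch under my own assumptions (a sample buffer already digitized; the function name, 0–5 V CV range, and smoothing window are illustrative), not Skytree's actual signal chain:

```python
def piezo_to_cv(samples, cv_min=0.0, cv_max=5.0, window=5):
    """Turn a raw piezo sample buffer into a synth-friendly CV stream:
    strip the DC offset, rectify to a crude envelope, smooth it with a
    moving average, and rescale into the target voltage range."""
    n = len(samples)
    mean = sum(samples) / n
    envelope = [abs(s - mean) for s in samples]      # DC removal + rectify
    half = window // 2
    smooth = []
    for i in range(n):                               # moving-average smoothing
        segment = envelope[max(0, i - half): i + half + 1]
        smooth.append(sum(segment) / len(segment))
    peak = max(smooth)
    if peak == 0:                                    # silent buffer -> floor CV
        return [cv_min] * n
    return [cv_min + (cv_max - cv_min) * v / peak for v in smooth]
```

Feeding such a normalized envelope out through a DC-coupled audio interface is one common way software-derived control voltages reach hardware synth modules.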

Image for post
Skytree uses a transducer to vibrate a crystal and records the output via piezoelectric signal. Source: Instagram / Skytreemusic.

At first I went for the tried-and-true approach of simply whacking on these specimens with a mallet, but I've gotten more refined with it. Using a function generator (output from a fancy oscilloscope) and a transducer (effectively a speaker without the cone), I've been able to impart specific frequencies onto quartz specimens, find resonant points, and record the resulting audio. Moreover, I've been able to use this piezoelectric signal as control voltage for my modular synth. I can't overstate how much excitement and motivation this brings me, and how happy I am to share it. There's something incredible about using relatively unaltered geological specimens, perhaps hundreds of thousands of years old, in a modular synthesizer in 02021. It feels like a very raw and timeless dialogue between my creative self and immense forces of nature.

Image for post
Still from a video of Skytree explaining his modular “geosonification” rig. Source: Instagram / Skytreemusic.

I’m already imagining the crystal keyboard in the dash of Carl Sagan’s Ship of the Imagination, only it’s a Moog.

I’ve also been experimenting with using conductive specimens like meteorite and native copper as crude theremin antennae, to send control voltage to synth modules. This is far easier to set up than the piezoelectric experiments, but nonetheless highlights important and useful physical principles of these materials. My next experiments will involve pyrite, which shifts from an insulator to semiconductor to conductor depending on the strength of the magnetic field it’s exposed to. An electromagnet is sitting on my desk and ready to aid my continued explorations of literal rock music. For the time being, I’m calling this process “geosonification” as a nod to using plants in synthesis under the guise of “biosonification.”

It gives me a way to integrate my loves of music and science and make mutually reinforcing discoveries. With music, I often discover more about myself. With science experiments, I discover more about the world. Combining the two, I get both. It keeps me playing and interested. I’m not an exceptionally talented instrumentalist, but this gives me a way to tread new ground using some of the oldest tricks on Earth.

Since you mentioned plants, and as far as leaving a record for the future is concerned, we’re having this exchange in the context of the growing popularity of attaching sensors and MIDI converters to plants, and sonifying data in general. Data sonification seems key in the ongoing work of making multiple spatiotemporal scales easier to grasp and work with. And “letting plants speak” in music seems par for the course right now, as the Wood Wide Web becomes a colloquial idea and we collectively grapple with the ideas of personhood for companies or ecosystems operating on vastly different timescales.

Yeah, to the point of piezoelectricity and plants, I have a synth module that turns subtle variations in capacitance from a plant, person, or other semiconductor into usable control voltages. My dad has been a huge inspiration with all this. He recently retired after 27 years in the National Park Service as Midwest region radio manager. Growing up, there were always electronics around; I was exposed to the fundamentals of these technologies pretty early on and first burned my hand on a soldering iron when I was ten.

One of the most fascinating stories my dad ever told me was about an unexplained vast radio deadzone in National Park land. It turned out that a miles-long row of trees had grown into an old line of forgotten barbed wire fence. This grid of metal wire turned the electrolytic trees into a giant capacitor, which significantly disrupted radio propagation in the entire region. That’s a pretty seamless, unintended, and unexpected blend of nature and technology. It’s also a reminder there really is a hidden dimension of energy running through things, and sometimes you find it by accident.

That’s a fine place to end this.

Thanks, Michael and Long Now, for your inspiring work, and thank you to all the long-view thinkers out there that share a sense of wonder, awe, and stillness when gazing into the unknowable future.



Chaotic Idealism: Social rules for fat people, as observed by a fat person

  1. Never be seen eating, unless you are eating something exceedingly low-calorie and tasteless, such as a plain rice cake or a dry salad.
  2. Never admit to enjoying food.
  3. Never talk about your favorite food, your favorite restaurant, your favorite flavor, etc. You are not allowed to have these.
  4. Always be on a diet. Always.
  5. You cannot eat too little; fat people can never suffer from malnutrition or starvation.
  6. Never admit to over-eating.
  7. You are not allowed to exercise in public. People don’t want to see you moving, especially if you are wearing tight clothing.
  8. Do not go to a gym. You do not belong there.
  9. Do not participate in team sports or any form of athletic competition. You do not belong there.
  10. Do not go to a swimming pool, or the beach. You do not belong there.
  11. You are not allowed to complain when your doctor treats you as a second-class citizen. You deserve it.
  12. You are not allowed to complain when you physically cannot fit into a small seat. This is your fault.
  13. Do not acquire a physical disability that forces you to use any form of mobility assistance, especially a motorized scooter. This will be judged to be the result of your fat, and your refusal to lose your fat.
  14. Do not point out that losing weight has a lower success rate than quitting heroin. That doesn’t matter. Besides, it probably isn’t true because all those people on the infomercials lost weight, so it must actually be easy.
  15. Do not ever claim to be disciplined or responsible. You are obviously neither.
  16. You are not allowed to enjoy any part of your looks or your body, especially not anything related to your fat. For example, you are not allowed to appreciate your curves, your ability to move heavy objects, or your ability to stay put even if someone tries to move you.
  17. You are required to endanger your health with any and all weight-loss supplements, medications, or fad diets that come your way. Otherwise, you are not trying.
  18. You are required to appreciate others’ wise recommendations, such as, “It’s easy; just eat less,” or, “You should go jogging once in a while,” and act like you never thought of them before. You haven’t, right? After all, if you had, you’d be thin.
  19. You are not allowed to have an eating disorder. You’re obviously too fat for any kind of anorexia or bulimia; and binge-eating disorder is just another way to say “undisciplined”.
  20. You are not allowed to eat tasty food, even in private, without feeling guilty about it.
  21. Anything you could possibly eat can be interpreted as the cause of your fat. If you eat rice, you’re fat because you are eating carbs; if you eat chicken, you’re fat because you’re eating meat; if you eat salads, you are obviously getting fat from salad dressing. You can never eat the right thing.
  22. When accused of eating too much fast food, never claim that you think fast food is bland and you practically never eat it. You obviously eat way too much fast food, because you are fat.
  23. If you drink diet Coke, that is why you are fat. If you drink Coke with sugar in it, that is why you are fat.
  24. When your large size causes a problem of any sort, it is your fault, not the fault of the person who designed your environment not to be accessible to fat people.
  25. You are not allowed to wear revealing or tight clothing.
  26. If you wear loose clothing, you must admit that it is because you are ashamed of being fat, rather than because you find loose clothing comfortable.
  27. “You’ve lost weight” is a compliment, even if it comes after a two-week bout with the stomach flu and you’re feeling like death warmed over.
  28. If you get cancer, people will a.) assume it is your fault because of your fat, and b.) reassure you that at least chemo will make you lose weight (even though quite the opposite may be true). You are required to act as though this is encouraging.
  29. If you get sick, it is because you are fat. It cannot be due to germs, your genetics, your environment, or your simple bad luck.
  30. If you are injured, you should lose weight; your fat is preventing the injury from healing.
  31. If you take medicine to stay healthy, it is because you are fat.
  32. If you have a mental illness, it would resolve if you lost weight.
  33. If you have a physical disability, you would be cured if you lost weight.
  34. If you are mocked for a reason completely unrelated to being fat, you will also be mocked for being fat.
  35. Thin people will get the job you wanted. This is just, because you are obviously less responsible.
  36. Thin people will also get their diabetes, heart disease, high cholesterol, sleep apnea, etc., diagnosed way too late, because they are thin and could not possibly have diabetes, heart disease, high cholesterol, sleep apnea, etc. Despite this, few thin people will join you when you insist that the medical community stop assuming that diseases like this are inevitable in fat people and never found in thin people.
  37. If you are athletic and can lift more, work longer, or hike circles around your thin friends, you are not allowed to admit this, because you are fat and you obviously cannot.
  38. You are not allowed to find a loving relationship with someone who honestly loves you and your body. They are obviously a chubby-chaser, or a desperate case settling for less.
  39. Anyone who is thin is automatically superior to you.
  40. Anyone who is thin is automatically healthier than you.
  41. You are a second-class citizen, and you deserve it. Stay in your place.

There is a reason I recommend breaking social rules.


LongNow: The Time Machine

Long Now co-founder Brian Eno in front of his 77 Million Paintings generative artwork (02007).

Editor’s Note: This paper was sent our way by its lead author, Henry McGhie. It was originally published in Museum & Society, July 2020. 18(2) 183-197. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. No changes have been made. 

The Time Machine: challenging perceptions of time and place to enhance climate change engagement through museums

By Henry McGhie*, Sarah Mander**, Asher Minns***


This article proposes that applying time-related concepts in museum exhibitions and events can contribute constructively to people’s engagement with climate change. Climate change, both now and in the future, presents particular challenges, as it is perceived to be psychologically distant. The link between this distance and effective climate action is complex and presents an opportunity for museums, as sites where psychological distance can be explored in safe, consequence-free ways. This paper explores how museums can help people develop an understanding of their place within the rhetoric of climate change, and assist them with their personal or collective response to the climate challenge. To do so, we find that two time- and place-related concepts, Brian Eno’s the Big Here and Long Now and Foucault’s heterotopia, can provide useful framings through which museums can support constructive climate change engagement.

Key words: Museums, climate change, futures, engagement, psychological distance

1. Introduction

Climate change presents one of the most serious challenges to human society and the environment, where both reducing emissions and adapting to the impacts of climate change involve major systemic change to society and the economy. Given the scale, nature and speed of these systemic changes, greater public engagement has been considered to be essential for numerous reasons, including the building of democratic support for action (see for example Carvalho and Peterson 2012), and to improve policy making (Pidgeon and Fischhoff 2011), notably through the incorporation of diverse perspectives (Chilvers et al. 2018). From an international climate change policy perspective, the United Nations Framework Convention on Climate Change (UNFCCC) (1992) and Paris Agreement (2015) each include an article on education, training, public awareness, public participation and access to information (article 6, which also includes ‘international co-operation’, and article 12 respectively, referred to jointly as Action for Climate Empowerment).¹ The UN Sustainable Development Goals, a blueprint for international sustainable development from 2015–30, include a goal (13) to ‘Take urgent action to combat climate change and its impacts’; this goal includes a target to ‘Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning’.²

Climate change engagement may be defined as ‘an ongoing personal state of connection’ with the issue of climate change (Lorenzoni et al. 2007: 446; Whitmarsh et al. 2011). As connection incorporates a broad range of aspects that constitute what we think, feel and do about climate change — cognitive, socio-emotional and behavioural aspects — simply knowing more about climate change does not necessarily promote action and, where information provision does not provide people with an understanding of the actions that are needed or is demotivating, it can inadvertently disempower people (Moser and Dilling 2004; O’Neill and Nicholson-Cole 2009). The three elements of climate change engagement — cognitive, socio-emotional and behavioural — approximate to the three domains of the learning model used by UNESCO as a framework for Global Citizenship Education (GCED) and Education for Sustainable Development (ESD); GCED aims to educate people ‘to know, to do, to be, and to live together’, empowering learners of all ages to play an active role in overcoming global challenges (UNESCO 2015: 22; see also UNESCO 2017).

Cognitive, socio-emotional and behavioural aspects connect in non-linear, non-sequential ways, but are iterative and dialogical. Engaging constructively with all three aspects presents a plausible route towards constructive engagement with the topic, allowing people to make sense of climate change in their daily lives, connecting thoughts and concerns with choices and actions (Lorenzoni et al. 2007).

Museums have the potential to be important venues to promote public education, empowerment and action around climate change (see below), and were formally recognized at COP24 in Katowice (Poland) in December 2018 as key sites for supporting Action for Climate Empowerment.³ In this paper, we explore two questions: 1) how can museums help people develop their understanding of what climate change means to them? and 2) how can museums help facilitate a response to the climate challenge? These questions are explored using two concepts, Michel Foucault’s work on heterotopias and Brian Eno’s the Big Here and Long Now. We suggest that these can be used to challenge conventional ways of thinking about time and place, and frame climate change engagement in museums in a way that allows people to negotiate and navigate the psychological distance of climate change in constructive ways. In Section 2 we provide an overview of the potential roles of museums in responding to climate change; in Section 3 we discuss the literature on psychological distance. In Sections 4 and 5 we present Michel Foucault’s work on heterotopias, and Brian Eno’s the Big Here and Long Now, in relation to climate change focused exhibitions in museums.

2. Museums and climate change

Fiona Cameron and her colleagues have written extensively on the role[s] of museums in the context of climate change. They explored the current and potential roles of museums (specifically, natural history museums, science museums and science centres) in society in relation to climate change, in Australia and the US as part of the ‘Hot Science Global Citizens: The Agency of the Museum Sector in Climate Change Interventions’ project (2008–12). Their results demonstrated significant differences between the current and desired roles of museums in respect of climate change among the public and museum workers. The project suggested nine strategic positions for museums to adopt to better meet the desires of their publics, as well as key role changes for science centres and museums (based on large differences between public and museums’ desires for particular positions) (Cameron 2011, 2012). Results of the ‘Hot Science’ project were used to develop a set of nine principles intended to support museums and science centres to act meaningfully on climate change (Cameron et al. 2013).

Cameron (2010) introduced the concepts of ‘liquid museums’ and ‘liquid governmentalities’ to explore how museums can support action and empowerment around contemporary issues such as climate change, without exercising authoritarian control (see also Cameron 2007, 2011). Cameron et al. (2013: 9) wrote

The big task of the museum sector is not only to inform publics on the science of climate change but also to equip citizens with tactical knowledges that enable participation in actions and debates on climate change that affect their futures.

They also suggested that

museums and science centers can engage a future-oriented, forward thinking frame, as places to link the past to the far future through projections of what might happen as places to offer practical governance options and as places to present long-term temporal trajectories. They offer an antidote to short-term thinking and the failure of governments to act, by presenting the variable dispositions, ideologies, and governance options, thereby constructing a mediated view of the future as a series of creative pathways (Cameron et al. 2013: 11; see also Cameron and Neilson 2015).

Notwithstanding the wide potential of museums to contribute meaningfully to addressing the challenges of climate change, Canadian Robert Janes has noted that, for the most part, museums have been slow to incorporate climate change into their work, risking their own long-term relevance (Janes 2009, 2016).

In Curating the Future, Newell et al. proposed that museums can be effective places for supporting discussion and action to address climate change. Through a wide range of case studies that read or re-read objects and exhibitions in the context of rapid climate change, they explored how contemporary museums have been adjusting their conceptual, material and organizational structures to reposition themselves on four deeply rooted trajectories that separate colonized and colonizer, Nature and Culture, local and global, authority and uncertainty (Newell et al. 2017).

Rather than direct their attention to protecting material from the past, museums can direct their work (the full range of their work, including collecting and public-facing work) towards supporting and enabling better futures more actively. Natural history museums and science centres could readily engage around contemporary issues such as climate change and other environmental topics (as could many other kinds of museums) to become ‘natural futures museums’; military museums could focus on topics around the causes and consequences of contemporary wars in order to reduce future conflicts; and ethnographic museums could emphasize issues around cultural diversity and identity in the face of globalization and social inequality (see e.g. Basu and Modest 2015; Dorfman 2018). This approach recognizes the interconnectedness of different forms of heritage — material, natural, cultural and intangible — and connects with emerging ideas of heritage as a future-making practice, e.g.

heritage is not a passive process of simply preserving things from the past that we choose to hold up as a mirror to the present, associated with a particular set of values that we wish to take with us into the future. Thinking of heritage as a creative engagement with the past in the present focuses our attention on our ability to take an active and informed role in the production of our own ‘tomorrow’ (Harrison 2013: 4).

In previous work, we have proposed sets of recommendations for museums, to support them to develop constructive climate change engagement activities (McGhie et al. 2018; McGhie 2019). The present paper builds on these contributions, by providing a more theoretical framework drawing on applied social psychology perspectives.

3. Psychological distance, climate change and museums

From the perspective of many in the Global North, climate change is widely perceived to be a distant phenomenon, something which will happen in the future, in far-away places (so impacting most on those in the Global South), and which has great uncertainty associated with it in terms of the likelihood, scale and nature of impacts. The proximity of climate change can be usefully described in terms of ‘psychological distance’, a theoretical construct defined as ‘a subjective perception of distance between the self and some object, event, or person’ (Wang et al. 2019). Four dimensions of psychological distance have been identified: temporal distance (time), spatial distance (place), social distance (cultural difference), and hypothetical distance (certainty or uncertainty) (Trope and Liberman 2010). These, together, describe the ‘perception of when [an event] occurs, where it occurs, to whom it occurs and whether it occurs’ (Trope and Liberman 2010: 442, quoted in Wang et al. 2019: 2).

As the need to mitigate climate change becomes more urgent (Committee on Climate Change 2019a, 2019b) and climate impacts are felt more strongly (see for example Burke and Stott, 2017; Van Oldenborgh et al. 2017), the influence of the proximity of climate change on people’s decisions to reduce their greenhouse gas emissions or adapt to climate impacts has been suggested as ‘a promising strategy for increasing public engagement with climate change’ (Jones et al. 2017). Reducing psychological distance has frequently been suggested as a means of increasing public engagement with, and action to address, climate change (see Schuldt et al. 2018 for references). There is indeed evidence from several studies that public concern about climate change decreases as the psychological distance of climate change increases, but this is not a simple or straightforward panacea (see Wang et al. 2019 for references). Exploring whether pro-environmental behaviour was best predicted by concrete, close perceptions of climate change (psychological closeness), or abstract, distant perceptions (large psychological distance), Spence et al. (2012) found that, among a nationally representative cohort of people in Britain aged over fifteen (N=1,822), psychological closeness with energy futures and climate change was associated with higher levels of concern and preparedness to reduce energy consumption; so, people who have direct experience of climate impacts, which brings it close in terms of time, place and certainty, have been reported as being more willing to take mitigation actions (Spence et al. 2012; Broomell et al. 2015). However, Spence et al. (2012) also found that greater distance on the social distance dimension was associated with higher preparedness to take personal action, with people expressing concern for people in the Global South who were likely to be personally more seriously impacted by climate change than the survey respondents considered they would be themselves.

Scholars have considered climate change and psychological distance in relation to Construal Level Theory (Brügger et al. 2016; Griffioen et al. 2016; Wang et al. 2019), which is concerned with the ways in which our mental representations depend on their closeness to our present situation. Phenomena of which we have direct experience, or which are close to our present situation, require little mental effort to interpret or construe (low-level construal). By contrast, phenomena which are spatially, temporally or socially distant, or where there is inherent uncertainty, require a greater amount of effort to be represented mentally, and will result in high-level construals which will be more abstract and less concrete (Brügger et al. 2016). According to this rationale, if climate change is perceived as distant, it may be conceived in an abstract way. Abstractness has been found to encourage a goal-centred mind-set, allowing for the exploration of more distant, creative solutions (Liberman and Trope 2008), and enhancing self-control (Trope and Liberman 2010, see Wang et al. 2019). However, a concrete construal of climate change may promote psychological closeness, which may foster concern (Trope and Liberman 2010; Van Boven et al. 2010). Wang et al. (2019) found that psychological closeness to climate change predicted pro-environmental behaviour, while construal level produced inconsistent results; manipulations of both features did not increase pro-environmental behaviour. They also found that the presumed close association between psychological distance and construal level may not hold true in the case of climate change.

In one study on construal level and environmental issues, interventions were most effective when participants were asked to find an abstract goal in a specific context, or a specific goal in an abstract context, in that they facilitated both a greater awareness and a consideration of how to take personal action (Rabinovich et al. 2009; see also Ejelöv et al. 2018). Moreover, McDonald et al. (2015) found a complex relationship, where direct experience (short psychological distance) did not necessarily lead to action, and that ‘the optimal framing of psychological distance depends on 1) the values, beliefs and norms of the audience, and 2) the need to avoid provoking fear and resulting avoidant emotional reactions’. To Wang et al., this ‘suggests that both psychological closeness and distance can promote pro-environmental action in different contexts’ (Wang et al. 2019: 3).

Overall, research in this area demonstrates that the relationship between psychological distance and climate change is complex, but many scholars have pointed out that inspiring more, or sufficient, action on climate change is not simply a matter of bringing climate change closer (see for example Brügger et al. 2015; McDonald et al. 2015; Brügger et al. 2016; Schuldt et al. 2018; Wang et al. 2019).

A role for museums

Clearly, climate change presents an especially complex topic when considering psychological distance and construal level. However, acknowledging this complexity and considering the dimensions of psychological distance and construal level within the design of, and intended outcomes from, climate change engagement activities has the potential to increase their effectiveness. This may help promote people’s constructive engagement with climate change as a result, and offers a distinctive role for museums to play.

Climate change engagement activities may provide opportunities to explore climate change considering the social, spatial (see for example Lorenzoni et al. 2007; Spence et al. 2012) and temporal dimensions of psychological distance and climate change (see for example Rabinovich et al. 2010). These we consider to be of particular relevance in a museum setting as museums use their artefacts, collections and exhibits to connect (‘engage’) visitors with other places and times. They use their collections to tell and create stories in formal, informal and non-formal educational activities that can resonate with, or challenge, the values and world views of their visitors (McGhie et al. 2018; McGhie 2019). Science museums and science centres can also play a particular role in supporting people to understand the key importance of uncertainty and probability in science, which relates to the hypothetical dimension of psychological distance and climate change. Increasing numbers of museums are also seeing themselves as place-makers or spaces for activism, and are actively trying to engage people with thinking about the future (e.g. Janes 2016; Janes and Sandell 2019).

We now move on to present the Big Here and Long Now and heterotopia, two concepts that provide alternative ways of thinking about time and place. We consider how these can usefully be ‘deployed’ to frame museum engagement on climate change and provide examples of where museums are using them.

4. The Big Here and Long Now

Observing the fast pace of New York lifestyles, musician Brian Eno remarked: ‘everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous’. Eno conceived of this as a ‘short now’, with a fast pace of life, and short timeframes for decisions and for considering the impacts of those decisions. However, this also suggested to Eno the possibility of the opposite, the ‘long now’. Eno also considered how people think about ‘here’: for some it is their immediate surroundings, a ‘small here’, while for others the spatial scale is wider, encompassing neighbourhoods, towns and indeed the world, a ‘big here’. Eno conceived of a ‘Big Here’ and ‘Long Now’, combining these considerations of place and time respectively.⁴

The idea of the Long Now became a manifesto for the Long Now Foundation, established in 1996 to encourage a long-term view and stewardship of the long-term (Brand 1999). The first project of the Foundation was the idea of a 10,000-year clock, which is currently being built in Texas (see Brand 1999 for background). Futurist Danny Hillis, who devised the concept of the clock, wrote:

I cannot imagine the future, but I care about it. I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me. I sense that I am alive at a time of important change, and I feel a responsibility to make sure that the change comes out well. I plant my acorns knowing that I will never live to harvest the oaks. I have hope for the future.⁵

Kevin Kelly, also of the Long Now Foundation, popularized a quiz developed by naturalist Peter Warshall, which aimed to encourage people to think in a larger geographical context, namely a river’s watershed.⁶ Kelly broadened the concept to encourage people to think on a macro scale, to constitute a Big Here, which could extend to a country, the planet or indeed beyond the planetary scale. The combination of the Big Here and Long Now has been adopted by the Long Now Foundation as a means for broadening both a sense of place and time, that ‘now’ is not a particular moment but a moment that connects with what has gone before and what will follow, and ‘here’ is bigger than the small piece of ground that we stand upon. ‘Now’ and ‘here’ become entirely subjective in terms of their scope.

Conceptualizing and framing climate change in terms of the Big Here and Long Now, in contrast to the Small Here and Short Now, opens a space for stretching our thinking about place from beyond our immediate surroundings and towards a broader conceptualization of society, both spatially and temporally. This draws our attention to processes, contexts and consequences of decisions — our individual and collective decisions — over a broad range of scales and timeframes. Such an approach may help promote climate change engagement in people’s everyday lives, and climate action through responsible, sustainable consumption.

5. Heterotopia

The Paris Agreement and the Sustainable Development Goals represent an idealized, desired future state. This is a utopia, in the properly ambiguous sense of the word: both an ‘ideal place’ (a ‘eutopia’) and, being in the future, a ‘nowhere place’ (an ‘outopia’) (see, especially, Marin 1984, 1992; Hetherington 1997). In exploring and envisioning this ‘other place’, we can draw on one of the most familiar time-related concepts relating to museums, Michel Foucault’s concept of museums as heterotopia. Foucault introduced the concept in 1967, during a period of work that was concerned with archaeology and archives (Foucault 1986, 1998; see Hetherington 2015). Foucault noted ‘we are in the epoch of simultaneity: we are in the epoch of juxtaposition, the epoch of the near and far, of the side-by-side, of the dispersed’ (Foucault 1986: 22). Foucault distinguished sites that have the ‘curious property’, that ‘suspect, neutralize, or invert the set of relations that they happen to designate, mirror, or reflect’ (Foucault 1986: 24). He identified two such sites; firstly, utopias, sites with no real place that represent society in a perfected form. Secondly, there were sites,

something like counter-sites, a kind of effectively enacted utopia in which the real sites, all the other real sites that can be found within the culture, are simultaneously represented, contested, and inverted. Places of this kind are outside of all places, even though it may be possible to indicate their location in reality (Foucault 1986: 24).

These are, of course, Foucault’s heterotopia. Hetherington has built on this definition, to construe heterotopia as ‘spaces of alternate ordering. Heterotopia organize a bit of the social world in a way different to that which surrounds them’ (Hetherington 1997: viii). Foucault held there to be six principles of heterotopia: firstly, that they probably exist in every culture. Second, and importantly for our purposes, that heterotopia can be made to function in a very different fashion at different times. Third, the heterotopia is capable of juxtaposing several sites and spaces that are themselves incompatible. Fourth, heterotopia are most often linked to slices in time, and ‘the heterotopia begins to function at full capacity when men arrive at a sort of absolute break with their traditional time’ (Foucault 1986: 26). Most notably, in this respect, Foucault wrote:

…there are heterotopias of indefinitely accumulating time, for example museums and libraries. Museums and libraries have become heterotopias in which time never stops building up and topping its own summit, whereas in the seventeenth century, even at the end of the century, museums and libraries were the expression of an individual choice. By contrast, the idea of accumulating everything, of establishing a sort of general archive, the will to enclose in one place all times, all epochs, all forms, all tastes, the idea of constituting a place of all times that is itself outside of time and inaccessible to its ravages, the project of organizing in this way a sort of perpetual and indefinite accumulation of time in an immobile place, this whole idea belongs to our modernity. The museum and the library are heterotopias that are proper to western culture of the nineteenth century (Foucault 1986: 26).

Fifth, heterotopia are not freely accessible: there are limitations or rules around their openness. Finally, heterotopia have a function in relation to all remaining space, either ‘to create a space of illusion that exposes every real space’; ‘their role is to create a space that is other, another real space, as perfect, as meticulous, as well arranged as ours is messy, ill constructed, and jumbled’ (Foucault 1986: 27). Hetherington notes that heterotopia are ambiguously articulated, whether as ‘other places / places of otherness / emplacements of the other’ (Hetherington 2015: 35).

While Foucault’s work on heterotopia has, understandably, been related to museums (see Lord 2006 for examples), Lord points out that Foucault’s primary discussion of museums as heterotopia was in terms of the building of an archive: of the materiality of the museum that builds up, and the knowledges associated with that material, rather than the constant creation and recreation of the past from an interrogation of that material (they ‘endlessly accumulate times in one space through the material objects they contain and the knowledge associated with them’ (Hetherington 2015: 35)). Lord expanded on Foucault’s work on heterotopia to emphasise the key importance of narrative and interpretation in museums’ function as heterotopia:

The museum is the space in which the difference inherent in its content is experienced. It is the difference between things and words, or between objects and conceptual structures: what Foucault calls the ‘space of representation’ (1970: 130)… the space of representation is the heterotopia (Lord 2006: 4–5).

It is worth noting that museums’ attempts to represent everything or to ‘constitute a place of all times that is itself outside time’, to draw on Foucault’s phrase (Foucault 1986: 26, see Hooper-Greenhill 2000; Lord 2006), are increasingly unsustainable or impossible. Their attempts to exist ‘outside of time and inaccessible to its ravages’ (Foucault 1986: 26) are similarly tested by social, economic and environmental challenges, including climate change.

Heterotopia can be repurposed to explore the time that does not yet exist, the future, exploring Foucault’s brief mention of utopias as sites that ‘have a general relation of direct or inverted analogy with the real space of Society. They represent society itself in a perfected form, or else society turned upside down, but in any case these utopias are fundamentally unreal spaces’ (Foucault 1986: 24).⁷ Lord notes how ‘the definition of museum as heterotopia explains how the museum can be progressive without subscribing to politically problematic notions of universality or ‘total history’, but as a ‘growth of capabilities’’. She concludes that ‘museums are best placed to critique, contest and transgress those problematic notions, precisely on the basis of their Enlightenment lineage’ (Lord 2006: 12). Here, then, we can see potential for museums as sites for subverting and imagining other potential societies and futures; a ‘growth of capabilities’ speaks well to the language of a productive future where, in the words of the Sustainable Development Goals, ‘no-one is left behind’.

Figure 1. In Human Time exhibition, Climate Museum, New York, showing Peggy Weil’s film 88 Cores, image credit: Sari Goodfriend, courtesy of the Climate Museum.

6. Applying the Big Here and Long Now, and heterotopia in museums

In this section we consider how the two aforementioned concepts can be related to exhibitions and events linked to climate change, and how they can be factored into new developments. Museums typically have collections shown in exhibits that originate from different time periods and places, which speak to both the Big Here and the Long Now, extending the viewer’s or participant’s ‘here’ or ‘now’. Considering the Big Here and Long Now can provide a useful context for exploring issues such as climate change, sustainability and citizenship, and can be seen in many exhibitions about climate change. The Big Here and Long Now becomes a useful lens which, together with considerations of psychological distance and construal level, allows us to consider how museum interventions are aligning, or not, with these concepts.

To take one example, the recent exhibition Human Nature (2019–20) at the World Cultures Museum in Stockholm conveys the key message ‘it’s all connected. How we live our lives is closely related to the state of our earth’.⁸ This exhibition and this strapline extend our sense of the here and now; they seem to attempt to reduce psychological distance, linking our lives with their impacts; by giving form and voice to these relationships the museum appears to make our construal of the relationship more concrete. The Climate Museum in New York staged a two-part exhibition, In Human Time (2017–18), by Peggy Weil and Zaria Forman, to explore ‘intersections of polar ice, humanity, and time’ (fig. 1).⁹ A film by Peggy Weil shows close-ups of ice cores that were drilled two miles down into the Greenland Ice Sheet, spanning 110,000 years; the film pans very slowly over the ice core, revealing the subtle changes in colour, bubbles and texture of the ice. Weil wrote: ‘The pace and scale of the work is a gesture towards deep time and the gravity of climate change’.¹⁰ Zaria Forman’s work consisted of a reproduction of a hyper-realistic image of an Antarctic iceberg, grounded in an ‘iceberg graveyard’ in Antarctica. The image was accompanied by a timelapse video illustrating the process of the creation of the image. This single exhibition, in two parts, demonstrates a complex interplay of the concepts of the Big Here and Long Now, with the long timescale of the development of the ice in the ice core reflected in the slow pace of the film. The grounded, melting iceberg in the Antarctic reflects a concrete construal of the effects of climate change, while the remoteness of the Antarctic speaks of a large psychological distance.

Figure 2. Climate Control exhibition, Manchester Museum, UK, 2016, showing two entrances where visitors decided whether to explore the past or the future. Image credit: Gareth Gardner.

To take another example, the exhibition Climate Control was shown at Manchester Museum (University of Manchester) during the city’s time as European City of Science in 2015–16. Two of the authors (HM and SM) were involved in the development of the exhibition and accompanying programme. The exhibition was accompanied by a range of activities, developed in partnership with and involving academics from the University of Manchester and a range of NGOs and community organizations, as well as Manchester Climate Change Agency, which is responsible for developing and overseeing the city’s climate change mitigation and adaptation strategy. Through these partnerships, the exhibition was used as the inspiration for, and reinterpreted through, a range of engagement activities to promote climate change awareness, adaptation and mitigation.

The exhibition had two entrances where visitors could choose either to explore climate change in the past (and present) or the future (fig. 2). The section of the exhibit on the past (and present) included exhibits on fossil fuels and fossils from millions of years ago, a range of Arctic wildlife impacted by climate change today, and photographs of people impacted by climate change around the world. The exhibit emphasized the connection between events over very long timescales: the trapping of sunlight by plants millions of years ago, their preservation as fossils, and the burning of fossil fuels over the last three centuries. It also emphasized the connection between far-distant places: the burning of fossil fuels in industrial countries, and climate impacts in the Arctic and around the world. The connection was illustrated by birds that spend the summer in the Arctic and migrate to the UK in the winter, to foster a sense of shared wildlife. Images of people affected by flooding in Bangladesh, sea-level rise in Belize, and people who rely on meltwater from vanishing glaciers in Ladakh and Peru, showed the real-life impacts of climate change on people around the world. The exhibit explored climate change from a local, place-specific context, in terms of the industrial history of Manchester, a global dimension linking Manchester to the Arctic, and to a range of different communities around the world. A taxidermy mount of a polar bear was accompanied by the open-ended question ‘are we so different?’. This exhibition thus approached climate change from both abstract and concrete construal levels, brought in various psychological distances, and was strongly linked with the Big Here and Long Now concept. The viewer or participant was always intended to be psychologically close to the place – the museum and exhibition gallery – where the exhibition was shown.

Seeking to empower visitors to the Climate Control exhibition to consider their place in this and the myriad of possible alternative future worlds, the other half of the exhibition was entitled ‘explore the future’. This part of the exhibition did not contain museum objects, but instead was a space with information on climate change action at local, national and international scales and activities which invited people to share ideas on ‘changing the future’ and to reflect on the ideas of others. The exhibition was intended to look unfinished when it first opened, as the future is not set in stone. This part of the exhibition was, we feel, a heterotopia in the sense that it asked people to create a place that is not a real place, but which has a role in relation to the external world.

The two halves of the exhibition were divided by a central wall. Visitors to the ‘explore the past’ section were invited to stick a small black sticker to a white wall to represent their carbon footprint, and to emphasize that together we make a large collective impact. This can be regarded as a concrete construal level. On the reverse side of the wall, in the ‘explore the future’ section, visitors were invited to add stickers on which they wrote their ideas on how to create a sustainable future. This, being abstract, represented what we feel was a higher construal level.

The accompanying engagement activities, developed in partnership with community organizations and academics, further sought to engage visitors to the museum with climate change in novel and multi-sensory ways, encouraging them to think about climate change in terms of time and place. During exhibition opening hours, researchers and practitioners invited visitors to take part in ‘Climate Conversations’, talking with visitors and telling their own climate change stories. Each person took a different approach to their ‘climate conversation’, using experiments, computer simulations, stories, data and objects as the jumping-off points for discussion; the purpose was not to provide information, but instead to present a diverse range of perspectives on the meaning of climate change in the lives of researchers and practitioners and, in so doing, invite visitors to think about what climate change meant to them. Climate Control sought to elicit new visions from the people of Manchester for their city, through the co-creation of alternative futures in the heterotopia of the museum. This took place in different ways, including creative mapping and facilitated sessions based on Manchester’s Climate Change Strategy, where people built their visions for a sustainable Manchester from Lego, guided by policies on mitigation and adaptation from the city’s climate strategy.¹¹

The triangulation of academia, public engagement and public policy raised challenges of working together, but was aimed at supporting the development of climate change policies within the city, and promoting civic participation among the public. Climate Control drew upon Manchester’s industrial heritage and its inextricable link to climate change to create public opportunities directed towards shaping the future (McGhie et al. 2018; McGhie 2019).

7. Discussion

As the need for climate action becomes ever more urgent, we argue in this paper that museums have a key role to play, providing a space where people can work through the meaning of climate change in their own lives, and inspiring and supporting climate actions. More ambitiously, however, we argue that museums can support people’s constructive, meaningful and impactful climate change engagement beyond the museum, by developing exhibitions and other events which recognize the psychological distance of climate change. Whilst making climate change closer — more immediate, personal or concrete — is not a silver bullet for enhancing climate change awareness, empowerment and action, working with psychological distance, in terms of time, place and uncertainty, can help museums to address the perceived distance of climate change from people’s everyday lives, which can be a barrier to climate action.

Framing climate engagement through the Big Here and Long Now offers the opportunity to change perceptions of time and place, enabling people to explore and question the relationship between the local and the global or national, and recognize that their ‘now’ is merely a stopping off point between the past and multiple possible futures which have yet to be created. Through their exhibitions, museums can develop narratives which align with the multiple values of their visitors, telling different stories at the same time. Depending on the narrative, climate change can be made less abstract, or alternatively a narrative could be framed around the abstract aspect of climate change to encourage people to reflect on rights, responsibilities and morality. We suggest that the combination of the Big Here and Long Now with the concept of the heterotopia presents a particularly powerful approach, combining a deep exploration of ‘where we are now’, from the Big Here and Long Now, with a vision-creating element from the heterotopia: where we are trying to get to. This enriched understanding provides opportunities to explore how we, individually and collectively, will bridge the difference between our current state and the state we desire, regarding climate change.

Museums have a unique role as trusted organizations and spaces where people come not only to be entertained but also to learn; increasingly museums are using their collections in creative ways as sites of social change. Working in collaboration with partners, museums can be part of a coalition of action on climate change, as Manchester Museum sought to do with the Climate Control exhibition and associated activities. For example, the co-creation of future visions for Manchester out of Lego allowed people to explore alternative visions, with such models having a ‘performative’ purpose, moving discussions away from targets to places, lives and communities. Working with different conceptions of time and place can give people a sense of agency, whereby transformation is something created by people, rather than happening to them (see Cameron and Deslandes 2011). Museums can aim to work with people, as individuals and communities, in co-production and co-creation, to give people agency in their future and its creation: ‘Rather than treating audiences as passive species bodies to be reformed, museums need to acknowledge the creative potential of their audiences as valued actors having valued opinions and expertise, skills, capacities, desires, expectations, reflexive capabilities and imagination’ (Cameron 2011: 100).

Museums have the potential to provide people with opportunities to explore alternative pasts, presents and futures, and to negotiate the connections (and disconnections) between local and global dimensions, and short and long-term temporalities; in other words, museums can help people (individually and collectively) negotiate the psychological distance dimensions of climate change, and connect them with their own lives. Focussing on local and immediate situations has perhaps the greatest potential to empower people and to consider personal contribution, community and citizenship; while long-term dimensions can provide greater opportunity for creative exploration of more radically different, structural changes to society. ‘Starting’ with the local may engage people who are not immediately concerned with exploring more abstract ideas of the future. The combination of creative, interactive experiences mentioned above, which draw on people’s own ideas as much as projecting ‘museum narrative’ for people to consume, provides a more plausible route for supporting people’s ongoing, constructive engagement and dialogue with climate change beyond the museum, going beyond ‘mere’ intellectual understanding to self-knowledge. Providing opportunities for people to understand, share and respond as part of museum experiences provides opportunities for people to explore and begin to create possible futures together in a safe environment.

If we are to transform society, and our lives, we need spaces that support transformation and that create opportunities to imagine, design and begin to create desirable futures. When we think about the future, we normally do so within the box of our town, our house, our lives. In a museum you are transported to a different place; accepting the museum’s function as heterotopia can free you to imagine new futures with different boundaries and to explore different times and places (at least in some sense): surely a kind of ‘partly enacted utopia’ that can be put to work. By providing a space (physical and intellectual) and a frame to consider the present as a point on the journey from the past to one of a myriad of possible futures, museums can begin to reposition themselves to actively promote civic participation and action around climate change.

Received: 5 September 2018 

Finally accepted: 11 March 2020


An early version of this paper was presented (by HM) at the 25th International Congress of the History of Science and Technology (Rio de Janeiro, July 2017) in a symposium on ‘Narratives of Future Earth’. HM and SM are grateful to Dr. David Gelsthorpe, Anna Bunney (both Manchester Museum), Dr. Rebecca Cunningham (University of Technology, Sydney) and Jonny Sadler (Manchester Climate Change Agency) for help in developing the Climate Control programme at Manchester Museum.


[1] accessed 24 March 2020.

[2] accessed 24 March 2020. 

[3] accessed 16 January 2020.

[4] Brian Eno, ‘The Big Here and Long Now’, accessed 16 January 2020.

[5] accessed 25 March 2020.

[6] accessed 31 May 2020.

[7] See also Peter Johnson, ‘Some reflections on the relationship between utopia and heterotopia’, Heterotopian Studies, 2012, accessed 25 March 2020.

[8] accessed 15 January 2020.

[9] accessed 15 January 2020.

[10] accessed 15 January 2020.

[11] Manchester Climate Change Agency (2016), accessed 25 March 2020.


Basu, P. and Modest, W. (2015) Museums, Heritage and International Development, London: Routledge.

Brand, S. (1999) The Clock of the Long Now. Time and Responsibility: The Ideas Behind the World’s Slowest Computer, New York: Basic Books.

Broomell, S.B., Budescu, D.V. and Por, H.-H (2015) ‘Personal Experience with Climate Change Predicts Intentions to Act’, Global Environmental Change, 32 67–73.

Brügger, A., Dessai, S., Devine-Wright, P., Morton, T.A. and Pidgeon, N.F. (2015) ‘Psychological Responses to the Proximity of Climate Change’, Nature Climate Change, 5 1031–7.

Brügger, A., Morton, T.A. and Dessai, S. (2016) ‘‘Proximising’ Climate Change Reconsidered: A Construal Level Theory Perspective’, Journal of Experimental Psychology, 46 125–42.

Burke, C. and Stott, P. (2017) ‘Impact of Anthropogenic Climate Change on the East Asian Summer Monsoon’, Journal of Climate, 30 5205–20.

Cameron, F.R. (2007) ‘Moral Lessons and Reforming Agendas: History Museums, Science Museums, Contentious Topics and Contemporary Societies’, in Simon J. Knell, Suzanne MacLeod and Sheila Watson (eds) Museum Revolutions: How Museums Change and are Changed, 330–42, London: Routledge.

(2010) ‘Liquid Governmentalities, Liquid Museums and the Climate Crisis’, in Fiona Cameron and Lynda Kelly (eds) Hot Topics, Public Culture, Museums, 112–28, Newcastle upon Tyne: Cambridge Scholars.

(2011) ‘From Mitigation to Creativity: The Agency of Museums and Science Centres and the Means to Govern Climate Change’, Museum and Society, 9 (2) 90–106.

(2012) ‘Climate Change, Agencies, and the Museum for a Complex World’, Museum Management and Curatorship, 27 (4) 317–39.

Cameron, F.R. and Deslandes, A. (2011) ‘Museums and Science Centres as Sites for Deliberative Democracy on Climate Change’, Museum and Society, 9 (2) 136–53.

Cameron, F.R., Hodge, B. and Salazar, F. (2013) ‘Representing Climate Change in Museum Space and Places’, WIREs Climate Change, 4 (1) 9–21.

Cameron, F.R. and Neilson, B. (eds) (2015) Climate Change and Museum Futures, London: Routledge.

Carvalho, A. and Peterson, T.R. (2012) ‘Reinventing the Political: How Climate Change Can Breathe New Life into Democracies’, in Anabela Carvalho and Tarla Rai Peterson (eds) Climate Change Politics. Communication and Public Engagement, 1–28, New York: Cambria Press.

Chilvers, J., Pallett, H. and Hargreaves, T. (2018) ‘Ecologies of Participation in Socio- Technical Change: The Case of Energy System Transitions’, Energy Research and Social Science, 42 199–210.

Committee on Climate Change (2019a) Reducing UK Emissions: 2019 Progress Report to Parliament, London: Committee on Climate Change.
 (2019b) Progress in Preparing for Climate Change: 2019 Report to Parliament, London: Committee on Climate Change.

Dorfman, E. (ed) (2018) The Future of Natural History Museums, ICOM Advances in Museums Research, London: Routledge.

Ejelöv, E., Hansla, A., Bergquist, M. and Nilsson, A. (2018) ‘Regulating Emotional Responses to Climate Change — A Construal Level Perspective’, Frontiers in Psychology, 9 (629).

Foucault, M. (1986) ‘Of Other Spaces’, Diacritics, 16 (1) 22–7.
 (1998) ‘Different Spaces’, in James D. Faubion (ed) The Essential Works, vol. 2, Aesthetics, 175–85, London: Allen Lane.

Griffioen, A.M., van Beek, J., Lindhout, S.N. and Handgraaf, M.J.J. (2016) ‘Distance Makes the Mind Grow Broader: An Overview of Psychological Distance Studies in the Environmental and Health Domains’, Applied Studies in Agribusiness and Commerce, 10 (2–3) 33–46.

Harrison, R. (2013) Heritage: Critical Approaches, Abingdon: Routledge.

Hetherington, K. (1997) The Badlands of Modernity: Heterotopia and Social Ordering, London: Routledge.

(2015) ‘Foucault and the Museum’, in Andrea Witcomb and Kylie Message (eds) The International Handbooks of Museum Studies: Museum Theory, 21–40, Chichester: John Wiley and Sons.

Hooper-Greenhill, E. (2000) Museums and the Interpretation of Visual Culture, London: Routledge.

Janes, R.R. (2009) Museums in a Troubled World: Renewal, Irrelevance or Collapse?, London: Routledge.

(2016) Museums Without Borders, Abingdon: Routledge.

Janes, R.R. and Sandell, R. (2019) Museum Activism, Abingdon: Routledge.

Jones, C., Hine, D.W. and Marks, D.G. (2017) ‘The Future is Now: Reducing Psychological Distance to Increase Public Engagement with Climate Change’, Risk Analysis, 37 (2) 331–41.

Liberman, N. and Trope, Y. (2008) ‘The Psychology of Transcending the Here and Now’, Science, 322 (5905) 1201–5.

Lord, B. (2006) ‘Foucault’s Museum: Difference, Representation and Genealogy’, Museum and Society, 4 (1) 1–14.

Lorenzoni, I., Nicholson-Cole, S. and Whitmarsh, L. (2007) ‘Barriers Perceived to Engaging with Climate Change Among the UK Public and their Policy Implications’, Global Environmental Change, 17 (3–4) 445–59.

Marin, L. (1984) Utopics: Spatial Play, London: Macmillan.

(1992) ‘Frontiers of Utopia: Past and Present’, Critical Inquiry, 19 (3) 397–420.

McDonald, R.I., Chai, H.Y. and Newell, B.R. (2015) ‘Personal Experience and the ‘Psychological Distance’ of Climate Change: An Integrative Review’, Journal of Environmental Psychology, 44 109–18.

McGhie, H.A. (2019) ‘Climate Change: A Different Narrative’, in Walter Leal Filho, Bettina Lackner and Henry McGhie (eds) Addressing the Challenges in Communicating Climate Change Across Various Audiences, 13–29, Cham (Switzerland): Springer International.

McGhie, H.A., Mander, S. and Underhill, R. (2018) ‘Engaging People with Climate Change through Museums’, in Walter Leal Filho, Evangelos Manolas, Anabela Marisa Azul, Ulisses M. Azeiteiro and Henry McGhie (eds), A Handbook of Climate Change Communication, vol. 3, 329–48, Cham (Switzerland): Springer.

Moser, S. and Dilling, L. (2004) ‘Making Climate Hot: Communicating the Urgency and Challenge of Global Climate Change’, Environment, 46 (10) 32–46.

Newell, J., Robbin, L. and Wehner, K. (eds) (2017) Curating the Future: Museums, Communities and Climate Change, Abingdon: Routledge.

O’Neill, S. and Nicholson-Cole, S. (2009) ‘‘Fear Won’t Do It’: Promoting Positive Engagement with Climate Change through Visual and Iconic Representations’, Science Communication, 30 355–79.

Pidgeon, N. and Fischhoff, B. (2011) ‘The Role of Social and Decision Sciences in Communicating Uncertain Climate Risks’, Nature Climate Change, 1 35–41.

Rabinovich, A., Morton, T. and Postmes, T. (2010) ‘Time Perspective and Attitude- Behaviour Consistency in Future-Oriented Behaviours’, British Journal of Social Psychology, 49 (1) 69–89.

Rabinovich, A., Morton, T.A., Postmes, T. and Verplanken, B. (2009) ‘Think Global, Act Local: The Effect of Goal and Mindset Specificity on Willingness to Donate to an Environmental Organization’, Journal of Environmental Psychology, 29 (4) 391–9.

Schuldt, J.P., Rickard, L.N. and Yang, Z.J. (2018) ‘Does Reduced Psychological Distance Increase Climate Engagement? On the Limits of Localizing Climate Change’, Journal of Environmental Psychology, 55 147–53.

Spence, A., Poortinga, W. and Pidgeon, N. (2012) ‘The Psychological Distance of Climate Change’, Risk Analysis, 32 (6) 957–72.

Trope, Y. and Liberman, N. (2010) ‘Construal‐Level Theory of Psychological Distance’, Psychological Review, 117 (2) 440–63.

UNESCO (2015) Global Citizenship Education: Topics and Learning Objectives, Paris: UNESCO.

(2017) Education for Sustainable Development Goals: Learning Objectives, Paris: UNESCO.

Van Boven, L., Kane, J., McGraw, P.A. and Dale, J. (2010) ‘Feeling Close: Emotional Intensity Reduces Perceived Psychological Distance’, Journal of Personality and Social Psychology, 98 (6) 872–85.

Van Oldenborgh, G.J., Van der Wiel, K., Sebastian, A., Singh, R., Arrighi, J., Otto, F., Haustein, K., Li, S.H., Vecchi, G. and Cullen, H. (2017) ‘Attribution of Extreme Rainfall from Hurricane Harvey, August 2017’, Environmental Research Letters, 12.

Wang, S., Hurlstone, M., Leviston, Z., Walker, I., and Lawrence, C. (2019) ‘Climate Change from a Distance: An Analysis of Construal Level and Psychological Distance from Climate Change’, Frontiers in Psychology, 10 (230), fpsyg.2019.00230.

Whitmarsh, L., O’Neill, S. and Lorenzoni, I. (2011) Engaging the Public with Climate Change: Behaviour Change and Communication, London: Earthscan.


*Henry McGhie, Curating Tomorrow, 40 Acuba Road, Liverpool UK, L15 7LR,
 Tel: 07402 659 372

Henry McGhie has a background as an ornithologist, museum curator and senior manager. He has been working on sustainability, climate change and museums for over 15 years, developing exhibitions, working with local and international policy workers, organizing international conferences and editing two books on the subject. He established Curating Tomorrow in 2019 as a consultancy for museums and the heritage sector, helping them draw on their unique resources to enhance their contributions to society and the natural environment, the Sustainable Development Goals, climate action and nature conservation. He is a member of the International Council of Museums Working Group on Sustainability.

**Sarah Mander, Tyndall Centre for Climate Change Research, University of Manchester, M13 9QL,
 Tel: 0161 3063259

Dr Sarah Mander is a Reader in Energy and Climate Policy and an interdisciplinary energy researcher, with over a decade’s experience using deliberative and participatory approaches to understand social, institutional and governance barriers to climate mitigation. For the past five years, she has coordinated Tyndall Manchester’s public engagement activities, working with museums, schools and community organizations to develop arts-based and creative approaches to climate change engagement, including theatre games and performance art. Dr Mander is a member of the Centre for Climate Change and Social Transformations (CAST), where her work combines her expertise in social responses to low-carbon technology with her belief that, in the absence of effective action on climate change from governments, innovation by grass-roots organizations is key to driving the low-carbon transition.

***Asher Minns, Tyndall Centre for Climate Change Research, University of East Anglia, Norwich,

Asher Minns is a science communicator specialising in knowledge transfer of climate change and other global change research to audiences outside of academia. He has over two decades in practice, and is also the Executive Director of the Tyndall Centre for Climate Change Research.


LongNow: Podcast: Queering the Future | Jason Tester

Jason Tester asks us to see the powerful potential of “queering the future” – how looking at the future through a lens of difference and openness can reveal unexpected solutions to wicked problems, and new angles on innovation. Might a queer perspective hold some of the keys to our seemingly intractable issues?

Tester brings his research in strategic foresight, speculative design work, and understanding of the activism and resiliency of LGBTQ communities together as he looks toward the future. Can we learn new ways of thinking, and thriving, from the creative approaches and adaptive strategies that have emerged from these historically marginalized groups?

Listen on Apple Podcasts.

Listen on Spotify.