Planet Russell


Charles Stross: Dead Lies Dreaming: Spoilers

I've been head-down in the guts of a novel this month, hence the lack of blogging: purely by coincidence, I'm working on the next-but-one sequel to Dead Lies Dreaming.

Which reminds me that Dead Lies Dreaming came out nearly a month ago, and some of you have probably read it and have questions!

So feel free to ask me anything about the book in the comments below.

(Be warned that (a) there will probably be spoilers, and (b) I will probably not answer questions that would supply spoilers for the next books in the ongoing project.)

Charles Stross: Countdown to Crazy

This is your official thread for discussing the upcoming US presidential and congressional election on November 3rd, along with its possible outcomes.

Do not chat about the US supreme court, congress, presidency, constitution, constitutional crises (possible), coup (possible), Donald Trump and his hellspawn offspring and associates, or anything about US politics in general on the Laundry Files book launch threads. If you do, your comments will be ruthlessly moderated into oblivion.

You are allowed and encouraged to discuss those topics in the comments below this topic.

(If you want to discuss "Dead Lies Dreaming" here I won't stop you, but there's plenty of other places for that!)

Planet Debian - Michael Stapelberg: Debian Code Search: positional index, TurboPFor-compressed

See the Conclusion for a summary if you’re impatient :-)

Motivation

Over the last few months, I have been developing a new index format for Debian Code Search. This required a lot of careful refactoring, re-implementation, debug tool creation and debugging.

Multiple factors motivated my work on a new index format:

  1. The existing index format has a 2G size limit, which we have bumped into a few times, requiring manual intervention to keep the system running.

  2. Debugging the existing system required creating ad-hoc debugging tools, which made debugging sessions unnecessarily lengthy and painful.

  3. I wanted to check whether switching to a different integer compression format would improve performance (it does not).

  4. I wanted to check whether storing positions with the posting lists would improve performance of identifier queries (= queries which are not using any regular expression features), which make up 78.2% of all Debian Code Search queries (it does).

I figured building a new index from scratch was the easiest approach, compared to refactoring the existing index to increase the size limit (point ①).

I also figured it would be a good idea to develop the debugging tool in lock step with the index format so that I can be sure the tool works and is useful (point ②).

Integer compression: TurboPFor

As a quick refresher, search engines typically store document IDs (representing source code files, in our case) in an ordered list (“posting list”). It usually makes sense to apply at least a rudimentary level of compression: our existing system used variable integer encoding.
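
To make that baseline concrete, here is a minimal Go sketch of delta+varint encoding for a posting list (an illustration of the general technique, not the actual codesearch implementation):

package main

import (
    "encoding/binary"
    "fmt"
)

// encodePostingList delta-encodes a sorted list of docids and writes
// each gap as a varint. Sorted lists make the gaps small, and varint
// stores small numbers in fewer bytes.
func encodePostingList(docids []uint32) []byte {
    buf := make([]byte, 0, len(docids)*binary.MaxVarintLen32)
    tmp := make([]byte, binary.MaxVarintLen32)
    var prev uint32
    for _, id := range docids {
        n := binary.PutUvarint(tmp, uint64(id-prev))
        buf = append(buf, tmp[:n]...)
        prev = id
    }
    return buf
}

// decodePostingList reverses the encoding: read each varint gap and
// accumulate it onto the previous docid.
func decodePostingList(buf []byte) []uint32 {
    var docids []uint32
    var prev uint32
    for len(buf) > 0 {
        delta, n := binary.Uvarint(buf)
        buf = buf[n:]
        prev += uint32(delta)
        docids = append(docids, prev)
    }
    return docids
}

func main() {
    list := []uint32{3, 17, 18, 1024, 1030}
    enc := encodePostingList(list)
    fmt.Printf("%d docids in %d bytes: %v\n", len(list), len(enc), decodePostingList(enc))
}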

TurboPFor, the self-proclaimed “Fastest Integer Compression” library, combines an advanced on-disk format with a carefully tuned SIMD implementation to reach better speeds (in micro benchmarks) at less disk usage than Russ Cox’s varint implementation in github.com/google/codesearch.

If you are curious about its inner workings, check out my “TurboPFor: an analysis”.

Applied on the Debian Code Search index, TurboPFor indeed compresses integers better:

Disk space

8.9G  codesearch varint index
5.5G  TurboPFor index

Switching to TurboPFor (via cgo) for storing and reading the index results in a slight speed-up of a dcs replay benchmark, which is more pronounced the more i/o is required.

Query speed (regexp, cold page cache)

18s  codesearch varint index
14s  TurboPFor index (cgo)

Query speed (regexp, warm page cache)

15s  codesearch varint index
14s  TurboPFor index (cgo)

Overall, TurboPFor is an all-around improvement in efficiency, albeit with a high cost in implementation complexity.

Positional index: trade more disk for faster queries

This section builds on the previous section: all figures come from the TurboPFor index, which can optionally support positions.

Conceptually, we’re going from:

type docid uint32
type index map[trigram][]docid

…to:

type occurrence struct {
    doc docid
    pos uint32 // byte offset in doc
}
type index map[trigram][]occurrence

The resulting index consumes more disk space, but can be queried faster:

  1. We can do fewer queries: instead of reading all the posting lists for all the trigrams, we can read the posting lists for the query’s first and last trigram only.
    This is one of the tricks described in the paper “AS-Index: A Structure For String Search Using n-grams and Algebraic Signatures” (PDF), and goes a long way without incurring the complexity, computational cost and additional disk usage of calculating algebraic signatures.

  2. Verifying that the delta between the last and first positions matches the length of the query term significantly reduces the number of files to read (lower false positive rate); see the sketch after this list.

  3. The matching phase is quicker: instead of locating the query term in the file, we only need to compare a few bytes at a known offset for equality.

  4. More data is read sequentially (from the index), which is faster.
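
To make points 1–3 concrete, here is a hedged Go sketch of an identifier query against the conceptual positional index above; trigramOf, the in-memory maps and the files lookup are simplifications for illustration, not actual Debian Code Search internals:

// trigram packs three bytes into one integer; an assumption of this
// sketch, the real representation may differ.
type trigram uint32

func trigramOf(s string) trigram {
    return trigram(uint32(s[0])<<16 | uint32(s[1])<<8 | uint32(s[2]))
}

func identifierQuery(idx map[trigram][]occurrence, files map[docid][]byte, query string) []occurrence {
    // Point 1: read the posting lists for the first and last trigram only.
    first := idx[trigramOf(query[:3])]
    last := idx[trigramOf(query[len(query)-3:])]

    // Point 2: if the first trigram matches at byte offset p, the last
    // trigram must match at p + len(query) - 3 in the same document.
    want := uint32(len(query) - 3)
    lastSet := make(map[occurrence]bool, len(last))
    for _, o := range last {
        lastSet[o] = true
    }

    var results []occurrence
    for _, o := range first {
        if !lastSet[occurrence{doc: o.doc, pos: o.pos + want}] {
            continue
        }
        // Point 3: compare a few bytes at a known offset instead of
        // locating the query term anywhere in the file.
        b := files[o.doc]
        end := o.pos + uint32(len(query))
        if uint32(len(b)) >= end && string(b[o.pos:end]) == query {
            results = append(results, o)
        }
    }
    return results
}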

Disk space

A positional index consumes significantly more disk space, but not so much as to pose a challenge: a Hetzner EX61-NVME dedicated server (≈ 64 €/month) provides 1 TB worth of fast NVMe flash storage.

 6.5G  non-positional
 123G  positional
  93G  positional (posrel)

The idea behind the positional index (posrel) is to not store a (doc,pos) tuple on disk, but to store positions, accompanied by a stream of doc/pos relationship bits: 1 means this position belongs to the next document, 0 means this position belongs to the current document.

This is an easy way of saving some space without modifying the TurboPFor on-disk format: the posrel technique reduces the index size to about ¾.
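
As a rough Go illustration of the posrel idea, reusing the docid and occurrence types from above (the byte-packing and the convention that the very first position carries a 1 bit are assumptions of this sketch; the real index leaves the TurboPFor on-disk format untouched):

// encodePosrel splits (doc,pos) tuples into a positions stream plus
// one relationship bit per position: 1 advances the docid cursor
// (including onto the first document), 0 stays on the current one.
func encodePosrel(occs []occurrence) (pos []uint32, rel []byte) {
    var cur docid
    haveDoc := false
    for i, o := range occs {
        if i%8 == 0 {
            rel = append(rel, 0)
        }
        if !haveDoc || o.doc != cur {
            rel[len(rel)-1] |= 1 << (uint(i) % 8)
            cur, haveDoc = o.doc, true
        }
        pos = append(pos, o.pos)
    }
    return pos, rel
}

// decodePosrel walks the docid list (e.g. from the non-positional
// posting list) in lock step with the positions and their bits.
func decodePosrel(docs []docid, pos []uint32, rel []byte) []occurrence {
    occs := make([]occurrence, 0, len(pos))
    d := -1 // the first 1 bit moves this to docs[0]
    for i, p := range pos {
        if rel[i/8]&(1<<(uint(i)%8)) != 0 {
            d++
        }
        occs = append(occs, occurrence{doc: docs[d], pos: p})
    }
    return occs
}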

With the increase in size, the Linux page cache hit ratio will be lower for the positional index, i.e. more data will need to be fetched from disk for querying the index.

As long as the disk can deliver data as fast as you can decompress posting lists, this only translates into one disk seek’s worth of additional latency. This is the case with modern NVMe disks that deliver thousands of MB/s, e.g. the Samsung 960 Pro (used in Hetzner’s aforementioned EX61-NVME server).

The values were measured by running dcs du -h /srv/dcs/shard*/full without and with the -pos argument.

Bytes read

A positional index requires fewer queries: reading only the first and last trigrams’ posting lists and positions is sufficient to achieve a lower (!) false positive rate than evaluating all trigrams’ posting lists in a non-positional index.

As a consequence, fewer files need to be read, resulting in fewer bytes read from disk overall.

As an additional bonus, in a positional index, more data is read sequentially (index), which is faster than random i/o, regardless of the underlying disk.

regexp queries:      1.2G (index) + 19.8G (files) = 21.0G
identifier queries:  4.2G (index) + 10.8G (files) = 15.0G

The values were measured by running iostat -d 25 just before running bench.zsh on an otherwise idle system.

Query speed

Even though the positional index is larger and requires more data to be read at query time (see above), thanks to the C TurboPFor library, the 2 queries on a positional index are roughly as fast as the n queries on a non-positional index (≈4s instead of ≈3s).

This is more than made up for by the combined i/o and matching stage, which shrinks from ≈18.5s (7.1s i/o + 11.4s matching) to ≈1.3s.

regexp queries:      3.3s (index) + 7.1s (i/o) + 11.4s (matching) = 21.8s
identifier queries:  3.92s (index) + ≈1.3s (i/o + matching) = 5.22s

Note that identifier query i/o was sped up not just by needing to read fewer bytes, but also by only having to verify bytes at a known offset instead of needing to locate the identifier within the file.

Conclusion

The new index format is overall slightly more efficient. This disk space efficiency allows us to introduce a positional index section for the first time.

Most Debian Code Search queries are positional queries (78.2%) and will be answered much quicker by leveraging the positions.

Bottom line: it is beneficial to use a positional index on disk over a non-positional index in RAM.

Planet Debian - Michael Stapelberg: Linux package managers are slow

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix C for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              114 MB  33s              3.4 MB/s
Debian        apt              16 MB   10s              1.6 MB/s
NixOS         Nix              15 MB   5s               3.0 MB/s
Arch Linux    pacman           6.5 MB  3s               2.1 MB/s
Alpine        apk              10 MB   1s               10.0 MB/s

qemu (large C program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              226 MB  4m37s            1.2 MB/s
Debian        apt              224 MB  1m35s            2.3 MB/s
Arch Linux    pacman           142 MB  44s              3.2 MB/s
NixOS         Nix              180 MB  34s              5.2 MB/s
Alpine        apk              26 MB   2.4s             10.8 MB/s


(Looking for older measurements? See Appendix B (2019).)

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases; otherwise it would hopefully have been removed altogether. Still, the amount of metadata seems excessive for installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence the execution of arbitrary package maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.
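
To make this concrete, here is a hedged Go sketch of what hook-free, concurrent installation could look like; installPackage is a hypothetical stand-in for “fetch and extract one package”, not any real package manager’s API:

package main

import (
    "fmt"
    "sync"
    "time"
)

// installPackage is a placeholder for fetching and extracting one
// package; purely illustrative.
func installPackage(name string) error {
    time.Sleep(100 * time.Millisecond) // simulate network and disk work
    return nil
}

// installAll installs every package concurrently. Without hooks or
// triggers, there is no arbitrary code whose execution order matters,
// so nothing forces the sequential install loop.
func installAll(pkgs []string) error {
    var wg sync.WaitGroup
    errs := make(chan error, len(pkgs))
    for _, p := range pkgs {
        wg.Add(1)
        go func(p string) {
            defer wg.Done()
            if err := installPackage(p); err != nil {
                errs <- fmt.Errorf("%s: %w", p, err)
            }
        }(p)
    }
    wg.Wait()
    close(errs)
    for err := range errs {
        return err // report the first failure
    }
    return nil
}

func main() {
    if err := installAll([]string{"ack", "perl", "perl-file-next"}); err != nil {
        fmt.Println("install failed:", err)
    }
}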

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so (see the sketch after this list), i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
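
Here is a hedged Go sketch of that last idea: an append-only store where a package is unpacked into a temporary directory and published with a single atomic rename(2). The paths and the unpackInto helper are invented for illustration:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// unpackInto is a placeholder for archive extraction.
func unpackInto(dir, pkg, version string) error {
    return os.WriteFile(filepath.Join(dir, "contents"), []byte(pkg+" "+version), 0o644)
}

// installAtomically publishes a package with one rename. Readers see
// either the complete package directory or nothing, so skipping fsync
// cannot corrupt the store; after a crash, leftover tmp-* directories
// are simply garbage-collected.
func installAtomically(store, pkg, version string) error {
    tmp, err := os.MkdirTemp(store, "tmp-"+pkg+"-")
    if err != nil {
        return err
    }
    if err := unpackInto(tmp, pkg, version); err != nil {
        os.RemoveAll(tmp)
        return err
    }
    final := filepath.Join(store, pkg+"-"+version)
    if err := os.Rename(tmp, final); err != nil {
        os.RemoveAll(tmp)
        return err
    }
    return nil
}

func main() {
    store := "/tmp/pkgstore"
    if err := os.MkdirAll(store, 0o755); err != nil {
        fmt.Println(err)
        return
    }
    if err := installAtomically(store, "ack", "3.4.0"); err != nil {
        fmt.Println(err)
    }
}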

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name        scope   package file format                hooks/triggers
AppImage    apps    image: ISO9660, SquashFS           no
snappy      apps    image: SquashFS                    yes: hooks
FlatPak     apps    archive: OSTree                    no
0install    apps    archive: tar.bz2                   no
nix, guix   distro  archive: nar.{bz2,xz}              activation script
dpkg        distro  archive: tar.{gz,xz,bz2} in ar(1)  yes
rpm         distro  archive: cpio.{bz2,lz,xz}          scriptlets
pacman      distro  archive: tar.xz                    install
slackware   distro  archive: tar.{gz,xz}               yes: doinst.sh
apk         distro  archive: tar.gz                    yes: .post-install
Entropy     distro  archive: tar.bz2                   yes
ipkg, opkg  distro  archive: tar{,.gz}                 yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

There are a couple of recent developments going in the same direction.

Appendix C: measurement details (2020)

ack


Fedora’s dnf takes almost 33 seconds to fetch and unpack 114 MB.

% docker run -t -i fedora /bin/bash
[root@62d3cae2e2f9 /]# time dnf install -y ack
Fedora 32 openh264 (From Cisco) - x86_64     1.9 kB/s | 2.5 kB     00:01
Fedora Modular 32 - x86_64                   6.8 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         5.6 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 9.9 MB/s |  23 MB     00:02
Fedora 32 - x86_64                            39 MB/s |  70 MB     00:01
[…]
real	0m32.898s
user	0m25.121s
sys	0m1.408s

NixOS’s Nix takes a little over 5s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.ack'
unpacking channels...
created 1 symlinks in user environment
installing 'perl5.32.0-ack-3.3.1'
these paths will be fetched (15.55 MiB download, 85.51 MiB unpacked):
  /nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man
  /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31
  /nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18
  /nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10
  /nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53
  /nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0
  /nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31
  /nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0
  /nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48
  /nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1
copying path '/nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man' from 'https://cache.nixos.org'...
copying path '/nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10' from 'https://cache.nixos.org'...
copying path '/nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18' from 'https://cache.nixos.org'...
copying path '/nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0' from 'https://cache.nixos.org'...
copying path '/nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31' from 'https://cache.nixos.org'...
copying path '/nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0' from 'https://cache.nixos.org'...
copying path '/nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1' from 'https://cache.nixos.org'...
building '/nix/store/m0rl62grplq7w7k3zqhlcz2hs99y332l-user-environment.drv'...
created 49 symlinks in user environment
real	0m 5.60s
user	0m 3.21s
sys	0m 1.66s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@1996bb94a2d1:/# time (apt update && apt install -y ack-grep)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (8088 kB/s)
[…]
The following NEW packages will be installed:
  ack libfile-next-perl libgdbm-compat4 libgdbm6 libperl5.30 netbase perl perl-modules-5.30
0 upgraded, 8 newly installed, 0 to remove and 23 not upgraded.
Need to get 7341 kB of archives.
After this operation, 46.7 MB of additional disk space will be used.
[…]
real	0m9.544s
user	0m2.839s
sys	0m0.775s

Arch Linux’s pacman takes a little under 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9f6672688a64 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            130.8 KiB  1090 KiB/s 00:00
 extra          1655.8 KiB  3.48 MiB/s 00:00
 community         5.2 MiB  6.11 MiB/s 00:01
resolving dependencies...
looking for conflicting packages...

Packages (2) perl-file-next-1.18-2  ack-3.4.0-1

Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m2.936s
user	0m0.375s
sys	0m0.160s

Alpine’s apk takes a little over 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing libbz2 (1.0.8-r1)
(2/4) Installing perl (5.30.3-r0)
(3/4) Installing perl-file-next (1.18-r0)
(4/4) Installing ack (3.3.1-r0)
Executing busybox-1.31.1-r16.trigger
OK: 43 MiB in 18 packages
real	0m 1.24s
user	0m 0.40s
sys	0m 0.15s

qemu


Fedora’s dnf takes over 4 minutes to fetch and unpack 226 MB.

% docker run -t -i fedora /bin/bash
[root@6a52ecfc3afa /]# time dnf install -y qemu
Fedora 32 openh264 (From Cisco) - x86_64     3.1 kB/s | 2.5 kB     00:00
Fedora Modular 32 - x86_64                   6.3 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         6.0 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 334 kB/s |  23 MB     01:10
Fedora 32 - x86_64                            33 MB/s |  70 MB     00:02
[…]

Total download size: 181 M
Downloading Packages:
[…]

real	4m37.652s
user	0m38.239s
sys	0m6.321s

NixOS’s Nix takes almost 34s to fetch and unpack 180 MB.

% docker run -t -i nixos/nix
83971cf79f7e:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.qemu'
unpacking channels...
created 1 symlinks in user environment
installing 'qemu-5.1.0'
these paths will be fetched (180.70 MiB download, 1146.92 MiB unpacked):
[…]
real	0m 33.64s
user	0m 16.96s
sys	0m 3.05s

Debian’s apt takes about 85 seconds to fetch and unpack 224 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (5998 kB/s)
[…]
Fetched 216 MB in 43s (5006 kB/s)
[…]
real	1m25.375s
user	0m29.163s
sys	0m12.835s

Arch Linux’s pacman takes almost 44s to fetch and unpack 142 MB.

% docker run -t -i archlinux/base
[root@58c78bda08e8 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core          130.8 KiB  1055 KiB/s 00:00
 extra        1655.8 KiB  3.70 MiB/s 00:00
 community       5.2 MiB  7.89 MiB/s 00:01
[…]
Total Download Size:   135.46 MiB
Total Installed Size:  661.05 MiB
[…]
real	0m43.901s
user	0m4.980s
sys	0m2.615s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Appendix B: measurement details (2019)

ack


Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu


Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet Debian - Michael Stapelberg: Winding down my Debian involvement

This post is hard to write, both in the emotional sense and in the “I would have written a shorter letter, but I didn’t have the time” sense. Hence, please assume the best of intentions when reading it—it is not my intention to make anyone feel bad about their contributions, but rather to provide some insight into why my frustration level ultimately exceeded the threshold.

Debian has been in my life for well over 10 years at this point.

A few weeks ago, I visited some old friends at the Zürich Debian meetup after a multi-year period of absence. On my bike ride home, it occurred to me that the topics of our discussions had remarkable overlap with my last visit. We had a discussion about the merits of systemd, which took a detour to respect in open source communities, returned to processes in Debian and eventually culminated in democracies and their theoretical/practical failings. Admittedly, that last one might be a Swiss thing.

I say this not to knock on the Debian meetup, but because it prompted me to reflect on the feelings Debian evokes in me lately, and whether it’s still a good fit for me.

So I’m finally making a decision that I should have made a long time ago: I am winding down my involvement in Debian to a minimum.

What does this mean?

Over the coming weeks, I will:

  • transition packages to be team-maintained where it makes sense
  • remove myself from the Uploaders field on packages with other maintainers
  • orphan packages where I am the sole maintainer

I will try to keep up best-effort maintenance of the manpages.debian.org service and the codesearch.debian.net service, but any help would be much appreciated.

For all intents and purposes, please treat me as permanently on vacation. I will try to be around for administrative issues (e.g. permission transfers) and questions addressed directly to me, provided they are easy enough to answer.

Why?

When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time. Now, over 5 years of full time work later, my day job taught me a lot, both about what works in large software engineering projects and how I personally like my computer systems. I am very conscious of how I spend the little spare time that I have these days.

The following sections each deal with what I consider a major pain point, in no particular order. Some of them influence each other—for example, if changes worked better, we could have a chance at transitioning packages to be more easily machine readable.

Change process in Debian

Over the last few years, my team at work has conducted various smaller and larger refactorings across the entire code base (touching thousands of projects), so we have learnt a lot of valuable lessons about how to do these changes effectively. It irks me that Debian works almost the opposite way in every regard. I appreciate that every organization is different, but I think a lot of my points do actually apply to Debian.

In Debian, packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian.

While it is great to have a lint tool (for quick, local/offline feedback), it is even better to not require a lint tool at all. The team conducting the change (e.g. the C++ team introduces a new hardening flag for all packages) should be able to do their work transparent to me.

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

Notably, the cost of each change is distributed onto the package maintainers in the Debian model. At work, we have found that the opposite works better: if the team behind the change is put in power to do the change for as many users as possible, they can be significantly more efficient at it, which reduces the total cost and time a lot. Of course, exceptions (e.g. a large project abusing a language feature) should still be taken care of by the respective owners, but the important bit is that the default should be the other way around.

Debian is lacking tooling for large changes: it is hard to programmatically deal with packages and repositories (see the section below). The closest to “sending out a change for review” is to open a bug report with an attached patch. I thought the workflow for accepting a change from a bug report was too complicated and started mergebot, but only Guido ever signaled interest in the project.

Culturally, reviews and reactions are slow. There are no deadlines. I literally sometimes get emails notifying me that a patch I sent out a few years ago (!!) is now merged. This turns projects from a small number of weeks into many years, which is a huge demotivator for me.

Interestingly enough, you can see artifacts of the slow online activity manifest itself in the offline culture as well: I don’t want to be discussing systemd’s merits 10 years after I first heard about it.

Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate. My canonical example for this is rsync, whose maintainer refused my patches to make the package use debhelper purely out of personal preference.

Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.

What would things look like in a better world?

  1. As a project, we should strive towards more unification. Uniformity still does not rule out experimentation; it just changes the trade-off from easier experimentation and harder automation to harder experimentation and easier automation.
  2. Our culture needs to shift from “this package is my domain, how dare you touch it” to a shared sense of ownership, where anyone in the project can easily contribute (reviewed) changes without necessarily even involving individual maintainers.

To learn more about what successful large changes can look like, I recommend my colleague Hyrum Wright’s talk “Large-Scale Changes at Google: Lessons Learned From 5 Yrs of Mass Migrations”.

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Of course, what you do in such a repository also varies subtly from team to team, and even within teams.

In practice, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Instead of using GitLab’s API to create a merge request, you have to design an entirely different, more complex system, which deals with intermittently (or permanently!) unreachable repositories and abstracts away differences in patch delivery (bug reports, merge requests, pull requests, email, …).

Wildly diverging workflows are not just a temporary problem either. I participated in long discussions about different git workflows during DebConf 13, and gather that there were similar discussions in the meantime.

Personally, I cannot keep enough details of the different workflows in my head. Every time I touch a package that works differently than mine, it frustrates me immensely to re-learn aspects of my day-to-day.

After noticing workflow fragmentation in the Go packaging team (which I started), I tried fixing this with the workflow changes proposal, but did not succeed in implementing it. The lack of effective automation and slow pace of changes in the surrounding tooling despite my willingness to contribute time and energy killed any motivation I had.

Old infrastructure: package uploads

When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).

Depending on timing, I estimated that you might wait for over 7 hours (!!) before your package is actually installable.

What’s worse for me is that feedback to your upload is asynchronous. I like to do one thing, be done with it, move to the next thing. The current setup requires a many-minute wait and costly task switch for no good technical reason. You might think a few minutes aren’t a big deal, but when all the time I can spend on Debian per day is measured in minutes, this makes a huge difference in perceived productivity and fun.

The last communication I can find about speeding up this process is ganneff’s post from 2008.

What would things look like in a better world?

  1. Anonymous FTP would be replaced by a web service which ingests my package and returns an authoritative accept or reject decision in its response (see the sketch after this list).
  2. For accepted packages, there would be a status page displaying the build status and when the package will be available via the mirror network.
  3. Packages should be available within a few minutes after the build completed.
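
A hedged Go sketch of point 1; the endpoint, the checks and the response format are all invented for illustration:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// checkUpload stands in for whatever validation the archive performs
// (signature verification, policy checks, ...); purely illustrative.
func checkUpload(changes []byte) error {
    if len(changes) == 0 {
        return fmt.Errorf("empty upload")
    }
    return nil
}

// uploadHandler returns the authoritative accept/reject decision in
// the HTTP response itself, instead of asynchronously via email.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if err := checkUpload(body); err != nil {
        http.Error(w, "rejected: "+err.Error(), http.StatusUnprocessableEntity)
        return
    }
    // A real service would also return a status page URL (point 2).
    fmt.Fprintln(w, "accepted")
}

func main() {
    http.HandleFunc("/upload", uploadHandler)
    http.ListenAndServe(":8080", nil)
}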

Old infrastructure: bug tracker

I dread interacting with the Debian bug tracker. debbugs is a piece of software (from 1994) which is only used by Debian and the GNU project these days.

Debbugs processes emails, which is to say it is asynchronous and cumbersome to deal with. Despite running on the fastest machines we have available in Debian (or so I was told when the subject last came up), its web interface loads very slowly.

Notably, the web interface at bugs.debian.org is read-only. Setting up a working email setup for reportbug(1) or manually dealing with attachments is a rather big hurdle.

For reasons I don’t understand, every interaction with debbugs results in many different email threads.

Aside from the technical implementation, I also can never remember the different ways that Debian uses pseudo-packages for bugs and processes. I need them rarely enough that I never establish a mental model of how they are set up, or working memory of how they are used, but frequently enough to be annoyed by this.

What would things look like in a better world?

  1. Debian would switch from a custom bug tracker to a (any) well-established one.
  2. Debian would offer automation around processes. It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).

Old infrastructure: mailing list archives

It baffles me that in 2019, we still don’t have a conveniently browsable threaded archive of mailing list discussions. Email and threading is more widely used in Debian than anywhere else, so this is somewhat ironic. Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.

Debian is hard to machine-read

While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome. I have picked just 3 quick examples to illustrate my point.

debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts. Without actually installing a package, you cannot know which changes it does to the alternatives database.

pk4 needs to maintain its own cache to look up package metadata based on the package name. Other tools parse the apt database from scratch on every invocation. A proper database format, or at least a binary interchange format, would go a long way.

Debian Code Search wants to ingest new packages as quickly as possible. There used to be a fedmsg instance for Debian, but it no longer seems to exist. It is unclear where to get notifications from for new packages, and where best to fetch those packages.

Complicated build stack

See my “Debian package build tools” post. It really bugs me that the sprawl of tools is not seen as a problem by others.

Developer experience pretty painful

Most of the points discussed so far deal with the experience in developing Debian, but as I recently described in my post “Debugging experience in Debian”, the experience when developing using Debian leaves a lot to be desired, too.

I have more ideas

At this point, the article is getting pretty long, and hopefully you got a rough idea of my motivation.

While I described a number of specific shortcomings above, the final nail in the coffin is actually the lack of a positive outlook. I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

I intend to publish a few more posts about specific ideas for improving operating systems here. Stay tuned.

Lastly, I hope this post inspires someone, ideally a group of people, to improve the developer experience within Debian.

Planet Debian - Kentaro Hayashi: Introduction to recent debexpo (mentors.debian.net)

I gave a presentation about "How to hack debexpo (mentors.debian.net)" at Tokyo Debian (a local Debian meeting) on 21 November 2020.

Here is the agenda of the presentation:

  • What is mentors.debian.net
  • How to setup debexpo development environment
  • One example to hack debexpo (Showing "In Debian" flag)

The presentation slides are published at Rabbit Slide Show (written in Japanese).

I hope that more people will get involved in hacking debexpo!


Planet Debian - Shirish Agarwal: Rights, Press freedom and India

In some ways it is sad, and in other ways interesting, to see how personal liberty is viewed in India, and how those with the highest fame and power can get a kind of justice that the rest cannot.

Arnab Goswami

This particular gentleman is a class apart. He is the editor of Republic TV, a right-leaning channel which demonizes minorities, women, and whatever else is antithetical to the Central Govt. of India. As a result, there has been a spate of cases against him in the past few months. But surprisingly, in each of them he got a hearing the day after the suit was filed. This is unique in Indian legal history, so much so that a popular legal site which publishes on-going cases put up a post sharing how he was getting prompt hearings. That post itself needs to be updated, as there have been 3 more hearings done back to back for him. This is unusual, as there are so many cases pending for the SC’s attention, some arguably more important than this gentleman’s. So many precedents have been set which will send a wrong message. The biggest one: that even though a trial is taking place in the sessions court (below the High Court), the SC can interject on matters. What this will do to the morale of both lawyers and judges of the various Sessions Courts is a matter of speculation, and yet, as shared, it is unprecedented. The saddest part was when Justice Chandrachud said –

Justice Chandrachud – If you don’t like a channel then don’t watch it. – 11th November 2020.

This is basically giving free rein to hate speech. How can the SC say something like that? And this is the same Supreme Court which could not take two tweets from Shri Prashant Bhushan when he made remarks against the judiciary.

J&K pleas in Supreme Court pending since August 2019 (Abrogation 370)

After the abrogation of 370, citizens of Jammu and Kashmir, whose population is 13.6 million people including 4 million Hindus, have been stuck with reduced rights and their land being taken away due to new laws. Many of the Hindus, who regionally are a minority, now rue the fact that they supported the abrogation of 370A. Imagine: a whole state whose answers and prayers have not been heard by the Supreme Court, and the people need to move a prayer stating the same.

100 Journalists, activists languishing in Jail without even a hearing

55 journalists alone have been threatened, booked and jailed for reporting on the pandemic. Their fault: they were bringing to light the irregularities and corruption during the pandemic’s early months. Activists such as Sudha Bharadwaj, who gave up her American citizenship and settled here to fight for tribals, have been in jail for 2 years without any charges. There are many like her. Several more petitions are lying in the Supreme Court, e.g. Varavara Rao’s: not a single hearing in the last couple of years, even though he has taken part in so many national movements, including during the Emergency, and was part-responsible for the creation of Telangana state out of Andhra Pradesh.

Then there is Devangana Kalita, who works for gender rights. Similar to Sudha Bharadwaj, she had an opportunity to go to the UK and settle there. She did her master’s and came back. And now she is in jail for the things that she studied. While she took part in anti-CAA sit-ins, none of her speeches were incendiary, but she is still locked up under the UAPA (Unlawful Activities (Prevention) Act). I could go on and on, but for the moment these should suffice.

Petitions about the hate speech which resulted in riots in Delhi are pending; on the (controversial) Citizenship Amendment Act, no hearings till date. All of this has been explained best in a newspaper article which articulates perhaps all that I wanted to articulate, and more. It is and was amazing to see how in certain cases Article 32 is valid and in many it is not. Also, a fair reading of Justice Bobde’s article tells you a lot about how the SC is functioning. I would like to point out that barandbench, along with livelawindia, makes it easier for non-lawyers and the public to know how arguments are done in court and what evidence is taken, as well as giving some clue about judicial orders and judgements. Both of these resources provide an invaluable service, and more often than not, free of charge.

Student Suicide and High Cost of Education

For quite some time now, the cost of education has been shooting up. While I have visited this topic earlier as well, recently a young girl committed suicide because she was unable to pay the fees, as well as the additional costs due to the pandemic. Further investigations show that this is the case with many students who are unable to buy laptops. While one could think it is limited to one college, that would be wrong: it is the case almost across all of India, and this will continue for months and years. People do know that the pandemic is going to last a significant time, and it will be a long time before the R value becomes zero. Even the promising vaccine from Pfizer needs constant refrigeration, which is sort of next to impossible in India. It is going to make things very costly.

Last Nail on Indian Media

Just today, the last nail has been put in the coffin of Indian media. Thankfully, Freedom Gazette India did a much better job of covering it, so I'm just pasting that –

Information and Broadcasting Ministry bringing OTT services as well as news within its ambit.

With this, projects like Scam 1992: The Harshad Mehta Story, Bad Boy Billionaires: India, Test Case, Delhi Crime, Laakhon Mein Ek, etc., and investigative journalism of that kind, would be still-births. Many of these web series also shared tales of women’s empowerment, while at the same time showing some of the hard choices that women have to contend with in order to live.

Even western media may be censored where the political discourse is not to the government’s liking. There have been so many accounts from Mr. Ravish Kumar, winner of the Ramon Magsaysay Award, of how the electricity was cut in many places during his shows. I too have been a victim when the BJP governed Maharashtra, as almost all Puneites experienced it: the power would go out for half an hour or 45 minutes at exactly that time.

There is another aspect to it. The U.S. elections showed how independent media was able to counter Mr. Trump’s various falsehoods and give rise to alternative ideas, which helped the team of Bernie Sanders, Joe Biden and Kamala Harris; Biden is now the President-elect and Kamala Harris the Vice-President-elect. The journey to the White House still seems as tough as before, though. Let’s see what happens.

Hopefully 2021 will bring in some good news.



Krebs on Security: Convicted SIM Swapper Gets 3 Years in Jail

A 21-year-old Irishman who pleaded guilty to charges of helping to steal millions of dollars in cryptocurrencies from victims has been sentenced to just under three years in prison. The defendant is part of an alleged conspiracy involving at least eight others in the United States who stand accused of theft via SIM swapping, a crime that involves convincing mobile phone company employees to transfer ownership of the target’s phone number to a device the attackers control.

Conor Freeman of Dublin took part in the theft of more than two million dollars worth of cryptocurrency from different victims throughout 2018. Freeman was named as a member of a group of alleged SIM swappers called “The Community” charged last year with wire fraud in connection with SIM swapping attacks that netted in excess of $2.4 million.

Among the eight others accused are three former wireless phone company employees who allegedly helped the gang hijack mobile numbers tied to their targets. Prosecutors say the men would identify people likely to have significant cryptocurrency holdings, then pay their phone company cohorts to transfer the victim’s mobile service to a new SIM card — the smart chip in each phone that ties a customer’s device to their number.

A fraudulent SIM swap allows the bad guys to intercept a target’s incoming phone calls and text messages. This is dangerous because a great many sites and services still allow customers to reset their passwords simply by clicking on a link sent via SMS. From there, attackers can gain access to any accounts that allow password resets via SMS or automated calls, from email and social media profiles to virtual currency trading platforms.

Like other accused members of The Community, Freeman was an active member of OGUsers, a forum that caters to people selling access to hijacked social media and other online accounts. But unlike others in the group, Freeman used his real name (username: Conor), and disclosed his hometown and date of birth to others on the forum. At least twice in the past few years OGUsers was hacked, and its database of profiles and user messages posted online.

According to a report in The Irish Times, Freeman spent approximately €130,000, which he had converted into cash from the stolen cryptocurrency. Conor posted on OGUsers that he spent approximately $14,000 on a Rolex watch. The rest was handed over to the police in the form of an electronic wallet that held the equivalent of more than $2 million.

The Irish Times says the judge in the case insisted the three-year sentence was warranted in order to deter the defendant and to prevent others from following in his footsteps. The judge said stealing money of this order is serious because no one can know the effect it will have on the victim, noting that one victim’s life savings were taken and the proceeds of the sale of his house were stolen.

One way to protect your accounts against SIM swappers is to remove your phone number as a primary or secondary authentication mechanism wherever possible. Many online services require you to provide a phone number upon registering an account, but in many cases that number can be removed from your profile afterwards.

It’s also important for people to use something other than text messages for two-factor authentication on their email accounts when stronger authentication options are available. Consider instead using a mobile app like Authy, Duo, or Google Authenticator to generate the one-time code. Or better yet, a physical security key if that’s an option.

Cory Doctorow: The Attack Surface Lectures: Little Revolutions

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “Little Revolutions,” hosted by Skylight Books in Los Angeles, with guest-hosts Tochi Onyebuchi and Bethany C. Morrow. It was recorded on October 21, 2020.

Here is a link to this presentation in Skylight’s archive of author events. Please consider subscribing to Skylight’s feed of these videos to see other outstanding author events!

MP3

Worse Than Failure: Error'd: Reduced Complexity, Increased Errors

"I tried a more complex password and got the same error message, but after trying with a shorter password, it let me through!" wrote Sameer K.

Lucas T. writes, "Translation: 'Dear ladies and gentlemen, because of an internet failure (some identifying info here), the electronic signature and the owl are unavailable. The issue is being worked on. Kind regards, your application support.' Well, isn't this just great. How exactly am I supposed to work without the owl!?"

"I was looking into time issues regarding backing up my Mac with TimeMachine and saw that it REALLY MUST BE A TIME MACHINE AFTER ALL!!" Mike S. wrote.

Joel B. writes, "3D printing my desserts? Sign. Me. Up."

"Huh. Apparently someone hacked my Facebook account before I was born," wrote Rob.



Cryptogram: Symantec Reports on Cicada APT Attacks against Japan

Symantec is reporting on an APT group linked to China, named Cicada. They have been attacking organizations in Japan and elsewhere.

Cicada has historically been known to target Japan-linked organizations, and has also targeted MSPs in the past. The group is using living-off-the-land tools as well as custom malware in this attack campaign, including a custom malware — Backdoor.Hartip — that Symantec has not seen being used by the group before. Among the machines compromised during this attack campaign were domain controllers and file servers, and there was evidence of files being exfiltrated from some of the compromised machines.

The attackers extensively use DLL side-loading in this campaign, and were also seen leveraging the ZeroLogon vulnerability that was patched in August 2020.

Interesting details about the group’s tactics.

News article.

Cryptogram: The US Military Buys Commercial Location Data

Vice has a long article about how the US military buys commercial location data worldwide.

The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a “level” app that can be used to help, for example, install shelves in a bedroom.

This isn’t new, this isn’t just data of non-US citizens, and this isn’t the US military. We have lots of instances where the government buys data that it cannot legally collect itself.

Some app developers Motherboard spoke to were not aware who their users’ location data ends up with, and even if a user examines an app’s privacy policy, they may not ultimately realize how many different industries, companies, or government agencies are buying some of their most sensitive data. U.S. law enforcement purchase of such information has raised questions about authorities buying their way to location data that may ordinarily require a warrant to access. But the USSOCOM contract and additional reporting is the first evidence that U.S. location data purchases have extended from law enforcement to military agencies.

Planet Debian - Molly de Blanc: Transparency

Technology must be transparent in order to be knowable. Technology must be knowable in order for us to be able to consent to it in good faith. Good faith informed consent is necessary to preserving our (digital) autonomy.

Let’s now look at this in reverse, considering first why informed consent is necessary to our digital autonomy.

Let’s take the concept of our digital autonomy as being one of the highest goods. It is necessary to preserve and respect the value of each individual, and the collectives we choose to form. It is a right to which we are entitled by our very nature, and a prerequisite for building the lives we want, that fulfill us. This is something that we have generally agreed on as important or even sacred. Our autonomy, in whatever form it takes, in whatever part of our life it governs, is necessary and must be protected.

One of the things we must do in order to accomplish this is to build a practice and culture of consent. Giving consent — saying yes — is not enough. This consent must come from a place of understanding of that to which one is consenting. “Informed consent is consenting to the unknowable.”(1)

Looking at sexual consent as a parallel, even when we have a partner who discloses their sexual history and activities, we cannot know whether they are being truthful and complete. Let’s even say they are and that we can trust this, there is a limit to how much even they know about their body, health, and experience. They might not know the extent of their other partners’ experience. They might be carrying HPV without symptoms; we rarely test for herpes.

Arguably, we have more potential to definitely know what is occurring when it comes to technological consent. Technology can be broken apart. We can share and examine code, schematics, and design documentation. Certainly, lots of information is being hidden from us — a lot of code is proprietary, technical documentation unavailable, and the skills to process these things are treated as special, arcane, and even magical. Tracing the resource pipelines for the minerals and metals essential to building circuit boards is not possible for the average person. Knowing the labor practices of each step of this process, and understanding what those imply for individuals, societies, and the environments they exist in seems improbable at best.

Even though true informed consent might not be possible, it is an ideal towards which we must strive. We must work with what we have, and we must be provided with as much information as possible.

A periodic conversation that arises in the consideration of technology rights is whether companies should build backdoors into technology for the purpose of government exploitation. A backdoor is a hidden vulnerability in a piece of technology that, when used, would afford someone else access to your device or work or cloud storage or whatever. As long as the source code that powers computing technology is proprietary and opaque, we cannot truly know whether backdoors exist and how secure we are in our digital spaces and even our own computers, phones, and other mobile devices.

We must commit wholly to transparency and openness in order to create the possibility of as-informed-as-possible consent in order to protect our digital autonomy. We cannot exist in a vacuum and practical autonomy relies on networks of truth in order to provide the opportunity for the ideal of informed consent. These networks of truth are created through the open availability and sharing of information, relating to how and why technology works the way it does.

(1) Heintzman, Kit. 2020.

Cory DoctorowThe Attack Surface Lectures: Cyberpunk and Post Cyberpunk

The Attack Surface Lectures were a series of eight panel discussions on the themes in Cory Doctorow’s novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “Cyberpunk and Post-Cyberpunk,” hosted by Anderson’s Books in Naperville, IL, with guest-hosts Bruce Sterling and Christopher Brown. It was recorded on October 19, 2020.

Here is the original Youtube link for this program. Please consider subscribing to Anderson’s Youtube channel for access to all their outstanding author events!

MP3

Planet DebianSteinar H. Gunderson: COVID-19 vaccine confidence intervals

I keep hearing about new vaccines being “at least 90% effective”, “94.5% effective”, “92% effective” etc... and that's obviously really good news. But is that a point estimate, or a confidence interval? Does 92% mean “anything from 70% to 99%”, given that n=20?

I dusted off the memories of how bootstrapping works (I didn't want to try to figure out whether one could really approximate using the Cauchy distribution or not) and wrote some R code. Obviously, don't use this for medical or policy decisions since I don't have a background in either medicine or medical statistics. But the results are uplifting nevertheless; here from the Pfizer/BioNTech data that I could find:

> N <- 43538 / 2
> infected_vaccine <- c(rep(1, times = 8), rep(0, times=N-8))
> infected_placebo <- c(rep(1, times = 162), rep(0, times=N-162))
>
> infected <- c(infected_vaccine, infected_placebo)
> vaccine <- c(rep(1, times=N), rep(0, times=N))
> mydata <- data.frame(infected, vaccine)
>
> library(boot)
> rsq <- function(data, indices) {
+   d <- data[indices,]
+   num_infected_vaccine <- sum(d[which(d$vaccine == 1), ]$infected)
+   num_infected_placebo <- sum(d[which(d$vaccine == 0), ]$infected)
+   return(1.0 - num_infected_vaccine / num_infected_placebo)
+ }
>
> results <- boot(data=mydata, statistic=rsq, R=1000)
> results

ORDINARY NONPARAMETRIC BOOTSTRAP


Call:
boot(data = mydata, statistic = rsq, R = 1000)


Bootstrap Statistics :
     original       bias    std. error
t1* 0.9506173 -0.001428342  0.01832874
> boot.ci(results, type="perc")
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL :
boot.ci(boot.out = results, type = "perc")

Intervals :
Level     Percentile
95%   ( 0.9063,  0.9815 )
Calculations and Intervals on Original Scale

So that would be a 95% CI of between 90.6% and 98.1% effective, roughly. The confidence intervals might be slightly too wide, since I didn't have enough RAM (!) to run the bootstrap calibrated ones (BCa).

Again, take it with a grain of salt. Corrections welcome. :-)

Planet DebianDaniel Silverstone: Withdrawing Gitano from support

Unfortunately, in Debian in particular, libgit2 is undergoing a transition which is blocked by gall. Despite having had over a month to deal with this, I've not managed to summon the tuits to update Gall to the new libgit2 which means, nominally, I ought to withdraw it from testing and possibly even from unstable given that I'm not really prepared to look after Gitano and friends in Debian any longer.

However, I'd love for Gitano to remain in Debian if it's useful to people. Gall isn't exactly a large piece of C code, and so probably won't be a huge job to do the port, I simply don't have the desire/energy to do it myself.

If someone wanted to do the work and provide a patch / "pull request" to me, then I'd happily take on the change and upload a new package, or if someone wanted to NMU the gall package in Debian I'll take the change they make and import it into upstream. I just don't have the energy to reload all that context and do the change myself.

If you want to do this, email me and let me know, so I can support you and take on the change when it's done. Otherwise I probably will go down the lines of requesting Gitano's removal from Debian in the next week or so.

Worse Than FailureCodeSOD: Prepend Eternal

Octavia inherited a decade-old pile of C#, and the code quality was pretty much what one would expect from a decade-old pile that hadn't seen any real refactoring: nothing but spaghetti. Worse, it also had an "inner platform" problem, as everything they put in their API could conceivably be called by their customers' own "customizations".

One small block caught her eye as suspicious:

public void SomeFunctionality()
{
    // Other functionality here
    int x = SomeIntMethod();
    String y = PrependZeros(x.ToString());
    // Do other things with y here
}

That call to PrependZeros looked… suspicious. For starters, how many zeroes? It clearly was meant to pad to a certain length, but what length?

public String PrependZeros(string n)
{
    if (n.Length == 1)
    {
        return "00" + n;
    }
    else if (n.Length == 2)
    {
        return "0" + n;
    }
    else
    {
        return n;
    }
}

We've reimplemented one of the built-in formatting methods (PadLeft, or the "D3" numeric format string), badly, which isn't particularly unusual to see. This method clearly doesn't care if it gets a number that's longer than 3 digits- maybe that's the correct behavior? Inside the codebase, this would be trivial for Octavia to remove, as it's only invoked that one time.

Except she can't do that. Because the original developer placed the code in the namespace accessible to customer customizations. Which means some unknown number of customers might have baked this method into their own code. Octavia can't rename it, can't remove it, and there's no real point in re-implementing it. Maybe someday, they'll ship a new version and release some breaking changes, but for now, PrependZeros must live on, just in case a customer is using it.

Every change breaks somebody's workflow.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Krebs on SecurityTrump Fires Security Chief Christopher Krebs

President Trump on Tuesday fired his top election security official Christopher Krebs (no relation). The dismissal came via Twitter two weeks to the day after Trump lost an election he baselessly claims was stolen by widespread voting fraud.

Chris Krebs. Image: CISA.

Krebs, 43, is a former Microsoft executive appointed by Trump to head the Cybersecurity and Infrastructure Security Agency (CISA), a division of the U.S. Department of Homeland Security. As part of that role, Krebs organized federal and state efforts to improve election security, and to dispel disinformation about the integrity of the voting process.

Krebs’ dismissal was hardly unexpected. Last week, in the face of repeated statements by Trump that the president was robbed of re-election by buggy voting machines and millions of fraudulently cast ballots, Krebs’ agency rejected the claims as “unfounded,” asserting that “the November 3rd election was the most secure in American history.”

In a statement on Nov. 12, CISA declared “there is no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”

But in a tweet Tuesday evening, Trump called that assessment “highly inaccurate,” alleging there were “massive improprieties and fraud — including dead people voting, Poll watchers not allowed into polling locations, ‘glitches’ in the voting machines that changed votes from Trump to Biden, late voting, and many more.”

Twitter, as it has done with a remarkable number of the president’s tweets lately, flagged the statements as disputed.

By most accounts, Krebs was one of the more competent and transparent leaders in the Trump administration. But that same transparency may have cost him his job: Krebs’ agency earlier this year launched “Rumor Control,” a blog that sought to address many of the conspiracy theories the president has perpetuated in recent days.

Sen. Richard Burr, a Republican from North Carolina, said Krebs had done “a remarkable job during a challenging time,” and that the “creative and innovative campaign CISA developed to promote cybersecurity should serve as a model for other government agencies.”

Sen. Angus King, an Independent from Maine and co-chair of a commission to improve the nation’s cyber defense posture, called Krebs “an incredibly bright, high-performing, and dedicated public servant who has helped build up new cyber capabilities in the face of swiftly-evolving dangers.”

“By firing Mr. Krebs for simply doing his job, President Trump is inflicting severe damage on all Americans – who rely on CISA’s defenses, even if they don’t know it,” King said in a written statement. “If there’s any silver lining in this unjust decision, it’s this: I hope that President-elect Biden will recognize Chris’s contributions, and consult with him as the Biden administration charts the future of this critically important agency.”

KrebsOnSecurity has received more than a few messages these past two weeks from readers who wondered why the much-anticipated threat from Russian or other state-sponsored hackers never appeared to materialize in this election cycle.

That seems a bit like asking why the year 2000 came to pass with very few meaningful disruptions from the Y2K computer date rollover problem. After all, in advance of the new millennium, the federal government organized a series of task forces that helped coordinate readiness for the changeover, and to minimize the impact of any disruptions.

But the question also ignores a key goal of previous foreign election interference attempts leading up to the 2016 U.S. presidential and 2018 mid-term elections. Namely, to sow fear, uncertainty, doubt, distrust and animosity among the electorate about the democratic process and its outcomes.

To that end, it’s difficult to see how anyone has done more to advance that agenda than President Trump himself, who has yet to concede the race and continues to challenge the result in state courts and in his public statements.

Cory DoctorowThe Attack Surface Lectures: Intersectionality: Race, Surveillance, and Tech and Its History

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “​​Intersectionality: Race, Surveillance, and Tech and Its History,” hosted by The Booksmith in San Francisco, with guest-hosts Malkia Devich-Cyril and Meredith Whittaker​​. It was recorded on October 15, 2020.

Here is the original Youtube link for this program. Please consider subscribing to The Booksmith’s Youtube channel for access to all their outstanding author events!

MP3

Charles StrossThe Laundry Files: an updated chronology

I've been writing Laundry Files stories since 1999, and there are now about 1.4 million words in that universe. That's a lot of stuff: a typical novel these days is 100,000 words, but these books trend long, and this count includes 11 novels (of which #10 comes out later this month) and some shorter work. It occurs to me that while some of you have been following them from the beginning, a lot of people come to them cold in the shape of one story or another.

So below the fold I'm going to explain the Laundry Files time line, the various sub-series that share the setting, and give a running order for the series—including short stories as well as novels.

(The series title, "The Laundry Files", was pinned on me by editorial fiat at a previous publisher whose policy was that any group of 3 or more connected novels had to have a common name. It wasn't my idea: my editor at the time also published Jim Butcher, and Bob—my sole protagonist at that point in the series—worked for an organization disparagingly nicknamed "the Laundry", so the inevitable happened. Using a singular series title gives the impression that it has a singular theme, which would be like calling Terry Pratchett's Discworld books "the Unseen University series". Anyway ...)

TLDR version: If you just want to know where to start reading, pick one of: The Atrocity Archives, The Rhesus Chart, The Nightmare Stacks, or Dead Lies Dreaming. These are all safe starting points for the series that don't require prior familiarity. Other books might leave you confused if you dive straight in, so here's an exhaustive run-down of all the books and short stories.




Typographic conventions: story titles are rendered in italics (like this). Book titles are presented in boldface (thus).

Publication dates are presented like this: (pub: 2016). The year in which a story is set is presented like so: (set: 2005).

The list is sorted in story order rather than publication order.




The Atrocity Archive (set: 2002; pub: 2002-3)

  • The short novel which started it all. Originally published in an obscure Scottish SF digest-format magazine called Spectrum SF, it ran from 2002 to 2003, and introduced our protagonist Bob Howard, his (eventual) love interest Mo O'Brien, and a bunch of eccentric minor characters and tentacled horrors. Is a kinda-sorta tribute to spy thriller author Len Deighton.

The Concrete Jungle (set: 2003; pub: see below)

  • Novella, set a year after The Atrocity Archive, in which Bob is awakened in the middle of the night to go and count the concrete cows in Milton Keynes. Winner of the 2005 Hugo award for best SF/F novella.

The Atrocity Archives (set 2002-03, pub: 2003 (hbk), 2006 (trade ppbk))

  • Start reading here! A smaller US publisher, Golden Gryphon, liked The Atrocity Archive and wanted to publish it, but considered it to be too short on its own. So The Concrete Jungle was written, and along with an afterword they were published together as a two-story collection/episodic novel, The Atrocity Archives (note the added 's' at the end). A couple of years later, Ace (part of Penguin group) picked up the US trade and mass market paperback rights and Orbit published it in the UK. (Having won a Hugo award in the meantime really didn't hurt; it's normally quite rare for a small press item such as TAA to get picked up and republished like this.)

The Jennifer Morgue (set: 2005, pub: 2007 (hbk), 2008 (trade ppbk))

  • Golden Gryphon asked for a sequel, hence the James Bond episode in what was now clearly going to be a trilogy of comedy Lovecraftian/spy books. Note that it's riffing off the Broccoli movie franchise version of Bond, not Ian Fleming's original psychopathic British government assassin. Orbit again took UK rights, while Ace picked up the paperbacks. Because I wanted to stick with the previous book's two-story format, I wrote an extra short story:

Pimpf (set: 2006, pub: collected in The Jennifer Morgue)

  • A short story set in what I think of as the Chibi-Laundry continuity; Bob ends up inside a computer running a Neverwinter Nights server (hey, this was before World of Warcraft got big). Chibi-Laundry stories are self-parodies and probably shouldn't be thought of as canonical. (Ahem: there's a big continuity blooper tucked away in this one that comes back to bite me in later books because I forgot about it.)

Down on the Farm (novelette: set 2007, pub. 2008, Tor.com)

  • Novelette: Bob has to investigate strange goings-on at a care home for Laundry agents whose minds have gone. Introduces Krantzberg Syndrome, which plays a major role later in the series.

Equoid (novella: set 2007, pub: 2013, Tor.com)

  • A novella set between The Jennifer Morgue and The Fuller Memorandum; Bob is married to Mo and working for Iris Carpenter. Bob learns why Unicorns are Bad News. Won the 2014 Hugo award for best SF/F novella. Also published as the hardback novella edition Equoid by Subterranean Press.

The Fuller Memorandum (set: 2008, pub: 2010 (US hbk/UK ppbk))

  • Third novel, first to be published in hardback by Ace, published in paperback in the UK by Orbit. The title is an intentional call-out to Adam Hall (aka Elleston Trevor), author of the Quiller series of spy thrillers—but it's actually an Anthony Price homage. This is where we begin to get a sense that there's an overall Laundry Files story arc, and where I realized I wasn't writing a trilogy. Didn't have a short story trailer or afterword because I flamed out while trying to come up with one before the deadline. Bob encounters skullduggery within the organization and has to get to the bottom of it before something really nasty happens: also, what and where is the misplaced "Teapot" that the KGB's London resident keeps asking him about?

Overtime (novelette: set 2009, pub 2009, Tor.com)

  • A heart-warming Christmas tale of Terror. Shortlisted for the Hugo award for best novelette, 2010.

Three Tales from the Laundry Files (ebook-only collection)

  • Collection consisting of Down on the Farm, Overtime, and Equoid, published by Tor.com as an ebook.

The Apocalypse Codex (set: 2010, pub: 2012 (US hbk/UK ppbk))

  • Fourth novel, and a tribute to the Modesty Blaise comic strip and books by Peter O'Donnell. A slick televangelist is getting much too cosy with the Prime Minister, and the Laundry—as a civil service agency—is forbidden from investigating. We learn about External Assets, and Bob gets the first inkling that he's being fast-tracked for promotion. Won the Locus Award for best fantasy novel in 2013.

A Conventional Boy (set: ~2011-12, not yet written)

  • Projected interstitial novella, introducing Derek the DM (The Nightmare Stacks) and Camp Sunshine (The Delirium Brief). Not yet written.

The Rhesus Chart (set: spring 2013, pub: 2014 (US hbk/UK hbk))

  • Fifth novel, a new series starting point if you want to bypass the early novels. First of a new cycle remixing contemporary fantasy sub-genres (I got bored with British spy thriller authors). Subject: Banking, Vampires, and what happens when an agile programming team inside a merchant bank develops PHANG syndrome. First to be published in hardcover in the UK by Orbit.

  • Note that the books are now set much closer together. This is a key point: the world of the Laundry Files has now developed its own parallel and gradually diverging history as the supernatural incursions become harder to cover up. Note also that Bob is powering up (the Bob of The Atrocity Archive wouldn't exactly be able to walk into a nest of vampires and escape with only minor damage to his dignity). This is why we don't see much of Bob in the next two novels.

The Annihilation Score (set: summer/autumn 2013, pub: 2015 (US hbk/UK ppbk))

  • Sixth novel, first with a non-Bob viewpoint protagonist—it's told by Mo, his wife, and contains spoilers for The Rhesus Chart. Deals with superheroes, mid-life crises, nervous breakdowns, and the King in Yellow. We're clearly deep into ahistorical territory here as we have a dress circle box for the very last Last Night of the Proms, and Orbit's lawyers made me very carefully describe the female Home Secretary as clearly not being one of her non-fictional predecessors, not even a little bit.

Escape from Puroland (set: March-April 2014, pub: summer 2021, forthcoming)

  • Interstitial novella, explaining why Bob wasn't around in the UK during the events described in The Nightmare Stacks. He was on an overseas liaison mission, nailing down the coffin lid on one of Angleton's left-over toxic waste sites—this time, it's near Tokyo.

The Nightmare Stacks (set: March-April 2014, pub: June 2016 (US hbk/UK ppbk))

  • Seventh novel, and another series starting point if you want to dive into the most recent books in the series. Viewpoint character: Alex the PHANG. Deals with, well ... the Laundry has been so obsessed by CASE NIGHTMARE GREEN that they're almost completely taken by surprise when CASE NIGHTMARE RED happens. Implicitly marks the end of the Masquerade. Features a Manic Pixie Dream Girl and the return of Bob's Kettenkrad from The Atrocity Archive. Oh, and it also utterly destroys the major British city I grew up in, because revenge is a dish best eaten cold.

The Delirium Brief (set: May-June 2014, pub: June 2017 (US hbk/UK ppbk))

  • Eighth novel, primary viewpoint character: Bob again, but with an ensemble of other viewpoints cropping up in their own chapters. And unlike the earlier Bob books it no longer pastiches other works or genres. Deals with the aftermath of The Nightmare Stacks; opens with Bob being grilled live on Newsnight by Jeremy Paxman and goes rapidly downhill from there. (I'm guessing that if the events of the previous novel had just taken place, the BBC's leading current affairs news anchor might have deferred his retirement for a couple of months ...)

The Labyrinth Index (set: winter 2014/early 2015, pub: October 2018, (US hbk/UK ppbk))

  • Ninth novel, viewpoint character: Mhari, working for the New Management in the wake of the drastic governmental changes that took place at the end of "The Delirium Brief". The shit has well and truly hit the fan on a global scale, and the new Prime Minister holds unreasonable expectations ...

Dead Lies Dreaming (set: December 2016; pub: Oct 2020 (US hbk/UK hbk))

  • New spin-off series, new starting point! The marketing blurb describes it as "book 10 in the Laundry Files" but by the time this book is set—after CASE NIGHTMARE GREEN and the end of the main Laundry story arc (some time in 2015-16)—the Laundry no longer exists. We meet a cast of entirely new characters, civilians (with powers) living under the aegis of the New Management, ruled by his Dread Majesty, the Black Pharaoh. The start of a new trilogy, Dead Lies Dreaming riffs heavily off "Peter and Wendy", the original grimdark version of Peter Pan (before Walt Disney made him twee).

In His House (set: December 2016, pub: probably 2022)

  • Second book in the Dead Lies Dreaming trilogy: continues the story, riffs off Sweeney Todd and Mary Poppins—again: the latter was much darker than the Disney musical implies. (The book is written, but COVID19 has done weird things to publishers' schedules and it's provisionally in the queue behind Invisible Sun, the final Empire Games book, which is due out in September 2021.)

Bones and Nightmares (set: December 2016 and summer of 1820, pub: possibly 2023)

  • Third book in the Dead Lies Dreaming trilogy: finishes the story, riffs off The Prisoner and Jane Austen: also Kingsley's The Water Babies (with Deep Ones). In development.

Further novels are planned but not definite: there need to be 1-2 more books to finish the main Laundry Files story arc with Bob et al, filling in the time line before Dead Lies Dreaming, but the Laundry is a civil service security agency and the current political madness gripping the UK makes it really hard to satirize HMG, so I'm off on a side-quest following the tribulations of Imp, Eve, Wendy, and the gang (from Dead Lies Dreaming) until I figure out how to get back to the Laundry proper.




That's all for now. I'll attempt to update this entry as I write/publish more material.

Worse Than FailureBig Iron

Skills which you don’t use regularly can get rusty. It might not take too much to get the rust off and remind yourself of what you’re supposed to be doing, but the process of remembering can get a little… damaging.

Lesli spent a big chunk of her career doing IT for an insurance company. They were a conservative company in a conservative industry, which meant they were still rolling out new mainframes in the early 2000s. “Big iron” was the future for insurance.

Until it wasn’t, of course. Lesli was one of the “x86 kids”, part of the team that started with desktop support and migrated into running important services on commodity hardware.

The “big iron” mainframe folks, led by Erwin, watched the process with bemusement. Erwin had joined the company back when they had installed their first S/370 mainframe, and had a low opinion of the direction the future was taking. Watching the “x86 kids” struggle with managing growing storage needs gave him a sense of vindication, as the mainframe never had that problem.

The early x86 rollouts started in 2003, and just used internal disks. At first, only the mail server had anything as fancy as a SCSI RAID array. But as time wore on, the storage needs got harder to manage, and eventually the “x86 kids” rolled out a SAN.

The company bought a second-hand disk array and an expensive support contract with the vendor. It was stuffed with 160GB disks, RAIDed together into about 3TB of storage- a generous amount for 2004. Gradually every service moved onto the SAN, starting with file servers and moving on to email and even experiments with virtualization.

Erwin just watched, and occasionally commented about how they’d solved that problem “on big iron” a generation ago.

Storage needs grew, and more disks got crammed into the array. More disks meant more chances for failures, and each time a disk died, the vendor needed to send out a support tech to replace it. That wasn’t so bad when it was once a quarter, but when disks needed to be replaced twice a month, the hassle of getting a tech on-site, through the multiple layers of security, and into the server room became a burden.

“Hey,” Lesli’s boss suggested, circa late 2005. “Why don’t we just do it ourselves? They can just courier over the new drives, and we can swap and initialize the disk ourselves.”

Everyone liked that idea. After a quick round of training, and confirmation that it was safe, the support contract was updated and that became the process.

Until 2009. The world had changed, and Erwin’s beloved “big iron” was declining in relevance. Many of his peers had retired, but he planned to stick it out for a few more years. As the company retired the last mainframe, they needed to reorganize IT, and that meant all the mainframe operators were now going to be server admins. Erwin was put in charge of the storage array.

The good news was that everyone decided to be cautious. Management didn’t want to set Erwin up for failure. Erwin, who frequently wore both a belt and suspenders, didn’t want to take any risks. The support contract was being renegotiated, so the vendor wanted to make sure they looked good. Everyone was ready to make the transition successful.

The first time a disk failed under Erwin’s stewardship, the vendor sent a technician. While Erwin would do all the steps required, the technician was there to train and supervise.

It started well. “You’ll see a red light on the failed disk,” the technician said.

Erwin pointed at a red light. “Like this?”

“Yes, that exactly. Now you’ll need to replace that with the new one.”

Erwin didn’t move. “And I do that how? Let’s go step-by-step.”

The tech started to explain, but went too fast for Erwin’s tastes. Erwin stopped them, and forced them to slow it down. After each step, Erwin paused to confirm it was correct, and note down what, exactly, he had done.

This turned a normally quick process into a bit of a marathon. The marathon got longer, as the technician hadn’t done this for a few years, and was a bit fuzzy on a few of the steps for this specific array, and had to correct themselves- and Erwin had to update his notes. After what felt like too much time, they closed in on the last few steps.

“Okay,” the tech said, “so you pull up a web browser, go to the admin page. Now, login. Great, hit ‘re-initialize’.”

Erwin followed the steps. “It’s warning me about possible data loss, and wants me to confirm by typing in the word ‘yes’?”

“Yeah, sure, do that,” the tech said.

Erwin did.

The tech thought the work was done, but Erwin had more questions. Since the tech was here, Erwin was going to pick their brain. Which was good, because that meant the tech was still on site when every service failed. From the domain service to SharePoint, from the HR database to the actuarial modeling backend, everything which touched the SAN was dead.

“What happened?” Erwin demanded of the tech.

“I don’t know! Something else must have failed.”

Erwin grabbed the tech, Lesli, and the other admins into a conference room. The tech was certain it couldn’t be related to what they had done, so Erwin escalated to the vendor’s phone support. He bulled through the first tier, pointing out they already had a tech onsite, and got to one of the higher-up support reps.

Erwin pulled out his notes, and in detail, recounted every step he had performed. “Finally, I clicked re-initialize.”

“Oh no!” the support rep said. “You don’t want to do that. You want to initialize the disk, not re-initialize. That re-inits the whole array. That’s why there’s a confirmation step, where you have to type ‘yes’.”

“The on-site tech told me to do exactly that.”

The on-site tech experienced what must have been the most uncomfortable silence of their career.

“Oh, well, I’m sorry to hear that,” the support rep said. “That deletes all the header information on the array. The data’s still technically on the disks, but there’s no way to get at it. You’ll need to finish formatting and then recover from backup. And ah… can you take me off speaker and put the on-site tech on the line?”

Erwin handed the phone over to the tech, then rounded up the admins. They were going to have a long day ahead getting the disaster fixed. No one was in the room to hear what the support rep said to the tech. When it was over, the tech scrambled out of the office like the building was on fire, never to be heard from again.

In their defense, however, it had been a few years since they’d done the process themselves. They were a bit rusty.

Speaking of rusty, while Erwin continued to praise his “big iron” as being in every way superior to this newfangled nonsense, he stuck around for a few more years. In that time, he proved that he might never be the fastest admin, but he was the most diligent, cautious, and responsible.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram Michael Ellis as NSA General Counsel

Over at Lawfare, Susan Hennessey has an excellent primer on how Trump loyalist Michael Ellis got to be the NSA General Counsel, over the objections of NSA Director Paul Nakasone, and what Biden can and should do about it.

While important details remain unclear, media accounts include numerous indications of irregularity in the process by which Ellis was selected for the job, including interference by the White House. At a minimum, the evidence of possible violations of civil service rules demand immediate investigation by Congress and the inspectors general of the Department of Defense and the NSA.

The moment also poses a test for President-elect Biden’s transition, which must address the delicate balance between remedying improper politicization of the intelligence community, defending career roles against impermissible burrowing, and restoring civil service rules that prohibit both partisan favoritism and retribution. The Biden team needs to set a marker now, to clarify the situation to the public and to enable a new Pentagon general counsel to proceed with credibility and independence in investigating and potentially taking remedial action upon assuming office.

The NSA general counsel is not a Senate-confirmed role. Unlike the general counsels of the CIA, Pentagon and Office of the Director of National Intelligence (ODNI), all of which require confirmation, the NSA’s general counsel is a senior career position whose occupant is formally selected by and reports to the general counsel of the Department of Defense. It’s an odd setup — ­and one that obscures certain realities, like the fact that the NSA general counsel in practice reports to the NSA director. This structure is the source of a perennial legislative fight. Every few years, Congress proposes laws to impose a confirmation requirement as more appropriately befits an essential administration role, and every few years, the executive branch opposes those efforts as dangerously politicizing what should be a nonpolitical job.

While a lack of Senate confirmation reduces some accountability and legislative screening, this career selection process has the benefit of being designed to eliminate political interference and to ensure the most qualified candidate is hired. The system includes a complex set of rules governing a selection board that interviews candidates, certifies qualifications and makes recommendations guided by a set of independent merit-based principles. The Pentagon general counsel has the final call in making a selection. For example, if the panel has ranked a first-choice candidate, the general counsel is empowered to choose one of the others.

Ryan Goodman has a similar article at Just Security.


Charles StrossUpcoming Attractions!

As you know by now, my next novel, Dead Lies Dreaming, comes out next week (on Tuesday the 27th in the US and Thursday the 29th in the UK, because I've got different publishers in different territories).

Signed copies can be ordered from Transreal Fiction in Edinburgh via the Hive online mail order service.

(You can also order it via Big River co and all good bookshops, but they don't stock signed copies: Link to Amazon US: Link to Amazon UK. Ebooks are available too, and I gather the audiobook—again, there's a different version in the US, from Audible, and the UK, from Hachette Digital—should be released at the same time.)

COVID-19 has put a brake on any plans I might have had to promote the book in public, but I'm doing a number of webcast events over the next few weeks. Here are the highlights:

Outpost 2020 is a virtual SF convention taking place from Friday 23rd (tomorrow!) to Sunday 25th. I'm on a discussion panel on Saturday 24th at 4pm (UK time), on the subject of "Reborn from the Apocalypse": Both history and current events teach that a Biblical-proportioned apocalypse is not necessarily confined to the realms of fiction. How can we reinvent ourselves, and more importantly, will we?. (Panelists: Charlie Stross, Gabriel Partida, David D. Perlmutter. Moderator: Mike Fatum.)

Orbit Live! As part of a series of Crowdcast events, at 8pm GMT on Thursday 27th RJ Barker is going to host myself and Luke Arnold in conversation about our new books: sign up for the crowdcast here.

Reddit AmA: No book launch is complete these days without an Ask me Anything on Reddit, which in my case is booked for Tuesday 3rd, starting at 5pm, UK time (9am on the US west coast, give or take an hour—the clocks change this weekend in the UK but I'm not sure when the US catches up).

The Nürnberg Digital Festival is a community-driven festival with about 20,000 attendees in Nuremberg, to discuss the future, change and everything that comes with it. Obviously this year it's an extra-digital (i.e. online-only) festival, which has the silver lining of enabling the organizers to invite guests to connect from a long way away. Which is why I'm doing an interview/keynote on Monday November 9th at 5pm (UK time). You can find out more about the Festival here (as well as buying tickets for any or all days' events). It's titled "Are we in dystopian times?" which seems to be an ongoing theme of most of the events I'm being invited to these days, and probably gives you some idea of what my answer is likely to be ...

Anyway, that's all for now: I'll add to this post if new events show up.

Krebs on SecurityBe Very Sparing in Allowing Site Notifications

An increasing number of websites are asking visitors to approve “notifications,” browser modifications that periodically display messages on the user’s mobile or desktop device. In many cases these notifications are benign, but several dodgy firms are paying site owners to install their notification scripts and then selling that communications pathway to scammers and online hucksters.

Notification prompts in Firefox (left) and Google Chrome.

When a website you visit asks permission to send notifications and you approve the request, the resulting messages that pop up appear outside of the browser. For example, on Microsoft Windows systems they typically show up in the bottom right corner of the screen — just above the system clock. These so-called “push notifications” rely on an Internet standard designed to work similarly across different operating systems and web browsers.

But many users may not fully grasp what they are consenting to when they approve notifications, or how to tell the difference between a notification sent by a website and one made to appear like an alert from the operating system or another program that’s already installed on the device.

This is evidenced by the apparent scale of the infrastructure behind a relatively new company based in Montenegro called PushWelcome, which advertises the ability for site owners to monetize traffic from their visitors. The company’s site currently is ranked by Alexa.com as among the top 2,000 sites in terms of Internet traffic globally.

Website publishers who sign up with PushWelcome are asked to include a small script on their page which prompts visitors to approve notifications. In many cases, the notification approval requests themselves are deceptive — disguised as prompts to click “OK” to view video material, or as “CAPTCHA” requests designed to distinguish automated bot traffic from real visitors.
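For a sense of how little machinery is involved, here is a minimal sketch of the standard web Notifications API that such scripts build on (illustrative only: this is not PushWelcome's actual code, and real push campaigns additionally register a service worker with the Push API so that messages can arrive even after you leave the site):

Notification.requestPermission().then(function (permission) {
  // The browser shows its native permission prompt, as pictured above.
  if (permission === "granted") {
    // Once granted, the page can display messages that render outside
    // the browser window, in the operating system's notification area
    // (on Windows, just above the system clock).
    new Notification("Threats detected!", {
      body: "Your PC may be infected. Click to scan now.",
    });
  }
});

One approval is all it takes; nothing in this flow constrains what later messages say or who supplies them.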

An ad from PushWelcome touting the money that websites can make for embedding their dodgy push notifications scripts.

Approving notifications from a site that uses PushWelcome allows any of the company’s advertising partners to display whatever messages they choose, whenever they wish to, and in real-time. And almost invariably, those messages include misleading notifications about security risks on the user’s system, prompts to install other software, ads for dating sites, erectile dysfunction medications, and dubious investment opportunities.

That’s according to a deep analysis of the PushWelcome network compiled by Indelible LLC, a cybersecurity firm based in Portland, Ore. Frank Angiolelli, vice president of security at Indelible, said rogue notifications can be abused for credential phishing, as well as foisting malware and other unwanted applications on users.

“This method is currently being used to deliver something akin to adware or click fraud type activity,” Angiolelli said. “The concerning aspect of this is that it is so very undetected by endpoint security programs, and there is a real risk this activity can be used for much more nefarious purposes.”

Sites affiliated with PushWelcome often use misleading messaging to trick people into approving notifications.

Angiolelli said the external Internet addresses, browser user agents and other telemetry tied to people who’ve accepted notifications are known to PushWelcome, which could give the company the ability to target individual organizations and users with any number of fake system prompts.

Indelible also found that the browser modifications enabled by PushWelcome are poorly detected by antivirus and security products, although Angiolelli noted that Malwarebytes reliably flags publisher sites associated with the notifications as dangerous.

Indeed, Malwarebytes’ Pieter Arntz warned about malicious browser push notifications in a January 2019 blog post. That post includes detailed instructions on how to tell which sites you’ve allowed to send notifications, and how to remove them.

KrebsOnSecurity installed PushWelcome’s notifications on a brand new Windows test machine, and found that very soon afterwards the system was peppered with alerts about malware threats supposedly found on the system. One notification was an ad for Norton antivirus; the other was for McAfee. Clicking either ultimately led to “buy now” pages at either Norton.com or McAfee.com.

Clicking on the PushWelcome notification in the bottom right corner of the screen opened a Web site claiming my brand new test system was infected with 5 viruses.

It seems likely that PushWelcome and/or some of its advertisers are trying to generate commissions for referring customers to purchase antivirus products at these companies. McAfee has not yet responded to requests for comment. Norton issued the following statement:

“We do not believe this actor to be an affiliate of NortonLifeLock. We are continuing to investigate this matter. NortonLifeLock takes affiliate fraud and abuse seriously and monitors ongoing compliance. When an affiliate partner abuses its responsibilities and violates our agreements, we take necessary action to remove these affiliate partners from the program and swiftly terminate our relationships. Additionally, any potential commissions earned as a result of abuse are not paid. Furthermore, NortonLifeLock sends notification to all of our affiliate partner networks about the affiliate’s abuse to ensure the affiliate is not eligible to participate in any NortonLifeLock programs in the future.”

Requests for comment sent to PushWelcome via email were returned as undeliverable. Requests submitted through the contact form on the company’s website also failed to send.

While scammy notifications may not be the most urgent threat facing Internet users today, most people are probably unaware of how this communications pathway can be abused.

What’s more, dodgy notification networks could be used for less conspicuous and sneakier purposes, including spreading fake news and malware masquerading as update notices from the user’s operating system. I hope it’s clear that regardless of which browser, device or operating system you use, it’s a good idea to be judicious about which sites you allow to serve notifications.

If you’d like to prevent sites from ever presenting notification requests, check out this guide, which has instructions for disabling notification prompts in Chrome, Firefox and Safari. Doing this for any devices you manage on behalf of friends, colleagues or family members might end up saving everyone a lot of headache down the road.

Cory DoctorowThe Attack Surface Lectures: Cross-Media Sci-Fi

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “Cross-Media Sci Fi” hosted by the Brookline Booksmith in Brookline, MA, with guest-hosts John Rogers and Amber Benson. It was recorded on October 14, 2020.

Here is the original Youtube link for this program. Please consider subscribing to The Brookline Booksmith’s Youtube channel for access to all their outstanding author events!

MP3

LongNowWhat was the biggest empire in history?

What was the biggest empire in history? The answer, writes Benjamin Plackett in Live Science, depends on whether you think in terms of fraction of living humans or number of living humans, revealing the challenges inherent in attempting to compare time periods:

That’s without getting into the pros and cons of the other ways to measure size: largest land mass; largest contiguous land mass; largest army; largest gross domestic product; and so on.

But one alternative would be counted in years: we should measure empires by their long-term influence and stability, according to Martin Bommas, Director of the Macquarie University History Museum in Sydney:

“I think that to be classed as an empire, you need to have a period of peace to bring prosperity,” Bommas added. “If you look at it through years lasted, the Romans won this competition hands down.”

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, 221.50 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 16.0h (out of 14h assigned and 2h from September).
  • Adrian Bunk did 7h (out of 20.75h assigned and 5.75h from September), thus carrying over 19.5h to November.
  • Ben Hutchings did 11.5h (out of 6.25h assigned and 9.75h from September), thus carrying over 4.5h to November.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 20.75h (out of 20.75h assigned).
  • Holger Levsen did 7.0h coordinating/managing the LTS team.
  • Markus Koschany did 20.75h (out of 20.75h assigned).
  • Mike Gabriel gave back the 8h he was assigned. See below 🙂
  • Ola Lundqvist did 10.5h (out of 8h assigned and 2.5h from September).
  • Roberto C. Sánchez did 13.5h (out of 20.75h assigned) and gave back 7.25h to the pool.
  • Sylvain Beucler did 20.75h (out of 20.75h assigned).
  • Thorsten Alteholz did 20.75h (out of 20.75h assigned).
  • Utkarsh Gupta did 20.75h (out of 20.75h assigned).

Evolution of the situation

October was a regular LTS month, with an LTS team meeting held via video chat, so there's no log to be shared. After more than five years of contributing to LTS (and ELTS), Mike Gabriel announced that he has founded a new company called Frei(e) Software GmbH and thus would leave us to concentrate on this new endeavor. Best of luck with that, Mike! So, once again, this is a good moment to remind you that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 39 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Planet DebianJaldhar Vyas: Sal Mubarak 2077!

[Celebrating Diwali wearing a mask]

Best wishes to the entire Debian world for a happy, prosperous and safe Gujarati new year, Vikram Samvat 2077 named Paridhawi.

Worse Than FailureCodeSOD: Mod-El Code

Long-lived projects can have… interesting little corners. Choices made 13 years ago can stick around, either because they work well enough, or because, well, every change breaks somebody's workflow.

Today's anonymous submitter was poking around the code base of a large, long-lived JavaScript framework. In a file, not modified since 2007, but still included in the product, they found this function.

_getAdjustedDay: function(/*Date*/dateObj){
	//FIXME: use mod instead?
	//summary: used to adjust date.getDay() values to the new values based on the current first day of the week value
	var days = [0,1,2,3,4,5,6];
	if(this.weekStartsOn>0){
		for(var i=0;i<this.weekStartsOn;i++){
			days.unshift(days.pop());
		}
	}
	return days[dateObj.getDay()]; // Number: 0..6 where 0=Sunday
}

Look, this is old JavaScript, it's date handling code, and it's handling an unusual date case, so we already know it's going to be bad. That's not a surprise at all.

The core problem is, given a date, we want to find out the day of the week it falls on, but weeks don't have to start on Sunday, so we may need to do some arithmetic to adjust the dates. That arithmetic, as the FIXME comment helpfully points out, could easily be done with the % operator.

Someone knew the right answer here, but didn't get to implementing it. Instead, we have an array of valid values. To calculate the offset, we "roll" the array using a unshift(pop) combo- take the last element off the array and plop it onto the front. We also have a bonus unnecessary "if" statement- the "for" loop would have handled that.
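For the record, the mod-based version the FIXME comment asks for is a one-liner. A sketch (same dojo-style method, assuming weekStartsOn remains a property of the class):

_getAdjustedDay: function(/*Date*/dateObj){
	// Shift getDay() (0=Sunday..6=Saturday) so that the configured
	// first day of the week maps to 0; +7 keeps the result non-negative.
	return (dateObj.getDay() - this.weekStartsOn + 7) % 7; // Number: 0..6
}

It produces the same values as the array-rolling version, without allocating and mutating an array on every call.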

This isn't the first time I've seen "populate an array with values and roll the array instead of using mod", and it probably won't be the last. But there's also a bonus WTF here. This function is invoked in _initFirstDay.

_initFirstDay: function(/*Date*/dateObj, /*Boolean*/adj){
	//adj: false for first day of month, true for first day of week adjusted by startOfWeek
	var d = new Date(dateObj);
	if(!adj){ d.setDate(1); }
	d.setDate(d.getDate() - this._getAdjustedDay(d, this.weekStartsOn));
	d.setHours(0,0,0,0);
	return d; // Date
}

So, first off, this function does two entirely different things, depending on what you pass in for adj. As the comment tells us, if adj is false, we find the first day of the month. If adj is true, we find the first day of the week relative to startOfWeek. Unfortunately, I'm not sure that comment is entirely correct, because whether or not adj is false, we do some arithmetic based on _getAdjustedDay. So, if you try this for a date in November 2020, with weeks starting on Sunday, you get the results you expect- because November 1st was a Sunday. But if you try it for October, the "first day" is September 27th, not October 1st.
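The non-obvious step is that setDate happily accepts values below 1 and rolls into the previous month. A worked trace (assuming a widget configured with weekStartsOn = 0):

// _initFirstDay(new Date(2020, 9, 15), false) -- October 15th, adj false
var d = new Date(2020, 9, 15);
d.setDate(1);               // Thu Oct 01 2020; d.getDay() === 4
d.setDate(d.getDate() - 4); // setDate(-3) rolls backwards into September
d.setHours(0, 0, 0, 0);
// d is now Sun Sep 27 2020: the "first day" of October.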

Maybe that's by intent and design. Maybe it isn't. It's hard to tell from the comment. But the real bonus WTF is how they call this._getAdjustedDay here- passing in two parameters to a function which only expects one. The extra argument is silently ignored; the function still gets the right value anyway, because it reads weekStartsOn as a property of the class.

Even code that we can safely assume is bad just from knowing its origins can still find new ways to surprise us.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianLouis-Philippe Véronneau: A better git diff

A few days ago I wrote a quick patch and missed a dumb mistake that made the program crash. When reviewing the merge request on Salsa, the problem became immediately apparent; Gitlab's diff is much better than what git diff shows by default in a terminal.

Well, it turns out since version 2.9, git bundles a better pager, diff-highlight. À la Gitlab, it will highlight what changed in the line.

The output of git diff using diff-highlight

Sadly, even though diff-highlight comes with the git package in Debian, it is not built by default (925288). You will need to:

$ sudo make --directory /usr/share/doc/git/contrib/diff-highlight

You can then add this line to your .gitconfig file:

[core]
  pager = /usr/share/doc/git/contrib/diff-highlight/diff-highlight | less --tabs=4 -RFX

If you use tig, you'll also need to add this line in your tigrc:

set diff-highlight = /usr/share/doc/git/contrib/diff-highlight/diff-highlight

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.10.1.2.0


Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 779 other packages on CRAN.

This release ties up a few loose ends from the recent 0.10.1.0.0.

Changes in RcppArmadillo version 0.10.1.2.0 (2020-11-15)

  • Upgraded to Armadillo release 10.1.2 (Orchid Ambush)

  • Remove three unused int constants (#313)

  • Include main armadillo header using quotes instead of brackets

  • Rewrite version number use in old-school mode because gcc 4.8.5

  • Skipping parts of sparse conversion on Windows as win-builder fails

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RcppAnnoy 0.0.17


A new release 0.0.17 of RcppAnnoy is now on CRAN. RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.

This release brings a new upstream version 1.17, released a few weeks ago, which adds multithreaded index building. This changes the API by adding a new ‘threading policy’ parameter requiring code using the main Annoy header to update. For this reason we waited a little for the dust to settle on the BioConductor 3.12 release before bringing the changes to BiocNeighbors via this commit and to uwot via this simple PR. Aaron and James updated their packages accordingly so by the time I uploaded RcppAnnoy it made for very smooth sailing as we all had done our homework with proper conditional builds, and the package had no other issue preventing automated processing at CRAN. Yay. I also added a (somewhat overdue one may argue) header file RcppAnnoy.h regrouping defines and includes which should help going forward.

Detailed changes follow below.

Changes in version 0.0.17 (2020-11-15)

  • Upgrade to Annoy 1.17, but default to serial use.

  • Add new header file to regroup includes and defines.

  • Upgrade CI script to use R with bspm on focal.

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Cory DoctorowAttack Surface Lectures master post

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.


1. Politics and Protest with Ron Deibert (Citizen Lab) and Eva Galperin (EFF)

Strand Bookstore, October 13, 2020

Original Youtube link.

MP3


2. Cross-Media SF with John Rogers (Leverage) and Amber Benson (Buffy)

Brookline Booksmith, October 14, 2020

Original Youtube link

MP3


3. Intersectionality: Race, Surveillance, and Tech and Its History with Malkia Devich-Cyril (Media Justice) and Meredith Whittaker​​ (AI Now)

The Booksmith, October 15, 2020

Original Youtube link

MP3


4. Cyberpunk and Post Cyberpunk with Bruce Sterling (Pirate Utopia) and Christopher Brown (Failed State)

Anderson’s, October 19, 2020

Original Youtube link


MP3


5. Little Revolutions with Tochi Onyebuchi (Riot Baby) and Bethany C Morrow (A Song Below Water)

Skylight Books, October 21, 2020

Original Crowdcast link

MP3

Cory DoctorowThe Attack Surface Lectures: Politics and Protest

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “Politics and Protest,” hosted by The Strand in NYC, with guest-hosts Eva Galperin and Ron Deibert. It was recorded on October 13, 2020.


Here is the original Youtube link for this program. Please consider subscribing to The Strand’s Youtube channel for access to all their outstanding author events!

MP3

Planet DebianBits from Debian: New Debian Developers and Maintainers (September and October 2020)

The following contributors got their Debian Developer accounts in the last two months:

  • Benda XU (orv)
  • Joseph Nahmias (jello)
  • Marcos Fouces (marcos)
  • Hayashi Kentaro (kenhys)
  • James Valleroy (jvalleroy)
  • Helge Deller (deller)

The following contributors were added as Debian Maintainers in the last two months:

  • Ricardo Ribalda Delgado
  • Pierre Gruet
  • Henry-Nicolas Tourneur
  • Aloïs Micard
  • Jérôme Lebleu
  • Nis Martensen
  • Stephan Lachnit
  • Felix Salfelder
  • Aleksey Kravchenko
  • Étienne Mollier

Congratulations!

Cory DoctorowThe Attack Surface Lectures: Politics and Protest (fixed)

The Attack Surface Lectures were a series of eight panel discussions on the themes in my novel Attack Surface, each hosted by a different bookstore and each accompanied by a different pair of guest speakers.

This program is “Politics and Protest,” hosted by The Strand in NYC, with guest-hosts Eva Galperin and Ron Deibert. It was recorded on October 13, 2020.


Here is the original Youtube link for this program. Please consider subscribing to The Strand’s Youtube channel for access to all their outstanding author events!

MP3

Cryptogram On Blockchain Voting

Blockchain voting is a spectacularly dumb idea for a whole bunch of reasons. I have generally quoted Matt Blaze:

Why is blockchain voting a dumb idea? Glad you asked.

For starters:

  • It doesn’t solve any problems civil elections actually have.
  • It’s basically incompatible with “software independence”, considered an essential property.
  • It can make ballot secrecy difficult or impossible.

I’ve also quoted this XKCD cartoon.

But now I have this excellent paper from MIT researchers:

“Going from Bad to Worse: From Internet Voting to Blockchain Voting”
Sunoo Park, Michael Specter, Neha Narula, and Ronald L. Rivest

Abstract: Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide. This article examines the suggestions that “voting over the Internet” or “voting on the blockchain” would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures. Online voting may seem appealing: voting from a computer or smart phone may seem convenient and accessible. However, studies have been inconclusive, showing that online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly: given the current state of computer security, any turnout increase derived from Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. This state of affairs will continue as long as standard tactics such as malware, zero days, and denial-of-service attacks continue to be effective. This article analyzes and systematizes prior research on the security risks of online and electronic voting, and shows that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce additional problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals.

You may have heard of Voatz, which uses blockchain for voting. It’s an insecure mess. And this is my general essay on blockchain. Short summary: it’s completely useless.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 23)

Here’s part twenty-three of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureAnnouncements: What the Fun Holiday Activity?

Time just flies right past, and before you know it, the holidays will be here. Which is why you had better hurry up and try your hand at giving us the best WTF Christmas Story ever, to help us found a new holiday tradition. Or at least, give us one bright spot in the yawning abyss of 2020.

Can you teach us the true meaning of WTFMas?

What We Want

We want your best holiday story. Any holiday is valid, though given the time of year, we're expecting one of the many solstice-adjacent holidays. This story can be based on real experiences, or it can be entirely fictional, because what we really want is a new holiday tradition.

The best submissions will:

  • Contain a core WTF, whether it's a bad boss, bad technology decisions, or incompetent team members
  • Prominently feature your chosen holiday
  • End with a valuable moral lesson that leaves us feeling full of holiday cheer

Are you going to write a traditional story? Or maybe a Dr. Seussian rhyme? A long letter to Santa? That's up to you.

How We Want It

Submissions are open from now until December 11th. Use our submission form. Check the "Story" box, and set the subject to WTF Holiday Special. Make sure to fill out the email address field, so we can contact you if you win!

What You Get

The best story will be a feature on our site, and also receive some of our new swag: a brand new TDWTF hoodie, a TDWTF mug, and a variety of stickers and other small swag.

The two runners-up will also get a mug, stickers, and other small swag.

Get writing, and let's create a new holiday tradition where opening the present may create more questions than it answers.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianAdnan Hodzic: Degiro trading tracker – Simplified tracking of your investments

TL;DR: Visit degiro-trading-tracker on GitHub. I was always interested in stocks and investing. While I wanted to get into trading for a long time, I could never...

The post Degiro trading tracker – Simplified tracking of your investments appeared first on FoolControl: Phear the penguin.

Worse Than FailureCodeSOD: Unset-tled

Alleen started by digging into a PHP method which was just annoying. _find_shipment_by_object_id would, when it couldn't find the ID, return false, instead of the more expected null. Not terrible, but annoying. Worse, it didn't return the shipment either, just a key which could be used to fetch a shipment from an array.

Again, all that's just annoying.

It was when looking at the delete_shipment method that Alleen had the facepalm moment.

public function delete_shipment($object_id) {
    $key = $this->_find_shipment_by_object_id($object_id);
    if ($key !== FALSE) {
        $obj = $this->_shipments[$key];
        unset($obj, $this->_shipments[$key]);
    }
    return $this;
}

The PHP unset method takes a list of variables, including potentially array elements, and deletes them. For whatever reason, the person who wrote this code decided to fetch the value stored in the array, then delete the variable holding the value and the array index holding the value, when the goal was simply to delete the element from the array.

They just enjoyed deleting so much, that they needed to delete it twice.
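For the record, removing the array element alone does the job. A minimal sketch of the same method without the double delete (same helpers assumed):

public function delete_shipment($object_id) {
    $key = $this->_find_shipment_by_object_id($object_id);
    if ($key !== FALSE) {
        // unset on the array index removes the element; there is no
        // need to copy the value into $obj first and delete that too
        unset($this->_shipments[$key]);
    }
    return $this;
}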

Alleen also wonders about the return $this. It seems like the intent was to build a fluent, chainable API, but the code is never used that way. We're left with a simple mystery, but at least they couldn't return twice.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianJamie McClelland: Being your own Certificate Authority

There are many blogs and tutorials with nice shortcuts providing the necessary openssl commands to create and sign x509 certificates.

However, there are precious few instructions for how to easily create your own certificate authority.

You probably never want to do this in a production environment, but in a development environment it will make your life significantly easier.

Create the certificate authority

Create the key and certificate

Pick a directory to store things in. Then, make your certificate authority key and certificate:

openssl genrsa -out cakey.pem 2048
openssl req -x509 -new -nodes -key cakey.pem -sha256 -days 1024 -out cacert.pem

Some tips:

  • You will be prompted to enter some information about your certificate authority. Provide the minimum information - i.e., only override the defaults where necessary. So, provide a value for Country, State or Province, and Organization Name, and leave the rest blank.
  • You probably want to leave the password blank if this is a development/testing environment.

Want to review what you created?

openssl x509 -text -noout -in cacert.pem 

Prepare your directory

You can create your own /etc/ssl/openssl.cnf file and really customize things. But I find it safer to use your distribution's default file so you can benefit from changes to it every time you upgrade.

If you do take the default file, you may have the dir option coded to demoCA (in Debian at least, maybe it's the upstream default too).

So, to avoid changing any configuration files, let's just use this value. Which means... you'll need to create that directory. The setting is relative - so you can create this directory in the same directory you have your keys.

mkdir demoCA

Lastly, you have to have a file that keeps track of your certificates. If it doesn't exist, you get an error:

touch demoCA/index.txt

That's it! Your certificate authority is ready to go.

Create a key and certificate signing request

First, pick your domain names (aka "common" names). For example, example.org and www.example.org.

Set those values in an environment variable. If you just have one:

export subjectAltName=DNS:example.org

If you have more than one:

export subjectAltName=DNS:example.org,DNS:www.example.org

Next, create a key and a certificate signing request:

openssl req -new -nodes -out new.csr -keyout new.key

Again, you will be prompted for some values (country, state, etc.) - be sure to choose the same values you used with your certificate authority! I honestly don't understand why this is necessary (when I set different values I get an error on the signing step below). Maybe someone can add a comment to this post explaining why these values have to match?
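My best guess, for what it is worth: the stock openssl.cnf points the ca command at a policy called policy_match, which requires exactly these fields to match the CA certificate. The default file contains something like:

[ policy_match ]
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

If that is indeed the cause, pointing the policy setting of the CA section at the policy_anything section (also shipped in the default file) should lift the restriction.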

Also, you must provide a common name for your certificate - you can choose the same name as the subjectAltName value you set above (but just one domain).

Want to review what you created?

openssl req -in new.csr -text -noout 

Sign it!

At last, the moment we have been waiting for.

openssl ca -keyfile cakey.pem -cert cacert.pem -out new.crt -outdir . -rand_serial -infiles new.csr

Now you have a new.crt which, together with new.key, you can install per your web server, mail server, etc. specification.

Sanity check it

This command will confirm that the certificate is trusted by your certificate authority.

openssl verify -no-CApath -CAfile cacert.pem new.crt 

But wait, there's still a question of trust

You probably want to tell your computer or browser that you want to trust your certificate signing authority.

Command line tools

Most tools on Linux will, by default, trust all the certificates in /etc/ssl/certs/ca-certificates.crt. (If that file doesn't exist, try installing the ca-certificates package.) If you want to add your certificate to that file:

cp cacert.pem /usr/local/share/ca-certificates/cacert.crt
sudo dpkg-reconfigure ca-certificates

Want to know what's funny? Ok, not really funny. If the certificate name ends with .pem the command above won't work. Seriously.

Once your certificate is installed with your web server you can now test to make sure it's all working with:

gnutls-cli --print-cert $domainName

Want a second opinion?

curl https://$domainName
wget https://$domainName -O-

Both will report errors if the certificate can't be verified by a system certificate.

What if you really want to narrow down the cause of the error (maybe reconfiguring ca-certificates didn't work)?

curl --cacert /path/to/your/cacert.pem --capath /tmp https://$domainName

Those arguments tell curl to use your certificate authority file and not to load any other certificate authority files (well, unless you have some installed in the temp directory).

Web browsers

Firefox and Chrome have their own store of trusted certificates - you'll have to import your cacert.pem file into each browser that you want to trust your key.

Planet DebianSteinar H. Gunderson: Using Buypass card readers in Linux

If you want to know the result of your corona test in Norway (or really, any other health information), you'll need to either get an electronic ID with a confidential spec, where your bank holds the secret key and can use it towards other banks with no oversight, and where whoever has that key can take out huge loans and enter other binding agreements in your name in a matter of minutes (also known as “BankID”)… or you can get a smart card from Buypass, where you hold the private key yourself.

Since most browsers won't talk directly to a smart card, you'll need a small proxy that exposes a REST interface on 127.0.0.1. (It used to be solved with a Java applet, but, yeah. That was 40 Chrome releases ago.) Buypass publishes those only for Windows and Mac, but the protocol was simple enough, so I made my own reimplementation called Linux Dallas Multipass. It's rough, it only really seems to work in Firefox (and only if you spoof your UA to be Windows), and you'll need to generate and install certificates yourself… but yes. You can log in to find out that you're negative.

Planet DebianVincent Bernat: Zero-Touch Provisioning for Juniper

Juniper’s official documentation on ZTP explains how to configure the ISC DHCP Server to automatically upgrade and configure on first boot a Juniper device. However, the proposed configuration could be a bit more elegant. This note explains how.

TL;DR

Do not redefine option 43. Instead, specify the vendor option space to use to encode parameters with vendor-option-space.


When booting for the first time, a Juniper device requests its IP address through a DHCP discover message, then requests additional parameters for autoconfiguration through a DHCP request message:

Dynamic Host Configuration Protocol (Request)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x44e3a7c9
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 0.0.0.0
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: 02:00:00:00:00:01 (02:00:00:00:00:01)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (54) DHCP Server Identifier (10.0.2.2)
    Option: (55) Parameter Request List
        Length: 14
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (51) IP Address Lease Time
        Parameter Request List Item: (1) Subnet Mask
        Parameter Request List Item: (15) Domain Name
        Parameter Request List Item: (6) Domain Name Server
        Parameter Request List Item: (66) TFTP Server Name
        Parameter Request List Item: (67) Bootfile name
        Parameter Request List Item: (120) SIP Servers
        Parameter Request List Item: (44) NetBIOS over TCP/IP Name Server
        Parameter Request List Item: (43) Vendor-Specific Information
        Parameter Request List Item: (150) TFTP Server Address
        Parameter Request List Item: (12) Host Name
        Parameter Request List Item: (7) Log Server
        Parameter Request List Item: (42) Network Time Protocol Servers
    Option: (50) Requested IP Address (10.0.2.15)
    Option: (53) DHCP Message Type (Request)
    Option: (60) Vendor class identifier
        Length: 15
        Vendor class identifier: Juniper-mx10003
    Option: (51) IP Address Lease Time
    Option: (12) Host Name
    Option: (255) End
    Padding: 00

It requests several options, including the TFTP server address option 150, and the Vendor-Specific Information Option 43—or VSIO. The DHCP server can use option 60 to identify the vendor-specific information to send. For Juniper devices, option 43 encodes the image name and the configuration file name. They are fetched from the IP address provided in option 150.

The official documentation on ZTP provides a valid configuration to answer such a request. However, it does not leverage the ability of the ISC DHCP Server to support several vendors and redefines option 43 to be Juniper-specific:

option NEW_OP-encapsulation code 43 = encapsulate NEW_OP;

Instead, it is possible to define an option space for Juniper, using a self-descriptive name, without overriding option 43:

# Juniper vendor option space
option space juniper;
option juniper.image-file-name     code 0 = text;
option juniper.config-file-name    code 1 = text;
option juniper.image-file-type     code 2 = text;
option juniper.transfer-mode       code 3 = text;
option juniper.alt-image-file-name code 4 = text;
option juniper.http-port           code 5 = text;

Then, when you need to set these suboptions, specify the vendor option space:

class "juniper-mx10003" {
  match if option vendor-class-identifier = "Juniper-mx10003";
  vendor-option-space juniper;
  option juniper.transfer-mode    "http";
  option juniper.image-file-name  "/images/junos-vmhost-install-mx-x86-64-19.3R2-S4.5.tgz";
  option juniper.config-file-name "/cfg/juniper-mx10003.txt";
}

This configuration returns the following answer:1

Dynamic Host Configuration Protocol (ACK)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x44e3a7c9
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 10.0.2.15
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: 02:00:00:00:00:01 (02:00:00:00:00:01)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (ACK)
    Option: (54) DHCP Server Identifier (10.0.2.2)
    Option: (51) IP Address Lease Time
    Option: (1) Subnet Mask (255.255.255.0)
    Option: (3) Router
    Option: (6) Domain Name Server
    Option: (43) Vendor-Specific Information
        Length: 89
        Value: 00362f696d616765732f6a756e6f732d766d686f73742d69…
    Option: (150) TFTP Server Address
    Option: (255) End
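The opaque value is simply the TLV encoding of the suboptions defined above. Decoding the first bytes by hand:

00 36 2f 69 6d 61 67 65 73 2f 6a 75 6e 6f 73 …
│  │  └─ "/images/junos-vmhost-install-…"
│  └─ length: 0x36 = 54 bytes
└─ suboption code 0 (juniper.image-file-name)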

Using the vendor-option-space directive allows you to make different ZTP implementations coexist. For example, you can add the option space for PXE:

option space PXE;
option PXE.mtftp-ip    code 1 = ip-address;
option PXE.mtftp-cport code 2 = unsigned integer 16;
option PXE.mtftp-sport code 3 = unsigned integer 16;
option PXE.mtftp-tmout code 4 = unsigned integer 8;
option PXE.mtftp-delay code 5 = unsigned integer 8;
option PXE.discovery-control    code 6  = unsigned integer 8;
option PXE.discovery-mcast-addr code 7  = ip-address;
option PXE.boot-server code 8  = { unsigned integer 16, unsigned integer 8, ip-address };
option PXE.boot-menu   code 9  = { unsigned integer 16, unsigned integer 8, text };
option PXE.menu-prompt code 10 = { unsigned integer 8, text };
option PXE.boot-item   code 71 = unsigned integer 32;

class "pxeclients" {
  match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
  vendor-option-space PXE;
  option PXE.mtftp-ip 10.0.2.2;
  # […]
}

On the same topic, do not override option 125 “VIVSO.” See “Zero-Touch Provisioning for Cisco IOS.”


  1. Wireshark knows how to decode option 43 for some vendors, thanks to option 60, but not for Juniper. ↩︎

Cryptogram Friday Squid Blogging: Peru Defends Its Waters against Chinese Squid Fishing Boats

Squid geopolitics.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Inrupt’s Solid Announcement

Earlier this year, I announced that I had joined Inrupt, the company commercializing Tim Berners-Lee’s Solid specification:

The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things — your computer, your phone, your IoT whatever — is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.

This week, Inrupt announced the availability of the commercial-grade Enterprise Solid Server, along with a small but impressive list of initial customers of the product and the specification (like the UK National Health Service). This is a significant step forward to realizing Tim’s vision:

The technologies we’re releasing today are a component of a much-needed course correction for the web. It’s exciting to see organizations using Solid to improve the lives of everyday people — through better healthcare, more efficient government services and much more.

These first major deployments of the technology will kick off the network effect necessary to ensure the benefits of Solid will be appreciated on a massive scale. Once users have a Solid Pod, the data there can be extended, linked, and repurposed in valuable new ways. And Solid’s growing community of developers can be rest assured that their apps will benefit from the widespread adoption of reliable Solid Pods, already populated with valuable data that users are empowered to share.

A few news articles. Slashdot thread.

Planet DebianRuss Allbery: PGP::Sign 1.04

The refactor of PGP::Sign in the 1.00 release to use IPC::Run instead of hand-rolled process management code broke signing large files, which I discovered when trying to use the new module to sign checkgroups for the Big Eight Usenet hierarchies.

There were two problems: IPC::Run sets the sockets used to talk to the child process to non-blocking, and when you pass in a scalar as the data to send to a child socket, IPC::Run expects to use it as a queue and thus doesn't send EOF to the child process when the input is exhausted.

This release works around both problems by handling non-blocking writes to the child using select and using a socket to write the passphrase to the child process instead of a scalar variable. It also adds a test to ensure that signing long input keeps working.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

,

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Sam VargheseWe need to talk about Tom Switzer’s spin about News Limited

Australia was hit by horrendous bushfires in 2019. Picture: Pixabay

Tom Switzer is a right-wing writer in Melbourne, who is executive director at the Centre for Independent Studies and is a presenter on ABC Radio National.

He often writes in support of Rupert Murdoch and his media empire, for the simple reason that if he were to lose his current gigs, then he could go back on the Murdoch teat.

Thus his defence of Murdoch against criticism by two former Australian prime ministers, Malcolm Turnbull and Kevin Rudd, is not surprising. Sucking up to power is a common game played by writers who have an avenue to vent. Switzer has the Nine newspapers open to his rantings.

One of the claims made by Rudd and Turnbull is that Murdoch publications publish stories that are slanted and full of incorrect information. This is correct. While there is no such thing as objective journalism, there is indeed something called fact-based journalism.

Some publications may approach an issue from the left when they comment on it. Others may approach the same subject from a centrist or right perspective. There is nothing wrong with any of these occurrences.

Switzer cites the existence of a vast number of small publications to claim that there is media diversity in Australia. But how much reach do these publications have? And, more importantly, how much influence do they have?

As an example, let me cite the case of Arthur Sinodinos. The former adviser to ex-prime minister John Howard was under a cloud over some financial issues a few years back. Naturally, all newspapers that cover federal politics gave the story plenty of air, with many of them calling for him to step down.

But Sinodinos stayed put – until The Australian’s senior staffer Dennis Shanahan wrote a piece suggesting that he should go. He resigned that very day.

When the Murdoch press takes up an issue, one never knows the extent to which it will go, no matter whether the issue affects a group, a company or a single individual. Yassmin Abdel-Magied, a public figure, felt the effect after she issued a tweet that offended some nationalistic sentiments. More than 50 articles were written about her, and the coverage stopped only when she left the country.

The Murdoch press generally backs the Liberal and other rightist parties in Australia. Occasionally, when it suits Murdoch’s business interests he tilts the other way.

Another of Turnbull’s accusations has been that The Australian spread incorrect information about the cause of the bushfires that Australia experienced in 2019, putting many of them down to arson.

The Murdoch defence was to say that only a small percentage of the total had mentioned arson. But what was forgotten is that if even one article had mentioned arson — when there was no evidence to back this up — then the paper was at fault. You cannot print 200 articles saying that a man was killed by his wife and justify the one article that said he took his own life.

It is true that Turnbull and Rudd have their own skeletons which they do not speak about in public. But that does not mean they cannot speak out about publications that operate in a way that only looks to further their proprietor’s interests.

Planet DebianJunichi Uekawa: Rewrote my build system in C++.

Rewrote my build system in C++. I used to write build rules in Nodejs, but I figured that if my projects are mostly C++, I should probably write the build system in C++ as well. I wanted to make it a bit more like BUILD files, but couldn't quite get there, and it ended up looking more C++ than I wanted. It seems key-value struct initialization isn't available until C++20.
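For reference, the C++20 feature in question is designated initializers. A minimal sketch of the BUILD-file feel, with made-up rule fields:

#include <string>
#include <vector>

struct BuildRule {
  std::string name;                // target name
  std::vector<std::string> srcs;   // source files
  std::vector<std::string> deps;   // dependencies
};

// Designated initializers are new in C++20; this is ill-formed in C++17.
BuildRule hello{
  .name = "hello",
  .srcs = {"hello.cc"},
  .deps = {"//base"},
};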

,

Planet DebianMartin Michlmayr: beancount2ledger 1.3 released

I released version 1.3 of beancount2ledger, the beancount to ledger converter that was moved from bean-report ledger into a standalone tool.

You can get beancount2ledger from GitHub or via pip install.
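Assuming pip puts the command-line tool on your PATH, usage looks roughly like this (the -f flag for choosing ledger or hledger output reflects my reading of the interface; check --help to be sure):

pip install beancount2ledger
beancount2ledger example.beancount > example.ledger
beancount2ledger -f hledger example.beancount > example.hledger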

Here are the changes in 1.3:

  • Add rounding postings only when required (issue #9)
  • Avoid printing too much precision for a currency (issue #21)
  • Avoid creating two or more postings with null amount (issue #23)
  • Add price to cost when needed by ledger (issue #22)
  • Preserve posting order (issue #18)
  • Add config option indent
  • Show metadata with hledger output
  • Support setting auxiliary dates and posting dates from metadata (issue #14)
  • Support setting the code of transactions from metadata
  • Support mapping of account and currency names (issue #24)
  • Improve documentation:
    • Add user guide
    • Document limitations (issue #12)

Worse Than FailureError'd: Hate the Error and Hate the Game

"Somehow, a busy day for Blizzard's servers is going to last for around 6 months," writes James G.

 

"So, is interpreting error messages a sport now?" Jay C. wrote.

 

Drew W. writes, "I'm not sure how, but Sparkpost thinks I've had over 130 emails opened for every one I've sent!"

 

"I...may have a problem staying off of my phone," Kevin V. writes.

 

Gordon wrote, "Kind of sums up the 2020 season, doesn't it?"

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Sam VargheseVale, Robert Fisk

The veteran Middle East correspondent Robert Fisk died recently at the age of 74, and his death means one of the Western world’s journalists who best understood the region has left the scene.

Fisk lived in Beirut for most of the 30-plus years he covered the region, having reported on the troubles in Northern Ireland before venturing out of the country.

He reported on the Soviet invasion of Afghanistan, the Israeli invasion of Lebanon and the continuing woes in that country. Fisk interviewed the al-Qaeda chief Osama bin Laden thrice and also covered the US invasion of Iraq.

Robert Fisk.

Some questioned his approach to journalism; he did not believe in getting opinions from both sides, so-called balanced journalism. Rather, it was his belief that the job of a reporter was to provide an outlet for the underdog.

His famous example was that of the liberation of a concentration camp. And he asked whether one should be expected to get a quote from an SS guard for balance, a query which nobody has attempted to answer.

When the terrorist attacks took place in 2001, Fisk was on a flight which was turned back due to the incident. He was invited onto a TV talk show, along with the American lawyer Alan Dershowitz. When the attacks were discussed, Fisk asked the natural question: what was the motive for the attacks? For this, he was denounced as an anti-Semite by Dershowitz, and he often told this tale to illustrate the level of stupidity in the debate over the Middle East.

Fisk got into journalism at the Newcastle Chronicle and then moved to the Sunday Express. From there, he went to work for The Times as a correspondent in Northern Ireland, Portugal and the Middle East, a role for which he based himself in Beirut intermittently from 1976.

After 1989, he worked for The Independent. Fisk received many British and international journalism awards, including the Press Awards Foreign Reporter of the Year seven times.

At one stage of his career, he expressed doubts about whether all the reporting being done to cover trouble spots in the world was really of any use, because it seemed to change nothing.

But then the journalist within him prevailed and he continued filing his dispatches from Beirut until he was taken from this earth.

He was a man with a deep understanding of issues and one who took great pains with his reporting. He will be sorely missed.

,

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.2: More documentation and features

A first update of the still fairly new package tidyCpp is now on CRAN. The package offers a C++ layer on top of the C API for R which aims to make its use a little easier and more consistent.

The vignette has been extended with a new example, a new section, and some general editing. A few new defines have been added, mostly from the Rinternals.h header. We also replaced the Shield class with a simpler yet updated class Protect. The name better represents the core functionality of offering a simpler alternative to the PROTECT and UNPROTECT macro pairing. We also added a short discussion to the vignette of a gotcha one has to be mindful of, and that we fell for ourselves in version 0.0.1. Finally, we added a typedef so that code using Shield continues to work.
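Not the actual implementation, but a minimal sketch of the idea behind Protect and the compatibility typedef:

#include <Rinternals.h>

// Sketch only: an RAII wrapper pairing PROTECT with a guaranteed
// UNPROTECT when the object goes out of scope.
class Protect {
public:
    explicit Protect(SEXP x) : x_(PROTECT(x)) {}
    ~Protect() { UNPROTECT(1); }
    operator SEXP() const { return x_; }
    Protect(const Protect&) = delete;             // copying would unbalance
    Protect& operator=(const Protect&) = delete;  // the protection stack
private:
    SEXP x_;
};
typedef Protect Shield;  // code written against Shield keeps compiling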

The NEWS entry follows.

Changes in tidyCpp version 0.0.2 (2020-11-12)

  • Expanded definitions in internals.h to support new example.

  • The vignette has been extended with an example based on package uchardet.

  • Class Shield has been replaced by a new class Protect; a compatibility typedef has been added.

  • The examples and vignette have been clarified with respect to proper ownership of protected objects; a new vignette section was added.

Thanks to my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: "Homeworld" will be the default theme for Debian 11

The theme "Homeworld" by Juliette Taka has been selected as default theme for Debian 11 'bullseye'. Juliette says that this theme has been inspired by the Bauhaus movement, an art style born in Germany in the 20th century.

Homeworld wallpaper. Click to see the whole theme proposal

Homeworld debian-installer theme. Click to see the whole theme proposal

After the Debian Desktop Team made the call for proposing themes, a total of eighteen choices were submitted. The desktop artwork poll was open to the public, and we received 5,613 responses ranking the different choices; Homeworld was ranked the winner among them.

This is the third time that a submission by Juliette has won. Juliette is also the author of the lines theme that was used in Debian 8 and the softWaves theme that was used in Debian 9.

We'd like to thank all the designers that have participated and have submitted their excellent work in the form of wallpapers and artwork for Debian 11.

Congratulations, Juliette, and thank you very much for your contribution to Debian!

LongNowA Timely Reflection on our Changing Climate

Antarctic Sea Ice Melt — 02019 (Source: Maxar)

The Ancient Greeks had two different words for time. The first, chronos, is time as we think of it now: marching forward, ceaselessly creating our past, present, and future. The second, kairos, is time in the opportune sense: the ideal moment to act, as captured by the phrase, “It’s time.”

My work, like that of many other photographers, has been a dedicated search for kairos — finding that ideal confluence of place and time that helps to tell a particular story. For me, that story has focused on the manmade world. In 02013, I launched Daily Overview, which features compositions created from satellite imagery focused on the places on the planet where humans have left their mark. Partnerships with some of the world’s best satellite imaging companies gave me access to libraries from which I could compose a visual compendium of the world we are creating. It’s a world that we are harvesting, mining, exploring, and powering. And it’s a world that we are changing faster than ever before.

Left: Mount Whaleback Iron Ore Mine in Australia. Center: Development in Boca Raton, Florida. Right: Singapore Tankers (Source: Maxar)

The atmospheric chemist and Nobel Prize laureate Paul J. Crutzen coined the term Anthropocene to describe this new geological era, one in which a single species — human beings — is the most powerful force affecting the planet’s natural systems. My work to date has captured macro-view moments in this era so that we might get a better understanding of what we, collectively — with all of the good and all of the bad — have done.

Thousands of image installments on Daily Overview over the past six years have covered a lot of ground. But in some ways, our earlier work does not include a crucial element — chronos — needed to convey the severity of what we face in this new Anthropocentric era. A single picture reflects the story of a moment in time. With two or more pictures of that same place, you can tell a richer story about change: its breadth, its pace, its cause. That is the idea behind our newest project, Overview Timelapse. What might we learn when we combine chronos, kairos, and this awe-inspiring perspective from above?

Las Vegas Expansion — 01989 / 02019 (Source: ESA)

The story of the current moment is that far-reaching human activity on the planet, primarily the continued burning of fossil fuels, is releasing a vast and unprecedented amount of carbon that is, in turn, causing a drastic (and widely-predicted) reaction by the planet’s natural systems. By looking for the locations that convey the magnitude of what is taking place, my co-author Timothy Dougherty and I spent hundreds of hours seeking out and observing change that has taken place on the macroscale — and the reaction from the climate that we have already begun to see as a result.

Amazon Rainforest Deforestation — 01989 / 02019 (Source: ESA)

As I write this, smoke from a nearby wildfire is obscuring the sun outside of my window. Perhaps I’m not as surprised as I should be to see that my home state of California has burned at an extraordinary rate this summer. Or that there were five tropical cyclones in the Atlantic Ocean last month for only the second time on record. Or the horrible destruction and loss of wildlife from the Australian Bushfires earlier this year. Or the once-in-a-thousand-year floods and Derechos in the Midwest. Or the recent reports of faster-than-predicted melting of the Greenland Ice Sheet.

Miami Beach Red Tide — May 02017 / June 02018 (Source: Nearmap)

What scares me most now is what this project has taught me about how all of these interconnected events can cascade. These conditions build upon one another such that something like unprecedented heat leads to drought, which leads to conditions ripe for fires, which leads to fires, which destroys trees, which returns all of the carbon stored in those trees since the Industrial Revolution into the atmosphere, which leads to more unprecedented heat, and so on, and so on, and so on.

Westmont Rooftop Solar Project — 02014 / 02017 (Source: Nearmap)

Despite all this, I still maintain a healthy dose of optimism for what lies ahead. I have witnessed a changing climate and all the destruction it brings to bear, but I have also seen solutions which can make for a safer, better civilization and world. Throughout Overview Timelapse we have featured some of these innovations that are slated to bring positive change in the coming years.

Great Green Wall of Africa — 02018 / 02019 (Source: Maxar)

So what will come next for a human species trying to thrive on a rapidly warming planet? The only certain constant is change. Looking to the future, it is in our hands, collectively, to determine the nature of the change to come. Let us work together to build awareness of the well-researched and considered solutions that already exist. Ones that get us excited about what lies ahead, not paralyzed by the magnitude of the problem. Ones that can be scaled to meet the severity of the challenge of an increasingly carbon-rich atmosphere.

Perhaps we will soon come to an overdue, yet opportune moment — our kairos — to reverse the course of human-induced planetary warming, and change the Earth for the better.

It’s time.

Learn More

Worse Than FailureCodeSOD: The Default Value

Cicely (previously) returned to the codebase which was providing annoyances last time.

This time, the code is meant for constructing objects based on a URL pattern. Specifically, the URL might have a format like api/resource/{id}. Looking at one of the constructors, though, it didn’t want an ID, it wanted an array of them. Cicely wasn’t passing multiple IDs off the URL, and wasn’t clear, from the documentation, how it worked, how you supplied those IDs, or frankly, what they were used for. Digging into the C# code made it clear, but still raised some additional questions.

int[] ids = Request.FormOrQuerystring("ids").EnsureNotNull().Split(",").
Select(item => item.ToInt32()).Concat(new int[] { id }).ToArray();

Whitespace added for readability, the original was on one line.

This is one of those cases where the code isn’t precisely bad, or wrong. At worst, it’s inefficient with all the LINQs and new arrays. It’s just… why would you do this this way?

At its core, we check the request for an ids property. EnsureNotNull() guarantees that we’ll see a value whether there is one or not; we Split it on commas, project the text into Int32 using Select… and then concatenate a one-element array onto the end, containing our id off the URL.
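For contrast, the branching version is hardly longer. A sketch using the same custom helpers (FormOrQuerystring is theirs; int.Parse stands in for their ToInt32 extension):

var ids = new List<int> { id };
string raw = Request.FormOrQuerystring("ids");
if (!string.IsNullOrEmpty(raw))
{
    // append the IDs from the request, if any, to the one from the URL
    ids.AddRange(raw.Split(',').Select(s => int.Parse(s)));
}
int[] result = ids.ToArray();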

Perhaps someone wanted to avoid branching logic (because it’s potentially hard to debug) or maybe wanted some “functional purity” in their programming. Maybe they were just trying to see how much they could cram into a single line of code? Regardless, Cicely considers it a “most imaginative way to set a default value”. It’s certainly clever, I’ll give it that.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianMike Hommey: Announcing git-cinnabar 0.5.6

Please partake in the git-cinnabar survey.

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.5?

  • Updated git to 2.29.2 for the helper.
  • git cinnabar git2hg and git cinnabar hg2git now have a --batch flag.
  • Fixed a few issues with experimental support for python 3.
  • Fixed compatibility issues with mercurial >= 5.5.
  • Avoid downloading unsupported clonebundles.
  • Provide more resilience to network problems during bundle download.
  • Prebuilt helper for Apple Silicon macos now available via git cinnabar download.

,

Planet DebianVincent Fourmond: Solution for QSoas quiz #1: averaging spectra

This post describes the solution to Quiz #1, based on the files found there. The point is to produce both the average and the standard deviation of a series of spectra. Below is how the final averaged spectrum should look:
I will present here two different solutions.

Solution 1: using the definition of standard deviation

There is a simple solution using the definition of the standard deviation: $$\sigma_y = \sqrt{<y^2> - {<y>}^2}$$ in which \(<y^2>\) is the average of \(y^2\) (and so on). So the simplest solution is to construct datasets with an additional column that would contain \(y^2\), average these columns, and replace the average with the above formula. For that, we need first a companion script that loads a single data file and adds a column with \(y^2\). Let's call this script load-one.cmds:
load ${1}
apply-formula y2=y**2 /extra-columns=1
flag /flags=processed
When this script is run with the name of a spectrum file as argument, it loads it (replacing ${1} by the first argument, the file name), adds a column y2 containing the square of the y column, and flags it with the processed flag. The flag is not absolutely necessary, but it makes it much easier to refer to all the spectra once they are processed. Then, to process all the spectra, one just has to run the following commands:
run-for-each load-one.cmds Spectrum-1.dat Spectrum-2.dat Spectrum-3.dat
average flagged:processed
apply-formula y2=(y2-y**2)**0.5
dataset-options /yerrors=y2
The run-for-each command runs the load-one.cmds script for all the spectra (one could also have used Spectra-*.dat to not have to give all the file names). Then, the average averages the values of the columns over all the datasets. To be clear, it finds all the values that have the same X (or very close X values) and average them, column by column. The result of this command is therefore a dataset with the average of the original \(y\) data as y column and the average of the original \(y^2\) data as y2 column. So now, the only thing left to do is to use the above equation, which is done by the apply-formula code. The last command, dataset-options, is not absolutely necessary but it signals to QSoas that the standard error of the y column should be found in the y2 column. This is now available as script method-one.cmds in the git repository.

Solution 2: use QSoas's knowledge of standard deviation

The other method is a little more involved but it demonstrates a good approach to problem solving with QSoas. The starting point is that, in apply-formula, the value $stats.y_stddev corresponds to the standard deviation of the whole y column... Loading the spectra yields just a series of x,y datasets. We can contract them into a single dataset with one x column and several y columns:
load Spectrum-*.dat /flags=spectra
contract flagged:spectra
After these commands, the current dataset contains data in the form of:
lambda1	a1_1	a1_2	a1_3
lambda2	a2_1	a2_2	a2_3
...
in which the ai_1 come from the first file, ai_2 the second and so on. We need to use transpose to transform that dataset into:
0	a1_1	a2_1	...
1	a1_2	a2_2	...
2	a1_3	a2_3	...
In this dataset, the absorbance values for a given wavelength across all the datasets are now stored in columns. The next step is just to use expand to obtain a series of datasets with the same x column and a single y column (each corresponding to a different wavelength in the original data). The game is now to replace these datasets with something that looks like:
0	a_average
1	a_stddev
For that, one takes advantage of the $stats.y_average and $stats.y_stddev values in apply-formula, together with the i special variable that represents the index of the point:
apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1
Then, all that is left is to apply this to all the datasets created by expand, which can be just made using run-for-datasets, and then, we reverse the splitting by using contract and transpose ! In summary, this looks like this. We need two files. The first, process-one.cmds contains the following code:
apply-formula "if i == 0; then y=$stats.y_average; end; if i == 1; then y=$stats.y_stddev; end"
strip-if i>1
flag /flags=processed
The main file, method-two.cmds looks like this:
load Spectrum-*.dat /flags=spectra
contract flagged:spectra
transpose
expand /flags=tmp
run-for-datasets process-one.cmds flagged:tmp
contract flagged:processed
transpose
dataset-options /yerrors=y2
Note some of the code above can be greatly simplified using new features present in the upcoming 3.0 version, but that is the topic for another post.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

Cryptogram “Privacy Nutrition Labels” in Apple’s App Store

Apple will require standardized privacy labels for apps in its App Store, starting in December:

Apple allows data disclosure to be optional if all of the following conditions apply: if it’s not used for tracking, advertising or marketing; if it’s not shared with a data broker; if collection is infrequent, unrelated to the app’s primary function, and optional; and if the user chooses to provide the data in conjunction with clear disclosure, the user’s name or account name is prominently displayed with the submission.

Otherwise, the privacy labeling is mandatory and requires a fair amount of detail. Developers must disclose the use of contact information, health and financial data, location data, user content, browsing history, search history, identifiers, usage data, diagnostics, and more. If a software maker is collecting the user’s data to display first or third-party adverts, this has to be disclosed.

These disclosures then get translated to a card-style interface displayed with app product pages in the platform-appropriate App Store.

The concept of a privacy nutrition label isn’t new, and has been well-explored at CyLab at Carnegie Mellon University.

Cryptogram New Zealand Election Fraud

It seems that this election season has not gone without fraud. In New Zealand, a vote for “Bird of the Year” has been marred by fraudulent votes:

More than 1,500 fraudulent votes were cast in the early hours of Monday in the country’s annual bird election, briefly pushing the Little-Spotted Kiwi to the top of the leaderboard, organizers and environmental organization Forest & Bird announced Tuesday.

Those votes — which were discovered by the election’s official scrutineers — have since been removed. According to election spokesperson Laura Keown, the votes were cast using fake email addresses that were all traced back to the same IP address in Auckland, New Zealand’s most populous city.

It feels like writing this story was a welcome distraction from writing about the US election:

“No one has to worry about the integrity of our bird election,” she told Radio New Zealand, adding that every vote would be counted.

Asked whether Russia had been involved, she denied any “overseas interference” in the vote.

I’m sure that’s a relief to everyone involved.

Cryptogram The Security Failures of Online Exam Proctoring

Proctoring an online exam is hard. It’s hard to be sure that the student isn’t cheating, maybe by having reference materials at hand, or maybe by substituting someone else to take the exam for them. There are a variety of companies that provide online proctoring services, but they’re uniformly mediocre:

The remote proctoring industry offers a range of services, from basic video links that allow another human to observe students as they take exams to algorithmic tools that use artificial intelligence (AI) to detect cheating.

But asking students to install software to monitor them during a test raises a host of fairness issues, experts say.

“There’s a big gulf between what this technology promises, and what it actually does on the ground,” said Audrey Watters, a researcher on the edtech industry who runs the website Hack Education.

“(They) assume everyone looks the same, takes tests the same way, and responds to stressful situations in the same way.”

The article discusses the usual failure modes: facial recognition systems that are more likely to fail on students with darker faces, suspicious-movement-detection systems that fail on students with disabilities, and overly intrusive systems that collect all sorts of data from student computers.

I teach cybersecurity policy at the Harvard Kennedy School. My solution, which seems like the obvious one, is not to give timed closed-book exams in the first place. This doesn’t work for things like the legal bar exam, which can’t modify itself so quickly. But this feels like an arms race where the cheater has a large advantage, and any remote proctoring system will be plagued with false positives.

Planet DebianReproducible Builds: Reproducible Builds in October 2020

Welcome to the October 2020 report from the Reproducible Builds project.

In our monthly reports, we outline the major things that we have been up to over the past month. As a brief reminder, the motivation behind the Reproducible Builds effort is to ensure flaws have not been introduced in the binaries we install on our systems. If you are interested in contributing to the project, please visit our main website.

General

On Saturday 10th October, Morten Linderud gave a talk at Arch Conf Online 2020 on The State of Reproducible Builds in Arch. The video should be available later this month, but as a teaser:

The previous year has seen great progress in Arch Linux to get reproducible builds in the hands of the users and developers. In this talk we will explore the current tooling that allows users to reproduce packages, the rebuilder software that has been written to check packages and the current issues in this space.

During the Reproducible Builds summit in Marrakesh in 2019, developers from the GNU Guix, NixOS and Debian distributions were able to produce a bit-for-bit identical GNU Mes binary despite using three different versions of GCC. Since this summit, additional work resulted in a bit-for-bit identical Mes binary using tcc, and last month a fuller update was posted to this effect by the individuals involved. This month, however, David Wheeler updated his extensive page on Fully Countering Trusting Trust through Diverse Double-Compiling, remarking that:

GNU Mes rebuild is definitely an application of [Diverse Double-Compiling]. [..] This is an awesome application of DDC, and I believe it’s the first publicly acknowledged use of DDC on a binary

There was a small, followup discussion on our mailing list.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

This month, the Reproducible Builds project restarted our IRC meetings, managing to convene twice: the first time on October 12th (summary & logs), and later on the 26th (logs). As mentioned in previous reports, due to the unprecedented events throughout 2020, there will be no in-person summit event this year.

On our mailing list this month, Elías Alejandro posted a request for help with a local configuration.

In August, Lucas Nussbaum performed an archive-wide rebuild of packages to test enabling the reproducible=+fixfilepath Debian build flag by default. Enabling this fixfilepath feature will likely fix reproducibility issues in an estimated 500-700 packages. However, this month Vagrant Cascadian posted to the debian-devel mailing list:

It would be great to see the reproducible=+fixfilepath feature enabled by default in dpkg-buildflags, and we would like to proceed forward with this soon unless we hear any major concerns or other outstanding issues. […] We would like to move forward with this change soon, so please raise any concerns or issues not covered already.
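For packages that want to opt in before the default changes, the feature can already be enabled through the dpkg-buildflags feature areas. A minimal sketch for debian/rules:

# Enable the fixfilepath feature of the 'reproducible' feature area;
# it adds -ffile-prefix-map=<buildpath>=. to the compiler flags so the
# build path no longer ends up in the generated binaries.
export DEB_BUILD_MAINT_OPTIONS = reproducible=+fixfilepath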

Debian Developer Stuart Prescott has been improving python-debian, a Python library that is used to parse Debian-specific files such as changelogs, .dscs, etc. In particular, Stuart is working on adding support for .buildinfo files used for recording reproducibility-related build metadata:

This can mostly be a very thin layer around the existing Deb822 types, using the existing Changes code for the file listings, the existing PkgRelations code for the package listing and gpg_* functions for signature handling.
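Until a dedicated class lands, the generic Deb822 type can already read such files. A small sketch (the file name is made up):

from debian import deb822

# .buildinfo files use RFC822-style stanzas, so the generic Deb822
# parser already exposes the top-level fields a BuildInfo class would wrap.
with open("hello_2.10-2_amd64.buildinfo") as f:
    info = deb822.Deb822(f)

print(info["Source"], info["Version"])
print(info["Build-Path"])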

A total of 159 Debian packages were categorised, 69 had their categorisation updated, and 33 had their classification removed this month, adding to our knowledge about identified issues. As part of this, Chris Lamb identified and classified two new issues: build_path_captured_in_emacs_el_file and rollup_embeds_build_path.

Software development

This month, we tried to fix a large number of currently-unreproducible packages, including:

Bernhard M. Wiedemann also reported three issues against bison, ibus and postgresql12.

Tools

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds too. This month, Chris Lamb uploaded version 161 to Debian (later backported by Mattia Rizzolo), as well as made the following changes:

  • Move test_ocaml to the assert_diff helper. []
  • Update tests to support OCaml version 4.11.1. Thanks to Sebastian Ramacher for the report. (#972518)
  • Bump minimum version of the Black source code formatter to 20.8b1. (#972518)

In addition, Jean-Romain Garnier temporarily updated the dependency on radare2 to ensure our test pipelines continue to work [], and for the GNU Guix distribution Vagrant Cascadian updated diffoscope to version 161 [].

In related development, trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb made the following changes:

  • Mark a --help-only test as being a ‘superficial’ test. (#971506)
  • Add a real, albeit flaky, test that interacts with the try.diffoscope.org service. []
  • Bump debhelper compatibility level to 13 [] and bump Standards-Version to 4.5.0 [].

Lastly, disorderfs version 0.5.10-2 was uploaded to Debian unstable by Holger Levsen, which enabled security hardening via DEB_BUILD_MAINT_OPTIONS [] and dropped debian/disorderfs.lintian-overrides [].

Website and documentation

This month, a number of updates to the main Reproducible Builds website and related documentation were made by Chris Lamb:

  • Added a citation link to the academic article regarding dettrace [], and added yet another supply-chain security attack publication [].
  • Reformatted the Jekyll Liquid templating and CSS to be consistent [], as well as expanded a number of tab characters [].
  • Used relative_url to fix missing translation icon on various pages. []
  • Published two announcement blog posts regarding the restarting of our IRC meetings. [][]
  • Added an explicit note regarding the lack of an in-person summit in 2020 to our events page. []

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Refactor and improve the Debian dashboard. [][][]
    • Track bugs which are usertagged as ‘filesystem’, ‘fixfilepath’, etc. [][][]
    • Make a number of changes to package index pages. [][][]
  • System health checks:

    • Relax disk space warning levels. []
    • Specifically detect build failures reported by dpkg-buildpackage. []
    • Fix a regular expression to detect outdated package sets. []
    • Detect Lintian issues in diffoscope. []

  • Misc:

    • Make a number of updates to reflect that our sponsor Profitbricks has renamed itself to IONOS. [][][][]
    • Run a F-Droid maintenance routine twice a month to utilise its cleanup features. []
    • Fix the target name in OpenWrt builds from ath97 to ath79. []
    • Add a missing Postfix configuration for a node. []
    • Temporarily disable Arch Linux builds until a core node is back. []
    • Make a number of changes to our “thanks” page. [][][]

Build node maintenance was performed by both Holger Levsen [][] and Vagrant Cascadian [][][]. Vagrant Cascadian also updated the page listing the variations made when testing to reflect changes in build paths [], and Hans-Christoph Steiner made a number of changes for F-Droid, the free software app repository for Android devices, including:

  • Do not fail reproducibility jobs when their cleanup tasks fail. []
  • Skip libvirt-related sudo command if we are not actually running libvirt. []
  • Use direct URLs in order to eliminate a useless HTTP redirect. []

If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via IRC or our mailing list.

Worse Than FailureCodeSOD: Testing Architectures

Marlyn’s employer ships software for a wide variety of CPU architectures. And depending on which branch of the product you were digging into, you might have code that builds for just i386, x86_64, PPC, and PPC64, while another branch might add s390, s390x, and aarch64.

As you might imagine, they have a huge automated test suite, meant to ensure that changes don’t break functionality or compatibility. So it’s a pity that their tests were failing.

The error messages implied that there were either missing files or too many files, depending on the branch in question, but Marlyn could see that the correct build outputs were there, so nothing should be missing. It must have been the test suite that had the error.

Marlyn dug into the Python script which drove their tests, and found the get_num_archs function, which theoretically would detect how many architectures this branch should output. Unfortunately, its implementation was straight out of XKCD.

def get_num_archs(self):
    return 7  # FIXME

At least they left a comment.
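For contrast, here is a hedged sketch of what a non-hardcoded version might look like. The configuration file name and its schema are invented for illustration, since we don't know how Marlyn's employer actually records each branch's target architectures:

import json

def get_num_archs(self):
    # Hypothetical fix: derive the count from a per-branch configuration
    # file, e.g. {"arches": ["i386", "x86_64", "ppc", "ppc64"]}, instead
    # of hardcoding a number that goes stale between branches.
    with open("build-config.json") as config_file:
        return len(json.load(config_file)["arches"])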


Krebs on SecurityPatch Tuesday, November 2020 Edition

Adobe and Microsoft each issued a bevy of updates today to plug critical security holes in their software. Microsoft’s release includes fixes for 112 separate flaws, including one zero-day vulnerability that is already being exploited to attack Windows users. Microsoft also is taking flak for changing its security advisories and limiting the amount of information disclosed about each bug.

Some 17 of the 112 issues fixed in today’s patch batch involve “critical” problems in Windows, or those that can be exploited by malware or malcontents to seize complete, remote control over a vulnerable Windows computer without any help from users.

Most of the rest were assigned the rating “important,” which in Redmond parlance refers to a vulnerability whose exploitation could “compromise the confidentiality, integrity, or availability of user data, or of the integrity or availability of processing resources.”

A chief concern among all these updates this month is CVE-2020-17087, which is an “important” bug in the Windows kernel that is already seeing active exploitation. CVE-2020-17087 is not listed as critical because it’s what’s known as a privilege escalation flaw that would allow an attacker who has already compromised a less powerful user account on a system to gain administrative control. In essence, it would have to be chained with another exploit.

Unfortunately, this is exactly what Google researchers described witnessing recently. On Oct. 20, Google released an update for its Chrome browser which fixed a bug (CVE-2020-15999) that was seen being used in conjunction with CVE-2020-17087 to compromise Windows users.

If you take a look at the advisory Microsoft released today for CVE-2020-17087 (or any others from today’s batch), you might notice they look a bit more sparse. That’s because Microsoft has opted to restructure those advisories around the Common Vulnerability Scoring System (CVSS) format to more closely align the format of the advisories with that of other major software vendors.

But in so doing, Microsoft has also removed some useful information, such as the description explaining in broad terms the scope of the vulnerability, how it can be exploited, and what the result of the exploitation might be. Microsoft explained its reasoning behind this shift in a blog post.

Not everyone is happy with the new format. Bob Huber, chief security officer at Tenable, praised Microsoft for adopting an industry standard, but said the company should consider that folks who review Patch Tuesday releases aren’t security practitioners but rather IT counterparts responsible for actually applying the updates who often aren’t able (and shouldn’t have to) decipher raw CVSS data.

“With this new format, end users are completely blind to how a particular CVE impacts them,” Huber said. “What’s more, this makes it nearly impossible to determine the urgency of a given patch. It’s difficult to understand the benefits to end-users. However, it’s not too difficult to see how this new format benefits bad actors. They’ll reverse engineer the patches and, by Microsoft not being explicit about vulnerability details, the advantage goes to attackers, not defenders. Without the proper context for these CVEs, it becomes increasingly difficult for defenders to prioritize their remediation efforts.”

Dustin Childs with Trend Micro‘s Zero Day Initiative also puzzled over the lack of details included in Microsoft advisories tied to two other flaws fixed today — including one in Microsoft Exchange Server (CVE-2020-16875) and CVE-2020-17051, which is a scary-looking weakness in the Windows Network File System (NFS).

The Exchange problem, Childs said, was reported by the winner of the Pwn2Own Miami bug finding contest.

“With no details provided by Microsoft, we can only assume this is the bypass of CVE-2020-16875 he had previously mentioned,” Childs said. “It is very likely he will publish the details of these bugs soon. Microsoft rates this as important, but I would treat it as critical, especially since people seem to find it hard to patch Exchange at all.”

Likewise, with CVE-2020-17051, there was a noticeable lack of detail for a bug that earned a CVSS score of 9.8 (10 is the most dangerous).

“With no description to work from, we need to rely on the CVSS to provide clues about the real risk from the bug,” Childs said. “Considering this is listed as no user interaction with low attack complexity, and considering NFS is a network service, you should treat this as wormable until we learn otherwise.”

Separately, Adobe today released updates to plug at least 14 security holes in Adobe Acrobat and Reader. Details about those fixes are available here. There are no security updates for Adobe’s Flash Player, which Adobe has said will be retired at the end of the year. Microsoft, which has bundled versions of Flash with its Web browsers, says it plans to ship an update in December that will remove Flash from Windows PCs, and last month it made the removal tool available for download.

Windows 10 users should be aware that the operating system will download updates and install them on its own schedule, closing out active programs and rebooting the system. If you wish to ensure Windows has been set to pause updating so you can back up your files and/or system, see this guide.

But please do back up your system before applying any of these updates. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

,

Krebs on SecurityRansomware Group Turns to Facebook Ads

It’s bad enough that many ransomware gangs now have blogs where they publish data stolen from companies that refuse to make an extortion payment. Now, one crime group has started using hacked Facebook accounts to run ads publicly pressuring their ransomware victims into paying up.

On the evening of Monday, Nov. 9, an ad campaign apparently taken out by the Ragnar Locker Team began appearing on Facebook. The ad was designed to turn the screws to the Italian beverage vendor Campari Group, which acknowledged on Nov. 3 that its computer systems had been sidelined by a malware attack.

On Nov. 6, Campari issued a follow-up statement saying “at this stage, we cannot completely exclude that some personal and business data has been taken.”

“This is ridiculous and looks like a big fat lie,” reads the Facebook ad campaign from the Ragnar crime group. “We can confirm that confidential data was stolen and we talking about huge volume of data.”

The ad went on to say Ragnar Locker Team had offloaded two terabytes of information and would give the Italian firm until 6 p.m. EST today (Nov. 10) to negotiate an extortion payment in exchange for a promise not to publish the stolen files.

The Facebook ad blitz was paid for by Hodson Event Entertainment, an account tied to Chris Hodson, a deejay based in Chicago. Contacted by KrebsOnSecurity, Hodson said his Facebook account indeed was hacked, and that the attackers had budgeted $500 for the entire campaign.

“I thought I had two-step verification turned on for all my accounts, but now it looks like the only one I didn’t have it set for was Facebook,” Hodson said.

Hodson said a review of his account shows the unauthorized campaign reached approximately 7,150 Facebook users, and generated 770 clicks, with a cost-per-result of 21 cents. Of course, it didn’t cost the ransomware group anything. Hodson said Facebook billed him $35 for the first part of the campaign, but apparently detected the ads as fraudulent sometime this morning before his account could be billed another $159 for the campaign.

The results of the unauthorized Facebook ad campaign. Image: Chris Hodson.

It’s not clear whether this was an isolated incident, or whether the fraudsters also ran ads using other hacked Facebook accounts. A spokesperson for Facebook said the company is still investigating the incident. A request for comment sent via email to Campari’s media relations team was returned as undeliverable.

But it seems likely we will continue to see more of this and other mainstream advertising efforts by ransomware groups going forward, even if victims really have no expectation that paying an extortion demand will result in criminals actually deleting or not otherwise using stolen data.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said some ransomware groups have become especially aggressive of late in pressuring their victims to pay up.

“They have also started to call victims,” Wosar said. “They’re outsourcing to Indian call centers, who call victims asking when they are going to pay or have their data leaked.”

Planet DebianThorsten Alteholz: My Debian Activities in October 2020

FTP master

This month I accepted 208 packages and rejected 29. The overall number of packages that got accepted was 563, so yeah, I was not alone this month :-).

Anyway, this month marked another milestone in my NEW package handling. My overall number of ACCEPTed packages exceeded the magic number of 20000 packages. This is almost 30% of all packages accepted in Debian. I am a bit proud of this achievement.

Debian LTS

This was my seventy-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 20.75h. During that time I did LTS uploads of:

  • [DLA 2415-1] freetype security update for one CVE
  • [DLA 2419-1] dompurify.js security update for two CVEs
  • [DLA 2418-1] libsndfile security update for eight CVEs
  • [DLA 2421-1] cimg security update for eight CVEs

I also started to work on golang-1.7 and golang-1.8.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-eighth ELTS month.

During my allocated time I uploaded:

  • ELA-289-2 for python3.4
  • ELA-304-1 for freetype
  • ELA-305-1 for libsndfile

The first upload of python3.4, last month, did not build on armel, so I had to reupload an improved package this month. For amd64 and i386 the ELTS packages are built in native mode, whereas the packages on armel are cross-built. There is some magic in debian/rules of python to detect in which mode the package is built. This is important as some tests of the testsuite are not really working in cross-build-mode. Unfortunately I had to learn this the hard way …
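For the curious, the usual idiom for this detection in a debian/rules file looks something like the following sketch (not the actual python3.4 rules; the DEB_BUILD_GNU_TYPE and DEB_HOST_GNU_TYPE variables are provided by dpkg-architecture):

ifeq ($(DEB_BUILD_GNU_TYPE),$(DEB_HOST_GNU_TYPE))
    # native build: the testsuite can execute the binaries it just built
    RUN_TESTS = yes
else
    # cross build: skip tests that would have to run target binaries
    RUN_TESTS = no
endif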

The upload of libsndfile now aligns the number of fixed CVEs in all releases.

Last but not least I did some days of frontdesk duties.

Other stuff

Apart from my NEW-handling and LTS/ELTS stuff, I didn’t have much fun with Debian packages this month. Given the approaching freeze, I hope this will change again in November.

LongNowScenario Planning for the Long-term

The following transcript has been edited for length and clarity. 

The Role of Mental Maps

This is a map of North America. It was made by a Dutch map maker by the name of Herman Moll, working in London in 01701. I bought it on Portobello Road for about 60 pounds back in 01981. Which is to say, it’s not a particularly valuable map. But there is something unusual about it: California is depicted as an island. 

What’s interesting to me as a scenario planner is how the map came to be, how it was used, and how it was changed.

The Spanish came up from the South, and they found what we now call Mexico, and the tip of the Baja Peninsula, and sailed up into the Sea of Cortez. Those who went further north along the West Coast eventually came to the Strait of Juan de Fuca, a channel separating present-day Washington and British Columbia. Assuming the two bodies of water were connected, they created the Island of California. 

Now, this would only be a historical curiosity were it not for the problem of the missionaries who actually used the map to go inland. And of course, they would have to take their boats with them to cross the Sea of California. And when they went over the Sierra Nevada Mountains down the other side, they found this beach that went on and on and on and on until finally they realized there was no Sea of California.

And they went back to the map makers in Spain and said, “Your bloody map is wrong!” And the map makers fought back and said, “No, no, no, you’re in the wrong place, the map is right.” Anybody who works in a large organization understands this logic very well. Because the map is always right. 

The first maps depicting California as an island were drawn in 01605. In 01685, the King of Spain finally figured out that this was wrong and ordered the maps in Spain to be corrected. But we still have maps dating from 01765 that were drawn this way.

So what’s the message? If you get your facts wrong, you get your map wrong. If you get your map wrong, you do the wrong thing. But worst of all, once you believe a map, it’s very hard to change. 

We make our decisions about the future based on our own mental maps about how the world works. And we are very much prisoners of those mental maps. Part of the function of scenario planning is figuring out how to break out of the constraints. How do we challenge those mental maps that we see about how people behave, how organizations work and how institutions evolve?

The Importance of Diversity in Scenario-Planning 

This is the slide that IBM used to make a decision about the future of personal computers. It is the costliest slide in business history. This is a $200 billion slide.

In 01980, IBM made the above forecast of what they believed the demand of personal computers would be to decide whether or not they should get into the business. They projected that the total computers sold through all channels, over five years, would be 241,683, peaking in 01983 and heading south. After all, why would anyone buy a second computer?

This product [the personal computer] was so funky that the theory was that pursuing it was going to kill Apple. That was the goal: get people back to real computers [large mainframes] because real men use big machines. It was nine men in the room making this call, and they got it totally wrong. The correct answer was 25 million units sold over that five year period. 

So, they were a little bit off and they said, “Okay, 241,000 units, a couple thousand dollars a unit. Well, it’s not worth developing a chip. Intel, give us the chip. And there’s this kid from Seattle, Gates or something, who has got an Operating System called QDOS. He’ll give it to us for free, we’ll put it on the machine, we’ll put it in a box, we’ll call it an IBM computer.”

This was the moment they almost lost the future of the company, because they could not imagine that the world could be so different, that people would like a box with 16K of memory in it. That just seemed inconceivable to these nine men, who were the smartest people in their industry, who knew everything about what they were doing and yet still got it completely wrong. And it literally almost killed IBM as a result. And of course, they no longer make PCs because they could no longer compete, et cetera. This slide is why Bill Gates is one of the richest men in the world. If IBM had said, “You know, maybe there really is a future here, and we’ll develop our own operating system and our own chips,” it might’ve been a very different story.

So, again, part of our challenge today in our thinking is precisely how we challenge each other’s mental maps. And for that, you need diversity. Diversity is the single most important characteristic for thinking about the future. Every time I have been wrong, with no exceptions, it’s because we had inadequate diversity in the room. There are a number of embarrassing moments in the history of the Global Business Network (GBN), the Mexico meeting being maybe the lowest point. Two weeks before the collapse of the Mexican peso, we said there were three scenarios for Mexico: a good scenario, a better scenario, and a best scenario. Two weeks later, it all went in the tank. Why? Because we were all just talking to ourselves, and as a result, we got it completely wrong.

So, one of the most important messages about long-term thinking is the inclusion of diverse points of view. And if you’re trapped in one mindset, you’re going to miss an enormous amount. 

The Spirit of Surprise

“Often do the spirits of great events stride on before the events, and in today already walks tomorrow.”

Friedrich Schiller (01759-01805)

The future is being born. We all remember the William Gibson quote: “The future’s already here, it’s just not evenly distributed yet.” The signals are out there.

And why diversity matters so much is because it enables you to pick up on a variety of signals from a variety of different disciplines, contexts, cultures, et cetera. And that’s an important part of what scenario thinking is about.

Scenario planning is rooted in the concept of Multiple Possibilities

The way most organizations have thought about the future is to project out from the present. And then, if they were concerned about uncertainty, they shaded it up and down a little bit, 10% up 10% down in what was called sensitivity analysis. It didn’t require much imagination; it required math. 

But scenario planning involves imagining different possibilities and then figuring out how we can get from here to there. It requires a combination of two things: imagination and analytic realism. If it’s just forecasts, just analysis, it’s pedestrian. You miss the big surprises. You don’t see that new mental map. If it’s just imagination without analysis and rigor behind it, it is just that; it’s good fiction.

So, it’s important that both of these come to bear in the task of thinking about scenarios. 

The Test of a Good Scenario

Scenario planning is not about prediction; it’s about making better decisions. That is, if you really do your homework well in multiple scenarios, you’re probably going to see this future. That’s not the hard part. The hard part is: what do you do? And if your scenarios are brilliant and nobody pays any attention to them, you have failed. Having been a consultant for many years, it was not a way to get more business to say to a CEO, “Well, we gave you the future and you blew it. You didn’t make the right decision.” That was our failure as consultants. Our job was to actually affect the mindset of decision makers. 

In the end, what we want to do with the Organizational Continuity Project is not simply understand long-term institutions, but influence them. How do we actually make better decisions about our societies, our governments, our corporations, our educational institutions, our communities? How do we actually think long-term and make better choices? That’s the real goal here. It’s beginning to think long-term about what is likely to happen. 

The Strategic Conversation and Strategic Options

It’s about a strategic conversation. What we want to empower is thinking about different scenarios, going out subsequently, creating new knowledge, doing research, doing serious homework, beginning to think about how you create what we call emergent strategies—strategies that emerge out of that conversation, as opposed to top-down control. And then testing those emergent strategies against what we’re already doing, and thereby improving the quality of decision-making.

So what this is really about is an orchestrated strategic conversation, with inputs from a variety of different sources, thinking about possible scenarios, thinking about how we might influence the shape of institutions going forward, and what the rules for those might be. And this continues on. This doesn’t stop. This conversation is a perpetual Long Now conversation.

Planet DebianJonathan Dowland: Borg, confidence in backups, GtkPod and software preservation

Over the summer I decided to migrate my backups from rdiff-backup to borg, which offers some significant advantages, in particular de-duplication, but comes at a cost of complexity, and a corresponding sense of unease about how sound my backup strategy might be. I've now hit the Point Of No Return: my second external backup drive is overdue being synced with my NAS, which will delete the last copy of the older rdiff-backup backups.

Whilst I hesitate over this last action to commit to borg, something else happened. My wife wanted to put a copy of her iTunes music library on her new phone, and I couldn't find it: not only could I not find it on any of our computers, I also couldn't find a copy on the NAS, or in backups, or even in old DVD-Rs. This has further knocked my confidence in our family data management, and makes me even more nervous to commit to borg. I'm now wondering about stashing the contents of the second external backup disk on some cloud service as a fail-safe.

There was one known-good copy of Sarah's music: on her ancient iPod Nano. Apple have gone to varying lengths to prevent you from copying music from an iPod. When music is copied to an iPod, the files are stripped of all their metadata (artist, title, album, etc.) and renamed to something non-identifying (e.g. F01/MNRL.m4a), and the metadata (and correlation to the obscure file name) is saved in separate database files. The partition of the flash drive containing all this is also marked as "hidden" to prevent it appearing on macOS and Windows systems. We are lucky that the iPod is so old, because Apple went even further in more recent models, adding a layer of encryption.

To get the music off the iPod, one has to undo all of these steps.
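Conceptually, the recovery is a join between the obscured files and the metadata database. The hypothetical Python sketch below illustrates the idea only: the CSV mapping and its column names are invented, and the real iTunesDB is a proprietary binary database - parsing it is precisely the hard work that the tools mentioned below do for you:

import csv
import shutil
from pathlib import Path

def recover(music_dir, mapping_csv, dest_dir):
    # Assumed CSV columns: path (e.g. "F01/MNRL.m4a"), artist, album, title.
    with open(mapping_csv, newline="") as f:
        for row in csv.DictReader(f):
            src = Path(music_dir) / row["path"]
            target = (Path(dest_dir) / row["artist"] / row["album"]
                      / (row["title"] + src.suffix))
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)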

Luckily, other fine folks have worked out how to reverse all these steps and implemented it in software such as libgpod and its frontend, GtkPod, which is still currently available as a Debian package. It mostly worked, and I got back 95% of the tracks. (It would have been nice if GtkPod had reported the tracks it hadn't recovered; it was aware they existed, based on the errors it did print. But you can't have everything.)

GtkPod is a quirky, erratic piece of software, that is only useful for old Apple equipment that is long out of production, prior to the introduction of the encryption. The upstream homepage is dead, and I suspect it is unmaintained. The Debian package is orphaned. It's been removed from testing, because it won't build with GCC 10. On the other hand, my experience shows that it worked, and was useful for a real problem that someone had today.

I'm in two minds about GtkPod's fate. On the one hand, I think Debian has far too many packages, with a corresponding burden of maintenance responsibility (for the whole project, not just the individual package maintainers), and there's a quality problem: once upon a time, if software had been packaged in a distribution like Debian, that was a mark of quality, a vote of confidence, and you could have some hope that the software would work and integrate well with the rest of the system. That is no longer true, and hasn't been in my experience for many years. If we were more discerning about what software we included in the distribution, and what we kept, perhaps we could be a leaner distribution, faster to adapt to the changing needs in the world, and of a higher quality.

On the other hand, this story about GtkPod is just one of many similar stories. Real problems have been solved in open source software, and computing historians, vintage computer enthusiasts, researchers etc. can still benefit from that software long into the future. Throwing out all this stuff in the name of "progress", could be misguided. I'm especially sad when I see the glee which people have expressed when ditching libraries like Qt4 from the archive. Some software will not be ported on to Qt5 (or Gtk3, Qt6, Gtk4, Qt7, etc., in perpetuity). Such software might be all of: unmaintained, "finished", and useful for some purpose (however niche), all at the same time.

Planet DebianJonathan Dowland: Red Hat at the Turing Institute

In Summer 2019 Red Hat were invited to the Turing Institute to provide a workshop on issues around building and sustaining an Open Source community. I was part of a group of about 6 people to visit the Turing and deliver the workshop. It seemed to have been well received by the audience.

The Turing Institute is based within the British Library. For many years I have enjoyed visiting the British Library if I was visiting or passing through London for some reason or other: it's such a lovely serene space in a busy, hectic part of London. On one occasion they had Jack Kerouac's manuscript for "On The Road" on display in one of the public gallery spaces: it's a continuous 120-foot long piece of paper that Kerouac assembled to prevent the interruption of changing sheets of paper in his typewriter from disturbing his flow whilst writing.

The Institute itself is a really pleasant-looking working environment. I got a quick tour of it back in February 2019 when visiting a friend who worked there, but last year's visit was my first prolonged experience of working there. (I also snuck in this February, when passing through London, to visit my supervisor, who is a Turing Fellow.)

I presented a section of a presentation entitled "How to build a successful Open Source community". My section attempted to focus on the "how". We've put out all the presentations under a Creative Commons license, and we've published them on the Red Hat Research website: https://research.redhat.com/blog/2020/08/12/open-source-at-the-turing/

The workshop participants were drawn from PhD students, research associates, research software engineers and Turing Institute fellows. We had some really great feedback from them which we've fed back into revisions of the workshop material including the presentations.

I'm hoping to stay involved in further collaborations between the Turing and Red Hat. I'm pleased to say that we participated in a recent Tools, practices and systems seminar (although I was not involved).

Worse Than FailureCodeSOD: Tranposing the Key

Russell F sends us this C# "function", and I have to be honest: I have no idea what it's supposed to do. I can trace through the logic, I can see what it does, but I don't understand why it does it.

private List<LaborService> Tranpose(List<LaborService> laborService)
{
    int half = (int)Math.Ceiling((decimal)(laborService.Count) / 2);
    for (int i = 0; i < laborService.Count; i++)
    {
        if (i < half)
            laborService[i].Order = 2 * i;
        else
            laborService[i].Order = (i - half) + 1;
    }
    return laborService.OrderBy(x => x.Order).ToList();
}

So this starts by finding the rough midpoint of our list. Then we iterate across each element, and if its position is less than half, we place double its index into the Order field. If it's half or greater, we store its index minus half, plus one, into its Order field. Finally, we sort by Order.

Now, based on the method name, we can assume this was inspired by a matrix transposition- oh, I'm sorry, tranposition. It isn't one. It's almost an interleaving operation, but it also isn't one of those.

You can play with the code or just look at this table.

Ceiling of half of 10 is 5.

Indexes:  0 1 2 3 4 5 6 7 8 9
Values:   A B C D E F G H I J
Order:    0 2 4 6 8 1 2 3 4 5
-----------------------------
New Sort: A F B G H C I J D E

What you can notice here is that as we re-number our Orders, the bottom half gets doubled, but the top half increases incrementally. This means that we end up with ties, and that means that we end up with sections where elements from either half of the list end up next to each other- see G, H, I, J and D, E in my example.
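If you want to verify the ties yourself, here is a quick Python reproduction of the same logic (Python's sorted is stable, just like LINQ's OrderBy, so equal Order values keep their original relative positions):

import math

def tranpose(items):
    # Mirror the C# logic: the first half gets Order = 2*i,
    # the second half gets Order = (i - half) + 1.
    half = math.ceil(len(items) / 2)
    orders = [2 * i if i < half else (i - half) + 1 for i in range(len(items))]
    # A stable sort on Order reproduces the C# OrderBy result.
    return [item for _, item in sorted(zip(orders, items), key=lambda pair: pair[0])]

print(tranpose(list("ABCDEFGHIJ")))
# ['A', 'F', 'B', 'G', 'H', 'C', 'I', 'J', 'D', 'E']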

What is this for? Why does this exist? Why does it matter? No idea.

But Russell has another detail to add:

The Order field is never used anywhere but in this one function -- it appears to have been added solely to allow this.


,

Cryptogram 2020 Was a Secure Election

Over at Lawfare: “2020 Is An Election Security Success Story (So Far).”

What’s more, the voting itself was remarkably smooth. It was only a few months ago that professionals and analysts who monitor election administration were alarmed at how badly unprepared the country was for voting during a pandemic. Some of the primaries were disasters. There were not clear rules in many states for voting by mail or sufficient opportunities for voting early. There was an acute shortage of poll workers. Yet the United States saw unprecedented turnout over the last few weeks. Many states handled voting by mail and early voting impressively and huge numbers of volunteers turned up to work the polls. Large amounts of litigation before the election clarified the rules in every state. And for all the president’s griping about the counting of votes, it has been orderly and apparently without significant incident. The result was that, in the midst of a pandemic that has killed 230,000 Americans, record numbers of Americans voted — and voted by mail — and those votes are almost all counted at this stage.

On the cybersecurity front, there is even more good news. Most significantly, there was no serious effort to target voting infrastructure. After voting concluded, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Chris Krebs, released a statement, saying that “after millions of Americans voted, we have no evidence any foreign adversary was capable of preventing Americans from voting or changing vote tallies.” Krebs pledged to “remain vigilant for any attempts by foreign actors to target or disrupt the ongoing vote counting and final certification of results,” and no reports have emerged of threats to tabulation and certification processes.

A good summary.

Planet DebianJoachim Breitner: Distributing Haskell programs in a multi-platform zip file

My maybe most impactful piece of code is tttool and the surrounding project, which allows you to create your own content for the Ravensburger Tiptoi™ platform. The program itself is a command line tool, and in this blog post I want to show how I go about building that program for Linux (both normal and static builds), Windows (cross-compiled from Linux), OSX (only on CI), all combined into and released as a single zip file.

Maybe some of it is useful or inspiring to my readers, or can even serve as a template. This being a blog post, though, note that it may become obsolete or outdated.

Ingredients

I am building on these components:

  • nix

Without the nix build system and package manager I probably wouldn’t even attempt to pull off complex tasks that may, say, require a patched ghc. For many years I resisted learning about nix, but when I eventually had to, I didn’t want to go back.

  • haskell.nix

This project provides an alternative Haskell build infrastructure for nix. While this is not crucial for tttool, it helps that it tends to carry more cross-compilation-related patches than the official nixpkgs. I also like that it more closely follows the cabal build work-flow, where cabal calculates a build plan based on your project’s dependencies. It even has decent documentation (which is a new thing compared to two years ago).

  • niv

Niv is a neat little tool to keep track of your dependencies. You can quickly update them with, say, niv update nixpkgs. But what’s really great is to temporarily replace one of your dependencies with a local checkout, e.g. via NIV_OVERRIDE_haskellNix=$HOME/build/haskell/haskell.nix nix-instantiate -A osx-exe-bundle. There is a Github action that will keep your niv-managed dependencies up-to-date.

  • Cachix

This service (proprietary, but free for public stuff up to 10GB) gives your project its own nix cache. This means that build artifacts can be cached between CI builds or even build steps, and shared with your contributors. A cache like this is a must if you want to use nix in more interesting ways where you may end up using, say, a changed GHC compiler. Comes with GitHub actions integration.

  • CI via Github actions

Until recently, I was using Travis, but Github actions are just a tad easier to set up and, maybe more important here, the job times are high enough that you can rebuild GHC if you have to, and even if your build gets canceled or times out, cleanup CI steps still happen, so that any new nix build products will still reach your nix cache.

The repository setup

All files discussed in the following are reflected at https://github.com/entropia/tip-toi-reveng/tree/7020cde7da103a5c33f1918f3bf59835cbc25b0c.

We are starting with a fairly normal Haskell project, with a single .cabal file (but multi-package projects should work just fine). To make things more interesting, I also have a cabal.project which configures one dependency to be fetched via git from a specific fork.

To start building the nix infrastructure, we can initialize niv and configure it to use the haskell.nix repo:

niv init
niv add input-output-hk/haskell.nix -n haskellNix

This creates nix/sources.json (which you can also edit by hand) and nix/sources.nix (which you can treat like a black box).

Now we can start writing the all-important default.nix file, which defines almost everything of interest here. I will just go through it line by line, and explain what I am doing here.

{ checkMaterialization ? false }:

This defines a flag that we can later set when using nix-build, by passing --arg checkMaterialization true, and which is off by default. I’ll get to that flag later.

let
  sources = import nix/sources.nix;
  haskellNix = import sources.haskellNix {};

This imports the sources as defined in nix/sources.json, and loads the pinned revision of the haskell.nix repository.

  # windows crossbuilding with ghc-8.10 needs at least 20.09.
  # A peek at https://github.com/input-output-hk/haskell.nix/blob/master/ci.nix can help
  nixpkgsSrc = haskellNix.sources.nixpkgs-2009;
  nixpkgsArgs = haskellNix.nixpkgsArgs;

  pkgs = import nixpkgsSrc nixpkgsArgs;

Now we can define pkgs, which is “our” version of the nixpkgs package set, extended with the haskell.nix machinery. We rely on haskell.nix to pin a suitable revision of the nixpkgs set (see how we are using their niv setup).

Here we could add our own configuration, overlays, etc. to nixpkgsArgs. In fact, we do in

  pkgs-osx = import nixpkgsSrc (nixpkgsArgs // { system = "x86_64-darwin"; });

to get the nixpkgs package set of an OSX machine.

  # a nicer filterSource
  sourceByRegex =
    src: regexes: builtins.filterSource (path: type:
      let relPath = pkgs.lib.removePrefix (toString src + "/") (toString path); in
      let match = builtins.match (pkgs.lib.strings.concatStringsSep "|" regexes); in
      ( type == "directory"  && match (relPath + "/") != null
      || match relPath != null)) src;

Next I define a little helper that I have been copying between projects, and which allows me to define the input to a nix derivation (i.e. a nix build job) with a set of regexes. I’ll use that soon.

  tttool-exe = pkgs: sha256:
    (pkgs.haskell-nix.cabalProject {

The cabalProject function takes a cabal project and turns it into a nix project, running cabal v2-configure under the hood to let cabal figure out a suitable build plan. Since we want to have multiple variants of the tttool, this is so far just a function of two arguments pkgs and sha256, which will be explained in a bit.

      src = sourceByRegex ./. [
          "cabal.project"
          "src/"
          "src/.*/"
          "src/.*.hs"
          ".*.cabal"
          "LICENSE"
        ];

The cabalProject function wants to know the source of the Haskell project. There are different ways of specifying this; in this case I went for a simple whitelist approach. Note that cabal.project.freeze (which exists in the directory) is not included.

      # Pinning the input to the constraint solver
      compiler-nix-name = "ghc8102";

The cabal solver doesn’t find out which version of ghc to use; that is still my choice. I am using GHC-8.10.2 here. It may require a bit of experimentation to see which version works for your project, especially when cross-compiling to odd targets.

      index-state = "2020-11-08T00:00:00Z";

I want the build to be deterministic, and not let cabal suddenly pick different package versions just because something got uploaded. Therefore I specify which snapshot of the Hackage package index it should consider.

      plan-sha256 = sha256;
      inherit checkMaterialization;

Here we use the second parameter, but I’ll defer the explanation for a bit.

      modules = [{
        # smaller files
        packages.tttool.dontStrip = false;
      }] ++

These “modules” are essentially configuration data that is merged in a structural way. Here we say that we want the tttool binary to be stripped (saves a few megabytes).

      pkgs.lib.optional pkgs.hostPlatform.isMusl {
        packages.tttool.configureFlags = [ "--ghc-option=-static" ];

Also, when we are building on the musl platform, that’s when we want to produce a static build, so let’s pass -static to GHC. This seems to be enough in terms of flags to produce static binaries. It helps that my project is using mostly pure Haskell libraries; if you link against C libraries you might have to jump through additional hoops to get static linking going. The haskell.nix documentation has a section on static building with some flags to cargo-cult.

        # terminfo is disabled on musl by haskell.nix, but still the flag
        # is set in the package plan, so override this
        packages.haskeline.flags.terminfo = false;
      };

This (again only used when the platform is musl) seems to be necessary to work around what might be a bug in haskell.nix.

    }).tttool.components.exes.tttool;

The cabalProject function returns a data structure with all Haskell packages of the project, and for each package the different components (libraries, tests, benchmarks and of course executables). We only care about the tttool executable, so let’s project that out.

  osx-bundler = pkgs: tttool:
   pkgs.stdenv.mkDerivation {
      name = "tttool-bundle";

      buildInputs = [ pkgs.macdylibbundler ];

      builder = pkgs.writeScript "tttool-osx-bundler.sh" ''
        source ${pkgs.stdenv}/setup

        mkdir -p $out/bin/osx
        cp ${tttool}/bin/tttool $out/bin/osx
        chmod u+w $out/bin/osx/tttool
        dylibbundler \
          -b \
          -x $out/bin/osx/tttool \
          -d $out/bin/osx \
          -p '@executable_path' \
          -i /usr/lib/system \
          -i ${pkgs.darwin.Libsystem}/lib
      '';
    };

This function, only to be used on OSX, takes a fully built tttool, finds all the system libraries it is linking against, and copies them next to the executable, using the nice macdylibbundler. This way we can get a self-contained executable.

A nix expert will notice that this probably should be written with pkgs.runCommandNoCC, but then dylibbundler fails because it lacks otool. This should work eventually, though.

in rec {
  linux-exe      = tttool-exe pkgs
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";
  windows-exe    = tttool-exe pkgs.pkgsCross.mingwW64
     "01js5rp6y29m7aif6bsb0qplkh2az0l15nkrrb6m3rz7jrrbcckh";
  static-exe     = tttool-exe pkgs.pkgsCross.musl64
     "0gbkyg8max4mhzzsm9yihsp8n73zw86m3pwvlw8170c75p3vbadv";
  osx-exe        = tttool-exe pkgs-osx
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";

Time to create the four versions of tttool. In each case we use the tttool-exe function from above, passing the package set (pkgs,…) and a SHA256 hash.

The package set is either the normal one, or it is one of those configured for cross compilation, building either for Windows or for Linux using musl, or it is the OSX package set that we instantiated earlier.

The SHA256 hash describes the result of the cabal plan calculation that happens as part of cabalProject. By noting down the expected result, nix can skip that calculation, or fetch it from the nix cache etc.

How do we know what number to put there, and when to change it? That’s when the --arg checkMaterialization true flag comes into play: When that is set, cabalProject will not blindly trust these hashes, but rather re-calculate them, and tell you when they need to be updated. We’ll make sure that CI checks them.
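(Locally, the same check is a one-liner, e.g. nix-build --arg checkMaterialization true -A linux-exe, which complains whenever the recorded hashes need updating.)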

  osx-exe-bundle = osx-bundler pkgs-osx osx-exe;

For OSX, I then run the output through osx-bundler defined above, to make it independent of any library paths in /nix/store.

This is already good enough to build the tool for the various systems! The rest of the file is related to packaging up the binaries, to tests, and various other things, but nothing too essential. So if you got bored, you can more or less stop now.

  static-files = sourceByRegex ./. [
    "README.md"
    "Changelog.md"
    "oid-decoder.html"
    "example/.*"
    "Debug.yaml"
    "templates/"
    "templates/.*\.md"
    "templates/.*\.yaml"
    "Audio/"
    "Audio/digits/"
    "Audio/digits/.*\.ogg"
  ];

  contrib = ./contrib;

The final zip file that I want to serve to my users contains a bunch of files from throughout my repository; I collect them here.

  book = …;

The project comes with documentation in the form of a Sphinx project, which we build here. I’ll omit the details, because they are not relevant for this post (but of course you can peek if you are curious).

  os-switch = pkgs.writeScript "tttool-os-switch.sh" ''
    #!/usr/bin/env bash
    case "$OSTYPE" in
      linux*)   exec "$(dirname "''${BASH_SOURCE[0]}")/linux/tttool" "$@" ;;
      darwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/osx/tttool" "$@" ;;
      msys*)    exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      cygwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      *)        echo "unsupported operating system $OSTYPE" ;;
    esac
  '';

The zipfile should provide a tttool command that works on all systems. To that end, I implement a simple platform switch using bash. I use pkgs.writeScript so that I can include that file directly in default.nix, but it would have been equally reasonable to just save it into nix/tttool-os-switch.sh and include it from there.

  release = pkgs.runCommandNoCC "tttool-release" {
    buildInputs = [ pkgs.perl ];
  } ''
    # check version
    version=$(${static-exe}/bin/tttool --help|perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    doc_version=$(perl -ne "print \$1 if /VERSION: '(.*)'/" ${book}/book.html/_static/documentation_options.js)

    if [ "$version" != "$doc_version" ]
    then
      echo "Mismatch between tttool version \"$version\" and book version \"$doc_version\""
      exit 1
    fi

Now the derivation that builds the content of the release zip file. First I double check that the version number in the code and in the documentation matches. Note how ${static-exe} refers to a path with the built static Linux build, and ${book} the output of the book building process.

    mkdir -p $out/
    cp -vsr ${static-files}/* $out
    mkdir $out/linux
    cp -vs ${static-exe}/bin/tttool $out/linux
    cp -vs ${windows-exe}/bin/* $out/
    mkdir $out/osx
    cp -vsr ${osx-exe-bundle}/bin/osx/* $out/osx
    cp -vs ${os-switch} $out/tttool
    mkdir $out/contrib
    cp -vsr ${contrib}/* $out/contrib/
    cp -vsr ${book}/* $out
  '';

The rest of the release script just copies files from various build outputs that we have defined so far.

Note that this is using both static-exe (built on Linux) and osx-exe-bundle (built on Mac)! This means you can only build the release if you either have setup a remote osx builder (a pretty nifty feature of nix, which I unfortunately can’t use, since I don't have access to a Mac), or the build product must be available in a nix cache (which it is in my case, as I will explain later).

The output of this derivation is a directory with all the files I want to put in the release.

  release-zip = pkgs.runCommandNoCC "tttool-release.zip" {
    buildInputs = with pkgs; [ perl zip ];
  } ''
    version=$(bash ${release}/tttool --help|perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    base="tttool-$version"
    echo "Zipping tttool version $version"
    mkdir -p $out/$base
    cd $out
    cp -r ${release}/* $base/
    chmod u+w -R $base
    zip -r $base.zip $base
    rm -rf $base
  '';

And now these files are zipped up. Note that this automatically determines the right directory name and basename for the zipfile.

This concludes the steps necessary for a release.

  gme-downloads = …;
  tests = …;

These two definitions in default.nix are related to some simple testing, and again not relevant for this post.

  cabal-freeze = pkgs.stdenv.mkDerivation {
    name = "cabal-freeze";
    src = linux-exe.src;
    buildInputs = [ pkgs.cabal-install linux-exe.env ];
    buildPhase = ''
      mkdir .cabal
      touch .cabal/config
      rm cabal.project # so that cabal new-freeze does not try to use HPDF via git
      HOME=$PWD cabal new-freeze --offline --enable-tests || true
    '';
    installPhase = ''
      mkdir -p $out
      echo "-- Run nix-shell -A check-cabal-freeze to update this file" > $out/cabal.project.freeze
      cat cabal.project.freeze >> $out/cabal.project.freeze
    '';
  };

Above I mentioned that I still would like to be able to just run cabal, and ideally it should take the same library versions that the nix-based build does. But pinning the version of ghc in cabal.project is not sufficient; I also need to pin the precise versions of the dependencies. This is best done with a cabal.project.freeze file.

The above derivation runs cabal new-freeze in the environment set up by haskell.nix and grabs the resulting cabal.project.freeze. With this I can run nix-build -A cabal-freeze and fetch the file from result/cabal.project.freeze and add it to the repository.

  check-cabal-freeze = pkgs.runCommandNoCC "check-cabal-freeze" {
      nativeBuildInputs = [ pkgs.diffutils ];
      expected = cabal-freeze + /cabal.project.freeze;
      actual = ./cabal.project.freeze;
      cmd = "nix-shell -A check-cabal-freeze";
      shellHook = ''
        dest=${toString ./cabal.project.freeze}
        rm -f $dest
        cp -v $expected $dest
        chmod u-w $dest
        exit 0
      '';
    } ''
      diff -r -U 3 $actual $expected ||
        { echo "To update, please run"; echo "nix-shell -A check-cabal-freeze"; exit 1; }
      touch $out
    '';

But generated files in repositories are bad, so if that cannot be avoided, at least I want a CI job that checks if they are up to date. This job does that. What’s more, it is set up so that if I run nix-shell -A check-cabal-freeze it will update the file in the repository automatically, which is much more convenient than manually copying.

Lately, I have been using this pattern regularly when adding generated files to a repository:

  • Create one nix derivation that creates the files.
  • Create a second derivation that compares the output of that derivation against the file in the repo.
  • Create a derivation that, when run in nix-shell, updates that file.

Sometimes that last derivation is its own file (so that I can just run nix-shell nix/generate.nix), or it is merged into one of the other two.

This concludes the tour of default.nix.

The CI setup

The next interesting bit is the file .github/workflows/build.yml, which tells Github Actions what to do:

name: "Build and package"
on:
  pull_request:
  push:

Standard prelude: Run the jobs in this file upon all pushes to the repository, and also on all pull requests. Annoying downside: If you open a PR within your repository, everything gets built twice. Oh well.

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        include:
        - target: linux-exe
          os: ubuntu-latest
        - target: windows-exe
          os: ubuntu-latest
        - target: static-exe
          os: ubuntu-latest
        - target: osx-exe-bundle
          os: macos-latest
    runs-on: ${{ matrix.os }}

The “build” job is a matrix job, i.e. there are four variants, one for each of the different tttool builds, together with an indication of what kind of machine to run this on.

    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12

We begin by checking out the code and installing nix via the install-nix-action.

    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'

Then we configure our Cachix cache. This means that this job will use build products from the cache if possible, and it will also push new builds to the cache. This requires a secret key, which you get when setting up your Cachix cache. See the nix and Cachix tutorial for good instructions.

    - run: nix-build --arg checkMaterialization true -A ${{ matrix.target }}

Now we can actually run the build. We set checkMaterialization to true so that CI will tell us if we need to update these hashes.

    # work around https://github.com/actions/upload-artifact/issues/92
    - run: cp -RvL result upload
    - uses: actions/upload-artifact@v2
      with:
        name: tttool (${{ matrix.target }})
        path: upload/

For convenient access to build products, e.g. from pull requests, we store them as Github artifacts. They can then be downloaded from Github’s CI status page.

  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A tests

The next job repeats the setup, but now runs the tests. Because of needs: build it will not start before the builds job has completed. This also means that it should get the actual tttool executable to test from our nix cache.

  check-cabal-freeze:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A check-cabal-freeze

The same, but now running the check-cabal-freeze test mentioned above. Quite annoying to repeat the setup instructions for each job…

  package:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'

    - run: nix-build -A release-zip

    - run: unzip -d upload ./result/*.zip
    - uses: actions/upload-artifact@v2
      with:
        name: Release zip file
        path: upload

Finally, with the same setup, but slightly different artifact upload, we build the release zip file. Again, we wait for build to finish so that the built programs are in the nix cache. This is especially important since this runs on linux, so it cannot build the OSX binary and has to rely on the cache.

Note that we don’t need to checkMaterialization again.

Annoyingly, the upload-artifact action insists on zipping the files you hand to it. A zip file that contains just a zipfile is kinda annoying, so I unpack the zipfile here before uploading the contents.

Conclusion

With this setup, when I do a release of tttool, I just bump the version numbers, wait for CI to finish building, run nix-build -A release-zip and upload result/tttool-n.m.zip. A single file that works on all target platforms. I have not yet automated making the actual release, but with one release per year this is fine.

Also, when trying out a new feature, I can easily create a branch or PR for that and grab the build products from Github’s CI, or ask people to try them out (e.g. to see if they fixed their bugs). Note, though, that you have to sign into Github before being able to download these artifacts.

One might think that this is a fairly hairy setup – finding the right combinations of various repositories so that cross-compilation works as intended. But thanks to nix's value propositions, this does work! The setup presented here was a remake of a setup I did two years ago, with a much less mature haskell.nix. Back then, I committed a fair number of generated files to git, and juggled more complex files … but once it worked, it kept working for two years. I was indeed insulated from upstream changes. I expect that this setup will also continue to work reliably, until I choose to upgrade it again. Hopefully, then, things will be even simpler and require fewer workarounds or less manual intervention.

Charles StrossEntanglements!


Many thanks to Charlie for giving me the chance to write about editing and my latest project. I'm very excited about the publication of Entanglements. The book has received a starred review from Publishers Weekly and terrific reviews in Lightspeed, Science, and the Financial Times. MIT Press has created a very nice "Pubpub" page about Entanglements, with information about the book and its various contributors. The "On the Stories" section has an essay by Nick Wolven about his amazing story, "Sparkly Bits," and a fun Zoom conversation with James Patrick Kelly, Nancy Kress, and Sam J. Miller. I think the site is well worth checking out, and here's the Pubpub description of the book:

Science fiction authors offer original tales of relationships in a future world of evolving technology.

In a future world dominated by the technological, people will still be entangled in relationships--in romances, friendships, and families. This volume in the Twelve Tomorrows series considers the effects that scientific and technological discoveries will have on the emotional bonds that hold us together.

The strange new worlds in these stories feature AI family therapy, floating fungitecture, and a futuristic love potion. A co-op of mothers attempts to raise a child together, lovers try to resolve their differences by employing a therapeutic sexbot, and a robot helps a woman dealing with Parkinson's disease. Contributions include Xia Jia's novelette set in a Buddhist monastery, translated by the Hugo Award-winning writer Ken Liu; a story by Nancy Kress, winner of six Hugos and two Nebulas; and a profile of Kress by Lisa Yaszek, Professor of Science Fiction Studies at Georgia Tech. Stunning artwork by Tatiana Plakhova--"infographic abstracts" of mixed media software--accompanies the texts.

Worse Than FailureCodeSOD: Utility Functions

As a personal perspective, I don't tend to believe that mastery of a programming tool is nearly as important as mastery of the codebase and problem domain you're working on. But there are some developers who just don't want to learn the codebase or what other developers are doing.

Take Jessica's latest co-worker, who is much like some of her previous co-workers. In this case, there was a project in flight that was starting to fall behind schedule. Management did what management does in this situation: they threw warm bodies at the project and ensured that it fell further behind.

Brant was one of those warm bodies, and Brant did not want to learn what was already in the code base. He was going to do part of the JavaScript front end, he was going to rush to get it done, and he was going to copy-paste his way through.

Which led to this:

function setMailingsReceivedCountLabel(e) {
    // Implement sting prototye format so that we can use string token replacement
    if (!String.prototype.format) {
        String.prototype.format = function() {
            var args = arguments;
            return this.replace(/{(\d+)}/g, function(match, number) {
                return typeof args[number] != 'undefined' ? args[number] : match;
            });
        };
    }
    // Get values
    var recordCount = $("#mailingsGrid").data("kendoGrid").dataSource.total();
    $("#Mailings_Count").text("(" + recordCount + ")");
}

Now, a format method for strings is a useful function. It's not wrong to implement your own: you can't rely on template literals being supported by every browser. In fact, it's such a useful function that Jessica and the team had already added one to a generic file of utility functions. A more robust one, coupled with some unit tests, and, y'know, the one you should use.
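
To see what such a shim is for, here's a hypothetical call against one like the above (illustrative only, not from Jessica's codebase):

// "{0}", "{1}", ... are replaced positionally by the arguments
var label = "({0} of {1} mailings)".format(3, 42);
console.log(label); // "(3 of 42 mailings)"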

Brant had no interest in learning that there was already a function which did what he needed, so instead he implemented this one. In fact, he copy-and-pasted this blob into any method he wrote that might potentially do any sort of string formatting. I stress "might potentially", because as you can see, this method doesn't actually use his format method.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Krebs on SecurityBody Found in Canada Identified as Neo-Nazi Spam King

The body of a man found shot inside a burned out vehicle in Canada three years ago has been identified as that of Davis Wolfgang Hawke, a prolific spammer and neo-Nazi who led a failed anti-government march on Washington, D.C. in 1999, according to news reports.

Homicide detectives said they originally thought the man found June 14, 2017 in a torched SUV on a logging road in Squamish, British Columbia was a local rock climber known to others in the area as a politically progressive vegan named Jesse James.

Davis Wolfgang Hawke. Image: Spam Kings, by Brian McWilliams.

But according to a report from CTV News, at a press conference late last month authorities said new DNA evidence linked to a missing persons investigation has confirmed the man’s true identity as Davis Wolfgang Hawke.

A key subject of the book Spam Kings by Brian McWilliams, Hawke was a Jewish-born American who’d legally changed his name from Andrew Britt Greenbaum. For many years, Hawke was a big time purveyor of spam emails hawking pornography and male enhancement supplements, such as herbal Viagra.

Hawke had reportedly bragged about the money he earned from spam, but told friends he didn’t trust banks and decided to convert his earnings into gold and platinum bars. That sparked rumors that he had possibly buried his ill-gotten gains on his parents’ Massachusetts property.

In 2005, AOL won a $12.8 million lawsuit against him for relentlessly spamming its users. A year later, AOL won a court judgment authorizing them to dig on that property, although no precious metals were ever found.

More recently, Hawke’s Jesse James identity penned a book called Psychology of Seduction, which claimed to merge the “shady world of the pickup artist with modern science, unraveling the mystery of attraction using evolutionary biology and examining seduction through the lens of social and evolutionary psychology.”

The book’s “about the author” page said James was a “disruptive technology pioneer” who was into rock climbing and was a resident of Squamish. It also claimed James held a PhD in theoretical physics from Stanford, and that he was an officer in the Israeli Defense Force.

It might be difficult to fathom why, but Hawke may have made a few enemies over the years. Spam Kings author McWilliams notes that Hawke changed his name with regularity and used many pseudonyms.

“I could definitely see this guy making someone so mad at him they’d want to kill him,” McWilliams told CTV. “He was a guy who really pushed people that way and was a crook. I mean, he was a conman. That was what he was and I can see how somebody might get mad. I can also see him staging his own death or committing suicide in a fashion like that, if that’s what he chose to do. He was just a perplexing guy. I still don’t feel like I have a handle on him and I spent the better part of a year trying to figure out what made him tick.”

The father of the deceased, Hy Greenbaum, has offered a $10,000 reward to any tipster who can help solve his son’s homicide. British Columbia’s Integrated Homicide Investigation Team also is seeking clues, and can be reached at ihitinfo@rcmp-grc.gc.ca.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 22)

Here’s part twenty-two of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Planet DebianSean Whitton: Combining repeat and repeat-complex-command

In Emacs, you can use C-x z to repeat the last command you input, and subsequently you can keep tapping the ‘z’ key to execute that command again and again. If the command took minibuffer input, however, you’ll be asked for that input again. For example, suppose you type M-z : to delete through the next colon character. If you want to keep going and delete through the next few colons, you would need to use C-x z : z : z : etc. which is pretty inconvenient. So there’s also C-x ESC ESC RET or C-x M-: RET, which will repeat the last command which took minibuffer input, as if you’d given it the same minibuffer input. So you could use M-z : C-x M-: RET C-x M-: RET etc., but then you might as well just keep typing M-z : over and over. It’s also quite inconvenient to have to remember whether you need to use C-x z or C-x M-: RET.

I wanted to come up with a single command which would choose the correct repetition method. It turns out it’s a bit involved, but here’s what I came up with. You can use this under the GPL-3 or any later version published by the FSF. Assumes lexical binding is turned on for the file you have this in.

;; Adapted from `repeat-complex-command' as of November 2020
(autoload 'repeat-message "repeat")
(defun spw/repeat-complex-command-immediately (arg)
  "Like `repeat-complex-command' followed immediately by RET."
  (interactive "p")
  (if-let ((newcmd (nth (1- arg) command-history)))
      (progn
        (add-to-history 'command-history newcmd)
        (repeat-message "Repeating %S" newcmd)
        (apply #'funcall-interactively
               (car newcmd)
               (mapcar (lambda (e) (eval e t)) (cdr newcmd))))
    (if command-history
        (error "Argument %d is beyond length of command history" arg)
      (error "There are no previous complex commands to repeat"))))

(let (real-last-repeatable-command)
  (defun spw/repeat-or-repeat-complex-command-immediately ()
    "Call `repeat' or `spw/repeat-complex-command-immediately' as appropriate.

Note that no prefix argument is accepted because this has
different meanings for `repeat' and for
`spw/repeat-complex-command-immediately', so that might cause surprises."
    (interactive)
    (if (eq last-repeatable-command this-command)
        (setq last-repeatable-command real-last-repeatable-command)
      (setq real-last-repeatable-command last-repeatable-command))
    (if (eq last-repeatable-command (caar command-history))
        (spw/repeat-complex-command-immediately 1)
      (repeat nil))))

;; `suspend-frame' is bound to both C-x C-z and C-z
(global-set-key "\C-z" #'spw/repeat-or-repeat-complex-command-immediately)

Planet DebianIan Jackson: Gazebo out of scaffolding

Today we completed our gazebo, which we designed and built out of scaffolding:

Picture of gazebo

Scaffolding is fairly expensive but building things out of it is enormous fun! You can see a complete sequence of the build process, including pictures of the "engineering maquette", at https://www.chiark.greenend.org.uk/~ijackson/2020/scaffold/

Post-lockdown maybe I will build a climbing wall or something out of it...

edited 2020-11-08 20:44Z to fix img url following hosting reorg




Planet DebianKentaro Hayashi: debexpo: adding "Already in Debian" field for packages list

I've sent a merge request to show an "Already in Debian" column in the packages list on mentors.debian.net.

salsa.debian.org

At first I used an emoji, but for consistency it has been changed to a plain "Yes" or "No".

This feature is not fully merged yet, but it should make it easy for sponsors to tell at a glance whether a package is already in Debian.

Screenshot: the "Already in Debian" column

Planet DebianRussell Coker: Links November 2020

KDE has a long term problem of excessive CPU time used by the screen locker [1]. Part of it is due to software GL emulation, and part of it is due to the screen locker doing things like flashing the cursor when nothing else is happening. One of my systems has an NVidia card and enabling GL would cause it to crash. So now I have kscreenlocker using 30% of a CPU core even when the screen is powered down.

Informative NYT article about the latest security features for iPhones [2]. Android needs new features like this!

Russ Allbery wrote an interesting review of the book Hand to Mouth by Linda Tirado [3], it’s about poverty in the US and related things. Linda first became Internet famous for her essay “Why I Make Terrible Decisions or Poverty Thoughts” which is very insightful and well written, this is the latest iteration of that essay [4].

This YouTube video by Ruby Payne gives great insights to class based attitudes towards time and money [5].

Newsweek has an interesting article about chicken sashimi; apparently you can safely eat raw chicken if it's prepared well [6].

Vanity Fair has an informative article about how Qanon and Trumpism have infected the Catholic Church [7]. Some of Mel Gibson’s mental illness is affecting a significant portion of the Catholic Church in the US and some parts in the rest of the world.

Noema has an interesting article on toxic Internet culture, Japan’s 2chan, 4chan, 8chan/8kun, and the conspiracy theories they spawned [8].

Benjamin Corey is an ex-Fundie who wrote an amusing analysis of the Biblical statements about the anti-Christ [9].

NYMag has an interesting article The Final Gasp of Donald Trump’s Presidency [10].

Mother Jones has an informative article about the fact that Jim Watkins (the main person behind QAnon) has a history of hosting child porn on sites he runs [11], but we all knew QAnon was never about protecting kids.

Eand has an insightful article America’s Problem is That White People Want It to Be a Failed State [12].

Rondam RamblingsI'm proud to be an American again

,

Planet DebianSteinar H. Gunderson: plocate in backports

plocate 1.0.7 hit Debian backports.org today, which means that it's now available for use on Debian stable (and machines with zillions of files are not unlikely to run stable). (The package page still says 1.0.5, but 1.0.7 really is up.)

The other big change from 1.0.5 is that plocate now escapes potentially dangerous filenames, like modern versions of GNU ls do. I'm not overly happy about this, but it's really hard not to do it; any user can create a publicly-viewable file on the system, and allowing them to sneak in arbitrary escape sequences is just too much. Most terminals should be good by now, but there have been many in the past with potentially really dangerous behavior (like executing arbitrary commands!), and just being able to mess up another user's terminal isn't a good idea. Most users should never really see filenames being escaped, but if you have a filename with \n or other nonprintable characters in it, they will be escaped using bash's $'foo\nbar' syntax.
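
To illustrate that quoting (a generic bash sketch; plocate's exact output formatting may differ slightly):

# $'...' is bash's ANSI-C quoting: backslash escapes are expanded,
# so this creates one file whose name contains a real newline
touch $'foo\nbar'
# GNU ls can likewise escape the name instead of printing the raw newline
ls --quoting-style=shell-escape
# => 'foo'$'\n''bar'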

So, anyone up for uploading to Fedora? :-)

LongNowThe Role of Geology in US Presidential Elections

In an article in Forbes, David Bressan writes that the giant rift in the USA’s political voting blocs is in part a consequence of collisions between continental plates, the literal giant rift that used to separate the two halves of North America, and recent glacial activity:

The same region that had once been covered in ocean water, leading to the fertile Black Belt, was almost an exact replica of the districts that had voted for Clinton.

The rich coal fields in Ohio, West Virginia, Pennsylvania and Maryland formed as a result of two continents colliding some 300 million years ago. The coal fueled the economic growth of cities like Pittsburgh, Detroit, Chicago and Cleveland.

The Driftless Area is a region west of the Great Lakes that escaped glaciation during the last ice age. Farming is more difficult here. The election map shows that most counties in the Driftless Area voted Democratic in 2012. It seems that more liberal politics, combined with financial hardship experienced by the local farmers and accentuated by the poor soils, convinced them to vote for Barack Obama.

Last year, astrobiologist Lewis Dartnell made a similar point in a Conversation at The Interval:

Worse Than FailureError'd: Not So Smart After All!

"Today I learned that the time between 12 PM and 1 PM is "12:28 noon" according to CNN," Drew W. writes.

 

Robert L. wrote, "The Trump campaign website is a little bit confused as to when election day is."

 

"I changed over to playing on my Nintendo Switch from watching Netflix on my so-called 'smart' TV and apparently, the subtitles didn't get the message," writes Josh.

 

"You know how after watching a really scary horror movie, bumps in the night can leave you feeling on edge?" Matia W. wrote, "Well, seeing error messages popping up while watching a show about professional hackers is a little bit like that."

 

Eric L. writes, "Raising the capital for any of these seems like a pain. Yeah...I'll pass."

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianLouis-Philippe Véronneau: Book Review: Working in Public by Nadia Eghbal

I have a lot of respect for Nadia Eghbal, partly because I can't help but be jealous of her work on the economics of Free Software1. If you are not already familiar with Eghbal, she is the author of Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure, a great technical report published for the Ford Foundation in 2016. You may also have caught her excellent keynote at LCA 2017, entitled Consider the Maintainer.

Her latest book, Working in Public: The Making and Maintenance of Open Source Software, published by Stripe Press a few months ago, is a great read and if this topic interests you, I highly recommend it.

The book itself is simply gorgeous; bright orange, textured hardcover binding, thick paper, wonderful typesetting — it has everything to please. Well, nearly everything. Sadly, it is only available on Amazon, exclusively in the United States. A real let down for a book on Free and Open Source Software.

The book is divided in five chapters, namely:

  1. Github as a Platform
  2. The Structure of an Open Source Project
  3. Roles, Incentives and Relationships
  4. The Work Required by Software
  5. Managing the Costs of Production

A picture of the book cover

Contrary to what I was expecting, the book feels more like an extension of the LCA keynote I previously mentioned than Roads and Bridges. Indeed, as made apparent by the following quote, Eghbal doesn't believe funding to be the primary problem of FOSS anymore:

We still don't have a common understanding about who's doing the work, why they do it, and what work needs to be done. Only when we understand the underlying behavioral dynamics of open source today, and how it differs from its early origins, can we figure out where money fits in. Otherwise, we're just flinging wet paper towels at a brick wall, hoping that something sticks. — p.184

That is to say, the behavior of maintainers and the challenges they face — not the eternal money problem — is the real topic of this book. And it feels refreshing. When was the last time you read something on the economics of Free Software without it being mostly about what licences projects should pick and how business models can be tacked on them? I certainly can't.

To be clear, I'm not sure I agree with Eghbal on this. Her having worked at Github for a few years and having interviewed mostly people in the Ruby on Rails and Javascript communities certainly shows in the form of a strong selection bias. As she herself admits, this is a book on how software on Github is produced. As much as this choice irks me (the Free Software community certainly cannot be reduced to Github), this exercise had the merit of forcing me to look at my own selection biases.

As such, reading Working in Public did something to me I wasn't expecting: it broke my Free Software echo chamber. Although I consider myself very familiar with the world of Free and Open Source Software, I now understand how my somewhat ill-advised contempt for certain programming languages — mostly JS — skewed my understanding of what FOSS in 2020 really is.

My Free Software world very much revolves around Debian, a project with a strong and opinionated view of Free Software, rooted in a historical and political understanding of the term. This, Eghbal argues, is not the case for a large swath of developers anymore. They are The Github Generation, people attached to Github as a platform first and foremost, and who feel "Open Source" is just a convenient way to make things.

Although I could intellectualise this, before reading the book, I didn't really grok how communities akin to npm have been reshaping the modern FOSS ecosystem and how different they are from Debian itself. To be honest, I am not sure I like this tangent and it is certainly part of the reason why I had a tendency to dismiss it as a fringe movement I could safely ignore.

Thanks to Nadia Eghbal, I come out of this reading more humble and certainly reminded that FOSS' heterogeneity is real and should not be idly dismissed. This book is rich in content and although I could go on (my personal notes clock in at around 2000 words and I certainly disagree with a number of things), I'll stop here for now. Go and grab a copy already!


  1. She insists on using the term open source, but I won't :) 

Planet DebianSandro Tosi: QNAP firmware 4.5.1.1465: disable ssh management menu

as a good boy i just upgraded my QNAP NAS to the latest available firmware, 4.5.1.1465, but after the reboot there's an ugly surprise awaiting me

once i ssh'd into the box to do my stuff, instead of a familiar bash prompt i'm greeted by a management menu that allows me to perform some basic management tasks or quit it and go back to the shell. i don't really need this menu (in particular because i have automations that regularly ssh into the box and they are not meant to be interactive).

to disable it: edit /etc/profile and comment out the line "[[ "admin" = "$USER" ]] && /sbin/qts-console-mgmt -f" (you can judge me later for sshing as root)
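
if you prefer a one-liner, something like this should work (a sketch: it assumes the hook line appears as quoted above, that the box's sed supports -i, and a future firmware upgrade may well restore the original file):

# prefix the management-menu hook with '#' so it no longer runs at login
sed -i '\|qts-console-mgmt|s|^|# |' /etc/profile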

,

Cryptogram Detecting Phishing Emails

Research paper: Rick Wash, “How Experts Detect Phishing Scam Emails“:

Abstract: Phishing scam emails are emails that pretend to be something they are not in order to get the recipient of the email to undertake some action they normally would not. While technical protections against phishing reduce the number of phishing emails received, they are not perfect and phishing remains one of the largest sources of security risk in technology and communication systems. To better understand the cognitive process that end users can use to identify phishing messages, I interviewed 21 IT experts about instances where they successfully identified emails as phishing in their own inboxes. IT experts naturally follow a three-stage process for identifying phishing emails. In the first stage, the email recipient tries to make sense of the email, and understand how it relates to other things in their life. As they do this, they notice discrepancies: little things that are “off” about the email. As the recipient notices more discrepancies, they feel a need for an alternative explanation for the email. At some point, some feature of the email — usually, the presence of a link requesting an action — triggers them to recognize that phishing is a possible alternative explanation. At this point, they become suspicious (stage two) and investigate the email by looking for technical details that can conclusively identify the email as phishing. Once they find such information, then they move to stage three and deal with the email by deleting it or reporting it. I discuss ways this process can fail, and implications for improving training of end users about phishing.

Cryptogram California Proposition 24 Passes

California’s Proposition 24, aimed at improving the California Consumer Privacy Act, passed this week. Analyses are very mixed. I was very mixed on the proposition, but on the whole I supported it. The proposition has some serious flaws, and was watered down by industry, but voting for privacy feels like it’s generally a good thing.

Planet DebianMike Gabriel: Welcome, Fre(i)e Software GmbH

Last week I received the official notice: There is now a German company named "Fre(i)e Software GmbH" registered with the German Trade Register.

Founding a New Company

Over the past months I have put my energy into founding a new company. As a freelancing IT consultant I started facing the limitation that other companies have strict policies forbidding cooperation with one-person businesses (Personengesellschaften).

Thus, the requirement for setting up a GmbH business came onto my agenda. I will move some of my business activities into this new company, starting next year.

Policy Ideas

The "Fre(i)e Software GmbH" will be a platform to facilitate the growth and spreading of Free Software on this planet.

Here are some first ideas for company policies:

  • The idea is to bring together teams of developers and consultants that provide the highest expertise in FLOSS.

  • Everything this company will do, will finally (or already during the development cycles) be published under some sort of a free software / content license (for software, ideally a copyleft license).

  • Staff members will work and live across Europe; freelancers may live in any country that German businesses may do business with.

  • Ideally, staff members and freelancers work on projects that they can identify themselves with, projects that they love.

  • Software development and software design is an art. In the company we will honour this. We will be artists.

  • In software development, we will enroll our customers in non-CLA FLOSS copyright holdership policies: developers can become copyright holders of the worked-on code projects as persons. This will strengthen the liberty nature of the FLOSS licensed code brought forth in the company.

  • The Fre(i)e Software GmbH will be a business focussing on sustainability and sufficiency. We will be gentle to our planet. We won't work on projects that create artificial needs.

  • We all will be experts in communication. We all will continually work on improving our communication skills.

  • Integrity shall be a virtue to strive for in the company.

  • We will be honest with ourselves and our customers about the mistakes we make and the misassumptions we have.

  • We will honour and support diversity.

This is all pretty fresh. I'll be happy about hearing your feedback, ideas and worries. If you are interested in joining the company, please let me know. If you are interested in supporting a company with such values, please also let me know.

light+love
Mike Gabriel (aka sunweaver)

Planet DebianJonathan Dowland: PhD Year 3 progression

I'm now into my 4th calendar year of my part-time PhD, corresponding to half-way through Stage 2, or Year 2 for a full-time student. Here's a report I wrote that describes what I did last year and what I'm working on going forward.

year3 progression report.pdf (223K PDF)

Worse Than FailureCodeSOD: Frist Item

In .NET, if you want to get the first item from an IList object, you could just use the index: list[0]. You also have a handy-dandy function called First, or even better FirstOrDefault. FirstOrDefault helpfully doesn’t throw an exception if the list is empty (though depending on what’s in the list, it may give you a null).

What I’m saying is that there are plenty of easy, and obvious ways to get the first element of a list.

Stevie’s co-worker did this instead:

IList<Order> orderList = db.GetOrdersByDateDescending().ToList();
int i = 1;
foreach (Order order in orderList)
{
    if (i == 1)
    {
        PrintOrder(order);
    }
    i++;
}

So, for starters, GetOrdersByDateDescending() is a LINQ-to-SQL call which invokes a stored procedure. Because LINQ does all sorts of optimizations on how that SQL gets generated, if you were to do GetOrdersByDateDescending().FirstOrDefault(), it would fetch only the first row, cutting down on how much data crosses the network.

But because they did ToList, it will fetch all the rows.

And then… then they loop over the result. Every single row. But they only want the first one, so they have an if that only triggers when i == 1, which I mean, at this point, doing 1-based indexing is just there to taunt us.
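
For contrast, a minimal sketch of the more direct approach the article describes, reusing the same calls from the snippet above:

// FirstOrDefault lets the query provider fetch just the first row
// instead of materializing the whole result set with ToList.
Order first = db.GetOrdersByDateDescending().FirstOrDefault();
if (first != null)
{
    PrintOrder(first);
}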

Stevie adds: “This is a common ‘pattern’ throughout the project.” Well clearly, the developer responsible isn’t going to do something once when they could do it every single time.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianJunichi Uekawa: Sent a pull request to document sshfs slave mode.

Sent a pull request to document sshfs slave mode. Every time I try to do it I forget, so at least I have a document about how to do it. Also changed the name from slave to passive, but I don't think that will help me remember... Not feeling particularly creative about the name.

,

Planet DebianBen Hutchings: Debian LTS work, October 2020

I was assigned 6.25 hours of work by Freexian's Debian LTS initiative and carried over 17.5 hours from earlier months. I worked 11.5 hours this month and returned 7.75 hours to the pool, so I will carry over 4.5 hours to December.

I updated linux-4.19 to include the changes in DSA-4772-1, and issued DLA-2417-1 for this.

I updated linux (4.9 kernel) to include upstream stable fixes, and issued DLA-2420-1. This resulted in a regression on some Xen PV environments. Ian Jackson identified the upstream fix for this, which had not yet been applied to all the stable branches that needed it. I made a further update with just that fix, and issued DLA-2420-2.

I have also been working to backport fixes for some less urgent security issues in Linux 4.9, but have not yet applied those fixes.

Krebs on SecurityWhy Paying to Delete Stolen Data is Bonkers

Companies hit by ransomware often face a dual threat: Even if they avoid paying the ransom and can restore things from scratch, about half the time the attackers also threaten to release sensitive stolen data unless the victim pays for a promise to have the data deleted. Leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.

The findings come in a report today from Coveware, a company that specializes in helping firms recover from ransomware attacks. Coveware says nearly half of all ransomware cases now include the threat to release exfiltrated data.

“Previously, when a victim of ransomware had adequate backups, they would just restore and go on with life; there was zero reason to even engage with the threat actor,” the report observes. “Now, when a threat actor steals data, a company with perfectly restorable backups is often compelled to at least engage with the threat actor to determine what data was taken.”

Coveware said it has seen ample evidence of victims seeing some or all of their stolen data published after paying to have it deleted; in other cases, the data gets published online before the victim is even given a chance to negotiate a data deletion agreement.

“Unlike negotiating for a decryption key, negotiating for the suppression of stolen data has no finite end,” the report continues. “Once a victim receives a decryption key, it can’t be taken away and does not degrade with time. With stolen data, a threat actor can return for a second payment at any point in the future. The track records are too short and evidence that defaults are selectively occurring is already collecting.”

Image: Coveware Q3 2020 report.

The company said it advises clients never to pay a data deletion ransom, but rather to engage competent privacy attorneys, perform an investigation into what data was stolen, and notify any affected customers according to the advice of counsel and applicable data breach notification laws.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said ransomware victims often acquiesce to data publication extortion demands when they are trying to prevent the public from learning about the breach.

“The bottom line is, ransomware is a business of hope,” Wosar said. “The company doesn’t want the data to be dumped or sold. So they pay for it hoping the threat actor deletes the data. Technically speaking, whether they delete the data or not doesn’t matter from a legal point of view. The data was lost at the point when it was exfiltrated.”

Ransomware victims who pay for a digital key to unlock servers and desktop systems encrypted by the malware also are relying on hope, Wosar said, because it’s also not uncommon that a decryption key fails to unlock some or all of the infected machines.

“When you look at a lot of ransom notes, you can actually see groups address this very directly and have texts that say stuff along the lines of, ‘Yeah, you are fucked now. But if you pay us, everything can go back to before we fucked you.’”

Planet DebianMartin-Éric Racine: Migrating to Predictable Network Interface Names

A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet.

This also meant changing the settings on my router box for my new ISP.

I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts.

Migrating to Predictable Network Interface Names

Ever since Linus decided to flip the network interface enumeration order in the Linux kernel, I had been relying on udev's persistent network interface rules to maintain some semblance of consistency in the NIC naming scheme of my hosts. It has never been a totally satisfactory method, since it required manually editing the file to list the MAC addresses of all Ethernet cards and WiFi dongles likely to appear on that host, so that each would consistently get an easy-to-remember name that I could use in ifupdown configuration files.

Enter predictable interface names. What started as a Linux kernel module project at Dell was eventually re-implemented in systemd. However, clear documentation on the naming scheme had been difficult to find and udev's persistent network interface rules gave me what I needed, so I postponed the transition for years. Relocating to a new flat and rethinking my home network to match gave me an opportunity to revisit the topic.

The naming scheme is surprisingly simple and logical, once proper explanations have been found. The short version:

  • Ethernet interfaces are called en i.e. Ether Net.
  • Wireless interfaces are called wl i.e. Wire Less. (yes, the official documentation calls this Wireless Local but, in everyday usage, remembering Wire Less is simpler)

The rest of the name specifies on which PCI bus and which slot the interface is found. On my old Dell laptop, it looks like this:

  • enp9s0: Ethernet interface at PCI bus 9 slot 0.
  • wlp12s0: Wireless interface at PCI bus 12 slot 0.

An added bonus of the naming scheme is that it makes replacing hardware a breeze, since the naming scheme is bus- and slot-specific, not MAC-address-specific. No need to edit any configuration file. I saw this first-hand when I got around to upgrading my router's network cards to Gigabit-capable ones to take advantage of my new home's broadband LAN. All it took was to power off the host, swap the Ethernet cards and power on the host. That's it. systemd took care of everything else.
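
Concretely, an ifupdown stanza using a predictable name looks like this (a minimal sketch; enp9s0 is the example interface from my laptop above, your bus and slot numbers will differ):

# /etc/network/interfaces: bring the renamed NIC up via DHCP
allow-hotplug enp9s0
iface enp9s0 inet dhcp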

Still, migrating looked like a daunting task. Debian's wiki page gave me some answers, but didn't provide a systematic approach. I came up with the following shell script:

#!/bin/sh
# List the network hardware and any interface renames the kernel already did.
lspci | grep -i -e ethernet -e network
sudo dmesg | grep -i renamed
for n in $(ls -X /sys/class/net/ | grep -v lo);
do
  # Show the predictable name udev assigns to this interface.
  echo $n: && udevadm test-builtin net_id /sys/class/net/$n 2>/dev/null | grep NAME;
  # Find configuration files that still refer to the old name.
  sudo rgrep $n /etc
  sudo find /etc -name "*$n*"
done

This combined ideas found on the Debian wiki with a few of my own. Running the script before and after the migration ensured that I hadn't missed any configuration file. Once I was satisfied with that, I commented out the old udev persistent network interface rules, ran dpkg-reconfigure on all my Linux kernel images to purge the rules from the initrd images, and called it a day.

... well, not quite. It turns out that with bridge-utils, bridge_ports all no longer works. One must manually list all interfaces to be bridged. Debian bug report filed.

PS: Luca Capello pointed out that Debian 10/Buster's Release Notes include migration instructions.

Planet DebianJonathan Dowland: Amiga mouse pointer

I've started struggling to see my mouse pointer on my desktop. I probably need to take an eye test and possibly buy some new glasses. In the meantime, I've changed my mouse pointer to something larger and more colourful: the Amiga Workbench 1.3 mouse pointer that I grew up with, but scaled up to 128x128 pixels (from 16x16, I think).

Desktop screenshot with Workbench 1.3 mouse pointer

See if you can spot it

The X file format for mouse pointers is a bit strange, but it turned out to be fairly easy to convert it over.
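
For anyone wanting to do the same, X cursors are typically generated with xcursorgen; a minimal sketch (the PNG file name and hotspot coordinates here are hypothetical):

# amiga.cfg: one size variant per line, as
#   <size> <xhot> <yhot> <png-file> [ms-per-frame]
128 8 8 amiga-pointer-128.png

Running xcursorgen amiga.cfg left_ptr then produces an Xcursor file, which can be installed as ~/.icons/<theme>/cursors/left_ptr.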

An enormous improvement!

Cryptogram Determining What Video Conference Participants Are Typing from Watching Shoulder Movements

Accuracy isn’t great, but that it can be done at all is impressive.

Murtuza Jadiwala, a computer science professor heading the research project, said his team was able to identify the contents of texts by examining body movement of the participants. Specifically, they focused on the movement of their shoulders and arms to extrapolate the actions of their fingers as they typed.

Given the widespread use of high-resolution web cams during conference calls, Jadiwala was able to record and analyze slight pixel shifts around users’ shoulders to determine if they were moving left or right, forward or backward. He then created a software program that linked the movements to a list of commonly used words. He says the “text inference framework that uses the keystrokes detected from the video … predict[s] words that were most likely typed by the target user. We then comprehensively evaluate[d] both the keystroke/typing detection and text inference frameworks using data collected from a large number of participants.”

In a controlled setting, with specific chairs, keyboards and webcam, Jadiwala said he achieved an accuracy rate of 75 percent. However, in uncontrolled environments, accuracy dropped to only one out of every five words being correctly identified.

Other factors contribute to lower accuracy levels, he said, including whether long sleeve or short sleeve shirts were worn, and the length of a user’s hair. With long hair obstructing a clear view of the shoulders, accuracy plummeted.

Kevin RuddReimagine Podcast with Eric Schmidt: Democracy After the Pandemic

Podcast originally published 4 November 2020

 

Madeleine Albright (00:05):

We can’t expect miracles immediately. But there has to be an assessment of how the international system works, and also, what America’s role in the world is. I happen to believe that President Clinton was the person that said that we were an indispensable power. He said it first; I just repeated it so often it became identified with me. But there’s nothing about the word indispensable that says alone. It means that we need to be engaged and a partner, not some country that bosses everybody around and then says that we’ve been victimized. But to have a partnership that deals with what are a whole set of new problems.

Eric Schmidt (00:50):

The Coronavirus pandemic is a global tragedy, but it’s also an opportunity to rethink the world. To make it better, faster for more people than ever before. I’m Eric Schmidt, former CEO of Google and now co-founder of Schmidt Futures, and this is Reimagine, a podcast where trailblazing leaders imagine how we can build back better.

Eric Schmidt (01:17):

In 1945 the world was reeling from successive catastrophes: the previous 30 years included two world wars, the Great Depression, and a global pandemic that left hundreds of millions dead or impoverished. That summer, leaders from democratic nations around the world convened in San Francisco to reimagine global cooperation and leadership. They created a set of institutions and norms that we refer to as the liberal world order, to ensure the tragic calamities of the prior decades would never happen again. 75 years later, we find ourselves in a similar place: amid another devastating public health crisis, authoritarianism is surging, and the leaders and institutions that have historically guided us through various crises are faltering amid rampant tribalism, conflict and fear.

Eric Schmidt (02:18):

On this episode of Reimagine, former Secretary of State, Madeleine Albright, and former Prime Minister of Australia, Kevin Rudd, will help us understand the trajectory of democracy in global leadership in an increasingly unstable world order. The pandemic has deepened divisions and mistrust and set the world on a different course than it was just a short while ago. We must find a way back to the right path for peace and prosperity to flourish.

Eric Schmidt (02:48):

Joining us now, is former Secretary of State, Madeleine Albright. Secretary Albright was our nation’s 64th Secretary of State, and the first woman to lead the state department. During her illustrious career, she has helped the United States navigate many international crises, and has spent much of the last 50 years advocating for freedom around the world. Born in Prague on the cusp of World War II, Secretary Albright has also seen fascism up close. Her experiences have made her one of the worlds foremost experts on democracy and authoritarianism. She presently teaches at Georgetown and chairs the National Democratic Institute, which works to safeguard elections and promote openness and accountability in governments around the world. Secretary Albright, welcome.

Madeleine Albright (03:31):

Eric, it’s great to be with you. Thank you so much.

Eric Schmidt (03:33):

Now, you’re doing a book tour and promoting a book that you’ve just recently published called Hell and Other Destinations. Tell us about the book. What’s special about this insight?

Madeleine Albright (03:44):

Well, first of all, let me tell you, it kind of starts out by saying, people want to know how I want to be remembered. And I say, “I don’t want to be remembered, because I’m still here.” And I wanted to kind of show the things that I’ve done since I left office. And one of the things I’ve always tried to do is to make whatever I do next more interesting than what I did before. Which is a little hard if you’ve been Secretary of State. So the book is based on the fact that as we were leaving the department people were saying, “Well, what are you going to do? You can go back to teaching. You can write books. You can start a company. You can do your democracy work. You can continue with the Truman Foundation. So what do you want to do?” And I said, “I want to do it all.”

Madeleine Albright (04:32):

And so what I’m doing are all those different things, and rationalizing that they all go together and that one informs another, and that I learn an awful lot, and I have. The only problem that I’m having was that I was trying to prove that I was not old, by showing how much I do. And then all of a sudden, I’m categorized as “elderly”, and so making that point, while I’m doing something virtually, is a little bit harder. But I had fun.

Eric Schmidt (05:04):

But you were peripatetic and incredibly productive as Secretary of State, so I think this is just a character of who you are. I don’t think it’s true of before state and after state. You just work this way. This is who you are.

Madeleine Albright (05:17):

Well, I certainly love traveling, and maybe I didn’t like airports, but once I got on the plane it was nice.

Eric Schmidt (05:24):

I’ve always been interested in America’s view of fascism and our lack of understanding of kind of bad government outcomes that we have. In the United States, we assume that democracy is first, always the winner, and second, that it’s always been true. But you have personal knowledge that this is not true. And we hear about fascism, but we don’t really know what it is. Tell us in a way that we can understand, why fascism is to be fought at all costs.

Madeleine Albright (05:53):

Well, first of all, I think people throw around fascism as a term without understanding it: a fascist is somebody that you disagree with, or I often talk about a teenage boy whose father doesn’t allow him to drive and he calls him a fascist. First of all, fascism is not an ideology, it is a method for gaining control, and it is a way of controlling a population. And the way I describe it is that a fascist leader is somebody who takes the divisions in society, which happen just to exist for any number of reasons, and exacerbates them. So that it is based on the fact that a fascist leader identifies himself, and by the way, they’re all himselves, and identifies with that group at the expense of another, which is then the scapegoat that is to blame for everything and makes the divisions worse. The second characteristic of fascism is that the leader thinks that he’s above the law, and then also calls the press the enemy of the people. But it is a way to control the population ultimately. But to gain power, and control the population.

Madeleine Albright (07:06):

I decided that in order to understand fascism, I had to go back and see where it originated. And it did obviously originate with Mussolini. And what was interesting about him and how he gained power was that the Italians felt unappreciated because of the role that they had played at the end of World War I by supporting the Allies. So there was an anger and a disappointment. And then also, there were divisions in society and all of a sudden, this leader who was an outsider took advantage of those divisions and exacerbated them. The interesting part was that both he and Hitler gained power constitutionally. And I think that is also something that is worth thinking about.

Madeleine Albright (07:52):

And so then, I began to look at some of the things that I saw going on in Europe, in Hungary, and in Poland, and then in the Philippines with Duterte, and in Venezuela. So it’s not something that’s gone; it’s definitely there.

Eric Schmidt (08:07):

So when you think about fascism and you think about democracy, we obviously prefer democracy. We also have authoritarian systems, which don’t appear to me to be too fascist. So for example, China, clearly authoritarian, but it’s at least a system of governance without a lot of freedom. What’s happening with democracy? Is democracy weakening now? For a decade or so, democracy was getting quite a bit stronger.

Madeleine Albright (08:35):

First of all, by the way, I decided that I would say communists were fascist also, because they do control the system. But what I do think is true, is that democracy is a process as much as anything, and it is complicated and it takes time. And it is based on a social contract in which people gave up some of their individual rights, in order to have the government take on duties which were protective or did help the system move forward in exchange for the fact that the citizens would participate and vote and play the role that they need to do in a free society.

Madeleine Albright (09:16):

But, and this is where we have found the issues complicated, democracy depends on information in many different ways, and democracy also has to provide a system which allows people to speak freely and figure out who they are. But at the same time, also allows them to make a living. And so I always say that democracy has to deliver both in the political and in the economic field, because people want to vote and eat. But it is complicated.

Eric Schmidt (09:50):

It seems to me that people are positing what I think is a false choice between order and freedom. And it should be possible to achieve both. The narrative today about democracy has to do with the impact of the internet and social media, and the fact that specialized groups are getting weaponized, if you will, by a combination of finding each other and then exploiting vulnerabilities, loopholes, or features of the social media world, where they can get an outlandish level of impact, far greater than they would have had before. Do you believe that this is a threat to the way democracy works, or do you think that this is going to get solved relatively easily as people understand it?

Madeleine Albright (10:34):

I think it will get solved. And by the way Eric, something that you don’t know about me is that I wrote my dissertation on the role of the Czechoslovak press in 1968, because I was always interested in the role of information and political change. And the thing that happened in that was that the people actually knew what the truth was because of Radio Free Europe and Voice of America, but their censored press wasn’t printing it in any way. So they weren’t able to act on it. They couldn’t figure out how it all went together. And what was interesting was, systematically, the press became uncensored. Also, information played a huge role in what was happening with Solidarity in Poland. Which by the way, had a new form of passing on information, which was a taped cassette. So when Lech Wałęsa spoke in one factory, they could send it to another one and motivate people to be supportive.

Madeleine Albright (11:33):

And so I’ve always been fascinated by the role of information, and so I am very much fascinated by the technology that is taking place now. I do think that in order for people to participate in a democracy, they need information. That is a key to being able to be a participant who knows what is going on. The question is, and I think this is obviously something that you and others are dealing with, “How do you allow the freedom to put all kinds of information into the system and yet not have it be undercut by those who are trying to do something else with it? And how do people distinguish between what is true and what isn’t?”

Madeleine Albright (12:18):

And so I hate to be a relativist in this, but I think it is hard to figure out what the truth is these days. And therefore, just the way any professor would, I will say: read or listen to a lot of different sources and try to figure it out. But I do think that at the moment, there is an exploitation by some of the incredible advances in technology that have been made. And the question is how one has some kind of regulation without undercutting the aspect of the freedom of it. And I think that is very hard, as all of you in Silicon Valley are really experiencing.

Eric Schmidt (12:58):

It remains an unsolved problem. But let’s consider the Chinese argument, and their argument goes something like this: the West has been diseased and failing for a long time. The Chinese model, which is much more organized, much less free if you will, is more effective at producing the things that people care about. And indeed, if you look at the Coronavirus, even if you take a factor of ten discount on the numbers that they quote, there’s no question that China is largely working. The economic growth is quite strong now, there are plenty of signals that their demand is growing, while the rest of the world is still struggling with no end in sight to the impact of the virus. One scenario is that this is the beginning of the acceleration of the Chinese model, and that the democracies cannot get their act together because of the reasons that we discussed. How do you argue that, one way or the other?

Madeleine Albright (13:59):

Well, I can take the opposite view, frankly. First of all, I do think it’s worth going back to something in Chinese history. There is an anger that has created a lot of this, from the fact that China felt disrespected by the West over the years, and imposed upon by some of the Western systems, some good some bad, like the Opium War and a variety of other things, and felt that there needed to be one party. What is interesting is that we all had a theory, which turns out to have been wrong. Which is, having looked at South Korea, that had a dictatorship, and then that was disposed of and all of a sudden there was the development of a middle class, that the middle class brought with it a sense of wanting to be able to make decisions about their own lives.

Madeleine Albright (14:53):

They were doing fairly well, but having that capability of not working under a dictatorship, they then began to adopt democratic principles. So there was the thought that as China was experiencing economic growth and developing a middle class, they would also go in the direction of having a more open system. It didn’t work, because there was a question about what had happened to the Communist Party and a new leadership with Xi Jinping, who felt that he had to reinvigorate the base of the party by calling on nationalism in a very strong way, i.e. then going back and saying, “We had been limited by the imposition of Western ideas and now we’re going to do things our way.”

Madeleine Albright (15:44):

I do think that I could also argue that the Chinese system made it more difficult to deal with the virus itself in the beginning, because the people that knew about it were quickly expunged from the system and they weren’t able to speak outside about what was happening in Wuhan and how that was affecting people. And then, because of the way that they hid what was going on, we don’t have to speak about what was going on here, but the Chinese were really undermining the way of getting information out. They clearly have a better system of controlling things. Even if we were functioning better, they can tell people what to do in a way that we never can or want to do.

Madeleine Albright (16:35):

So I think it’s a system that is aggressive in the way that it sees itself and the world. It is, as I said earlier, operating off of the base of nationalism, that they were mistreated, and they still have people that would like to be doing something else. So I don’t see it as a better system. And I can’t, given my own background, see any kind of authoritarian system as one that allows for the evolution of society in a way where people feel that they want and can make decisions about their own lives. I can see where it is a competitive system, because at the moment, we are totally disorganized. They have somehow managed also, to get some control over the virus. They have no compunctions. And it isn’t just tracing, as far as the virus is concerned, but it’s literally having images of everybody and knowing where people are and what people are doing in society. So once the virus is dealt with, it will be hard to get rid of the control system that has been established by the Chinese Communist Party.

Eric Schmidt (17:48):

I agree with that fear. It turns out the ranking system and the rating system can clearly be used for other forms of social oppression as well as of course, tracking the spread of the disease. Madame Secretary, you mentioned earlier a little bit about fascism and that they were always men. Why is it that most of the successful governments dealing with these problems seem to be headed by women now?

Madeleine Albright (18:15):

Well, I’ve been asked that question and I’ve tried to analyze it, and it is very interesting. First of all, I do think that women have a way of worrying about how other people are doing (and these are generalizations), and are caregivers. I think that one of the aspects is that women are better at multitasking, which allows there to be peripheral vision, to see where the problems are coming from and look at them as ways to solve the problem rather than blaming it on somebody else. I think also, there is an attempt to tell the truth to people and not try to hide how to deal with it, and not to dominate, but to really use the various parts of their governments to spread the word.

Madeleine Albright (19:10):

And frankly, I have also made clear that women do better at fighting fascism. Because again, it is not trying to divide people. Mothers do not like to have one set of their children arguing with another. And I think that things are not based so much on ego. The countries that have been successful are Taiwan and New Zealand and Germany, and then Norway and Sweden, Iceland. And a lot has to do with having good communication between the head of state and the people, and trying not to treat them as if they can be totally manipulated, but to level with them and say, “You need to be part of the solution.” And they actually believe in science too, that helps.

Eric Schmidt (20:01):

You were the first female Secretary of State for this country, what advice would you have for Kamala Harris if she were to become the first female Vice President?

Madeleine Albright (20:09):

Well, first of all, it’s an honor to be the first but it’s not the easiest to be first.

Eric Schmidt (20:15):

Yes.

Madeleine Albright (20:16):

Because you are constantly being compared with your predecessors, and there are those, and I’ll say this in my own case, who wonder how I ever got the job. And I have to tell you, when my name came up to be Secretary of State, there were people who said, “Well, Arab countries will not deal with a woman Secretary of State.” And so the Arab ambassadors at the UN got together and said, “We’ve had no problems dealing with Ambassador Albright, we won’t have any trouble dealing with Secretary Albright.” I had more problems with the men in our own government.

Eric Schmidt (20:51):

Oh my God. Really?

Madeleine Albright (20:52):

And partially, it had to do with the fact that they had known me too long. I had had them over for dinner, where I helped to pass the plates around. I had been a carpool mother. I was good friends with their families. And then, and I’m sure that this will also happen to Senator Harris, many of them thought, well, why weren’t they in the job when they should be the ones doing it? So I think there will be issues. I think also, that one has to be conscious of the fact that you are also being judged by other women. And I think we have a tendency to be very critical of each other, judgmental, and then also, many times, project our own sense of inadequacy on other women.

Madeleine Albright (21:40):

And that is partially what I was writing about in this book. Because the most famous statement I ever made was that, “There’s a special place in hell for women who don’t help each other,” which came out of my own experience. So when I was writing that dissertation I was talking about, there were other women who said, “Why aren’t you home with your children or in the carpool line?” And then, and this is very germane to your question just now, I was Geraldine Ferraro’s foreign policy advisor, and traveled with her in 1984, when she was the first woman to be on a national ticket. And we were somewhere and a woman came up to me and said, “How can she deal with a Russian? I can’t deal with a Russian.” Well, nobody was asking this woman to deal with a Russian.

Madeleine Albright (22:26):

So I think that Kamala will also be judged as to whether a woman could be doing the job that she is doing. So I think we do need to be supportive of each other. That has sometimes been interpreted to mean that I say, “Women have to vote for each other.” I have never said that. I do think however, we need to be supportive of each other.

Eric Schmidt (22:49):

On the COVID response, you’ve actually written extensively about how we need to reorganize ourselves and in particular, around international responses. You’re uniquely, I think, concerned about the structure of the world going forward, after this is hopefully over. I was reading about this, you talked about additional resources for low income countries, especially Latin America and Africa, conflict areas where the disease is going to be terrible, but more importantly, they’re in conflict anyway, and then support democracy and good governance in general. Is that going to happen? How will it happen? How will you make that happen from your position?

Madeleine Albright (23:31):

Well, let me just say, one of the things that I have been very conscious of is, we are operating with international organizations that were created, most of them in 1945, at the end of World War II. And they do need refurbishing. They need updating in a number of different ways. So that’s number one. But I also think that we have to recognize that the threats that are out there now know no borders. So whereas the virus might have started in China, it has definitely spread; climate change is another issue that is multinational, as is nuclear proliferation. So there are a number of aspects that have to be considered multilaterally. And by the way, Americans don’t like the word multilateralism, it has too many syllables and it ends in an ism, but the bottom line is that some of the issues can only be solved by more than one country. So that is for starters.

Madeleine Albright (24:31):

I think that what has to happen is to recognize the fact that the virus has hit different countries in different ways, and countries have their own way of dealing with it. And part of the issue, as you raised it, is that the developing countries have been working very hard in terms of dealing with some of their economic issues as well as their governance issues. And this is hitting them very hard now in terms of how they deal with the combination of environmental problems that pushes people to move into refugee camps, or how to deal with the various struggles that are going on, with not enough in the way of resources. If they are told to wash their hands every five minutes, they don’t have enough water to drink. So one has to consider what the issues are.

Madeleine Albright (25:30):

I also do believe that the international system has the capabilities of helping them economically as well as with advice. And we have done that in other cases, in terms of being able to control smallpox, or working also on the control of polio or, later, Ebola. But the system has failed on dealing with COVID. And it’s partially because of what the Chinese didn’t do, and then what they did do; I do think that they have contributed a lot more than was expected to the World Health Organization, and there are politics everywhere and it all needs to be fixed in some form or another. But also, the fact that the United States has not seen it as a threat, and has not recognized the fact that not only does the virus know no borders, but that its effects will also affect our economic policies, trade, what can be done, and how people can exist within their countries, and whether it is then contributing to a deficit in democracy. Because we are not the best example at the moment.

Madeleine Albright (26:46):

So there are an awful lot of things that have to happen, we can’t expect miracles immediately. But there has to be an assessment of how the international system works and also what America’s role in the world is. I happen to believe that President Clinton was the person that said that we were an indispensable power. He said it first, I just repeated it so often it became identified with me. But there’s nothing about the word indispensable that says alone. It means that we need to be engaged and a partner. Not some country that bosses everybody around and then says that we’ve been victimized, but as a partnership that deals with what are a whole set of new problems.

Eric Schmidt (27:32):

A few weeks ago, you wrote an op-ed about all of this, talking about the American election. And you wrote, and I’ll quote, “Mr. Biden, if elected, will inherit a country diminished by his predecessor’s search for greatness in all the wrong places. The new president’s task will be daunting: to reassure allies, reassert leadership on climate change and world health, forge effective coalitions to check the ambitions of China, Russia, and Iran, and establish the U.S.’s identity as a champion of democracy.” Do you believe an incoming Biden presidency will be able to do this?

Madeleine Albright (28:08):

I do believe. I don’t think that it can happen all at once. And I also believe the opposite, that another four years of Trump will make our situation impossible in so many different ways. I really do think that another four years of this will be a disaster. There’s no other way to state that. I have been around enough and even now, virtually, to think that it is un-American in every single way, and we are part of the major issue in the functioning of the world.

Madeleine Albright (28:40):

But I do think that Vice President Biden is uniquely qualified, given his experience, to deal with a variety of these issues. He has seen how the system can work in terms of the international aspect of it. He believes in, and he’s talked about, having a summit of democracies, which would really look at best practices and what can be done. He also, I think, has talked about the power of our example, that I mentioned, just generally. But I think we have to also recognize that it’s going to take a certain amount of humility. We can’t all of a sudden say, “Okay, we’ve had the election and now we’re in charge again.”

Madeleine Albright (29:24):

I think it is going to take a deliberate effort to explain where we are, the issues that we’ve had. Then in fact, also spend time as a partner trying to sort out how to generally behave in this 21st century, and think about how technology can be our partner and our friend, how we can acclimate ourselves to… I don’t think anything’s going to be the same after this whole pandemic. And that we need to sort out what the tools are that we have, in order to have a functional world, where we do not divide people more, and where the United States does have a partnership role, and understand that our domestic situation can only be made better in partnership with others.

Madeleine Albright (30:18):

So it’s a very big assignment, there’s no question about that. And my last foreign trip, frankly, was to go to Munich, for the Munich Security Conference. And we were a joke, because Pompeo and Esper were there, and the way they talked about the United States was just totally out of la-la land. And the other countries, who were looking at what some of the solutions could be, were concerned. And I think that we need to get a reality check about the way we are viewed. And by the way, one of the things that we need to go back and look at is, how did this all happen? And the best quote in my book on fascism is from Mussolini: “If you pluck a chicken one feather at a time, nobody notices.” So there has been an awful lot of feather plucking and we need to either get a new chicken or stop the feather plucking.

Eric Schmidt (31:15):

Madame Secretary, I want to congratulate you on your new book which is called, Hell and Other Destinations: A 21st-Century Memoir. Thank you again, I look forward to your next book and the product of your great work at Georgetown.

Madeleine Albright (31:28):

Thank you very much. I’ve enjoyed being with you Eric, and what you’re doing in your podcast.

Eric Schmidt (31:32):

The primary goal of a democracy is to keep its people safe and get them to be prosperous. Our democracies have failed on both parts of that so far. We accept that democracies are really groups of people who are lobbying and shaping information, and so forth. But ultimately, great leaders should emerge, leaders that somehow can judge where the risks are and make the right balance of trade-offs, so that the society can, at the end of the transaction, be more prosperous, safer, and so forth. Where will those leaders come from? They’re not going to come from leaders who spend all their day testing their popularity, and they’re not going to come from leaders that are beholden to special interests. They’re going to come from the leaders of the old time. The ones who started with a principle of what they were trying to do and stuck to it, a principle around greatness, and success, and safety, and so forth. The leaders who choose to pander to the crowds, to ignore facts, and to focus only on themselves and their own narcissism are destined for a terrible history.

Eric Schmidt (32:48):

Secretary Albright’s experience is invaluable as we lay the foundation for the next chapter of international coexistence. Our next guest, former Australian Prime Minister, Kevin Rudd, will help us continue to look toward the future by helping us understand one of the growing forces shaping world affairs, China. Prime Minister Rudd is an expert on China and currently serves as the president of the Asia Society Policy Institute in New York City. As Prime Minister of Australia from 2007 to 2010, and then in 2013, Kevin was an active leader in global affairs. He ratified the Kyoto Protocol and committed Australia to decreasing carbon emissions. On the domestic front, he helped Australia survive the global financial crisis as the only major developed country to not slip into a recession. Among many other accomplishments, he delivered Australia’s first national apology to indigenous Australians as his first act as prime minister, and made significant investments in schools and education. Prime Minister Rudd, thank you so much for being here with me.

Kevin Rudd (33:50):

It’s good to be with you, Eric.

Eric Schmidt (33:51):

So let’s look at what happened with Australia and COVID. As best I can tell, the COVID crisis accelerated a break between Australia and China. Can you explain how this break happened, and how the positioning of COVID and China now feels if you’re in Australia?

Kevin Rudd (34:11):

I think the first thing is, as you and I both know, that China has significantly changed. Xi Jinping’s China is radically different from the China before 2012, 2013. It’s certainly more assertive in terms of its international policy across the board. And so that’s been building over the last six or seven years. Plus, the second point is this: being a Western country located in the East, we’ve kind of been the first Western country down the Chinese mineshaft (that is, the first Western canary down the Chinese mineshaft), so we’ve experienced first and upfront a lot of the direct challenges in terms of the ultimate tension between economic policy and security policy. Australia is one of America’s oldest allies. China takes, would you believe, more than one third of Australian total exports. And of course, we’re from radically different human rights traditions. So for those reasons, it’s structural.

Kevin Rudd (35:05):

And finally, what’s happened most recently, I think, is because we had the eruption of the virus coming out of China, we had its impact on all countries in the world, including the horror that unfolded in the United States, and a more manageable problem here in Australia. But still, big questions in the mind of the Australian public as to how this thing came about in the first place. Put all those things together with Australian advocacy for an international inquiry into the origins of the Coronavirus, and it adds up to a cocktail of a deeply negative state of the Australia-China relationship.

Kevin Rudd (35:46):

And one final point is, what our Chinese friends have been doing with various American allies around the world, and friends around the world, is kind of to make an example of them. You’ve seen that with the Canadians, over the Madame Meng affair on Huawei. You’ve seen it recently with the Swedes, who have had their own human rights challenges with China concerning various of their Chinese-Swedish citizens. You now see it, of course, with emerging problems for the British over Huawei. And then there’s the Australians. So I think what tends to happen is that individual countries are singled out for particular treatment if they don’t comply with China’s foreign policy wishes, in order to set examples for the rest.

Eric Schmidt (36:31):

Well, it’s interesting that Australia was the first to call for an independent investigation of what was going on, which ultimately the WHO took on, and Australia and the current prime minister pushed very hard. What penalty has China extracted from Australia today, in your opinion?

Kevin Rudd (36:54):

Well, the complexity of this is a bit like this, I suppose. Number one, as a middle power like Australia, to call for such an independent investigation of the origins of the Coronavirus, it’s usually helpful to hunt in packs. By which I mean, bring a Coalition of the Policy Willing with you. What the Australian government did was go out there and unilaterally call for this, which makes it much easier for the Chinese to then single you out.

Kevin Rudd (37:18):

The second point is, just for the clarity of the record, that the independent inquiry into the origins of the virus is somewhat different to what we ended up with, with this WHO investigation into, effectively, the WHO’s performance and not much beyond that. But it is something. And I suppose on the key question of how Australia has been singled out, I’d point to three measures. One is travel warnings to Chinese tourists not to come back to Australia because it’s unsafe. Two, warnings to Chinese students studying in Australia that it’s also unsafe, because of alleged racist reaction to Chinese in Australia. And number three, in specific commodity areas, like Australian barley, Australian beef, and potentially Australian wines, the Chinese have used various so-called quarantine and WTO-related measures to effectively switch their sources of supply. And ironically, American suppliers are moving into some of those opportunities. So there you go. That’s the background.

Eric Schmidt (38:30):

But building on this, you have been critical of the American response. I’m quoting you: “America would have mobilized the world, but in this time, in America’s absence, no one did.” And indeed France convened the G7, and the G20 summit was convened by Saudi Arabia, and so forth. Do you have a view now of this that’s different? Do you see any change in the American role? Is it getting worse or better from your perspective?

Kevin Rudd (39:02):

If you’re concerned about the stability and effectiveness of the global rules-based order, which, through painstaking leadership, Americans, together with their friends and allies, have put together out of the ashes of the Second World War, then you’ve got to stand back and look at the policies and posture and actions of the Trump administration and just kind of scratch your head. So take the COVID-19 crisis: yes, it’s been a domestic challenge for all of us. But when you have a monumental global assault on public health and a global assault on the economy and employment in virtually all countries, then the instantaneous response for those of us who are friends and allies of the United States, and others, is to look for American global leadership.

Kevin Rudd (39:51):

Instead, what we found with Trump was the guy behaving domestically, as I read recently, like some 19th century quack apothecary, recommending kind of unbelievable medical treatments for this condition. But when it came to global action, either the global provision of PPE, personal protective equipment, or global leadership on vaccine development, et cetera, then the America we’ve come to know and respect, and most of us to love over the decades, was just not there.

Kevin Rudd (40:28):

So this creates a significant vacuum in the mind of global public opinion. And this fall, as we look forward to the next presidential election, if Joe Biden’s elected, it’s what I’ve described, in stuff I’ve written recently for Foreign Affairs magazine, as kind of the last-chance saloon for American global leadership. We want America back. We want you to work closely with your allies. There’s so much to be done in the world, not just on pandemics but climate and the rest, and having America back in the saddle is what we’d really like to see. But it is, frankly, a last-chance saloon to get this done.

Eric Schmidt (41:05):

In the Foreign Affairs piece that you recently published, you actually argue that both China and the U.S. will emerge “severely damaged,” I think is the phrase. And severe damage is a pretty strong statement. And it seems to me that it’s a race to the bottom, whether it’s the politics or the change in the politics inside of China; Xi is, as you pointed out, much more authoritarian, and Trump is a different kind of leader than our previous presidents, as everyone has established. Describe the weakening and then tell us how you would fix it, on both sides.

Kevin Rudd (41:45):

Wow, there’s a big question, or a couple of big questions. Firstly, on the diagnostics, let’s just look at the United States first. There’s a huge economic hit on the United States, which we will not know the full dimensions of for several years. And that’s going to affect the future budget resilience of the United States as well. Ultimately, America can only print money for so long. Ultimately, there has to be a rebalancing of the system. And I say that as someone who has a deeply Keynesian approach for how you fix economies in a time of systemic international crisis. But the truth is, the objective truth is, it’s a massive economic hit and it’s a massive budgetary hit. Which obviously then has implications for what you can do with the government in the future and funding the future of the U.S. [inaudible 00:42:39] and the rest.

Kevin Rudd (42:40):

But do you know something? There’s also been this hit on American soft power. What we talked about before, Eric, was American global leadership. And frankly, your friends and allies around the world are just holding their breath, waiting for November, for a decision by the American people as to what leadership they want America to exercise in the world in the future as well. But in the meantime, there’s been a huge reputational hit on U.S. standing.

Kevin Rudd (43:07):

But what I find is, people often then go into an automatic equation which says, “America down, therefore China up.” Well, not so. The Chinese economy has taken a huge hit itself. We really have to go back to the Cultural Revolution to see such disastrous economic numbers as we’ve seen emerge from China in recent quarters. And therefore, that flows through to their own budgetary capacity to fund the Belt and Road Initiative, to fund what they’re doing through their military, to fund their expanding international development program, et cetera. And so it becomes a huge economic and financial equation for the Chinese state as well. And remember, China is, I wouldn’t say wholly dependent, but significantly dependent on the global economy through trade and investment flows as a key part of their formula for long-term, sustainable growth.

Kevin Rudd (44:06):

So what I say emerges as a result of that, post-COVID, whenever post-COVID comes, Eric, is that we’re likely to see these two wounded elephants roaming around in the global living room, and as a consequence, we no longer have anyone effectively leading the global order and the systems and institutions of international governance, which have kept us basically outside of barbarism for the last three quarters of a century. And what I see is these institutions dying the death of a thousand cuts, and now increasingly becoming, as it were, balkanized into pro-American and pro-Chinese camps, with neither of the superpowers willing or able to exercise effective leadership. So it leads to what I’ve described as an emerging international anarchy.

Kevin Rudd (44:54):

So what can you do about it? Two things, perhaps three. Start with those of us who are not Americans and Chinese. What I’ve written about extensively in the Economist and elsewhere is that it’s time for a Coalition of the Policy Willing, what I call the M7 or the M10, the middle power 7 or the middle power 10: countries like France, Germany, the U.K. (once it decides what it wants to do in the future), maybe the Swedes, the Japanese, the South Koreans, the Australians, the Indians, the Canadians, and the Mexicans. These are all democracies, they’re all middle powers, and the question is how you exercise through them financial, diplomatic, and political measures to triage the international system until we have the reestablishment of a level of geopolitical equilibrium involving the great powers.

Kevin Rudd (45:49):

And as for the United States, as I said, it really hinges on November. If Americans decide they wish to be the world’s leaders in the future, albeit perhaps in a different way than in the past, and not simply a replication of past forms, then the world is looking to see what America under Biden would do. And that means fixing your house at home (Black Lives Matter, but basically the inequality which drives it) and rediscovering your confidence in the world.

Kevin Rudd (46:19):

And as for China, China’s not a done deal under Xi Jinping. Now, you said before, in your intro to this part of our conversation, Eric, that you and I share many friends. And let’s say there are world views in China quite different to the ones we see articulated by Xi Jinping’s administration. And these are essentially internationalizing world views. These are more liberal internationalist world views. These are more open economy and increasingly open society world views, though with a question mark on the continued centrality of the Chinese Communist Party in a one-party state. And so it really depends what shakes down in Chinese politics in the lead-up to the 2022 20th Party Congress, and whether Xi Jinping easily secures his reappointment.

Eric Schmidt (47:06):

My final question: you’ve spent your whole life studying China, you studied the language, you did it academically, you wrote a PhD on the dissidents. Did you foresee the rise of China in this way, the new strong, powerful China? When did you know this was the path?

Kevin Rudd (47:29):

Did I actually see China turning out this way? I think most of us who lived and worked in Beijing as I did in the 1980s, when I was a Junior Woodchuck in the Australian Embassy back then, analyzing the earliest days of political and economic reform in the Chinese system, had a degree of optimism that China would evolve in the direction of a more open economy, a more open society, and perhaps, in time, open politics. I think though, having been myself in Tiananmen Square about a week or so before the tanks moved in, and having spent the better part of the week prior to that walking around and talking to the students in the square, many of whom were subsequently killed, I was always deeply skeptical about whether a Leninist party, like the Chinese Communist Party, would ever voluntarily surrender political power, as we saw with the combination of Glasnost and Perestroika in the then Soviet Union.

Kevin Rudd (48:38):

So I’ve seen China as moving in the direction of certainly a more open economy, because they don’t want to return to poverty. I see that as generating the social pressures that you and I have both experienced in China, in people wanting more freedom in their personal lives. But to be honest, I’ve always been skeptical as to whether the communist party, being deeply rooted in its Leninist traditions, would ever see it as in its self-interest to hand over power to a more open, elected political entity. The Chinese communist party calls this the theory of “peaceful transitionism,” and it’s something which the communist party regards internally as political enemy number one. So yes, I saw China becoming more open, but always with a big doubt in my mind, having been in Tiananmen way back when, 30 years ago now, that it would ever voluntarily open its politics to the sorts of transitions we’ve seen elsewhere.

Eric Schmidt (49:46):

Thank you Prime Minister Kevin Rudd, you’re incredibly insightful on all such matters. Thank you again.

Kevin Rudd (49:53):

Thanks for having me on your podcast, Eric.

Eric Schmidt (49:58):

Where are we now? The liberal world order is not as free, global, or organized as it could be, 75 years after the democratic nations created it. COVID has deepened fissures in the international system and accelerated our slide toward anti-democracy. In a pandemic, we have not seen tremendous leadership out of the largest democracies. Instead, we’ve seen compromise, and in compromise comes death, because they have not figured out how to collectively manage both health and economic growth. It’s a false choice to tell people to choose between health and economic growth; you have to solve both at the same time.

Eric Schmidt (50:40):

75 years ago, not just the winner of the war, but the leader of the free world, the United States, set the global world order, set the rules, set the way that the institutions would work, and set a style of approach of solving problems. Today, the United States has relegated that role to others. That loss of leadership means that the world does not have a natural organizational point. It’s probable that the world will devolve a bit, becoming a little bit more confusing. And during a pandemic, you need strong centralized leadership as opposed to confusion and lack of leadership. The most important thing now with democracies, is to recognize that democracies have a certain shape and a certain set of values and to restate them, and to call out behaviors that are inconsistent with democratic values, and strengthen those democratic values.

Eric Schmidt (51:35):

I’m quite convinced that democracies with strong values and a lot of voter participation, will do just fine. The most important thing in our democracy is to increase voter participation so that people have a share in the outcome. Study after study indicates that generations that don’t participate, don’t buy into the leadership, they don’t buy into the decisions, they don’t have a shared sense of the outcome, and they ultimately become troublemakers. Over and over again, we want very high participation and I think we’re going to get it this time.

Eric Schmidt (52:08):

Secretary Albright and Prime Minister Rudd have helped us understand some of the major past, present, and future forces shaping the story, but thankfully, the story’s not over. We must reimagine democracy and global leadership for a hyperconnected and technological world. We must reaffirm liberal democracy as the most fair and effective form of governance. And we must call on the nations that uphold these human values and rights to steer the international system through this century and beyond. We’ve done this before; I know we can do it again.

Eric Schmidt (52:40):

On the next episode of Reimagine, we’ll finish our season by reimagining our lives, planet, and universe with astrophysicist, Neil deGrasse Tyson.

The post Reimagine Podcast with Eric Schmidt: Democracy After the Pandemic appeared first on Kevin Rudd.

Kevin RuddNikkei Asia: China, Japan and South Korea have good news for planet Earth

Published in Nikkei Asia on 4 November 2020

No matter who is declared the winner of the U.S. presidential election, Asia’s pathway to becoming a carbon-neutral continent is now increasingly clear.

Six months ago, Asia was lagging desperately behind the rest of the world, including South America and Africa, in its commitment to achieving net zero emissions by midcentury. Only the governments of New Zealand, plus the Marshall Islands and Fiji as the usual vanguards of international climate leadership, had made such a commitment and — importantly — also enshrined it in domestic legislation.

The recent groundbreaking commitments by China, Japan, and South Korea mean the three largest economies in East Asia now have clear pathways to decarbonization by mid-century. In terms of Asia’s G-20 membership, only India, Australia and Indonesia now lag behind.

Importantly, Japan and Korea’s announcements will also help put pressure on China to hopefully reach carbon neutrality closer to 2050 — around the time of the 100th anniversary of the founding of the People’s Republic of China — and to achieve net zero greenhouse gas emissions a decade later. These pathways remain an open debate in Beijing’s political circles, including in the wake of last week’s Fifth Plenum, and as preparations continue toward their next Five Year Plan.

We may see further signals from China on this by the time of the UN Secretary-General’s event to celebrate the 5-year anniversary of the signing of the Paris Agreement on December 12, and the world will certainly be watching closely. More so if Joe Biden is set to move into the White House the following month, meaning close to 60% of the world’s carbon emissions will then be from countries committed to net zero emissions.

Beyond the symbolism of these political commitments, they are first and foremost massive market signals. This is especially the case for China, Japan and Korea’s major trading partners, including their largest import markets for coal. But, it is important to note, they are not out of step with the direction of Asia’s biggest companies in recent years.

For example, the Thai conglomerate C.P. Group, one of the world’s largest agri-food producers, had already committed to net zero emissions. In recent days, Malaysia’s Petronas — the region’s largest oil and gas producer — joined them. Even BHP Billiton in my own country — one of the world’s largest mining companies and no fan of climate action — has adopted the same goal.

These announcements reflect what is happening at a subnational level in Asia. In the last year, Asia has outpaced the rest of the world in terms of commitments by cities and regions to net zero emissions with Tokyo, Wuhan, Hong Kong and eight Australian states and territories all joining the list. Taken together, they alone represent over 223 million people, or 10% of the region’s population. This leadership has been a key part of why the approach of these three national governments has now shifted. The challenge now for the region is threefold.

First, at a political level, to holistically embrace the vision of becoming a carbon-neutral continent in the same way Europe has done. This will be a much harder enterprise than it has been in Europe, even with some of their coal-dependent economies and right-wing governments, but it is not impossible.

Key to this will be driving consideration of more national-level commitments to net zero emissions, especially among the 10-member Association of Southeast Asian Nations, which represent more of a mixed bag in this regard. This could be a key area for cooperative regional leadership between China’s President Xi Jinping, Japan’s Prime Minister Yoshihide Suga, and South Korea’s President Moon Jae-in, including in the lead-up to next year’s COP26 Climate Conference in Glasgow.

Second, these governments must put their money where their mouths are and stop underpinning the development of carbon-intensive infrastructure — especially coal-fired power plants — across the rest of the region. Japan and Korea have taken important steps in this regard recently, but there is more still to be done. China’s Belt and Road Initiative is obviously a particular concern in that context, especially when Chinese investment, development finance, and support via equipment or personnel is taken together.

Third, these governments should align their short-term actions with their long-term vision. In China, Japan and South Korea, the challenges to do so may be different, but the problem is the same. Unless each of these three countries can also enhance their Paris targets for 2030 by the time they get to COP26, the depth and sincerity of their long-term commitments will come increasingly under the spotlight. For China, this must mean peaking emissions by 2025, accelerating action in other areas that they committed to do in Paris, and getting on a pathway to phase out coal by 2040. For Japan and Korea, it must mean phasing out coal even sooner — by 2030 — and seriously ramping up the share of renewables in their energy mix.

There is clearly a new wave of climate leadership emerging across Asia. The main question now for the region is whether it is able to ride that wave successfully, or whether its own actions in the short term, or lack of wider regional momentum, risks bringing it to a shuddering halt as the rest of the world moves forward.

The post Nikkei Asia: China, Japan and South Korea have good news for planet Earth appeared first on Kevin Rudd.

Worse Than FailureAnnouncements: What The Fun Holiday Activity?

The holidays are a time of traditions, but traditions do change. For example, classic holiday specials have gone from getting cut down for commercials, to getting snapped up by streaming services. Well, perhaps it's time for a new holiday tradition. A holiday tradition which includes a minor dose of… WTF.

We're happy to announce our "Worse Than Failure Holiday Special" Contest. This is your chance to submit your own take on a very special holiday story. Not only is it your chance to get your place in history secured for all eternity, but also win some valuable prizes.

What We Want

We want your best holiday story. Any holiday is valid, though given the time of year, we're expecting one of the many solstice-adjacent holidays. This story can be based on real experiences, or it can be entirely fictional, because what we really want is a new holiday tradition.

The best submissions will:

  • Contain a core WTF, whether it's a bad boss, bad technology decisions, or incompetent team members
  • Prominently feature your chosen holiday
  • End with a valuable moral lesson that leaves us feeling full of holiday cheer

Are you going to write a traditional story? Or maybe a Dr. Seussian rhyme? A long letter to Santa? That's up to you.

How We Want It

Submissions are open from now until December 11th. Use our submission form. Check the "Story" box, and set the subject to WTF Holiday Special. Make sure to fill out the email address field, so we can contact you if you win!

What You Get

The best story will be a feature on our site, and also receive some of our new swag: a brand new TDWTF hoodie, a TDWTF mug, and a variety of stickers and other small swag.

The two runners-up will also get a mug, stickers, and other small swag.

Get writing, and let's create a new holiday tradition which helps us remember the true meaning of WTFs.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianChristian Kastner: qemu-sbuild-utils 0.1: sbuild with QEMU

qemu-sbuild-utils, which were recently accepted into unstable, are a collection of scripts that wrap standard features of sbuild, autopkgtest-build-qemu and vmdb2 to provide a trivial interface for creating and using QEMU-based environments for package building:

  • qemu-sbuild-create creates VM images

  • qemu-sbuild-update is to VM images what sbuild-update is to chroots

  • qemu-sbuild wraps sbuild, adding all necessary (and some useful) options for autopkgtest mode.

Here's a simple two-line example for creating an image, and then using it to build a package:

$ sudo qemu-sbuild-create -o simple.img unstable http://deb.debian.org/debian

$ qemu-sbuild --image simple.img -d unstable [sbuild-options] FOO.dsc

That's it.

Both qemu-sbuild-create and qemu-sbuild automate certain things, but also accept a number of options. For example, qemu-sbuild-create can install additional packages either from one of the APT sources, or .deb packages from the local filesystem.

qemu-sbuild will pass on every option it does not consume itself to sbuild, so it should mostly work as a drop-in replacement for it (see the Limitations section below for where it doesn't).

The created images can also be used for running autopkgtest itself, of course.
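For instance, something along these lines should work, using autopkgtest's QEMU virtualization backend (a sketch; adjust the image path and test source to your setup):

$ autopkgtest FOO.dsc -- qemu simple.img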

Advantages

Excellent isolation. One can go nuts in an environment, change or even break things, and the VM can always simply be reset, or rolled back to an earlier state. Snapshots are just terrific for so many reasons.

With KVM acceleration and a fast local APT cache, builds are really fast. There's an overhead of a few seconds for booting the VM on my end, but that overhead is negligible in comparison to the overall build time. On the upside, with everything being memory-backed, even massive dependency installations are lightning fast.

With the parameters of the target environment being configurable, it's possible to test builds in various settings (for example: single-core vs. multi-core, or with memory constraints).

Technically, it should be possible to emulate, on one host, any other guest architecture (even if emulation might be slow because of missing hardware acceleration). This would present an attractive alternative to (possibly distant and/or slow) porter boxes. However, support for that in qemu-sbuild-utils is not quite there yet.

Limitations

The utilities are currently only available on the amd64 architecture, for building packages in amd64 and i386 VMs. There are plans to support arm64 in the near future.

qemu-sbuild-create needs root, for the debootstrap stage. I'm looking into ways around this (by extending vmdb2). In any case, image updating and package building do not need privileges.

autopkgtest mode does not yet support interactivity, so one cannot drop into a shell with --build-failed-commands, for example. The easy workaround is to connect to the VM with SSH. For this, the image must contain the openssh-server package.

Alternatives

I looked at qemubuilder, but had trouble getting it to work. In any case, the autopkgtest chroot mode of sbuild seemed far more powerful and useful to me.

vectis looks incredibly promising, but I had already written qemu-sbuild-utils by the time I stumbled over it, and as my current setup works well for me for now and is simple enough to maintain, I decided to polish and publish it.

I'm also looking into Docker- and other namespace-based isolation solutions (of which there are many), which I think are the way forward for the majority of packages (those that aren't too close to the kernel and/or hardware).

Rather than relying on the kernel for isolation, KVM-based solutions like Amazon's Firecracker and QEMU's microvm machine type provide minimal VMs with almost no boot overhead. Firecracker, for example, claims less than 125ms from launch to /sbin/init. Investigating these is a medium-term project for me.

Why not schroot?

I have a strong aversion towards chroot-based build environments. The concept is archaic. Far superior software- and/or hardware-based technologies for process isolation have emerged in the past two decades, and I think it is high time to leave chroot-based solutions behind.

Acknowledgments

These utilities are just high-level abstractions. All the heavy lifting is done by sbuild, autopkgtest, and vmdb2.

Worse Than FailureCodeSOD: When All You Have Is .Sort, Every Problem Looks Like a List(of String)

When it comes to backwards compatibility, Microsoft is one of those vendors that really commits to it. It’s not that they won’t make breaking changes once in a while, but they recognize that they need to be cautious about it, and give customers a long window to transition.

This was true back when Microsoft made it clear that .NET was the future, and that COM was going away. To make the transition easier, they created a COM Interop system which let COM code call .NET code, and vice versa. The idea was that you would never need to rewrite everything from scratch, you could just transition module by module until all the COM code was gone and just .NET remained. This also meant you could freely mix Visual Basic and Visual Basic.Net, which never caused any problems.

Well, Moritz sends us some .NET code that gets called by COM code, and presents us with the rare case where we probably should just rewrite everything from scratch.

    ''' <summary>
    ''' Order the customer list alphabetically
    ''' </summary>
    ''' <returns></returns>
    ''' <remarks></remarks>
    Public Function orderCustomerAZ() As Boolean
      Try
        Dim tmpStrList As New List(Of String)
        Dim tmpCustomerList As New List(Of Customer)
        ' We create a list of ID strings and order it      
        For i = 0 To CustomerList.Count - 1
          tmpStrList.Add(CustomerList(i).ID)
        Next i
        tmpStrList.Sort()
        ' We create the new tmp list of customers
        For i = 0 To tmpStrList.Count - 1
          For j = 0 To CustomerList.Count - 1
            If CustomerList(j).ID = tmpStrList(i) Then
              tmpCustomerList.Add(CustomerList(j).Clone)
              Exit For
            End If
          Next j
        Next i
        ' We update the list of customers
        CustomerList.Clear()
        CustomerList = tmpCustomerList
        Return True
      Catch ex As Exception
        CompanyName.Logging.ErrorLog.LogException(ex)
        Return False
      End Try
    End Function

As the name implies, our goal is to sort a list of customers… by ID. That’s not really implied by the name. The developer responsible knew how to sort a list of strings, and didn’t feel any need to learn what the correct way to sort a list of objects was.

So first, they build a tmpStrList which holds all their IDs. Then they Sort() that.

Now that the IDs are sorted, they need to organize the original data in that order. So they compare each element of the sorted list to each element of the unsorted list, and if there’s a match, copy the element into tmpCustomerList, ensuring that list holds the elements in the sorted order.

Finally, we clear out the original list and replace it with the sorted version. Return True on success, return False on failure. This last bit makes the most sense: chucking exceptions across COM Interop is fraught, so it’s easier to just return status codes.

Everything else though is a clear case of someone who didn’t want to read the documentation. They knew that a list had a Sort method which would sort things like numbers or strings, so boom. Why look at all the other ways you can sort lists? What’s a “comparator” or a lambda? Seems like useless extra classes.
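For the record, List(Of T) can sort itself with a comparison lambda. A minimal sketch, assuming the Customer class from above with its string ID property (not the original fix, just the idiomatic one-liner):

    ' Sort the list in place by ID, using an ordinal string comparison.
    CustomerList.Sort(Function(a, b) String.Compare(a.ID, b.ID, StringComparison.Ordinal))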

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianNorbert Preining: Debian KDE/Plasma Status 2020-11-04

About a month’s worth of updates on KDE/Plasma in Debian has accumulated, so here we go. The highlights are: Plasma 5.19.5 based on Qt 5.15 is in Debian/experimental and hopefully soon in Debian/unstable, and my own builds at OBS have been updated to Plasma 5.20.2, Frameworks 5.75, Apps 20.08.2.

Thanks to the dedicated work of the Qt maintainers, Qt 5.15 has finally entered Debian/unstable and we can finally target Plasma 5.20.

OBS packages

The OBS packages as usual follow the latest release, and currently ship KDE Frameworks 5.75, KDE Apps 20.08.2, and new, Plasma 5.20.2. The package sources are as usual (note the different path for the Plasma packages and the App packages, containing the release version!), for Debian/unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma520/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2008/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and the same with Testing instead of Unstable for Debian/testing.

The update to Plasma 5.20 took a bit of time, not only because of the wait for Qt 5.15, but also because I couldn’t get it running on my desktop, only in the VM. It turned out that the Plasmoid Event Calendar needed an update, and the old version crashed Plasma (“v68 and below crash in Arch after the Qt 5.15.1 update.”). After I realized that, it was only a question of updating to get Plasma 5.20 running.

There are two points I have to mention (and I will fix sooner or later):

  • Updating will need two attempts, because files moved from plasma-desktop to plasma-workspace. I will add the required Replaces/Conflicts later.
  • Make sure that the kwayland-server packages (libkwaylandserver5, libkwaylandserver-dev) are at version 5.20.2; see the check below. Some old versions had an epoch, so automatic updates will not work.
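A quick way to check the installed and candidate versions (a plain apt query; the package names are the ones from the list above):

apt policy libkwaylandserver5 libkwaylandserver-dev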

As usual, let me know your experience!

Debian main packages

The packages in Debian/experimental are at the most current state, 5.19.5. We have waited with the upload to unstable until the Qt 5.15 transition is over, but hope to upload to unstable rather soon. After the upload is done, we will work on getting 5.20 into unstable.

My aim is to get the most recent version of Plasma 5.20 into Debian Bullseye, so we need to do that before the freeze early next year. Let us hope for the best.

,

Planet DebianMartin-Éric Racine: Adding IPv6 support to my home LAN

A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet.

This also meant changing the settings on my router box for my new ISP.

I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts.

Adding IPv6 support to my home LAN

I have been following the evolution of IPv6 ever since the KAME project produced the first IPv6 implementation. I have also been keeping track of the IPv4 address depletion.

Around the time World IPv6 Day was organized in 2011, I started investigating the situation of IPv6 support at local ISPs.

Well, never mind all those rumors about Finland being some high-tech mecca. Back then, no ISP went beyond testing their routers for IPv6 compatibility and producing white papers on what their limited test deployments accomplished.

Not that it matters much, in practice. Most IPv6 documentation out there, including Debian's own, still focuses on configuring transitional mechanisms, especially how to connect to a public IPv6 tunnel broker.

Relocating to a new flat and rethinking my home network to match gave me an opportunity to revisit the topic. Much to my delight, my current ISP offers native IPv6.

This prompted me to go back and read up on IPv6 one more time. One important detail:

IPv6 hosts are globally reachable.

The implications of this don't immediately spring to mind for someone used to IPv4 network address translation (NAT):

Any network service running on an IPv6 host can be reached by anyone anywhere.

Contrary to IPv4, there is no division between private and public IP addresses. Whereas a device behind an IPv4 NAT is essentially shielded from the outside world, IPv6 breaks this assumption in more than one way. Not only is the host reachable from anywhere, its default IPv6 address is a mathematical conversion (EUI-64) of the network interface's MAC address, which makes every connection forensically traceable to a unique device.

Basically, if you hadn't given much thought to firewalls until now, IPv6 should give you enough goose bumps to finally get around to it. Tightening the configuration of every network service is also an absolute must. For instance, I configured sshd to only listen to private IPv4 addresses.
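A minimal sketch of what that looks like in /etc/ssh/sshd_config (the address shown is illustrative; use the host's actual private address):

# Only accept SSH connections on the LAN-facing IPv4 address
ListenAddress 10.10.10.10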

What /etc/network/interfaces might look like on a dual-stack (IPv4 + IPv6) host:

allow-hotplug enp9s0

iface enp9s0 inet dhcp
iface enp9s0 inet6 auto
	privext 2
	dhcp 1

The auto method means that IPv6 will be auto-configured using SLAAC; privext 2 enables the IPv6 privacy extensions and specifies that we prefer connecting via the randomly-generated IPv6 address, rather than the EUI-64-calculated, MAC-specific address; dhcp 1 enables DHCPv6 alongside SLAAC, to fetch additional network information such as DNS servers.

The above works for most desktop and laptop configurations.
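To verify the result, something like this should show the link-local address, the EUI-64 derived address, and one or more addresses flagged as temporary (the privacy addresses):

ip -6 addr show dev enp9s0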

Where things got more complicated is on the router. I decided early on to keep NAT to provide an IPv4 route to the outside world. Now how exactly is IPv6 routing done? Every node along the line must have its own IPv6 address... including the router's LAN interface. This is accomplished using the sample script found in Debian's IPv6 prefix delegation wiki page. I modified mine as follows (the rest of the script is omitted for clarity):

#Both LAN interfaces on my private network are bridged via br0
IA_PD_IFACE="br0"
IA_PD_SERVICES="dnsmasq"
IA_PD_IPV6CALC="/usr/bin/ipv6calc"

Just put the script at the suggested location. We'll need to request a prefix on the router's outside interface to utilize it. This gives us the following interfaces file:

allow-hotplug enp2s4 enp2s8 enp2s9
auto br0

iface enp2s4 inet dhcp
iface enp2s4 inet6 auto
	request_prefix 1
	privext 2
	dhcp 1

iface enp2s8 inet manual
iface enp2s8 inet6 manual

iface enp2s9 inet manual
iface enp2s9 inet6 manual

iface br0 inet static
	bridge_ports enp2s8 enp2s9
	address 10.10.10.254

iface br0 inet6 manual
	bridge_ports enp2s8 enp2s9
	# IPv6 from /etc/dhcp/dhclient-exit-hooks.d/prefix_delegation

The IPv4 NAT and IPv6 Bridge script on my router looks as follows:

#!/bin/sh
PATH="/usr/sbin:/sbin:/usr/bin:/bin"
wan=enp2s4
lan=br0
########################################################################
# IPv4 NAT
iptables -F; iptables -t nat -F; iptables -t mangle -F
iptables -X; iptables -t nat -X; iptables -t mangle -X
iptables -Z; iptables -t nat -Z; iptables -t mangle -Z
iptables -t nat -A POSTROUTING -o $wan -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
########################################################################
# IPv6 bridge
ip6tables -F; ip6tables -X; ip6tables -Z
# Default policy DROP
ip6tables -P FORWARD DROP
# Allow ICMPv6 forwarding
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow established connections
ip6tables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# Accept packets FROM LAN to everywhere
ip6tables -I FORWARD -i $lan -j ACCEPT
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
echo 1 > /proc/sys/net/ipv6/conf/default/forwarding
# IPv6 propagation via /etc/dhcp/dhclient-exit-hooks.d/prefix_delegation

The above already provided enough IPv6 connectivity to pass the IPv6 test on my desktop inside the LAN.

To make things more fun, I enabled DHCPv6 support for my LAN on the router's dnsmasq by adding the last 3 lines to the configuration:

dhcp-hostsfile=/etc/dnsmasq-ethersfile
bind-interfaces
interface=br0
except-interface=enp2s4
no-dhcp-interface=enp2s4
dhcp-range=tag:br0,10.10.10.0,static,infinite
dhcp-range=tag:br0,::1,constructor:br0,ra-names,ra-stateless,infinite
enable-ra
dhcp-option=option6:dns-server,[::],[2606:4700:4700::1111],[2001:4860:4860::8888]

The first 5 lines (included here for emphasis) are extremely important: they ensure that dnsmasq won't provide any IPv4 or IPv6 service to the outside interface (enp2s4) and that DHCP will only be provided for LAN hosts whose MAC address is known. Line 6 shows how dnsmasq's DHCP service syntax differs between IPv4 and IPv6. The rest of my configuration was omitted on purpose.

Enabling native IPv6 on my LAN has been an interesting experiment. I'm sure that someone could come up with even better ip6tables rules for the router or for my desktop hosts. Feel free to mention them in the blog's comments.

Krebs on SecurityTwo Charged in SIM Swapping, Vishing Scams

Two young men from the eastern United States have been hit with identity theft and conspiracy charges for allegedly stealing bitcoin and social media accounts by tricking employees at wireless phone companies into giving away credentials needed to remotely access and modify customer account information.

Prosecutors say Jordan K. Milleson, 21 of Timonium, Md. and 19-year-old Kingston, Pa. resident Kyell A. Bryan hijacked social media and bitcoin accounts using a mix of voice phishing or “vishing” attacks and “SIM swapping,” a form of fraud that involves bribing or tricking employees at mobile phone companies.

Investigators allege the duo set up phishing websites that mimicked legitimate employee portals belonging to wireless providers, and then emailed and/or called employees at these providers in a bid to trick them into logging in at these fake portals.

According to the indictment (PDF), Milleson and Bryan used their phished access to wireless company employee tools to reassign the subscriber identity module (SIM) tied to a target’s mobile device. A SIM card is a small, removable smart chip in mobile phones that links the device to the customer’s phone number, and their purloined access to employee tools meant they could reassign any customer’s phone number to a SIM card in a mobile device they controlled.

That allowed them to seize control over a target’s incoming phone calls and text messages, which were used to reset the password for email, social media and cryptocurrency accounts tied to those numbers.

Interestingly, the conspiracy appears to have unraveled over a business dispute between the two men. Prosecutors say on June 26, 2019, “Bryan called the Baltimore County Police Department and falsely reported that he, purporting to be a resident of the Milleson family residence, had shot his father at the residence.”

“During the call, Bryan, posing as the purported shooter, threatened to shoot himself and to shoot at police officers if they attempted to confront him,” reads a statement from the U.S. Attorney’s Office for the District of Maryland. “The call was a ‘swatting’ attack, a criminal harassment tactic in which a person places a false call to authorities that will trigger a police or special weapons and tactics (SWAT) team response — thereby causing a life-threatening situation.”

The indictment alleges Bryan swatted his alleged partner in retaliation for Milleson failing to share the proceeds of a digital currency theft. Milleson and Bryan are facing charges of wire fraud, unauthorized access to protected computers, aggravated identity theft and wire fraud conspiracy.

The indictment doesn’t specify the wireless companies targeted by the phishing and vishing schemes, but sources close to the investigation tell KrebsOnSecurity the two men were active members of OGusers, an online forum that caters to people selling access to hijacked social media accounts.

Bryan allegedly used the nickname “Champagne” on OGusers. On at least two occasions in the past few years, the OGusers forum was hacked and its user database — including private messages between forum members — were posted online. In a private message dated Nov. 15, 2019, Champagne can be seen asking another OGusers member to create a phishing site mimicking T-Mobile’s employee login page (t-mobileupdates[.]com).

Sources tell KrebsOnSecurity the two men are part of a larger conspiracy involving individuals from the United States and United Kingdom who’ve used vishing and phishing to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks.

Planet DebianWouter Verhelst: Dear Google

... Why do you have to be so effing difficult about a YouTube API project that is used for a single event per year?

FOSDEM creates 600+ videos on a yearly basis. There is no way I am going to manually upload 600+ videos through your web interface, so we use the API you provide, using a script written by Stefano Rivera. This script grabs video filenames and metadata from a YAML file, and then uses your APIs to upload said videos with said metadata. It works quite well. I run it from cron, and it uploads files until the quota is exhausted, then waits until the next time the cron job runs. It runs so well that the first time we used it, we could upload 50+ videos on a daily basis, and so the uploads were done as soon as all the videos were created, which was a few months after the event. Cool!

The second time we used the script, it did not work at all. We asked one of our keynote speakers, who happened to be some hotshot at your company, to help us out. He contacted the YouTube people, and whatever had been broken was quickly fixed, so yay, uploads worked again.

I found out later that this is actually a normal thing if you don't use your API quota for 90 days or more. Because it's happened to us every bloody year.

For the 2020 event, rather than going through back channels (which happened to be unavailable this edition), I tried to use your normal ways of unblocking the API project. This involves creating a screencast of a bloody command line script and describing various things that don't apply to FOSDEM and ghaah shoot me now so meh, I created a new API project instead, and had the uploads go through that. Doing so gives me a limited quota that only allows about 5 or 6 videos per day, but that's fine, it gives people subscribed to our channel the time to actually watch all the videos while they're being uploaded, rather than being presented with a boatload of videos that they can never watch in a day. Also it doesn't overload subscribers, so yay.

About three months ago, I started uploading videos. Since then, every day, the "fosdemtalks" channel on YouTube has published five or six videos.

Given that, imagine my surprise when I found this in my mailbox this morning...

Google lies, claiming that my YouTube API project isn't being used for 90 days and informing me that it will be disabled

This is an outright lie, Google.

The project was created 90 days ago, yes, that's correct. It has been used every day since then to upload videos.

I guess that means I'll have to deal with your broken automatic content filters to try and get stuff unblocked...

... or I could just give up and not do this anymore. After all, all the FOSDEM content is available on our public video host, too.

Planet DebianMartin-Éric Racine: GRUB fine-tuning

A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet.

This also meant changing the settings on my router box for my new ISP.

I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts.

GRUB fine-tuning

One thing that had been annoying me ever since Debian migrated to systemd as /sbin/init is that boot message verbosity hasn't been the same. Previously, the cmdline option quiet merely suppressed the kernel's output to the bootscreen, but left the daemon startup messages alone. Not anymore. Nowadays, quiet produces a blank screen.

After some googling, I found the solution to that:

GRUB_CMDLINE_LINUX_DEFAULT="noquiet loglevel=5"

The former restores daemon startup messages, while the latter makes the kernel output only significant notices or more serious messages. On most of my hosts, it mostly reports inconsistencies in the ACPI configuration of the BIOS.

Another setting I find useful is a reboot delay in case a kernel panic happens:

GRUB_CMDLINE_LINUX="panic=15"

This gives me enough time to snap a picture of the screen output to attach to the bug report that will follow.
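For completeness: on Debian, both variables go into /etc/default/grub, and changes only take effect after regenerating the GRUB configuration and rebooting:

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="noquiet loglevel=5"
GRUB_CMDLINE_LINUX="panic=15"

# Regenerate /boot/grub/grub.cfg, then reboot
update-grub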

LongNowExplorers Discover Pinnacle of Coral Taller Than Empire State Building in Great Barrier Reef

Even now, even in shallow waters, the sea continues to surprise us with new wonders (many of them rich in “living fossils” like the chambered nautilus and various sharks).

Reefs are themselves fabulous living examples of multitudinous pace layers, not unlike the structural layers of a house Stewart Brand details in How Buildings Learn—only these buildings literally do learn, as scaffolded colonial organisms with their own inarguable (and manifold) agencies:

Explorers of the Great Barrier Reef have discovered a giant pinnacle of coral taller than the Empire State Building.

Mariners long ago charted seven pinnacle reefs off the cape that, by definition, lie apart from the main barrier system. Bathed in clear waters, the detached reefs swarm with sponges, corals and brightly colored fish — as well as sharks — and are oases for migrating sea life. Their remoteness makes the pinnacles little-studied, and Australia’s Great Barrier Reef Marine Park Authority has assigned them its highest levels of protection, which limit such activities as commercial fishing. One detached reef at Raine Island is the world’s most important nesting area for green sea turtles.

The new pinnacle was found a mile and a half from a known detached reef. Dr. Beaman, who formerly served in the Royal Australian Navy as a hydrographic surveyor, said he and his team were certain it was previously unknown. Its seven relatives, he added, were all charted in the 1880s, more than 120 years ago.

Charles StrossEditorial Entanglements

A young editor once asked me what was the biggest secret to editing a fiction magazine. My answer was "confidence." I have to be confident that the stories I choose will fit together, that people will read them and enjoy them, and most importantly, that each month I'll receive enough publishable material to fill the pages of the magazine.

Asimov's Science Fiction comes out as combined monthly issues six times a year. A typical issue contains ten to twelve stories. That means I buy about 65 stories a year. Roughly speaking, I need to buy five to six stories per month--although I may actually buy two one month and ten the next. That I will receive these stories should seem inevitable. I get to choose them from about eight hundred submissions per month. Yet, since I know that I will have to reject over 99 percent of the stories that wing their way to me, there is always a slight concern that someday 100 percent of the submissions won't be right for the magazine.

Luckily, this anxiety is strongly offset by a lifetime of experience. For sixteen years as the editor-in-chief, and far longer as a staff member, I've seen that each issue of the magazine has been filled with wonderful stories. Asimov's tales are balanced, they are long and short, amusing and tragic, near- and distant-future explorations of hard SF, far-flung space opera, time travel, surreal tales and a little fantasy. They're by well-known names and brand new authors. I have confidence these stories will show up and that I'll know them when I see them.

I have edited or co-edited more than two dozen reprint anthologies. These books consisted of stories that previously appeared in genre magazines. Pulling them together mostly required sifting through years and years of published fiction. The tales have been united by a common theme such as Robots or Ghosts or The Solar System.

Editing my first original anthology was not like editing these earlier books or like editing an issue of the magazine. Entanglements: Tomorrow's Lovers, Families, and Friends, which I edited as part of the Twelve Tomorrows series, has just come out from MIT Press. The tales are connected by a theme--the effect of emerging technologies on relationships--but the stories are brand new. Instead of waiting for eight hundred stories to come to me, I asked specific authors for their tales. I approached prominent authors like Nancy Kress (who is also profiled in the book by Lisa Yaszek), Annalee Newitz, James Patrick Kelly, and Mary Robinette Kowal, as well as up-and-coming authors like Sam J. Miller, Cadwell Turnbull, and Rich Larson. I was working with some writers for the first time. Others, like Suzanne Palmer and Nick Wolven, were people I'd published on several occasions.

I deliberately chose authors who I felt were capable of writing the sort of hard science fiction that the Twelve Tomorrows series is famous for. I was also pretty sure that I was contacting people who were good at making deadlines! I knew I enjoyed the work of Chinese author Xia Jia and I was delighted to have an opportunity to work with her translator, Ken Liu. I was also thrilled to get artwork from Tatiana Plakhova.

Once I commissioned the stories, I had to wait with fingers crossed. What if an author went off in the wrong direction? What if an author failed to get inspired? What if they all missed their deadlines? It turned out that I had no need to worry. Each author came through with a story that perfectly fit the anthology's theme. The material was diverse, with stories ranging from tales about lovers and mentors and friends to stories populated with children and grandparents. The book includes charming and amusing tales, heart-rending stories, and exciting thrillers.

I learned so much from editing Entanglements. The next time I edit an original anthology, I expect to approach it with a self-assurance akin to the confidence I feel when I read through a month of submissions to Asimov's.

Planet DebianMartin Michlmayr: ledger2beancount 2.5 released

I released version 2.5 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.5:

  • Don't create negative cost for lot without cost
  • Support complex implicit conversions
  • Handle typed metadata with value 0 correctly
  • Set per-unit instead of total cost when cost is missing from lot
  • Support commodity-less amounts
  • Convert transactions with no amounts or only 0 amounts to notes
  • Fix parsing of transaction notes
  • Keep tags in transaction notes on same line as transaction header
  • Add beancount config options for non-standard root names automatically
  • Fix conversion of fixated prices to costs
  • Fix removal of price when price==cost but they use different number formats
  • Fix removal of price when price==cost but per-unit and total notation are mixed
  • Fix detection of tags and metadata after posting/aux date
  • Use D directive to set default commodity for hledger
  • Improve support for postings with commodity-less amounts
  • Allow empty comments
  • Preserve leading whitespace in comments in postings and transaction headers
  • Preserve indentation for tags and metadata
  • Preserve whitespace between amount and comment
  • Refactor code to use more data structures
  • Remove dependency on Config::Onion module

Thanks to input from Remco Rijnders, Yuri Khan, and Thierry. Thanks to Stefano Zacchiroli and Kirill Goncharov for testing my changes.

You can get ledger2beancount from GitHub.

Worse Than FailureSweet Release


Release Notes: October 31, 2019

  • Added auto-save feature every five minutes. Auto-saves can be found in C:\Users\[username]\Documents\TheApp\autosaves.
  • Added ability to format text with bold, underline, and italics.
  • Removed confusing About page. Terms and conditions can now be found under Help.

"And ... send." Mark sent the weekly release notes to the distribution list, copying them from where the app itself would display them on boot. "Now everyone should be on the same page, and I can get to work on my next big feature."

Two hours later, Janine, the product manager, stopped by his cube. "Hey, Mark. I was thinking. You know that About page? I keep getting complaints. What would it take to just axe it?"

"Already done in the latest version," he replied, not even looking up from the code.

"So that's, what, three hours of work?"

Mark had to tear his eyes away from the screen to look at Janine, baffled. "Huh? No, it's done. Already. It's gone. Didn't you update this morning?"

"Oh! Already! Okay, thanks. Good work." She vanished, leaving him to reload his train of thought and focus on the refactor he was doing.

Half an hour later, just as he was in the middle of something, one of the users, Roger, dropped in. "Hey, Mark! I know this should go through Janine, but I have a great idea, and I wanted to see if it was feasible."

"Hang on ... okay ... shoot." Mark hit Ctrl-S and focused on Roger. Remember, think customer service.

"Listen," Roger said. "Every once in a while, right, I'm working on something, and someone comes by to interrupt, right?"

"Okay?" began Mark, unclear where this was going.

"And you know how it goes. One thing leads to another, and so on, and eventually, I forget what I was doing, and I close out the program."

"Sure." Mark risked a glance at his IDE, wondering if he had time to start compiling or not.

"So, what if the program saved automatically, like, when I exit or something?" Roger asked.

"Oh, actually, as of this morning it auto-saves every five minutes," Mark said.

"Okay, cool, cool, but like, it should save when I exit."

"Um, I think it asks if you want to save, but I could maybe put that—"

"Or," Roger interrupted, "better yet, it should know when I get distracted, and save then, so I don't lose anything."

"It should ... know? How would it know?"

"Eh, you're right. Maybe it should just save every ten minutes."

Mark pinched the bridge of his nose. "I can do that. What about every five?"

"Perfect! Get right on that," Roger declared, striding away. "Good man."

He'll figure it out eventually, Mark decided, going back to his IDE.

He compiled, ran the software, and was in the middle of testing when Janine came by in a panic, carrying her open laptop. "Mark! We have to roll back the release!"

He didn't wait for auto-save, but exited his debugger, immediately pulling up the release console. "What, what's wrong? What happened?"

"You know how you killed the About page?" she demanded, eyes wide with horror.

"Yeah?"

"Well the Terms and conditions were in there! Legal says we can't ship without terms and conditions! This is a huge priority-one bug, I don't know how you missed it!"

Mark's shoulders slumped as he stopped logging into the release console. "Oh. I put them under Help."

"But I told you to put them under About!"

"And then you told me to kill the About page but keep the Terms and Conditions, so I moved them under Help. Didn't you read the release notes?"

"Oh, right, right, hang on, let me just pull it up here ... oh, never mind, it's under Help. False alarm! Carry on."

So Mark carried on, one eye on the time. I barely got anything done, as usual for a Monday. I really don't want to stay late tonight ... Still, he managed to get into the flow of things, and was just refactoring a critical class when Sue, Mark's boss, stopped by. Mark of course pulled his attention away from the code to talk to the boss, though already he was beginning to resent the constant interruptions.

"Hey, Marky Mark, how's it going?" asked Sue.

"Fine."

"Good, good. Listen, I know you're busy, so I'll get right to it: we have a request from the CEO, so it'll need to get into next week's release for sure."

Feeling his odds of getting the refactor committed evaporating, Mark nodded. "All right, I'm on it. What is it?"

"So, you know how the product can send email, right?"

My least favorite feature. "Yup. What about it?"

"Well the CEO was thinking, he can do stuff in Gmail that you can't do in our product, and he wants to know why."

He wants me to replicate all of Gmail in the product?! "What things, specifically?" Mark managed to ask calmly.

"He's not super technical, but he's talking about things like bold, italics, and underlines. Those are the big three."

Mark smashed his forehead into the keyboard for a moment before lifting his head to mutter: "Why do I even send release notes?"

"What?"

"We released that feature this morning!"

"Oh. Good show! Thanks Mark, you're the best."

Just as he was packing up for the day, Janine stopped by again, knocking on the edge of his cubicle, a phone to her ear. "Mark! Listen, I've got the CEO on the phone, he wants to know where we find the autosaves, and I can't figure it out. Do you know?"

Mark looked at the clock: 5:10. "Nope!" he said cheerily. "Check the release notes, I'm sure it's in there somewhere."

"I looked, I didn't see it."

"Shame, but I'm already logged out of everything. Tell him to do a real save and we'll get back to him in the morning."

"Oh, never mind, he found it! Turns out it was in the release notes. Thanks Mark, you're a lifesaver!"

If you say so. Mark walked out the door, not bothering to reply, and headed directly across the street to the pub for his weekly Monday Evening Beer.

Six days until we start from the top, he thought.



Planet DebianJoerg Jaspert: Debian NEW Queue, Rust packaging

Debian NEW Queue

So for some reason I got myself motivated again to deal with some packages in Debian's NEW Queue. We had 420 source packages waiting for some kind of processing when I started; now we are down to something around 10. (Silly, people keep uploading stuff…)

That’s not entirely my own work; others from the team have been active too. But for those few days I went through a lot of stuff waiting, and I must say it still feels mostly like it did when I somehow stopped doing much in NEW.

Except - well, I feel that maintainers are much better at preparing their packages; especially that dreaded task of getting the copyright file written seems to be handled much better. Now, that’s not supported by any real numbers, just a feeling, but a good one, I think.

Rust

Dealing with NEW meant I got in contact with one part that currently generates some friction between the FTP Team and one group of package maintainers - the Rust team.

Note: this is, of course, entirely written from my point of view, though with the intention of presenting it as objectively as possible. Also, I know what rust is, and have tried a “Hello world” in it, but that’s about the depth of my knowledge of it…

The problem

Libraries in rust are bundled/shipped/whatever in something called crates, and you manage what your stuff needs and provides with a tool called cargo.

A library (one per crate) can provide multiple features; say, a TLS lib can link against gnutls or openssl or some other random implementation. Such features may even be combinable in various different ways, so one can have a high number of possible feature combinations for one crate.
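To illustrate (this is a made-up crate, not a real one), the feature declarations in such a crate's Cargo.toml could look roughly like this:

# Cargo.toml of a hypothetical TLS crate
[features]
default = ["openssl-backend"]
openssl-backend = ["openssl"]
gnutls-backend = ["gnutls"]

[dependencies]
openssl = { version = "0.10", optional = true }
gnutls = { version = "0.1", optional = true }

Each feature enables its own optional dependency, and users pick the combination they want at build time; every such combination is something a Debian package might have to represent.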

There is a tool called debcargo which helps create a Debian package out of a crate. And that tool generates so-called feature packages, one per feature / combination thereof.

Those feature packages are empty packages, only containing a symlink for their /usr/share/doc/… directory, so each of them is smaller than the metadata it produces inside the archive and in the files generated from it: metadata that every user everywhere has to download and that every apt has to process. Additionally, any change of those feature sets means one more round through NEW, which is also not ideal.

So, naturally, the FTP Team dislikes those empty feature packages. Really, a lot.

There appears to be a different way. Not having the feature packages, but putting all the combinations into a Provides header. That sometimes works, but has two problems:

  • It can generate really long Provides: lines. I mean, REALLY REALLY REALLY long. Something around 250kb is the current record. That’s long enough that a tool (not dak itself) broke on it. Sure, that tool needs to be fixed, but still, that’s not nice. This is currently the approach we prefer, though (see the control file sketch after this list).
  • Some of the features may need different dependencies (say, gnutls vs openssl); should those conflict with each other, you cannot combine them into one package.
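To make the two approaches concrete, here is a hypothetical debian/control sketch; the package names follow debcargo's librust-<crate>+<feature>-dev naming convention, and the crate name and version are invented:

# Approach 1: one empty binary package per feature
Package: librust-foo+openssl-backend-dev
Package: librust-foo+gnutls-backend-dev

# Approach 2: a single package that Provides every feature combination
Package: librust-foo-dev
Provides: librust-foo+openssl-backend-dev (= 1.2.3-1),
          librust-foo+gnutls-backend-dev (= 1.2.3-1)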

Solutions

Currently we do not have a good one. The rust maintainers and the FTP Team are talking and exploring various ideas; we will see what comes out of it.

Devel archive / Component

One of the possible solutions for the feature package problem would be something that another set of packages could also make good use of, I think: the introduction of a new archive or component, meant only for packages that are needed to build something, but which users are discouraged from ever using.

What?

Well, take golang as an example. While we have a load of golang-something packages in Debian, and they are used for building applications written in go, none of those golang-something packages are meant to be installed by users. If you use the language and develop in it, the go get way is the one you are expected to use.

So having an archive (or maybe a component like main or contrib) that, by default, won’t be activated for users, but only for things like buildds or archive rebuilds, would make one problem (the hated metadata bloat) be evaluated wildly differently.

It may also allow a more relaxed processing of binary-NEW (easier additions of new feature packages).

But but but

Yes, it is not the most perfect solution. Even without spending much energy thinking about it, it requires

  • an adjustment in how main is handled. Right now we have the golden rule that main is self-contained, that is, things in it may not need anything outside it for building or running. That would need to be adjusted for building. (Go, as well as rust currently, always builds static binaries, so there are no library dependencies there.)
  • It would need handling for the release, that is, the release team would need to deal with that archive/component too. We haven’t, yet, talked to them (still, slowly, discussing inside FTP Team). So, no idea how many rusty knives they want to sink into our nice bodies for that idea…

Final

Well, it is still very much open. We had an IRC meeting with the rust people and will have another at the end of November; it will slowly go forward. And maybe someone comes up with an entirely new idea that we all love. Don’t know, time will tell.

Cryptogram New Windows Zero-Day

Google’s Project Zero has discovered and published a buffer overflow vulnerability in the Windows Kernel Cryptography Driver. The exploit doesn’t affect the cryptography, but allows attackers to escalate system privileges:

Attackers were combining an exploit for it with a separate one targeting a recently fixed flaw in Chrome. The former allowed the latter to escape a security sandbox so it could execute code on vulnerable machines.

The vulnerability is being exploited in the wild, although Microsoft says it’s not being exploited widely. Everyone expects a fix in the next Patch Tuesday cycle.

Worse Than FailureCodeSOD: An Impossible Problem

One of the lines between code that's "not great, but like, it's fine, I guess" and "wow, WTF" is confidence.

For example, Francis Gauthier inherited a WinForms application. One of the form fields in this application was a text box that held a number, and the developers wanted to always display whatever the user entered without leading zeroes.

Now, WinForms is pretty simplistic as UI controls go, so there isn't really a great "oh yes, do this!" solution to solving that simple problem. A mix of using MVC-style patterns with a formatter between the model and the UI would be "right", but might be more setup than the problem truly calls for.

Which is why, at first blush, without more context, I'd be more apt to put this bad code into the "not great, but whatever" category:

int percent = Int32.Parse(ctrl.Text);
ctrl.Text = percent.ToString();

On an update, we grab the contents of the text box, parse it as an integer, and then store the result back into the text box. This will effectively strip off the leading zeroes.

It's fine. Until we zoom out a step.

// Matched - Remove leading zero
try
{
    int percent = Int32.Parse(ctrl.Text);
    ctrl.Text = percent.ToString();
}
catch
{
    // impossible..
}

Here, we can see that they… "wisely" have wrapped the Parse in an exception handler. The developer knew that there was a validator on the control which would prevent non-numeric characters from being entered, and thus they were able with a great degree of confidence to declare that an exception was "impossible".

There's just one problem with that. The validator in question allows numeric characters, not just integer characters. So the validator would allow you to enter 0.99, which, of course, won't parse. So the exception gets triggered, the catch ignores it, and the user believes their input (a percentage) has been accepted as valid. The end result is that many users might enter "0.99" to mean "99%", and then see "0" actually get stored as the unexpected floating point gets truncated.
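For contrast, a more defensive sketch (hypothetical, and not the fix the team actually shipped; errorProvider stands in for a WinForms ErrorProvider component) would use Int32.TryParse, which never throws, and surface the failure instead of swallowing it:

// TryParse turns the "impossible" case into an explicit branch
if (Int32.TryParse(ctrl.Text, out int percent))
{
    ctrl.Text = percent.ToString(); // strips the leading zeroes
}
else
{
    // e.g. "0.99" lands here; tell the user instead of silently ignoring it
    errorProvider.SetError(ctrl, "Please enter a whole-number percentage.");
}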

All because an exception was declared "impossible". To misapply a quote: "You keep using that word. I do not think it means what you think it means."


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 21)

Here’s part twenty-one of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Please, vote if you can.

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Sam VargheseAustralia seems to be living in another world when it comes to rugby contests with New Zealand

When Australian scrum-half Nic White was walking off the field after the whistle blew for half-time in the third Bledisloe Cup game on 31 October, he was given a headset and microphone by Fox Sports and asked for his take on the game up to that point.

Australia had been outplayed by New Zealand in the first 40 minutes and were trailing 0-26, meaning that the horse had well and truly bolted and any chance of them making a fight of it had disappeared.

But White seemed to be in an alternate universe. “No disrespect, but they haven’t done a whole lot, it’s just been all our mistakes. We’re just gifting them points,” was what he had to offer.

When commentator Phil Kearns, a man who played 67 Tests for the Wallabies, came back with “Sixty-seven percent possession they got, mate,” White quickly took off the headphones, handed them to a man on the Fox Sports team and walked away into the change rooms.

The exchange reminded me of the way the American tennis player Serena Williams reacts when she loses during a Grand Slam – it’s because she played badly, not because her vanquisher played a good game.

One offers this exchange to illustrate one point: unless one acknowledges one’s mistakes, it is not possible to correct them. True, White may have been indulging in spin as many people do when confronted by the media, but had he acknowledged that Australia was behind because it had come up against a side that was doing all the basics extremely well, he probably would have been more accurate. Like many other Australians, White seems to have a big blind spot when it comes to acknowledging that one has been outplayed.

As for the match itself, it was practically over after the first half. Few teams can come back from such a deficit – and bear in mind that two additional tries were not awarded to the men in black. One was due to a marvellous save by Australian winger Marika Koroibete, who got under the ball when Kiwi wing-three-quarter Caleb Clarke, no easy customer to tackle, was trying to force it down.

The other try that was disallowed was debatable; hooker Dane Coles charged onto a kick into the in-goal area by fly-half Richie Mo’unga, and tried to get his hands on the ball and effect a touchdown. From some angles, it looked like he had succeeded. From others, it appeared that he did not have full control of the ball.

But even then, the All Blacks ran in four tries, some of which should never be allowed at the international level. Mo’unga was in top form and used all his guile and skills to cross for two of the four tries which his team scored before half-time.

The great West Indies teams of the 1980s and early 1990s had a tactic of targeting bowlers who either were becoming a threat to them, or else thought they were becoming a threat, and demoralising them. The main method used was for Viv Richards to attack the bowler in question and take him to the cleaners. It worked in many cases.

Similarly, the All Blacks appear to have a strategy of making newcomers in opposing teams feel out of place and this often results in the newbie suffering a major crisis of confidence. In the case of Noah Lolesio, picked to make his debut as standoff, perhaps their task was easier for the pint-sized fly-half seemed to be intent on making himself a small target.

The No 10 is normally the playmaker, but Lolesio seemed content to operate from where a full-back normally positions himself/herself, and kick when he got the ball. His kicking was poor, and in one case, when kicking for touch, he landed the ball in the in-goal area. It looks like picking him for his international debut against the All Blacks was not the most judicious move by Australian coach Dave Rennie.

The fact that Rennie had to pick Lolesio to fill the No 10 spot is a glaring admission that Australia has very little depth when it comes to players. Bernard Foley, a reliable if unspectacular fly-half, went off to Japan after the 2019 World Cup, but it is doubtful he would have been picked this year given his disastrous performance in the one game he played during that tournament.

One wonders what Rennie will do in the remaining game against New Zealand. He cannot drop Lolesio for it would destroy the man’s confidence. He will have to bring back James O’Connor to fill this pivotal role as he is now fit to resume playing. But Lolesio will have to be in the match-day 23.

One hopes that Rennie will bring back Tom Banks, who did a decent job at full-back in the first two Tests against New Zealand before being suddenly dropped for the third. The coach should also pick Isi Naisarani in the No 8 position and jettison Harry Wilson; the latter appears to be a hot-head and woefully short of common sense and ability. Exactly why Naisarani, a Fijian who did a wonderful job last year, has been kept out of the team is not known.

Brisbane has been a somewhat happier hunting ground for Australia against New Zealand. But the scars suffered in the third game — where they went down by the biggest margin in any game against New Zealand — may not be so easy to heal. But at least this time there will be no overblown expectations that Australia will make a contest of the game.


Cryptogram The Legal Risks of Security Research

Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”

From a summary:

Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.

Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.

Comprehensive, and well worth reading.

Here’s a Twitter thread by Kendra.

Worse Than FailureError'd: Nothing for You!

"No, that's not the coupon code. They literally ran out of digital coupons," Steve wrote.

 

"Wow! Amazon is absolutely SLASHING their prices out of existence," wrote Brian.

 

Björn S. writes, "IKEA now employs what I'd call psychological torture kiosks. The text translates to 'Are you satisfied with your waiting time?' but the screen below displays an eternal spinner. Gaaah!"

 

Daniel O. writes, "I mean, I could change my password right now, but I'm kind of tempted to wait and see when it'll actually expire."

 

"This bank seems to offer its IT employees some nice perks! For example, this ATM is reserved strictly for its administrators," Oton R. wrote.

 


Sam VargheseAustralia pulls in new kids on the block for crucial Bledisloe Cup game

The focal point of the third Bledisloe Cup game in Sydney on Saturday will be the Australian back-line, where two rookies will be playing at fly-half and centre; that, incidentally, is the part of the field through which many opposition players slip when making a line-break.

Noah Lolesio and Irae Simone will be under a lot of scrutiny and it may well be the game that establishes them. Both have come in because of injuries to the regulars in these positions, James O'Connor and Matt Toomua respectively. It will be a baptism of fire.

For the second time in as many years, Australia will be going into a Bledisloe Cup game against New Zealand with more Pacific Islanders in its ranks than Anglo-Saxons.

Of the 15 picked by the new coach, Dave Rennie, to take the field in Sydney on Saturday (31 October), eight will be Islanders. And on the bench, there will be another four from the same geographical area.

In the first game of 2019, former coach Michael Cheika picked nine islanders and one Aboriginal player as well for the team that thrashed New Zealand 47-26 in Perth. And on the bench, half the number were again islanders.

2019 game 1: Kurtley Beale, Reece Hodge, James O’Connor, Samu Kerevi, Marika Koroibete, Christian Lealiifano, Nic White, Isi Naisarani, Michael Hooper, Lukhan Salakaia-Loto, Rory Arnold, Izack Rodda, Allan Alaalatoa, Tolu Latu and Scott Sio.

Bench: Folau Fainga’a, James Slipper, Taniela Tupou, Adam Coleman, Luke Jones, Will Genia, Matt To’omua and Tom Banks.

2020 game 3: James Slipper, Brandon Paenga-Amosa, Allan Alaalatoa, Lukhan Salakaia-Loto, Matt Philip, Ned Hanigan, Michael Hooper, Harry Wilson, Nic White, Noah Lolesio, Marika Koroibete, Irae Simone, Jordan Petaia, Filipo Daugunu and Dane Haylett-Petty.

Bench: Jordan Uelese, Scott Sio, Taniela Tupou, Rob Simmons, Fraser McReight, Tate McDermott, Reece Hodge and Hunter Paisami.

Given Rennie is from New Zealand, his inclusion of this many islanders is not a surprise. New Zealand has benefitted greatly by attracting players from the many islands in the Pacific, with some notable names like the late Jonah Lomu and former All Blacks captain Jonathan Falafesa “Tana” Umaga. Rennie knows the worth of players from this region.

But will this ensure a win for Australia to keep the series alive? One doubts that Rennie is basing his selection on that criterion; rather, like all rugby coaches of teams that have a chance of being number one in the world, he will be looking to identify the best 15 for the next Rugby World Cup which is in 2023.

New Zealand is also trying out a few new faces for the third game of the Bledisloe Cup, which is the opening game of the Rugby Championship. [The latter contest will be a three-nation affair this year as South Africa has pulled out.]

Coach Ian Foster has had to call in Hoskins Sotutu to fill in for Ardie Savea who has taken paternity leave and Karl Tu’inukuafe will come in for Joe Moody who is under observation after being knocked out during the second game in Auckland. Sam Whitelock will return as lock, taking over from newcomer Tupou Vaa’i.


Cryptogram Friday Squid Blogging: Ram’s Horn Squid Video

This is the first video footage of a ram’s horn squid (Spirula spirula).

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Tracking Users on Waze

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

Worse Than FailureCodeSOD: Graceful Depredations

Cloud management consoles are, in most cases, targeted towards enterprise customers. This runs into Remy’s Law of Enterprise Software: if a piece of software is in any way described as being “enterprise”, it’s a piece of garbage.

Richard was recently poking around on one of those cloud provider's sites. The software experience was about as luxurious as one expects, which is to say it was a pile of cryptically named buttons for the 57,000 various kinds of preconfigured services this particular provider had on offer.

At the bottom of each page, there was a small video thumbnail, linking back to a YouTube video, presumably to provide marketing information, or support guidance for whatever the user was trying to do.

This was the code which generated them (whitespace added for readability):

<script>function lazyLoadThumb(e){var t='<img loading="lazy" 
  data-lazy-src="https://i.ytimg.com/vi/ID/hqdefault.jpg" alt="" 
  width="480" height="360">
  <noscript><img src="https://i.ytimg.com/vi/ID/hqdefault.jpg" 
  alt="" width="480" height="360"></noscript>'
  ,a='<div class="play"></div>';return t.replace("ID",e)+a}</script>

I appreciate the use of a <noscript> tag. Providing a meaningful fallback for browsers that, for whatever reason, aren’t executing JavaScript is a good thing. In the olden days, we called that “progressive enhancement” or “graceful degradation”: the page might work better with JavaScript turned on, but you can still get something out of it even if it’s not.

Which is why it’s too bad the <noscript> is being output by JavaScript.
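For contrast, graceful degradation would normally put the fallback in the static markup, where it can actually take effect when scripts don't run, and let the script enhance it. A minimal sketch, with the VIDEO_ID placeholder and class names invented for illustration:

<!-- Served in the page itself: works even with JavaScript disabled -->
<noscript>
  <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg" alt="" width="480" height="360">
</noscript>
<!-- A script can then enhance this placeholder instead of generating the fallback -->
<div class="video-thumb" data-video-id="VIDEO_ID"></div>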

And sure, I’ve got some serious questions with the data-lazy-src attribute, the fact that we’re dumping a div with a class="play" presumably to get wired up as our play button, and all the string mangling to get the correct video ID in there for the thumbnail, and just generating DOM elements by strings at all. But outputting <noscript>s from JavaScript is a new one on me.


Krebs on SecurityFBI, DHS, HHS Warn of Imminent, Credible Ransomware Threat Against U.S. Hospitals

On Monday, Oct. 26, KrebsOnSecurity began following up on a tip from a reliable source that an aggressive Russian cybercriminal gang known for deploying ransomware was preparing to disrupt information technology systems at hundreds of hospitals, clinics and medical care facilities across the United States. Today, officials from the FBI and the U.S. Department of Homeland Security hastily assembled a conference call with healthcare industry executives warning about an “imminent cybercrime threat to U.S. hospitals and healthcare providers.”

The agencies on the conference call, which included the U.S. Department of Health and Human Services (HHS), warned participants about “credible information of an increased and imminent cybercrime threat to US hospitals and healthcare providers.”

The agencies said they were sharing the information “to provide warning to healthcare providers to ensure that they take timely and reasonable precautions to protect their networks from these threats.”

The warning came less than two days after this author received a tip from Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security. Holden said he saw online communications this week between cybercriminals affiliated with a Russian-speaking ransomware group known as Ryuk in which group members discussed plans to deploy ransomware at more than 400 healthcare facilities in the U.S.

One participant on the government conference call today said the agencies offered few concrete details of how healthcare organizations might better protect themselves against this threat actor or purported malware campaign.

“They didn’t share any IoCs [indicators of compromise], so it’s just been ‘patch your systems and report anything suspicious’,” said a healthcare industry veteran who sat in on the discussion.

However, others on the call said IoCs may be of little help for hospitals that have already been infiltrated by Ryuk. That’s because the malware infrastructure used by the Ryuk gang is often unique to each victim, including everything from the Microsoft Windows executable files that get dropped on the infected hosts to the so-called “command and control” servers used to transmit data between and among compromised systems.

Nevertheless, cybersecurity incident response firm Mandiant today released a list of domains and Internet addresses used by Ryuk in previous attacks throughout 2020 and up to the present day. Mandiant refers to the group by the threat actor classification “UNC1878,” and aired a webcast today detailing some of Ryuk’s latest exploitation tactics.

Charles Carmakal, senior vice president for Mandiant, told Reuters that UNC1878 is one of the most brazen, heartless, and disruptive threat actors he’s observed over the course of his career.

“Multiple hospitals have already been significantly impacted by Ryuk ransomware and their networks have been taken offline,” Carmakal said.

One health industry veteran who participated in the call today and who spoke with KrebsOnSecurity on condition of anonymity said if there truly are hundreds of medical facilities at imminent risk here, that would seem to go beyond the scope of any one hospital group and may implicate some kind of electronic health record provider that integrates with many care facilities.

So far, however, nothing like hundreds of facilities have publicly reported ransomware incidents. But there have been a handful of hospitals dealing with ransomware attacks in the past few days.

Becker’s Hospital Review reported today that a ransomware attack hit Klamath Falls, Ore.-based Sky Lakes Medical Center’s computer systems.

WWNY’s Channel 7 News in New York reported yesterday that a Ryuk ransomware attack on St. Lawrence Health System led to computer infections at Caton-Potsdam, Massena and Gouverneur hospitals.

SWNewsMedia.com on Monday reported on “unidentified network activity” that caused disruption to certain operations at Ridgeview Medical Center in Waconia, Minn. SWNews says Ridgeview’s system includes Chaska’s Two Twelve Medical Center, three hospitals, clinics and other emergency and long-term care sites around the metro area.

NBC5 reports The University of Vermont Health Network is dealing with a “significant and ongoing system-wide network issue” that could be a malicious cyber attack.

A story at BleepingComputer.com says Wyckoff Hospital in New York suffered a Ryuk ransomware attack on Oct. 28.

This is a developing story. Stay tuned for further updates.

Update, 10:11 p.m. ET: The FBI, DHS and HHS just jointly issued an alert about this, available here.

Update, Oct. 30, 11:14 a.m. ET: Added mention of Wyckoff hospital Ryuk compromise.


Kevin RuddStatement on the International Peace Institute

October 28, 2020

Mr Rudd has issued a statement regarding the International Peace Institute.

Click here to read the statement.


Krebs on SecuritySecurity Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo

In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.

The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.

Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.

Five months later, Gunnebo disclosed it had suffered a cyber attack targeting its IT systems that forced the shutdown of internal servers. Nevertheless, the company said its quick reaction prevented the intruders from spreading the ransomware throughout its systems, and that the overall lasting impact from the incident was minimal.

Earlier this week, Swedish news agency Dagens Nyheter confirmed that hackers recently published online at least 38,000 documents stolen from Gunnebo’s network. Linus Larsson, the journalist who broke the story, says the hacked material was uploaded to a public server during the second half of September, and it is not known how many people may have gained access to it.

Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.

“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”

It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.

After this author posted a request for contact from Gunnebo on Twitter, KrebsOnSecurity heard from Rasmus Jansson, an account manager at Gunnebo who specializes in protecting client systems from electromagnetic pulse (EMP) attacks or disruption, short bursts of energy that can damage electrical equipment.

Jansson said he relayed the stolen credentials to the company’s IT specialists, but that he does not know what actions the company took in response. Reached by phone today, Jansson said he quit the company in August, right around the time Gunnebo disclosed the thwarted ransomware attack. He declined to comment on the particulars of the extortion incident.

Ransomware attackers often spend weeks or months inside of a target’s network before attempting to deploy malware across the network that encrypts servers and desktop systems unless and until a ransom demand is met.

That’s because gaining the initial foothold is rarely the difficult part of the attack. In fact, many ransomware groups now have such an embarrassment of riches in this regard that they’ve taken to hiring external penetration testers to carry out the grunt work of escalating that initial foothold into complete control over the victim’s network and any data backup systems — a process that can be hugely time-consuming.

But prior to launching their ransomware, it has become common practice for these extortionists to offload as much sensitive and proprietary data as possible. In some cases, this allows the intruders to profit even if their malware somehow fails to do its job. In other instances, victims are asked to pay two extortion demands: One for a digital key to unlock encrypted systems, and another in exchange for a promise not to publish, auction or otherwise trade any stolen data.

While it may seem ironic when a physical security firm ends up having all of its secrets published online, the reality is that some of the biggest targets of ransomware groups continue to be companies which may not consider cybersecurity or information systems as their primary concern or business — regardless of how much may be riding on that technology.

Indeed, companies that persist in viewing cyber and physical security as somehow separate seem to be among the favorite targets of ransomware actors. Last week, a Russian journalist published a video on YouTube claiming to be an interview with the cybercriminals behind the REvil/Sodinokibi ransomware strain, which is the handiwork of a particularly aggressive criminal group that’s been behind some of the biggest and most costly ransom attacks in recent years.

In the video, the REvil representative stated that the most desirable targets for the group were agriculture companies, manufacturers, insurance firms, and law firms. The REvil actor claimed that on average roughly one in three of its victims agrees to pay an extortion fee.

Mark Arena, CEO of cybersecurity threat intelligence firm Intel 471, said while it might be tempting to believe that firms which specialize in information security typically have better cybersecurity practices than physical security firms, few organizations have a deep understanding of their adversaries. Intel 471 has published an analysis of the video here.

Arena said this is a particularly acute shortcoming with many managed service providers (MSPs), companies that provide outsourced security services to hundreds or thousands of clients who might not otherwise be able to afford to hire cybersecurity professionals.

“The harsh and unfortunate reality is the security of a number of security companies is shit,” Arena said. “Most companies tend to have a lack of ongoing and up to date understanding of the threat actors they face.”

Cryptogram Friday Squid Blogging: Underwater Robot Uses Squid-Like Propulsion

This is neat:

By generating powerful streams of water, UCSD’s squid-like robot can swim untethered. The “squidbot” carries its own power source, and has the room to hold more, including a sensor or camera for underwater exploration.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others.

These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[…]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere.

Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security.

Worse Than FailureCodeSOD: A Type of Useless

TypeScript offers certain advantages over JavaScript. Compile-time type-checking can catch a lot of errors; the language can move faster than browsers, so it offers the latest standards (and the compiler handles the nasty details of shimming them into browsers); plus it has a layer of convenient syntactic sugar.

If you’re using TypeScript, you can use the compiler to find all sorts of ugly problems with your code, and all you need to do is turn the right flags on.

Or, you can be like Quintus’s co-worker, who checked in this… thing.

/**
 * Container holding definition information.
 *
 * @param String version
 * @param String date

 */
export class Definition {
  private id: string;
  private name: string;
  constructor(private version, private data) {}

  /**
   * get the definition version
   *
   * @return String version
   */
  getVersion() {
    return this.id;
  }

  /**
   * get the definition date
   *
   * @return String date
   */
  getDate() {
    return this.name;
  }
}

Now, if you were to try this on the TypeScript playground, you’d find that while it compiles and generates JavaScript, the compiler has a lot of reasonable complaints about it. However, if you were just to compile this with the command line tsc, it gleefully does the job without complaint, using the default settings.

So the code is bad, and the tool can tell you it’s bad, but you have to actually ask the tool to tell you that.
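Asking is cheap, for what it’s worth. Here’s a minimal sketch of a tsconfig.json that would have flagged this class (the standard project-root config file, not anything from the original submission):

{
  "compilerOptions": {
    // "strict" bundles noImplicitAny, which rejects the untyped
    // constructor parameters, and strictPropertyInitialization,
    // which rejects id and name because nothing ever assigns them.
    "strict": true
  }
}

Running tsc --strict on the command line has the same effect without touching a config file.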

In any case, it’s easy to understand what happened with this bad code: this is clearly programming by copy/paste. They had a class that tied an id to a name. They copy/pasted to make one that mapped a version to a date, but got distracted halfway through and ended up with this incomplete dropping. And then they somehow checked it in, and nobody noticed it until Quintus was poking around.

Now, a little bit about this code. You’ll note that there are private id and name properties. The constructor defines two more properties (and wires the constructor params up to map to them) with its private version, private data params.

So if you call the constructor, you initialize two private members that have no accessors at all. The accessors that do exist point to id and name, which never get initialized in the constructor and have no mutators.

Of course, TypeScript compiles down into JavaScript, so those private keywords don’t really matter. JavaScript doesn’t have private.

My suspicion is that this class ended up in the code base, but is never actually used. If it is used, I bet it’s used like:

let f = new Definition();
f.id = "1.0.1"
f.name = "28-OCT-2020"
…
let ver = f.getVersion();

That would work and do what the original developer expected. If they did that, the TypeScript compiler might complain, but as we saw, they don’t really care about what the compiler says.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Cryptogram Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition

Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names.

We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new.

EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls.

LongNowHow “Forest Floors” in Finland’s Daycares Changed Children’s Immune Systems

Once again on the theme of how the technological/cultural pace layer’s accelerating decoupling from the ecological pace layer in which we evolved poses serious risks to the integrity of both the human body and biosphere:

When daycare workers in Finland rolled out a lawn, planted forest undergrowth such as dwarf heather and blueberries, and allowed children to care for crops in planter boxes, the diversity of microbes in the guts and on the skin of young kids appeared healthier in a very short space of time.

Compared to other city kids who play in standard urban daycares with yards of pavement, tile and gravel, 3-, 4-, and 5-year-olds at these greened-up daycare centres in Finland showed increased T-cells and other important immune markers in their blood within 28 days.

“We also found that the intestinal microbiota of children who received greenery was similar to the intestinal microbiota of children visiting the forest every day,” says environmental scientist Marja Roslund from the University of Helsinki.

Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems in Science Alert

That said, the hopeful news from this pilot project is that it may be easier to restore a healthy balance between the modern and premodern from within the built environment than most people believe.

Charles StrossAll Glory to the New Management!

Dead Lies Dreaming - UK cover

Today is September 27th, 2020. On October 27th, Dead Lies Dreaming will be published in the USA and Canada: the British edition drops on October 29th. (Yes, there will be audio editions too, via the usual outlets.)

This book is being marketed as the tenth Laundry Files novel. That's not exactly true, though it's not entirely wrong, either: the tenth Laundry book, about the continuing tribulations of Bob Howard and his co-workers, hasn't been written yet. (Bob is a civil servant who by implication deals with political movers and shakers, and politics has turned so batshit crazy in the past three years that I just can't go there right now.)

There is a novella about Bob coming next summer. It's titled Escape from Puroland and Tor.com will be publishing it as an ebook and hardcover in the USA. (No UK publication is scheduled as yet, but we're working on it.) I've got one more novella planned, about Derek the DM, and then either one or two final books: I'm not certain how many it will take to wrap the main story arc yet, but rest assured that the tale of SOE's Q-Division, the Laundry, reaches its conclusion some time in 2015. Also rest assured that at least one of our protagonists survives ... as does the New Management.

All Glory to the Black Pharaoh! Long may he rule over this spectred isle!

(But what's this book about?)

Dead Lies Dreaming - US cover

Dead Lies Dreaming is the first book in a project I dreamed up in (our world's) 2017, with the working title Tales of the New Management. It came about due to an unhappy incident: I found out the hard way that writing productively while one of your parents is dying is rather difficult. The first time it happened, it took down a promising space opera project. I plan to pick it up and re-do it next year, but it was the kind of learning experience I could happily have done without. The second time it happened, I had to stop work on Invisible Sun, the third and final Empire Games novel—I just couldn't get into the right head-space. (Invisible Sun is now written and in the hands of the production folks at Tor. It will almost certainly be published next September, if the publishing industry survives the catastrophe novel we're all living through right now.)

Anyway, I was unable to work on a project with a fixed deadline, but I couldn't not write: so I gave myself license to doodle therapeutically. The therapeutic doodles somehow colonized the abandoned first third of a magical realist novel I pitched in 2014, and turned into an unexpected attack novel titled Lost Boys. (It was retitled Dead Lies Dreaming because a cult comedy movie from 1987 got remade for TV in 2020—unless you're a major bestseller you do not want your book title to clash with an unrelated movie—but it's still Lost Boys in my headcanon.)

Lost Boys—that is, Dead Lies Dreaming—riffs heavily off Peter and Wendy, the original taproot of Peter Pan, a stage play and novel by J. M. Barrie that predates the more familiar, twee, animated Disney version of Peter Pan from 1953 by some decades. (Actually Peter and Wendy recycled Barrie's character from an earlier work, The Little White Bird, from 1902, but let's not get into the J. M. Barrie arcana at this point.) Peter and Wendy can be downloaded from Project Gutenberg here. And if you only know Pan from Disney, you're in for a shock.

Barrie was writing in an era when antibiotics hadn't been discovered, and far fewer vaccines were available for childhood diseases. Almost 20% of children died before reaching their fifth birthday, and this was a huge improvement over the earlier decades of the 19th century: parents expected some of their babies to die, and furthermore, had to explain infant deaths to toddlers and pre-tweens. Disney's Peter is a child of the carefree first flowering of the antibiotic age, and thereby de-fanged, but the original Peter Pan isn't a twee fairy-like escapist fantasy. He's a narcissistic monster, a kidnapper and serial killer of infants who is so far detached from reality that his own shadow can't keep up. Barrie's story is a metaphor designed to introduce toddlers to the horror of a sibling's death. And I was looking at it in this light when I realized, "hey, what if Peter survived the teind of infant mortality, only to grow up under the dictatorship of the New Management?"

This led me down all sorts of rabbit holes, only some of which are explored in Dead Lies Dreaming. The nerdish world-building impulse took over: it turns out that civilian life under the rule of N'yar lat-Hotep, the Black Pharaoh (in his current incarnation as Fabian Everyman MP), is harrowing and gruesome in its own right—there's a Tzompantli on Marble Arch; indications that Lovecraft's Elder Gods were worshipped under other names by other cultures; oligarchs and private equity funds employ private armies; and Brexit is still happening—but nevertheless, ordinary life goes on. There are jobs for cycle couriers, administrative assistants, and ex-detective constables-turned-security guards. People still need supermarkets and high street banks and toy shops. The displays of severed heads on the traffic cameras on the M25 don't stop drivers trying to speed. Boys who never grew up are still looking for a purpose in life, at risk of their necks, while their big sisters try to save them. And so on.

Dead Lies Dreaming is the first of the Tales of the New Management, which are being positioned as a continuation of the Laundry Files (because Marketing). There will be more. A second novel, In His House, already exists in first draft. It's a continuation of the story, remixed with Sweeney Todd and Mary Poppins—who in the original form is, like Peter Pan, much more sinister than the Disney whitewash suggests. A third novel, Bones and Nightmares, is planned. (However, I can't give you a publication date, other than to say that In His House can't be published before late 2022: COVID-19 has royally screwed up publishers' timetables.)

Anyway, you probably realized that instead of riffing off classic British spy thrillers or urban fantasy tropes, I'm now perverting beloved childhood icons for my own nefarious purposes—and I'm having a gas. Let's just hope that the December of 2016 in which Dead Lies Dreaming is set doesn't look impossibly utopian and optimistic by the time we get to the looming and very real December of 2020! I really hate it when reality front-runs my horror novels ...

Worse Than FailureCodeSOD: On the Creation

Understanding the Gang of Four design patterns is a valuable bit of knowledge for a programmer. Of course, instead of understanding them, it sometimes seems like most design pattern fans just… use them. Sometimes- often- overuse them. The Java Spring Framework infamously has classes with names like SimpleBeanFactoryAwareAspectInstanceFactory. Whether that's a proper use of patterns and naming conventions is a choice I leave to the reader, but boy do I hate looking at it.

The GoF patterns break down into three major categories: Behavioral, Structural, and Creational patterns. The Creational category, as the name implies, is all about code which can be used to create instances of objects, like that Factory class above. It is a useful collection of patterns for writing reusable, testable, and modular code. Most Dependency Injection/Inversion of Control frameworks are really just applied creational patterns.

It also means that some people decide that "directly invoking constructors is considered harmful". And that's why Emiko found this Java code:

/**
 * Creates an empty {@link MessageCType}.
 * @return {@link MessageCType}
 */
public static MessageCType createMessage() {
    MessageCType retVal = new MessageCType();
    return retVal;
}

This is just a representative method; the code was littered with piles of these. It'd be potentially forgivable if they also used a fluent interface with method chaining to initialize the object, buuut… they don't. Literally, this ends up getting used like:

MessageCType msg = MessageCType.createMessage();
msg.type = someMessageType;
msg.body = …
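For the record, the fluent version that might have justified the factory is only a few lines away. The sketch below is hypothetical, not code from the submission; withType and withBody are invented names, and String field types are assumed:

// Hypothetical fluent additions to MessageCType (invented names):
public class MessageCType {
    public String type;
    public String body;

    public static MessageCType createMessage() {
        return new MessageCType();
    }

    public MessageCType withType(String type) {
        this.type = type;
        return this;   // returning this is what enables chaining
    }

    public MessageCType withBody(String body) {
        this.body = body;
        return this;
    }
}

// ...which would let construction read as one expression:
MessageCType msg = MessageCType.createMessage()
        .withType(someMessageType)
        .withBody(someMessageBody);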

Emiko sums up:

At work, we apparently pride ourselves in using the most fancyful patterns available.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityGoogle Mending Another Crack in Widevine

For the second time in as many years, Google is working to fix a weakness in its Widevine digital rights management (DRM) technology used by online streaming sites like Disney, Hulu and Netflix to prevent their content from being pirated.

The latest cracks in Widevine concern the encryption technology’s protection for L3 streams, which is used only for low-quality video and audio streams. Google says the weakness does not affect L1 and L2 streams, which encompass more high-definition video and audio content.

“As code protection is always evolving to address new threats, we are currently working to update our Widevine software DRM with the latest advancements in code protection to address this issue,” Google said in a written statement provided to KrebsOnSecurity.

In January 2019, researcher David Buchanan tweeted about the L3 weakness he found, but didn’t release any proof-of-concept code that others could use to exploit it before Google fixed the problem.

This latest Widevine hack, however, has been made into an extension for Microsoft Windows users of the Google Chrome web browser and posted for download on the software development platform Github.

Tomer Hadad, the researcher who developed the browser extension, said his proof-of-concept code “was done to further show that code obfuscation, anti-debugging tricks, whitebox cryptography algorithms and other methods of security-by-obscurity will eventually be defeated anyway, and are, in a way, pointless.”

Google called the weakness a circumvention that would be fixed. But Hadad took issue with that characterization.

“It’s not a bug but an inevitable flaw because of the use of software, which is also why L3 does not offer the best quality,” Hadad wrote in an email. “L3 is usually used on desktops because of the lack of hardware trusted zones.”

Media companies that stream video online using Widevine can select different levels of protection for delivering their content, depending on the capabilities of the device requesting access. Most modern smartphones and mobile devices support much more robust L1 and L2 Widevine protections that do not rely on L3.

Further reading: Breaking Content Protection on Streaming Websites

LongNowHow Long-term Thinking Can Help Earth Now

Inside Finland’s Olkiluoto nuclear waste repository, 1,500 feet underground. Photo Credit: Peter Guenzel

With half-lives ranging from 30 to 24,000 years, or even 16 million years, the radioactive elements in nuclear waste defy our typical operating time frames. The questions around nuclear waste storage — how to keep it safe from those who might wish to weaponize it, where to store it, by what methods, for how long, and with what markings, if any, to warn humans who might stumble upon it thousands of years in the future — require long-term thinking.

These questions brought the anthropologist Vincent Ialenti to Finland’s Olkiluoto nuclear waste repository in 02012, where he began more than two years of field work with a team of experts seeking to answer them.

Ialenti’s goal was to examine how these experts conceived of the future:

What sort of scientific ethos, I wondered, do Safety Case experts adopt in their daily dealings with seemingly unimaginable spans of time? Has their work affected how they understand the world and humanity’s place within it? If so, how? If not, why not?

Ialenti has crystallized his learnings in his new book, Deep Time Reckoning: How Future Thinking Can Help Earth Now. It is a book of extraordinary insight and erudition, thoughtful and lucid throughout its surprisingly light 180 pages.

Long Now’s Director of Development, Nicholas Paul Brysiewicz, recently posed a few questions to Ialenti about the genesis of his book; the “deflation of expertise” in North America, Western Europe and beyond; conceiving long-term thinking as a kind of exercise for the mind; and more.


Vincent, thanks for sending over a copy of your new book. And thanks for making time to unpack some of those ideas with me here for The Long Now Foundation and our members around the globe.

I’m curious: what drew you to study long-term thinking in the wild?

Vincent Ialenti: In 02008, I was a Masters student in “Law, Anthropology, and Society” at the London School of Economics. I had a growing interest in long-term engineering projects like the Svalbard Global Seed Vault and the Clock of the Long Now. I decided to write my thesis (now published here) on the currently-defunct U.S. Yucca Mountain nuclear waste repository’s million-year federal licensing procedure.

The licensing procedure stretched legal adjudication’s foundational rubric of “fact, rule, and judge” into distant futures. The U.S. Department of Energy developed quantitative models of million-year futures as facts. The U.S. Environmental Protection Agency defined multi-millennial radiological dose-limit standards as rules. The U.S. Nuclear Regulatory Commission subsumed the DOE’s facts to the EPA’s rules to produce a judgment on whether the repository could accept waste. In media commentaries, the Yucca Mountain project was enchanted with aesthetics of high modernist innovation and sci-fi futurology. Yet its decision-making procedure was grounded on an ancient legal procedural bedrock — rubrics formulated as far back as the Roman Empire.

I grew fascinated by this temporal mashup. To delve deeper, though, I knew I’d have to conduct long-term fieldwork. I enrolled in a PhD program at Cornell University. With the help of a U.S. National Science Foundation grant, I spent 32 months among Finland’s Olkiluoto nuclear waste repository Safety Case experts from 02012–02014. These experts developed models of geological, hydrological, and ecological events that could occur in Western Finland over the coming tens of thousands — or even hundreds of thousands — of years. They reckoned with distant future glaciations, climate changes, earthquakes, floods, human and animal population changes, and more. I ended up recording 121 interviews. Those became the basis of my recent book.

Early on in the book you introduce and discuss “the deflation of expertise.” Could you tell us what you mean by this phrase and how you see it shaping long-termism?

We’re witnessing a troubling institutional erosion of expert authority in North America, Western Europe, and beyond. Vaccine science, stem cell research, polling data, climate change models, critical social theories, cell phone radiation studies, pandemic disease advisories, and human evolution research are routinely attacked in cable news free-for-alls and social media echo-chambers. Political power is increasingly gained through charismatic, populist performances that energize crowds by mocking expert virtues of cautious speech and detail-driven analysis. Expert voices are drowned out by Twitter mobs, dulled by corporate-bureaucratic “knowledge management” red tape, exhausted by universities’ adjunctification trends, warped by contingent “gig economy” research funding contracts, and rushed by publish-or-perish productivity pressures.

As enthusiasm for liberal arts education and scientific inquiry declines, societies enter into a state of futurelessness: they develop a manic fixation on the present moment that incessantly shoots down proposals for envisioning better worlds. My book argues that anthropological engagement with Finland’s nuclear waste experts can help us (a) widen our thinking’s time horizons during a moment of ecological crisis some call the Anthropocene and (b) resist the deflation of expertise by opening our ears to a uniquely longsighted form of expert inquiry.

I was heartened each time you pointed to the importance of multidisciplinarity, discursive diversity, and strategic redundancy for doing things at long timescales. Our experience has borne this out, as well. How did you arrive at an emphasis on these themes? What are some of the pitfalls of homogeneity? What about generational homogeneity?

The Safety Case project convened several teams of experts — each with different disciplinary backgrounds — to pursue what my informants called “multiple lines of reasoning.”

Some were systems analysts developing models of how different kinds of radionuclides could “migrate” through future underground water channels. Others were engineers reporting on the mechanical strength tests conducted on Finland’s copper nuclear waste canisters. Some were geologists making analogies that compared the Olkiluoto’s far future Ice Age conditions to those of a glacial ice sheet found in Greenland today. Others studied “archaeological analogues.” This meant comparing future repository components to ancient Roman iron nails found in Scotland, to a bronze cannon from the 17th century Swedish warship Kronan, and to a 2,100-year-old cadaver preserved in clay in China. Still others wrote prose scenarios with titles like The Evolution of the Repository System Beyond a Million Years in the Future.

The Safety Case encompassed multiple disciplinary sensibilities to ensure that one potentially inaccurate assumption doesn’t invalidate the wider range of forecasts. For me, this holistic ethos was a refreshing counterpoint to the reductionist, homogeneous scientism that led us to many of today’s ecological crises.

Certainly, it is important to hedge against generational homogeneity in thought too. Finland’s nuclear waste experts planned to release updated versions of the Safety Case throughout the coming century. They called these successive versions “iterations.” The first major report was 01985’s “Safety Analysis of Disposal of Spent Nuclear Fuel: Normal and Disturbed Evolution Scenarios.” The iteration I studied was the 02012 Construction License Safety Case. The next iteration will be the Operating License Safety Case, scheduled for submission to Finland’s nuclear regulator STUK in late 02021. Each iteration is, ideally, supposed to be more robust than the one before it. The Safety Case is an intergenerational work-in-progress.

Throughout the work you describe long-term thinking as an imaginative exercise — a “calisthenics for the mind.” This suggestion floored me when I read it. I just couldn’t agree more. And earlier this year, prior to reading your book, I even published an essay arguing for that same interpretation. What do you think makes exercise such an apt metaphor for understanding this phenomenon we’re discussing?

We all need to integrate a more vivid awareness of deep time into our everyday habits, actions, and intuitions. We need to override the shallow time discipline into which we’ve been enculturated. This requires self-discipline and ongoing practice. I believe putting aside time to do long-termist intellectual workouts or deep time mental exercise routines can help get us there.

Here’s an example. From 02017 to 02020, I was a researcher at George Washington University in Washington DC. In 01922, fossilized ~100,000-year-old bald cypress trees were found just twenty feet below the nearby city surface. Back then, America’s capital city was a literal swamp. Today, four bald cypresses, planted in the mid-1800s, grow in Lafayette Square right near the White House. I approached them as intellectual exercise equipment for stretching my mind across time. The cypresses provided me with tree imageries I could draw upon when re-imagining the U.S. capital as a prehistoric swamp.

Here’s another example. I sometimes headed west to hike in West Virginia. Hundreds of millions of years ago, Appalachia was home to much taller mountains. Some say their elevations rivaled those of today’s Rockies, Alps, or Himalayas. I tried to discipline my imagination, while hiking, into reimagining the hills in a wider temporal frame. I drew upon the images I had in my head of what taller mountain ranges look like today. This helped me stretch the momentary “now” of my hike by endowing it with a deeper history and future.

Anyone can do these long-termist exercises. A person in Bangladesh, New York, Rio de Janeiro, Osaka, or Shanghai could, for instance, try imagining their area submerged by, or fighting off, future sea level rise. But what inspired me to integrate these exercises into my own life?

Well, it was — again — my fieldwork among Finland’s nuclear waste experts. I modeled these exercises on the Safety Case’s natural and archaeological analogue studies. The key was to (a) make an analogical connection between one’s immediate surroundings and a dramatically long-term future or past and then (b) try to envision it as accurately as possible by drawing, analogically, from scientific information and imageries we already have in our heads of real-world locales out there today.

What was the most surprising thing you discovered while working on this book?

Early on, I decided to end each chapter with five or six takeaway lessons in long-termism. I call these lessons “reckonings.” As I wrote, however, I was surprised to discover that, even as I engaged with very alien far future Finlands, most of the “reckonings” I collected ended up pertaining to some of the most ordinary features of everyday experience. These include the power of analogy (Chapter 1), the power of pattern-making (Chapter 2), the power of shifting and re-shifting perspectives (Chapter 3), and the problem of human mortality (Chapter 4). I found that these familiarities can be useful. Their sheer relatability can serve as a launching-off point for the rest of us as we pursue long-termist learning. The analogical exercises I mentioned previously are a good example of this.

It’s been so exciting for us to see this next generation of long-term thinkers publishing excellent new books on the topic — from your penetrating work in Deep Time Reckoning to Long Now Seminar Speaker Bina Venkataraman’s encompassing work in The Optimist’s Telescope to Long Now Seminar Speaker (and author of the foreword to your book) Marcia Bjornerud’s geological work in Timefulness to Long Now Research Fellow Roman Krznaric’s philosophical work in The Good Ancestor. What role do you think books play in helping the world think long-term?

Those are important books! I’ll add a few more: David Farrier’s Footprints: In Search of Future Fossils, Richard Irvine’s An Anthropology of Deep Time, and Hugh Raffles’ Book of Unconformities: Speculations on Lost Time. These pose crucial questions about time, geology, and human (and non-human) imagination. An argument could be made that there’s sort of a diffuse, de-centralized, interdisciplinary “deep time literacy” movement coalescing (mostly on its own!).

This is urgent work. Earlier this year, the Trump Administration advanced a proposal to reform key National Environmental Policy Act regulations to read: “effects should not be considered significant if they are remote in time, geographically remote, or the product of a lengthy causal chain.” This is out of sync with our mission to become more mindful of far future worlds. I won’t speak for the others, but my hope is that these books can inch us closer to escaping the virulent short-termism that our current ecological woes, deflations of expertise, and political crises exploit and reinforce.

In closing, I’ll mention that, thanks to you, I now know there is no direct way for me to say “Someday we will have an in-person discussion about all this at The Interval” in Finnish because Finnish does not have a future tense. And yet here we are, discussing Finnish expertise at thinking about the future. What’s up with that?

Hah! Yes, that’s right. There’s no future tense in Finnish. Finns tend to use the present tense instead. There is something sensible about this: all visions of the future are, indeed, tethered to the present moment. Marcia Bjornerud cleverly linked this linguistic quirk to my book’s broader arguments:

“There is some irony in studying Finns as exemplars of future thinkers: as Ialenti points out, the Finnish language has no future tense. Instead, either present tense or conditional mode verbs are used, which seems a rather oblique way of speaking of times to come. But this linguistic treatment of the future may reflect a deep wisdom in Finnish culture that informs the philosophy of the Safety Case. Making declarative pronouncements about the future is imprudent; the best that can be done is to envisage a spectrum of possible futures and develop a sense for how likely each is to unfold.”

From all of us at and around The Long Now Foundation: thank you for your time and expertise, Vincent.

And thank you! I’ll get back to reading your essay on long-termist askēsis. Keep up the great work.


Learn More

  • Read Long Now Editor Ahmed Kabil’s 02017 feature on the nuclear waste storage problem.
  • Read Vincent Ialenti’s 02016 essay for Long Now, “Craters & Mudrock: Tools for Imagining Distant Finlands.”
  • Watch Ralph Cavanagh and Peter Schwartz’s 02006 Long Now Seminar on Nuclear Power, Climate Change, and the Next 10,000 Years.

Worse Than FailureSerial Problems

If we presume there is a Hell for IT folks, we can only assume the eternal torment involves configuring or interfacing with printers. Perhaps Anabel K is already in Hell, because that describes her job.

Anabel's company sells point-of-sale tools, including receipt printers. Their customers are not technical, so a third-party installer handles configuring those printers in the field. To make the process easy and repeatable, Anabel maintains an app which automates the configuration process for the third party.

The basic flow is like this:

The printer gets shipped to the installer. At their facility, someone from the installer opens the box, connects the printer to power, and then to the network. They then run the app Anabel maintains, which connects to the printer's on-board web server and POSTs a few form-data requests. Assuming everything works, the app reports success. The printer goes back in the box and is ready to get installed at the client site at some point in the near future.

The whole flow was relatively painless until the printer manufacturer made a firmware change. Instead of the username/password being admin/admin, it was now admin/serial-number. No one was interested in having the installer techs key in the long serial number, but digging around in the documentation, Anabel found a simple fix.

In addition to the on-board web-server, there was also a TCP port running. If you connected to the port and sent the correct command, it would reply with the serial number.
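In code, that lookup might look something like the minimal Node sketch below (a guess at the shape of the fix, not Anabel's actual app; the port number and query command are invented stand-ins for values the story doesn't give):

import * as net from "net";

// Hypothetical values: the real port and query command came from the
// vendor documentation and are not named in the story.
const SERIAL_PORT = 9100;
const SERIAL_COMMAND = "GET SERIAL\r\n";

function fetchSerial(host: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // Connect to the printer's TCP service and send the query.
    const sock = net.createConnection({ host, port: SERIAL_PORT }, () => {
      sock.write(SERIAL_COMMAND);
    });
    // Treat the first reply as the serial number, then hang up.
    sock.once("data", (buf) => {
      resolve(buf.toString().trim());
      sock.end();
    });
    sock.once("error", reject);
  });
}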

Anabel made the appropriate changes. Now, her app would try and authenticate as admin/admin, and if it failed, it'd open a TCP connection, query the serial number, and then try again. Anabel grabbed a small pile of printers from storage, a mix of old and new firmware, loaded them up with receipt paper, and ran the full test suite to make sure everything still worked.

Within minutes, they were all happily churning out test prints. Anabel released her changes to production, and off it went to the installer technicians.

A few weeks later, the techs called in through support, in an absolute panic. "The configuration app has stopped working. It doesn't work on any of the printers we received in the past few weeks."

There was a limited supply of the old version of printers, and dozens got shipped out every day. If this didn't get fixed ASAP, they would very quickly find themselves with a pile of printers the installers couldn't configure. Management got on conference calls, roped Anabel into the middle of long email chains, and they all agreed: there must be something wrong with Anabel's changes.

It wasn't unreasonable to suspect, but Anabel had tested it thoroughly. Heck, she had a few of the new printers on her desk and couldn't replicate the failure. So she got on a call with a tech and started from square one. Is it plugged in? Is it plugged into the network? Are there any restrictions on the network, or on the machine running the app, that might prevent access to non-standard ports?

Over the next few days, while the stock of old printers kept dwindling, this escalated up to sending a router with a known configuration out to the technicians. It was just to ensure that there were no hidden firewalls or network policies preventing access to the TCP port. Even still, on its own dedicated network, nothing worked.

"Okay, let's double check the printer's network config," Anabel said on the call. "When it boots up, it should print out its network config- IP, subnet, gateway, DHCP config, all of that. What does that say?"

The tech replied, "Oh. We don't have paper in it. One sec." While rooting around in the box, they added, "We don't normally bother. It's just one more thing to unbox before putting it right back in the box."

The printer booted up, faithfully printed out its network config, which was what everyone expected. "Okay, I guess… try running the tool again?" Anabel suggested.

And it worked.

Anabel turned to one of the test printers she had been using, and pulled out the roll of receipt paper. She ran the configuration tool… and it failed.

The TCP service only worked when there was paper in the printer. Anabel reported it as a bug to the printer vendor, but if and when that gets fixed is anyone's guess. The techs didn't want to have to fully unbox the printers, including the paper, for every install, but that was an easy fix: with each shipment of printers Anabel's company just started shipping a few packs of receipt paper for the techs. They can just crack one open and use it to configure a bunch of printers before it runs out.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram IMSI-Catchers from Canada

Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[…]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about its signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.

Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 20)

Here’s part twenty of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Charles StrossIntroducing a new guest blogger: Sheila Williams

It's been ages since I last hosted a guest blogger here, but today I'd like to introduce you to Sheila Williams, who will be talking about her work next week.

Normally my guest bloggers are other SF/F authors, but Sheila is something different: she's the multiple Hugo-Award winning editor of Asimov's Science Fiction magazine. She is also the winner of the 2017 Kate Wilhelm Solstice Award for distinguished contributions to the science fiction and fantasy community.

Sheila started at Asimov's in June 1982 as the editorial assistant. Over the years, she was promoted to a number of different editorial positions at the magazine and she also served as the executive editor of Analog from 1998 until 2004. With Rick Wilber, she is also the co-founder of the Dell Magazines Award for Undergraduate Excellence in Science Fiction and Fantasy. This annual award has been bestowed on the best short story by an undergraduate student at the International Conference on the Fantastic since 1994.

In addition, Sheila is the editor or co-editor of twenty-six anthologies. Her newest anthology, Entanglements: Tomorrow's Lovers, Families, and Friends, is the 2020 volume of the Twelve Tomorrows series. The book is just out from MIT Press.

,

Sam VargheseAustralian sports writer’s predictions prove to be those of a false prophet

After the first match in the Bledisloe Cup series ended in a 16-all draw, Australian sports writers were on a giddy high, predicting that the dominance of the All Blacks had more or less ended and the big boys had been caught with their pants down.

Well before this hype began, at the end of the game, there was a gesture by the Australian team which showed that its mental state was still very fragile. When the final whistle blew, the ball was still live, so the referee let play proceed.

A thrilling nine minutes ensued, with first Australia, and then New Zealand, threatening to score. Strangely, though, neither team thought of attempting a drop-goal to win the game. After one of the New Zealand forays, the Australians regained the ball and fly-half James O’Connor kicked it into touch, ending the game.

Now O’Connor could have continued play, by running the ball from his own end. The All Blacks never took the option of ending the game when they got the ball during that nine-minute stretch. O’Connor’s gesture gave the game away: for Australia, a draw was as good as a win. It indicated the extent to which he ranked his team against the All Blacks, despite the heroics they had showed.

With that kind of mental attitude, it was only to be expected that Australia would lose at Eden Park the following week. As they did, by a 27-7 margin, at a venue where they have not beaten New Zealand since 1986.

During the week, there were several triumphal essays in the Australian press; one, by Jamie Pandaram, a senior sports writer at The Daily Telegraph, gives an insight into the type of shallow understanding that sports writers on this side of the Tasman have when it comes to rugby union, and the nationalistic fervour that surrounds sport (as it does everything else).

The headline gave an indication of the bombast that was to follow, reading: “Why All Blacks are finally vulnerable.” It started off saying that while facing a beaten All Blacks team was a dangerous exercise, “there’s a different feel about 2020”.

And he cited concerns about the coaching, selection, tactics and loss of seniority in All Black ranks as factors that had contributed to what he called a decline in their ranks.

He cited the absence of four players – Kieran Read, Brodie Retallick, Sonny Bill Williams and Ryan Crotty – as making the team unable to produce those moments of inspiration for which they have become famous. And he claimed that Ian Foster, who took over as coach from Steve Hansen after the 2019 World Cup, was not the best coach among those who could be chosen.

As evidence that stress was allegedly mounting on New Zealand, Pandaram cited the complaints made by the assistant coach John Plumtree about illegal tactics employed by Australia in taking out players.

The first game was unusual in that it was not refereed by a neutral referee – the first time this has happened in a long time and mainly due to the travel issues caused by the coronavirus pandemic.

The referee, New Zealand’s Paul Williams, had to tread a difficult path; he had to ensure that his rulings could not be criticised as being partial to his own country and at the same time he had to police Australia’s thuggery properly without accusations of bias. Having watched the match twice, I can say with confidence that Williams only erred once, in not calling Rieko Ioane for stepping on the sideline boundary when he began what ended up as a try. This was the fault of Australian Angus Gardner, who was the linesman on the side concerned.

“It’s a common Kiwi play; turn the referees’ and public’s attention to perceived cheating by their opposition – we’ve seen them previously call out the Wallabies’ scrumming and breakdown play – to take the spotlight away from their own,” wrote Pandaram, completely forgetting that this was exactly what former Wallabies coach Michael Cheika did after every game.

He opined that Foster would be under “intense scrutiny” during the second game as many people in New Zealand felt that the job of chief coach should have gone to Scott Robertson instead, the latter being one who has taken the Crusaders to four Super Rugby titles in his first four years as their coach.

And Pandaram went on and on, outlining what he perceived to be issues with the team, about playing this player and that in this position or that.

I haven’t seen anything he wrote after the second game when Australia was competitive for just one half, and unable to score in the second half. All those perceived “problems” he pointed out were gone.

One crucial factor that he forgot was that this was the first game for both teams this year. Normally, both Australia and New Zealand play two or three Tests before the games against each other, South Africa and Argentina, which make up the Bledisloe Cup and Rugby Championship each year, begin. Both teams were quite rusty.

In 2015, New Zealand lost more talent after the World Cup than they did in 2019; that time Daniel Carter, Richie McCaw, Ma’a Nonu, Conrad Smith, Tony Woodcock and Keven Mealamu all retired from international rugby. But the side picked up and carried on.

A lot of Pandaram’s moaning comes out of nationalism; Australians are the most one-sided sports writers I have seen. When one is shown up like this, they tend to lie quiet until the public forgets. As Pandaram is doing now.

,

Cryptogram New Report on Police Decryption Capabilities

There is a new report on police decryption capabilities: specifically, mobile device forensic tools (MDFTs). Short summary: it’s not just the FBI that can do it.

This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.

Lots of details in the report. And in this news article:

At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does.

[…]

The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan’s Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

Worse Than FailureError'd: Errors by the Pound

"I can understand selling swiss cheese by the slice, but copier paper by the pound?" Dave P. wrote.

 

Amanda R. writes, "Ok, that's fine, but can the 1% correctly spell 'people'?"

 

"In this form, language is quite variable as is when you are able to cancel your reservation ...which are in fact, actual variables," wrote Jean-Pierre M.

 

Barry M. wrote, "Hey, Royal Caribbean, you know what? I'll take the win-win: total control AND save $7!"

 

"Oh wow! The secret on how to write good articles is out!" writes Barry L.

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityThe Now-Defunct Firms Behind 8chan, QAnon

Some of the world’s largest Internet firms have taken steps to crack down on disinformation spread by QAnon conspiracy theorists and the hate-filled anonymous message board 8chan. But according to a California-based security researcher, those seeking to de-platform these communities may have overlooked a simple legal solution to that end: Both the Nevada-based web hosting company owned by 8chan’s current figurehead and the California firm that provides its sole connection to the Internet are defunct businesses in the eyes of their respective state regulators.

In practical terms, what this means is that the legal contracts which granted these companies temporary control over large swaths of Internet address space are now null and void, and American Internet regulators would be well within their rights to cancel those contracts and reclaim the space.

The IP address ranges in the upper-left portion of this map of QAnon and 8kun-related sites — some 21,000 IP addresses beginning in “206.” and “207.” — are assigned to N.T. Technology Inc. Image source: twitter.com/Redrum_of_Crows

That idea was floated by Ron Guilmette, a longtime anti-spam crusader who recently turned his attention to disrupting the online presence of QAnon and 8chan (recently renamed “8kun”).

On Sunday, 8chan and a host of other sites related to QAnon conspiracy theories were briefly knocked offline after Guilmette called 8chan’s anti-DDoS provider and convinced them to stop protecting the site from crippling online attacks (8Chan is now protected by an anti-DDoS provider in St. Petersburg, Russia).

The public face of 8chan is Jim Watkins, a pig farmer in the Philippines who many experts believe is also the person behind the shadowy persona of “Q” at the center of the conspiracy theory movement.

Watkins owns and operates a Reno, Nev.-based hosting firm called N.T. Technology Inc. That company has a legal contract with the American Registry for Internet Numbers (ARIN), the non-profit which administers IP addresses for entities based in North America.

ARIN’s contract with N.T. Technology gives the latter the right to use more than 21,500 IP addresses. But as Guilmette discovered recently, N.T. Technology is listed in Nevada Secretary of State records as under an “administrative hold,” which according to Nevada statute is a “terminated” status indicator meaning the company no longer has the right to transact business in the state.

N.T. Technology’s listing in the Nevada Secretary of State records.

The same is true for Centauri Communications, a Fremont, Calif.-based Internet Service Provider that serves as N.T. Technology’s colocation provider and sole connection to the larger Internet. Centauri was granted more than 4,000 IPv4 addresses by ARIN more than a decade ago.

According to the California Secretary of State, Centauri’s status as a business in the state is “suspended.” It appears that Centauri hasn’t filed any business records with the state since 2009, and the state subsequently suspended the company’s license to do business in Aug. 2012. Separately, the California State Franchise Tax Board (FTB) suspended this company as of April 1, 2014.

Centauri Communications’ listing with the California Secretary of State’s office.

Neither Centauri Communications nor N.T. Technology responded to repeated requests for comment.

KrebsOnSecurity shared Guilmette’s findings with ARIN, which said it would investigate the matter.

“ARIN has received a fraud report from you and is evaluating it,” a spokesperson for ARIN said. “We do not comment on such reports publicly.”

Guilmette said apart from reclaiming the Internet address space from Centauri and NT Technology, ARIN could simply remove each company’s listings from the global WHOIS routing records. Such a move, he said, would likely result in most ISPs blocking access to those IP addresses.

“If ARIN were to remove these records from the WHOIS database, it would serve to de-legitimize the use of these IP blocks by the parties involved,” he said. “And globally, it would make it more difficult for the parties to find people willing to route packets to and from those blocks of addresses.”

Kevin RuddHope Radio: Murdoch Royal Commission

RADIO AUDIO
HOPE 103.2 SYDNEY
DRIVE WITH RAY KINGTON
22 OCTOBER 2020

 

The post Hope Radio: Murdoch Royal Commission appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Query Elegance

It’s generally hard to do worse than a SQL injection vulnerability. Data access is fundamental to pretty much every application, and every programming environment has some set of rich tools that make it easy to write powerful, flexible queries without leaving yourself open to SQL injection attacks.

And yet, and yet, they’re practically a standard feature of bad code. I suppose that’s what makes it bad code.

Gidget W inherited a PHP application which, unsurprisingly, is rife with SQL injection vulnerabilities. But, unusually, it doesn’t leverage string concatenation to get there. Gidget’s predecessor threw a little twist on it.

$fields = "t1.id, t1.name, UNIX_TIMESTAMP(t1.date) as stamp, ";
$fields .= "t2.idT1, t2.otherDate, t2.otherId";
$join = "table1 as t1 join table2 as t2 on t1.id=t2.idT1";
$where = "where t1.lastModified > $val && t2.lastModified = '$val2'";
$query = "select $field from $join $where";

This pattern appears all through the code. Because it leverages string interpolation, the same core structure shows up again and again, almost copy/pasted, with one line repeated each time.

$query = "select $field from $join $where";

What goes into $field and $join and $where may change each time, but "select $field from $join $where" is eternal, unchanging, and omnipresent. Every database query is constructed this way.

It’s downright elegant, in its badness. It simultaneously shows an understanding of how to break up a pattern into reusable code, but also no understanding of why all of this is a bad idea.

But we shouldn’t let that distract us from the little nuances of the specific query that highlight more WTFs.

t1.lastModified > $val && t2.lastModified = '$val2'

lastModified in both of these tables is a date, as one would expect. Which raises the question: why does one of these conditions get quotes and why does the other one not? It implies that $val probably has the quotes baked in?

Gidget also asks: “Why is the WHERE keyword part of the $where variable instead of inline in the query, but that isn’t the case for SELECT or FROM?”

That, at least, I can answer. Not every query has a filter condition. Since you can’t have WHERE followed by nothing, just make the $where variable contain that.

See? Elegant in its badness.
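For the record, the fix keeps the reusable parts; only the values change lanes. Here’s a minimal sketch using PDO prepared statements, assuming a MySQL connection whose host, database, and credentials are placeholders; the query fragments come straight from the code above.

<?php
// Hypothetical connection -- host, database, and credentials are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');

// Same reusable fragments, but the user-supplied values ($val and $val2,
// whatever request inputs the original used) become bound parameters.
$field = "t1.id, t1.name, UNIX_TIMESTAMP(t1.date) as stamp, "
       . "t2.idT1, t2.otherDate, t2.otherId";
$join  = "table1 as t1 join table2 as t2 on t1.id = t2.idT1";
$where = "where t1.lastModified > :val and t2.lastModified = :val2";

$stmt = $pdo->prepare("select $field from $join $where");
// PDO handles quoting for bound values, so nobody has to guess which
// condition needs quotes and which doesn't.
$stmt->execute([':val' => $val, ':val2' => $val2]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

Interpolating $field, $join, and $where into prepare() stays safe only because those strings contain no user input; the values themselves travel exclusively as bound parameters.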

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

LongNowThe Data of Long-lived Institutions

The following transcript has been edited for length and clarity. 

I want to lead you through some of the research that I’ve been doing on a meta-level around long-lived institutions, as well as some observations of the ways various systems have lasted for hundreds of thousands of years. 

Long Now as a Long-lived Institution

This is one of the early projects I worked with Stewart Brand on at Long Now. We were trying to define our problem space and explore the ways we think on different timescales. Generally, companies are working in the “nowadays,” although that’s been shortening to some extent, with more quarterly thinking than decade-level thinking.

It was Peter Schwartz who suggested this 10,000 year timeframe. Danny Hillis’ original idea for what would ultimately become The 10,000 Year Clock was that it would be a Millennium Clock: it would tick once a year, bong once a century, and the cuckoo would come out once a millennium. He didn’t really have an end date. 

We use the 10,000 year time frame to orient our efforts at Long Now because global civilization arose when the last glacial period ended 10,000 years ago. It was only then, around 8,000 BC, that we had the emergence of agriculture and the first cities. If we can look back that far, we should be able to look forward that far. Thinking about ourselves as in the middle of a 20,000 year story is very different than thinking about ourselves as at the end of a 10,000 year story. 

This pace layers diagram is the very first thing I worked on at Long Now. The notion of pace layers came out of a discussion between Stewart and Long Now co-founder Brian Eno. They were trying to tease apart these layers of human time. 

Institutions can be mapped across the pace layers diagram as well. Take Apple Computer, for example. They’re coming out with new iPhones every six months, which is the fashion layer. The commerce layer is Apple selling these devices. The infrastructure layer is the cell phone networks and chip fabs that it’s all built on. Then there’s the governance layer—and note that it is governance, not government: they’re mostly working with governments, but they also have to work with general governing systems. Some of these companies are hitting walls against different types of governments who have different ideas of privacy and different ideas of commercialization, and they’re now having to shape their companies around that. And then obviously, culture is moving slower underneath all of this, but Apple is starting to affect culture. And then there’s the last pace layer, nature, moving the slowest. At some point, Apple is going to have to come to terms with the level of environmental damage and problems that are happening on the nature pace layer if it is going to be a company that lasts for hundreds or a thousand years. So we could imagine any large institution mapped across this, and I think it’s a useful tool for that. 

Also very early on in Long Now’s history, in 01997, Kees van der Heijden, who wrote the book on scenario planning, came to a charrette that Long Now organized to come up with business ideas for our organization. He formulated a business plan that was strangely prophetic:

The squares are areas where we have core competencies. The dotted lines indicate temporary competencies, like the founders. The other items indicate all the things we hadn’t really gotten to yet or figured out: we didn’t have a way of funding ourselves; we didn’t have a membership program; we didn’t have a large community of donors; we didn’t have an endowment; and we didn’t have people willing to give their estates to us. We still don’t have an endowment or people willing to give us their estates, but we’ve achieved the rest. And now that we’ve been around for 22 years, we can imagine how those two items are going to start to happen next.

I also want to point out the cyclical nature of this diagram. I haven’t found any system in the world that is linear and has lasted on these timescales. You need to have a cyclical business model, not a linear business model.

The Longest-Lived Institutions in the World

I’ve been collecting data on all of the longest-lived institutions in the world. As you look at these, there are a few things that stick out. Notice: brewery, brewery, winery, hotel, bar, pub, right? And also notice that a lot of them are in Japan. There’s been a roughly continuous system of government there for over 2,000 years (the Imperial Family) that’s held together enough to enable something like the Royal Confectioner of Japan to be one of the oldest companies in the world. But there are also temple builders and things like that.

In the West, most of the companies that have survived for a very long time are basically service companies. It’s a lot easier to reinvent yourself as a service-oriented company than it is as a commodity company when that particular commodity goes out of use.

Colgate Palmolive (founded 01806) and DuPont (founded 01802) are commodity companies that are broad enough to change the kinds of products they sell over time. I’m interested in learning more about all these companies, as they probably all have some kind of special sauce in their stories of longevity. 

Something else that came out of this research is the fact that the length of companies’ lives is shrinking by almost one year per year. In 01950, the average company on the Fortune 500 had been around for 61 years. Now it’s 18 years. Companies’ lives are getting shorter.

As I mentioned, most of the oldest companies in the world are in Japan. In a survey of 5,500 companies over 200 years old, 3,100 are based in Japan. The rest are in some of the older countries in Europe. 

But—and this was a fact I found curious, and one that speaks to the cyclical nature of things—90% of the companies that are over 200 years old have 300 employees or fewer; they’re not mega companies. 

In surveying 1,000 companies over 300 years old, you find a huge amount of disparity concerning which industries they’re a part of. But there were a few big groupings that I found interesting. 23% are in the alcohol industry, and this doesn’t even include pubs and restaurants and hotels that may sell alcohol. 

Patrick McGovern, a biomolecular archeologist who I talked to when we were building The Interval, has done DNA analysis on vines, which are a clonal species. From that analysis, we know that people started cultivating wine around 8,000 BC. McGovern points out that it’s not at all clear whether civilization stopped being nomadic in order to ferment things, or stopped being nomadic because it had started fermenting things. It’s an intriguing correlation, and notable that such an overwhelmingly large segment of the oldest companies in the world deal in alcohol.

Long-term Thinking is Not Inherently Good

A quick word about values: long-term thinking, and aspiring to be a long-term institution, is not inherently good. At Long Now, we’ve always emphasized the importance of long-term thinking without trying to ascribe a lot of values to it. But I don’t think that’s intellectually honest. We have to ask ourselves what we’re trying to perpetuate. We have to step back far enough and ensure that the kinds of things we’re perpetuating are generally good for society. 

How to Build Things That Last

One way that things have lasted for a really long time is to just take a really long time to build them. Cathedrals are a famous example of this. The most dangerous time for anything that’s lasting is really just one generation after it was built. It’s no longer a new, cool thing; it’s the thing that your parents did, and it’s not cool anymore. And it’s not until another couple generations later where everyone values it, largely because it is old and venerable and has become a kind of cultural icon. 

And we already see this with this cathedral: the Sagrada Familia in Barcelona.  It’s still under construction, 125 years into its build process, and it’s already a UNESCO World Heritage Site. 

The other way things last for a really long time, and this is the Japanese model, is that they’re just extremely well-maintained. 

At about 1,400 years old, these are the two oldest continuously standing wooden structures in the world. And they’ve replaced a lot of parts of them. They keep the roofs on them, and even in a totally humid and rainy environment, the central timbers of these buildings have stayed true. Interestingly, this temple was also the place where, over a thousand years ago, a Japanese princess had a vision that she needed to send a particular prayer out to the world to make sure that it survived into the future. And so she had, literally, a million wooden pagodas made with the prayer put inside them, and distributed these little pagodas as far and wide as she could. You can still buy these on eBay right now. It’s an early example of the philosophy of “Lots of Copies Keep Stuff Safe” (LOCKSS). 

Another Japanese example that uses a totally different strategy is this Shinto shrine.

Shinto is an animist religion whose adherents believe that spirits are in everything, unlike Buddhism, which came to Japan later. In the Shinto belief system, temples have this renewing technology, if you will, where they’re rebuilt on an adjacent site at set intervals. This one, the most famous in Japan, is the Ise Shrine, which is rebuilt every 20 years. A few years ago, I was fortunate enough to attend the rebuilding ceremony. (One of the oldest companies in the world, I should add, is the Japanese temple building company that builds these temples.)

The emphasis here is not on maintenance, but renewal. These temples made of thatch and wood—totally ephemeral materials—have lasted for 2,000 years. They have documented evidence of exact rebuilding for 1,600 years, but this site was founded in 4 AD—also by a visionary Japanese princess. And every 20 years, with the Japanese princess in attendance, they move the treasures from one temple to the other. And the kami—the spirits—follow that. And then they deconstruct the old temple and send all those parts out to various Shinto temples in Japan as religious artifacts.

I think the most important thing about this particular example is that each generation gets to have this moment where the older master teaches the apprentice the craft. So you get this handing off of knowledge that’s extremely physical and real and has a deliverable. It’s got to ship, right? “The princess is coming to move the stuff; we have to do this.” It’s an eight-year process with tons of ritual that goes into rebuilding these temples.

I think an interesting counterexample to things lasting a very long time is when they’re tied to particular ideologies. And I think it’s curious: one of our longest-lived institutions is the Catholic Church, and the ideology behind something like the Buddhas of Bamiyan has lasted, but a lot of the artifacts become targets for people who don’t believe in that ideology. The Taliban spent weeks dynamiting and using artillery to destroy these Buddhas. You would think that Buddhism, a relatively innocuous religion, is unthreatening—but not so much to the Taliban.

This is the University of Bologna, which is largely credited as the earliest university in the world. It’s almost 1000 years old at this point. Oxford was shortly behind it. And there’s another 40 or so universities over 500 years old.

Universities have this ability to do a kind of continual refresh where every four years, especially in undergraduate programs, you have a whole new set of people. And so they have to sell themselves to a new generation every single year. Their customer is a whole class. And we see universities now struggle when they aren’t teaching relevant things to people and they have to adjust. And that has kept them around as some of the longest lived institutions in the world. 

I think the idea of communities of practice is a really interesting one. In these communities, knowledge of practice is handed down from generation to generation. Such is the case with martial arts, which we have evidence for dating back at least 2,000 or 3,000 years.

There are several strategies in nature that allow systems to last for thousands of years. There are clonal strategies, like the aspen tree. We’ve measured mesquite rings in the desert: the central plant dies, and new growth comes up in a ring from the same root structure, which indicates that a single mesquite ring has carried the same DNA for effectively 50,000 years. And these clonal forests have definitely been around for thousands of years, even though each individual may only last a few years in some cases.

In other cases, things are cultivated. Going back to the wine example: because grapevines are clonal, we effectively still have the DNA of grapes from ancient Rome, where a clipping was taken and then cultivated from generation to generation. So there’s been this kind of interplay between humans and the natural world, and we also see this in a lot of tree-caring practices. 

The bristlecone exemplifies how an existential crisis gives you practice in how you’re going to survive. The bristlecone is the oldest continuously living single organism that we know of in the world. And the funny thing about the bristlecone is that it was not discovered by coring to be the oldest; it was postulated, because a particular tree scientist had cored other pine species and found that the ones in the worst environments were the oldest. He said: “If you can find the pine species that is living in the absolute worst environment, you will find the oldest pines in the world.” And he coined this phrase: adversity breeds longevity. So people went to find the pines in the worst environments, and up at the top of the White Mountains, in the Snake Range in Nevada, and in Colorado as well, they found three different species of bristlecone, which have been dated to over 5,000 years at this point.

Taking the Future into Account

If any of us are to build an institution that’s going to last for the next few hundred or 1,000 years, the changes in demographics and climate are a big part of it. This is the projected impact of climate change on agricultural yields through the 02080s. As you can see, agricultural yields in the global south are going to go down. In the global north, and much further north, in places like Canada and Russia, they’re going to get a lot better. And this is going to change world markets, world populations, and what we’re warring over for the next 100 years.

In all natural systems, you have these sigmoid curves: things go up and eventually come back down. We always assume things like our population and our economies will always go up, but that is not the way the world works; it has never been that way, and we always have these kinds of corrections. In this case, the predator follows the prey as a lagging sigmoid: once its prey runs out, the predators start dying off. 

How do we get good at failing, but not totally dying out? The lynx never dies out totally, but companies that do one thing are bad at recovering when that one thing is no longer the big commodity. It wasn’t record companies that invented iTunes; it was an outsider company. Record companies were adept at selling plastic circles and when there were no plastic circles to sell music on, they didn’t know how to adjust for that. The crux of anything that’s going to last for a long time is: how do you get good at reinvention and retooling?

There’s no scenario that I’ve seen where the world population doesn’t start going down within the next hundred years, if not the next 50. 

So even the median projection, that red line in the middle, tapers off. But this data is a couple of years old, and it’s increasingly looking a lot more like that dotted blue line at the bottom. And the world has really never lived through a time, except for a few short plague seasons, where the world population was going down—and, by extension, where the customer base was going down.

Even more dangerous than the population going down is that the population is changing. The red line here is the number of 15 to 64 year olds. And the blue line is the zero to 14 year olds. If the world is made up largely of older people who hoard wealth, don’t work hard, and don’t make huge contributions of creativity to the world the way 20 year olds do, that world is a world that I don’t think we’re prepared to live in right now.

We’re seeing this happening now in a lot of the developed world, most notably in Japan. Those of you who remember the 01980s recall that there seemed to be no scenario in which Japan would not be an absolutely dominant part of the world economy. And now they’re struggling just to stay relevant in a lot of ways, largely because this population change happened and the young people were not there. They wouldn’t allow any immigration, and that creativity, that thrust of civilization, went out of a country that had been a dominant world economic power.

Watch the video of Alexander Rose’s talk on the Data of Long-lived Institutions:

LongNowA Long Now Drive-in Double Feature at Fort Mason

Join the Long Now Community for a night of films that inspire long-term thinking. On October 27, 02020, we’ll screen Samsara followed by 2001: A Space Odyssey at Fort Mason.

SAMSARA

Drive-in Screening on Tuesday October 27, 02020 at 6:00pm PT

SAMSARA is a Sanskrit word that means “the ever turning wheel of life” and is the point of departure for the filmmakers as they search for the elusive current of interconnection that runs through our lives.  SAMSARA transports us to sacred grounds, disaster zones, industrial sites, global gatherings and natural wonders. By dispensing with dialogue and descriptive text, the film subverts our expectations of a traditional documentary, instead encouraging our own inner interpretations inspired by images and music that infuses the ancient with the modern. 

Filmed over five years in twenty-five countries, SAMSARA (02011) is a non-verbal documentary from filmmakers Ron Fricke and Mark Magidson, the creators of BARAKA. It is one of only a handful of films shot on 70mm in the last forty years. Through powerful images, the film illuminates the links between humanity and the rest of nature, showing how our life cycle mirrors the rhythm of the planet.

2001: A Space Odyssey

Drive-in Screening on Tuesday October 27, 02020 at 8:45pm PT

The genius is not in how much Stanley Kubrick does in “2001: A Space Odyssey,” but in how little. This is the work of an artist so sublimely confident that he doesn’t include a single shot simply to keep our attention. He reduces each scene to its essence, and leaves it on screen long enough for us to contemplate it, to inhabit it in our imaginations. Alone among science-fiction movies, “2001″ is not concerned with thrilling us, but with inspiring our awe. 

What Kubrick had actually done was make a philosophical statement about man’s place in the universe, using images as those before him had used words, music or prayer. And he had made it in a way that invited us to contemplate it — not to experience it vicariously as entertainment, as we might in a good conventional science-fiction film, but to stand outside it as a philosopher might, and think about it.

– Roger Ebert

Ticket & Event Information:

  • Tickets are $30 per vehicle for members; general public tickets are $60 per vehicle.
  • Separate tickets must be purchased for each of the screenings.
  • Parking opens at 5:00pm for the 6:00pm showing, and 7:45pm for the 8:45pm showing.
  • Please have your ticket printed out or on your phone so we can check you in.
  • Parking location will be chosen by the venue to ensure that everyone can best see the screen.
  • The film audio will be through your FM radio receiver. 
  • There will be concessions available at the event! The Interval will be open for to-go drinks, and there will be a food truck plus popcorn, candy, and other snacks for sale.

COVID-19 Safety Information:

  • This is a socially distanced event. Please do not attend if you are experiencing any symptoms of COVID-19. 
  • Bathrooms will be cleaned throughout the evening.
  • Masks are required when outside of your vehicle. Masks with exhalation valves are not allowed.
  • Attendees must remain inside their vehicles except to use the restroom facilities or pick up concessions.
  • Each vehicle may only be occupied by members of a “pod” who have already been in close contact with each other.
  • Attendees who fail to follow safe distancing at the request of staff will be subject to ejection from the event. No refund will be given.

About FORT MASON FLIX  

With drive-in theaters experiencing a renaissance around the country, Fort Mason Center for Arts & Culture (FMCAC) announces FORT MASON FLIX, a pop-up drive-in theater launching September 18, 02020. Housed on FMCAC’s historic waterfront campus, FORT MASON FLIX will present a cornucopia of film programming, from family favorites and cult classics to blockbusters and arthouse cinema.

Cryptogram NSA Advisory on Chinese Government Hacking

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Worse Than FailureCodeSOD: Delete This

About three years ago, Consuela inherited a giant .NET project. It was… not good. To communicate how “not good” it was, Consuela had a lot of possible submissions. Sending the worst code might be the obvious choice, but it wouldn’t give a good sense of just how bad the whole thing was, so they opted instead to find something that could roughly be called the “median” quality.

This is a stored procedure that is roughly the median sample of the overall code. Half of it is better, but half of it gets much, much worse.

CREATE proc [dbo].[usermgt_DeleteUser]
  (
    @ssoid uniqueidentifier
  )
AS
  begin
    declare @username nvarchar(64)
    select @username = Username from Users where SSOID = @ssoid
    if (not exists(select * from ssodata where ssoid = @ssoid))
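      -- this branch runs only when NO ssodata row exists for the given user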
      begin
        insert into ssodata (SSOID, UserName, email, givenName, sn)
        values (@ssoid, @username, 'Email@email.email', 'Firstname', 'Lastname')
        delete from ssodata where ssoid = @ssoid
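        -- net effect: the row inserted above is deleted immediately, so the
        -- pair only matters if INSERT/DELETE triggers do something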
      end
    else begin
      RAISERROR ('This user still exists in sso', 10, 1)
    end
  end

Let’s talk a little bit about names. As you can see, they’re using an “internal” schema naming convention: usermgt clearly defines a role for a whole class of stored procedures. Already, that’s annoying, but what does this procedure promise to do? DeleteUser.

But what exactly does it do?

Well, first, it checks to see if the user exists. If the user does exist… it raises an error? That’s an odd choice for deleting. But what does it do if the user doesn’t exist?

It creates a user with that ID, then deletes it.

Not only is this method terribly misnamed, it also seems to be utterly useless. At best, I think they’re trying to route around some trigger nonsense, where certain things happen ON INSERT and then different things happen ON DELETE. That’d be a WTF on its own, but that’s possibly giving this more credit than it deserves, because that assumes there’s a reason why the code is this way.

Consuela adds a promise, which hopefully means some follow-ups:

If you had access to the complete codebase, you would not EVER run out of new material for codesod. It’s basically a huge collection of “How Not To” on all possible layers, from single lines of code up to the complete architecture itself.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.