Planet Russell


Planet Debian - Michael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers can commonly make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.
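
To make the retrofit idea concrete, here is a minimal Go sketch of per-resource mutual exclusion between hook processes, using an advisory flock(2) lock. The lock path and the hook invocation are made up for illustration; the daunting part is not the locking itself, but auditing thousands of existing maintainer scripts to figure out which locks each one would need to take.

package main

import (
    "log"
    "os"
    "os/exec"

    "golang.org/x/sys/unix"
)

// runHookLocked runs a maintainer script while holding an exclusive
// advisory lock, so that hooks touching the same resource (here: the
// manpage index) cannot run concurrently.
func runHookLocked(lockPath, hook string, args ...string) error {
    f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o644)
    if err != nil {
        return err
    }
    defer f.Close()

    // Blocks until no other hook holds the lock; closing f releases it.
    if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
        return err
    }

    cmd := exec.Command(hook, args...)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    return cmd.Run()
}

func main() {
    // Hypothetical postinst invocation; real maintainer scripts receive
    // dpkg-specific arguments such as "configure".
    err := runHookLocked("/run/lock/man-db.lock", "/var/lib/dpkg/info/nginx.postinst", "configure")
    if err != nil {
        log.Fatal(err)
    }
}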

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.
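
Here is a minimal sketch, in Go with made-up paths, of what lazy evaluation could look like for the manpage index example: the check runs when apropos(1) is used, not when a package is installed, and the staleness test is deliberately simplistic.

package main

import (
    "log"
    "os"
    "os/exec"
)

// ensureIndex rebuilds the manpage index only if the manpage directory
// has changed since the index was last written. Installing or updating
// packages then costs nothing; the work happens on first use.
func ensureIndex(manDir, indexPath string) error {
    dirInfo, err := os.Stat(manDir)
    if err != nil {
        return err
    }
    idxInfo, err := os.Stat(indexPath)
    if err == nil && idxInfo.ModTime().After(dirInfo.ModTime()) {
        return nil // index is still current (simplified staleness check)
    }
    // mandb(8) is what Debian's trigger runs today.
    return exec.Command("mandb", "--quiet").Run()
}

func main() {
    // Imagine this being called from apropos(1) itself, not from a trigger.
    if err := ensureIndex("/usr/share/man", "/var/cache/man/index.db"); err != nil {
        log.Fatal(err)
    }
}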

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed that they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.
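
As an illustration of that suggestion, here is a minimal Go sketch (all paths and function names are hypothetical) of a library that prefers its cache when it is fresh, rebuilds it when it can, and otherwise falls back to the slower path of scanning the schema files directly:

package schema

import (
    "os"
    "path/filepath"
    "strings"
)

// Load returns the cached list of schema files if the cache is newer
// than the schema directory, rebuilds the cache if possible, and as a
// last resort (e.g. read-only cache directory) scans the files directly.
func Load(schemaDir, cachePath string) ([]string, error) {
    dir, err := os.Stat(schemaDir)
    if err != nil {
        return nil, err
    }
    if cache, err := os.Stat(cachePath); err == nil && cache.ModTime().After(dir.ModTime()) {
        return readCache(cachePath)
    }
    if err := rebuildCache(schemaDir, cachePath); err == nil {
        return readCache(cachePath)
    }
    // Slower code path: no cache available, scan the directory.
    return filepath.Glob(filepath.Join(schemaDir, "*.xml"))
}

func readCache(path string) ([]string, error) {
    b, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    return strings.Split(strings.TrimSpace(string(b)), "\n"), nil
}

func rebuildCache(dir, cachePath string) error {
    files, err := filepath.Glob(filepath.Join(dir, "*.xml"))
    if err != nil {
        return err
    }
    return os.WriteFile(cachePath, []byte(strings.Join(files, "\n")+"\n"), 0o644)
}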

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
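
Here is a minimal Go sketch of the collapsing step. The UserDecl type, the UID range and the direct append to /etc/passwd are all made up for illustration; a real package manager would allocate UIDs properly and lock the file:

package main

import (
    "fmt"
    "os"
    "strings"
)

// UserDecl is a declarative request taken from a package's build
// instructions (or derived from its systemd service files).
type UserDecl struct {
    Name, Home string
}

// applyUsers turns n declarations into a single append to /etc/passwd,
// instead of n maintainer scripts each invoking adduser(8).
func applyUsers(passwdPath string, decls []UserDecl) error {
    var b strings.Builder
    uid := 900 // hypothetical system user range
    for i, d := range decls {
        fmt.Fprintf(&b, "%s:x:%d:%d::%s:/usr/sbin/nologin\n", d.Name, uid+i, uid+i, d.Home)
    }
    f, err := os.OpenFile(passwdPath, os.O_APPEND|os.O_WRONLY, 0o644)
    if err != nil {
        return err
    }
    defer f.Close()
    _, err = f.WriteString(b.String())
    return err
}

func main() {
    decls := []UserDecl{
        {Name: "postgres", Home: "/var/lib/postgresql"},
        {Name: "nginx", Home: "/var/www"},
    }
    if err := applyUsers("/etc/passwd", decls); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}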

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Planet Debian - Michael Stapelberg: Linux package managers are slow

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance.

See Appendix B for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution   package manager   data      wall-clock time   rate
Fedora         dnf               107 MB    29s               3.7 MB/s
NixOS          Nix               15 MB     14s               1.1 MB/s
Debian         apt               15 MB     4s                3.7 MB/s
Arch Linux     pacman            6.5 MB    3s                2.1 MB/s
Alpine         apk               10 MB     1s                10.0 MB/s

qemu (large C program)

distribution   package manager   data      wall-clock time   rate
Fedora         dnf               266 MB    1m8s              3.9 MB/s
Arch Linux     pacman            124 MB    1m2s              2.0 MB/s
Debian         apt               159 MB    51s               3.1 MB/s
NixOS          Nix               262 MB    38s               6.8 MB/s
Alpine         apk               26 MB     2.4s              10.8 MB/s


The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases, otherwise they hopefully would have removed it altogether. The amount of metadata seems excessive for the use case of installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence executing arbitrary package maintainer-provided code (hooks and triggers), all tested package managers need to install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so (see the sketch after this list), i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
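
As a sketch of the fsync-skipping idea (Go; the store layout and URL are hypothetical, and the tmp subdirectory is assumed to exist on the same file system): download the package image to a temporary name and rename(2) it into place. The rename is atomic, so a crash can only leave behind an unreferenced temporary file to garbage-collect, never a half-installed package, and durability is not needed for correctness.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "path/filepath"
)

// install downloads a package image into an append-only store without
// calling fsync: readers either see the complete image or nothing.
func install(storeDir, name, url string) error {
    tmp, err := os.CreateTemp(filepath.Join(storeDir, "tmp"), name+".*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // fails harmlessly once the rename succeeded

    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("fetching %s: %s", url, resp.Status)
    }
    if _, err := io.Copy(tmp, resp.Body); err != nil {
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    // Deliberately no fsync: atomicity matters here, durability does not.
    return os.Rename(tmp.Name(), filepath.Join(storeDir, name+".img"))
}

func main() {
    if err := install("/var/lib/pkg", "hello-2.10-1", "https://example.org/hello-2.10-1.img"); err != nil {
        log.Fatal(err)
    }
}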

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name         scope   package file format                 hooks/triggers
AppImage     apps    image: ISO9660, SquashFS            no
snappy       apps    image: SquashFS                     yes: hooks
FlatPak      apps    archive: OSTree                     no
0install     apps    archive: tar.bz2                    no
nix, guix    distro  archive: nar.{bz2,xz}               activation script
dpkg         distro  archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm          distro  archive: cpio.{bz2,lz,xz}           scriptlets
pacman       distro  archive: tar.xz                     install
slackware    distro  archive: tar.{gz,xz}                yes: doinst.sh
apk          distro  archive: tar.gz                     yes: .post-install
Entropy      distro  archive: tar.bz2                    yes
ipkg, opkg   distro  archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

There are a couple of recent developments going in the same direction.

Appendix B: measurement details

ack

You can expand each of these:

Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu

You can expand each of these:

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet Debian - Michael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research linux distribution development, i.e. try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

Frequently, I noticed a large gap between the actual speed of an operation (e.g. doing an update) and the possible speed based on back-of-the-envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, through my work on distri, I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share subdirectory of all packages in the package store (see the sketch after this list).
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.
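
To make the union in the global flavor concrete, here is a Go sketch of the directory-merging logic. It assumes package contents are visible as plain directories under /ro; the real implementation serves them from SquashFS images via FUSE, and version conflicts are ignored here.

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// unionDir lists the merged contents of a global exchange directory,
// e.g. /ro/share = union of /ro/<package>/out/share across all packages.
func unionDir(root, sub string) ([]string, error) {
    pkgs, err := os.ReadDir(root)
    if err != nil {
        return nil, err
    }
    seen := make(map[string]bool)
    var entries []string
    for _, pkg := range pkgs {
        dir := filepath.Join(root, pkg.Name(), "out", sub)
        names, err := os.ReadDir(dir)
        if err != nil {
            continue // this package does not ship the subdirectory
        }
        for _, n := range names {
            if !seen[n.Name()] {
                seen[n.Name()] = true
                entries = append(entries, n.Name())
            }
        }
    }
    return entries, nil
}

func main() {
    entries, err := unionDir("/ro", "share/man/man1")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, e := range entries {
        fmt.Println(e)
    }
}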

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh on their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number (see the sketch after this list).

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.
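
As a sketch of that conflict-resolution rule (Go; the parsing of package names like zsh-amd64-5.6.2-3 is simplified): among several candidates for the same exchange-directory entry, pick the package with the highest trailing distri revision.

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// revision extracts the trailing distri revision from a package name
// such as "zsh-amd64-5.6.2-3" (here: 3). It returns -1 if unparseable.
func revision(pkg string) int {
    i := strings.LastIndex(pkg, "-")
    if i < 0 {
        return -1
    }
    rev, err := strconv.Atoi(pkg[i+1:])
    if err != nil {
        return -1
    }
    return rev
}

// newest picks the candidate that wins in an exchange directory.
func newest(candidates []string) string {
    best := ""
    for _, c := range candidates {
        if best == "" || revision(c) > revision(best) {
            best = c
        }
    }
    return best
}

func main() {
    fmt.Println(newest([]string{
        "zsh-amd64-5.6.2-3",
        "zsh-amd64-5.6.2-4",
        "zsh-amd64-5.6.2-2",
    }))
    // prints: zsh-amd64-5.6.2-4
}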

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet Debian - Cyril Brulebois: Sending HTML messages with Net::XMPP (Perl)

Executive summary

It’s perfectly possible! Jump to the HTML demo!

Longer version

This started with a very simple need: wanting to improve the notifications I’m receiving from various sources. Those include:

  • changes or failures reported during Puppet runs on my own infrastructure, and at a customer’s;
  • build failures for the Debian Installer;
  • changes in banking amounts;
  • and lately: build status for jobs in a customer’s Jenkins instance.

I’ve been using plaintext notifications for a number of years but I decided to try and pimp them a little by adding some colors.

While the XMPP-sending details are usually hidden in a local module, here’s a small self-contained example: connecting to a server, sending credentials, and then sending a message to someone else. Of course, one might want to tweak the Configuration section before trying to run this script…

#!/usr/bin/perl
use strict;
use warnings;

use Net::XMPP;

# Configuration:
my $hostname = 'example.org';
my $username = 'bot';
my $password = 'call-me-alan';
my $resource = 'demo';
my $recipient = 'human@example.org';

# Open connection:
my $con = Net::XMPP::Client->new();
my $status = $con->Connect(
    hostname       => $hostname,
    connectiontype => 'tcpip',
    tls            => 1,
    ssl_ca_path    => '/etc/ssl/certs',
);
die 'XMPP connection failed'
    if ! defined($status);

# Log in:
my @result = $con->AuthSend(
    hostname => $hostname,
    username => $username,
    password => $password,
    resource => $resource,
);
die 'XMPP authentication failed'
    if $result[0] ne 'ok';

# Send plaintext message:
my $msg = 'Hello, World!';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);
die('ERROR: XMPP message failed')
    if $res != 0;

For reference, here’s what the XML message looks like in Gajim’s XML console (on the receiving end):

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>Hello, World!</body>
</message>

Issues start when one tries to send some HTML message, e.g. with the last part changed to:

# Send plaintext message:
my $msg = 'This is a <b>failing</b> test';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);

as that leads to the following message:

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a &lt;b&gt;failing&lt;/b&gt; test</body>
</message>

So tags are getting encoded and one gets to see the uninterpreted “HTML code”.

Trying various things to embed that inside <body> and <html> tags, with or without namespaces, led nowhere.

Looking at a message sent from Gajim to Gajim (so that I could craft an HTML message myself and inspect it), I’ve noticed it goes this way (edited to concentrate on important parts):

<message xmlns="jabber:client" to="human@example.org/Gajim" type="chat">
  <body>Hello, World!</body>
  <html xmlns="http://jabber.org/protocol/xhtml-im">
    <body xmlns="http://www.w3.org/1999/xhtml">
      <p>Hello, <strong>World</strong>!</p>
    </body>
  </html>
</message>

Two takeaways here:

  • The message is sent both in plaintext and in HTML. It seems Gajim archives the plaintext version, as opening the history/logs only shows the textual version.

  • The fact that the HTML message is under a different path (/message/html as opposed to /message/body) means that one cannot use the MessageSend method to send HTML messages…

This was verified by checking the documentation and code of the Net::XMPP::Message module. It comes with various getters and setters for attributes. Those are then automatically collected when the message is serialized into XML (through the GetXML() method). Trying to add handling for a new HTML attribute would mean being extra careful, as that would need to be treated with $type = 'raw'.

Oh, wait a minute! While using git grep in the sources, looking for that raw type thing, I’ve discovered what sounded promising: an InsertRawXML() method, that doesn’t appear anywhere in either the code or the documentation of the Net::XMPP::Message module.

It’s available, though! Because Net::XMPP::Message is derived from Net::XMPP::Stanza:

use Net::XMPP::Stanza;
use base qw( Net::XMPP::Stanza );

which then in turn comes with this function:

##############################################################################
#
# InsertRawXML - puts the specified string onto the list for raw XML to be
#                included in the packet.
#
##############################################################################

Let’s put that aside for a moment and get back to the MessageSend() method. It wants parameters that can be passed to the Net::XMPP::Message SetMessage() method, and here is its entire code:

###############################################################################
#
# MessageSend - Takes the same hash that Net::XMPP::Message->SetMessage
#               takes and sends the message to the server.
#
###############################################################################
sub MessageSend
{
    my $self = shift;

    my $mess = $self->_message();
    $mess->SetMessage(@_);
    $self->Send($mess);
}

The first assignment is basically equivalent to my $mess = Net::XMPP::Message->new();, so what this function does is: creating a Net::XMPP::Message for us, passing all parameters there, and handing the resulting object over to the Send() method. All in all, that’s merely a proxy.

HTML demo

The question becomes: what if we were to create that object ourselves, then tweaking it a little, and then passing it directly to Send(), instead of using the slightly limited MessageSend()? Let’s see what the rewritten sending part would look like:

# Send HTML message:
my $text = 'This is a working test';
my $html = 'This is a <b>working</b> test';

my $message = Net::XMPP::Message->new();
$message->SetMessage(
    to   => $recipient,
    body => $text,
    type => 'chat',
);
$message->InsertRawXML("<html><body>$html</body></html>");
my $res = $con->Send($message);

And tada!

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a working test</body>
  <html>
    <body>This is a <b>working</b> test</body>
  </html>
</message>

I’m absolutely no expert when it comes to XMPP standards, and one might need/want to set some more metadata like xmlns but I’m happy enough with this solution that I thought I’d share it as is. ;)


Cryptogram - Friday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian - Bits from Debian: Debian celebrates 26 years, Happy DebianDay!

26 years ago today in a single post to the comp.os.linux.development newsgroup, Ian Murdock announced the completion of a brand new Linux release named #Debian.

Since that day we’ve been into outer space, typed over 1,288,688,830 lines of code, spawned over 300 derivatives, been enhanced by 6,155 known contributors, and filed over 975,619 bug reports.

We are home to a community of thousands of users around the globe, we gather to host our annual Debian Developers Conference #DebConf which spans the world in a different country each year, and of course today's many #DebianDay celebrations held around the world.

It's not too late to throw an impromptu #DebianDay celebration or to go and join one of the many celebrations already underway.

As we celebrate our own anniversary, we also want to celebrate our many contributors, developers, teams, groups, maintainers, and users. It is all of your effort, support, and drive that continue to make Debian truly: The universal operating system.

Happy #DebianDay!

Planet Debian - Jonathan McDowell: DebConf19: Brazil

My first DebConf was DebConf4, held in Porto Alegre, Brazil back in 2004. Uncle Steve did the majority of the travel arrangements for 6 of us to go. We had some mishaps which we still tease him about, but it was a great experience. So when I learnt DebConf19 was to be in Brazil again, this time in Curitiba, I had to go. So last November I realised flights were only likely to get more expensive, that I’d really kick myself if I didn’t go, and so I booked my tickets. A bunch of life happened in the meantime that meant the timing wasn’t particularly great for me - it’s been a busy 6 months - but going was still the right move.

One thing that struck me about DC19 is that a lot of the faces I’m used to seeing at a DebConf weren’t there. Only myself and Steve from the UK DC4 group made it, for example. I don’t know if that’s due to the travelling distances involved, or just the fact that attendance varies and this happened to be a year where a number of people couldn’t make it. Nonetheless I was able to catch up with a number of people I only really see at DebConfs, as well as getting to hang out with some new folk.

Given how busy I’ve been this year and expect to be for at least the next year I set myself a hard goal of not committing to any additional tasks. That said DebConf often provides a welcome space to concentrate on technical bits. I reviewed and merged dkg’s work on WKD and DANE for the Debian keyring under debian.org - we’re not exposed to the recent keyserver network issues due to the fact the keyring is curated, but providing additional access to our keyring makes sense if it can be done easily. I spent some time with Ian Jackson talking about dgit - I’m not a user of it at present, but I’m intrigued by the potential for being able to do Debian package uploads via signed git tags. Of course I also attended a variety of different talks (and, as usual, at times the schedule conflicted such that I had a difficult choice about which option to chose for a particular slot).

This also marks the first time I did a non-team related talk at DebConf, warbling about my home automation (similar to my NI Dev Conf talk but with some more bits about the Debian involvement thrown in):

In addition I co-presented a couple of talks for teams I’m part of:

I only realised late in the week that 2 talks I’d normally expect to attend, a Software in the Public Interest BoF and a New Member BoF, were not on the schedule, but to be honest I don’t think I’d have been able to run either even if I’d realised in advance.

Finally, DebConf wouldn’t be DebConf without playing with some embedded hardware at some point, and this year it was the Caninos Loucos Labrador. This is a Brazilian grown single board ARM based computer with a modular form factor designed for easy integration into bigger projects. There’s nothing particularly remarkable about the hardware and you might ask why not just use a Pi? The reason is that import duties in Brazil make such things prohibitively expensive - importing a $35 board can end up costing $150 by the time shipping, taxes and customs fees are all taken into account. The intent is to design and build locally, as components can be imported with minimal taxes if the final product is being assembled within Brazil. And Mercosul allows access to many other South American countries without tariffs. I’d have loved to get hold of one of the boards, but they’ve only produced 1000 in the initial run and really need to get them into the hands of people who can help progress the project rather than those who don’t have enough time.

Next year DebConf20 is in Haifa - a city I’ve spent some time in before - but I’ve made the decision not to attend; rather than spending a single 7-10 day chunk away from home I’m going to aim to attend some more local conferences for shorter periods of time.

Cryptogram - Software Vulnerabilities in the Boeing 787

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane's network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787's network architecture would make that progression impossible.

Santamarta admits that he doesn't have enough visibility into the 787's internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. "We don't have a 787 to test, so we can't assess the impact," Santamarta says. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing denies that there's any problem:

In a statement, Boeing said it had investigated IOActive's claims and concluded that they don't represent any real threat of a cyberattack. "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system," the company's statement reads. "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

This being Black Hat and Las Vegas, I'll say it this way: I would bet money that Boeing is wrong. I don't have an opinion about whether or not it's lying.

Worse Than Failure - Error'd: What About the Fish?

"On the one hand, I don't want to know what the fish has to do with Boris Johnson's love life...but on the other hand I have to know!" Mark R. writes.

 

"Not sure if that's a new GDPR rule or the Slack Mailbot's weekend was just that much better then mine," Adam G. writes.

 

Connor W. wrote, "You know what, I think I'll just stay inside."

 

"It's great to see that an attempt at personalization was made, but whatever happened to 'trust but verify'?" writes Rob H.

 

"For a while, I thought that, maybe, I didn't actually know how to use my iPhone's alarm. Instead, I found that it just wasn't working right. So, I contacted Apple Support, and while they were initially skeptical that it was an iOS issue, this morning, I actually have proof!" Markus G. wrote.

 

Tim G. writes, "I guess that's better than an angry error message."

 


Planet Debian - François Marier: Passwordless restricted guest account on Ubuntu

Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.

Create a user that can login without a password

First of all, I created a new user with a random password (using pwgen -s 64):

adduser guest

Then following these instructions, I created a new group and added the user to it:

addgroup nopasswdlogin
adduser guest nopasswdlogin

In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:

auth    sufficient      pam_succeed_if.so user ingroup nopasswdlogin

Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.

Privacy settings

In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session and then opened gnome-control-center. I set the following in the privacy section:

Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center:

and configured it to clear everything on exit:

Create a password-less system keyring

In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.

Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.

I also made sure to disable saving of passwords, payment methods and addresses in the browser too.

Restrict user account further

Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:

[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no

If you know of any other restrictions that could be added, please leave a comment!


Planet Debian - Julian Andres Klode: APT Patterns

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into a APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of config-files, or ~m for automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not

apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: Basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge, patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples

apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fallback to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.
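
Here is a small Go sketch of the resolution order these paragraphs argue for (the packageExists helper is made up): exact package name first, a regular expression only when the argument is anchored with ^ or $, and a glob(7) pattern otherwise.

package main

import (
    "fmt"
    "path/filepath"
    "regexp"
    "strings"
)

// matchKind decides how a command-line argument should be interpreted,
// in the proposed order: exact name, anchored regex, then glob.
func matchKind(arg string, packageExists func(string) bool) string {
    if packageExists(arg) {
        return "exact"
    }
    if strings.HasPrefix(arg, "^") || strings.HasSuffix(arg, "$") {
        if _, err := regexp.Compile(arg); err == nil {
            return "regex"
        }
    }
    if strings.ContainsAny(arg, "*?[") {
        if _, err := filepath.Match(arg, ""); err == nil {
            return "glob"
        }
    }
    return "not found"
}

func main() {
    exists := func(name string) bool { return name == "g++" }
    for _, arg := range []string{"g++", "^apt-.*$", "apt*"} {
        fmt.Printf("%s => %s\n", arg, matchKind(arg, exists))
    }
    // prints: g++ => exact, ^apt-.*$ => regex, apt* => glob
}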

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

Cryptogram - Bypassing Apple FaceID's Liveness Detection Feature

Apple's FaceID has a liveness detection feature, which prevents someone from unlocking a victim's phone by putting it in front of his face while he's sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim's FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim's face, the researchers demonstrated how they could bypass Apple's FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

Worse Than Failure - CodeSOD: A Devil With a Date

Jim was adding a feature to the backend. This feature updated a few fields on an object, and then handed the object off as JSON to the front-end.

Adding the feature seemed pretty simple, but when Jim went to check out its behavior in the front-end, he got validation errors. Something in the data getting passed back by his web service was fighting with the front end.

On its surface, that seemed like a reasonable problem, but when looking into it, Jim discovered that it was the record_update_date field which was causing validation issues. The front-end displayed this as a read only field, so there was no reason to do any client-side validation in the first place, and that field was never sent to the backend, so there was even less than no reason to do validation.

Worse, the field had, at least to the eye, a valid date: 2019-07-29T00:00:00.000Z. Even weirder, if Jim changed the backend to just return 2019-07-29, everything worked. He dug into the validation code to see what might be wrong about it:

/**
 * Custom validation
 *
 * This is a callback function for ajv custom keywords
 *
 * @param  {object} wsFormat aiFormat property content
 * @param  {object} data Data (of element type) from document where validation is required
 * @param  {object} itemSchema Schema part from wsValidation keyword
 * @param  {string} dataPath Path to document element
 * @param  {object} parentData Data of parent object
 * @param  {string} key Property name
 * @param  {object} rootData Document data
 */
function wsFormatFunction(wsFormat, data, itemSchema, dataPath, parentData, key, rootData) {

    let valid;
    switch (aiFormat) {
        case 'date': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/;
            valid = regex.test(data);
            break;
        }
        case 'date-time': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3]\d[t\s](?:[0-2]\d:[0-5]\d:[0-5]\d|23:59:60)(?:\.\d+)?(?:z|[+-]\d\d:\d\d)$/i;
            valid = regex.test(data);
            break;
        }
        case 'time': {
            let regex = /^(0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]$/;
            valid = regex.test(data);
            break;
        }
        default: throw 'Unknown wsFormat: ' + wsFormat;
    }

    if (!valid) {
        wsFormatFunction['errors'] = wsFormatFunction['errors'] || [];

        wsFormatFunction['errors'].push({
            keyword: 'wsFormat',
            dataPath: dataPath,
            message: 'should match format "' + wsFormat + '"',
            schema: itemSchema,
            data: data
        });
    }

    return valid;
}

When it starts with “Custom validation” and it involves dates, you know you’re in for a bad time. Worse, it’s custom validation, dates, and regular expressions written by someone who clearly didn’t understand regular expressions.

Let’s take a peek at the branch which was causing Jim’s error, and examine the regex:

/^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/

It should start with four digits, followed by a dash, followed by a value between 0 and 1. Then another digit, then a dash, then a number between 0 and 3, then the time (optionally), then a final digit.

It’s obvious why Jim’s perfectly reasonable date wasn’t working: it needed to be 2019-07-2T00:00:00.000Z9. Or, if Jim just didn’t include the timestamp, not only would 2019-07-29 be a valid date, but so would 2019-19-39, which just so happens to be my birthday. Mark your calendars for the 39th of Undevigintiber.
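
Purely as an illustration (and not a claim about what the eventual fix looked like), here is roughly what that branch was presumably reaching for, with the stray trailing digit moved back into the day and the month and day ranges tightened up:

// Sketch only: YYYY-MM-DD, optionally followed by T00:00:00.000Z.
// Still naive -- it happily accepts impossible days like 2019-02-31,
// which is why a real date parser beats a regex here -- but at least
// 2019-07-29T00:00:00.000Z passes and 2019-19-39 does not.
let dateRegex = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])(T00:00:00\.000Z)?$/;

console.log(dateRegex.test("2019-07-29"));               // true
console.log(dateRegex.test("2019-07-29T00:00:00.000Z")); // true
console.log(dateRegex.test("2019-19-39"));               // false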

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


CryptogramSide-Channel Attack against Electronic Locks

Several high-security electronic locks are vulnerable to side-channel attacks involving power monitoring.

Cory DoctorowMy appearance on the MMT podcast

I’ve been following the Modern Monetary Theory debate for about 18 months, and I’m largely a convert: governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.

I was delighted to be invited onto the MMT Podcast to discuss the ways that MMT dovetails with the fight against monopoly and inequality, and how science-fiction storytelling can bring complicated technical subjects (like adversarial interoperability) to life.

We talked so long that they’ve split it into two episodes, the first of which is now live (MP3).

Krebs on SecurityMeet Bluetana, the Scourge of Pump Skimmers

“Bluetana,” a new mobile app that looks for Bluetooth-based payment card skimmers hidden inside gas pumps, is helping police and state employees more rapidly and accurately locate compromised fuel stations across the nation, a study released this week suggests. Data collected in the course of the investigation also reveals some fascinating details that may help explain why these pump skimmers are so lucrative and ubiquitous.

The new app, now being used by agencies in several states, is the brainchild of computer scientists from the University of California San Diego and the University of Illinois Urbana-Champaign, who say they developed the software in tandem with technical input from the U.S. Secret Service (the federal agency most commonly called in to investigate pump skimming rings).

The Bluetooth pump skimmer scanner app ‘Bluetana’ in action.

Gas pumps are a perennial target of skimmer thieves for several reasons. They are usually unattended, and in too many cases a handful of master keys will open a great many pumps at a variety of filling stations.

The skimming devices can then be attached to electronics inside the pumps in a matter of seconds, and because they’re also wired to the pump’s internal power supply the skimmers can operate indefinitely without the need of short-lived batteries.

And increasingly, these pump skimmers are fashioned to relay stolen card data and PINs via Bluetooth wireless technology, meaning the thieves who install them can periodically download stolen card data just by pulling up to a compromised pump and remotely connecting to it from a Bluetooth-enabled mobile device or laptop.

According to the study, some 44 volunteers  — mostly law enforcement officials and state employees — were equipped with Bluetana over a year-long experiment to test the effectiveness of the scanning app.

The researchers said their volunteers collected Bluetooth scans at 1,185 gas stations across six states, and that Bluetana detected a total of 64 skimmers across four of those states. All of the skimmers were later collected by law enforcement, including two that were reportedly missed in manual safety inspections of the pumps six months earlier.

While several other Android-based apps designed to find pump skimmers are already available, the researchers said Bluetana was developed with an eye toward eliminating false-positives that some of these other apps can fail to distinguish.

“Bluetooth technology used in these skimmers are also used for legitimate products commonly seen at and near gas stations such as speed-limit signs, weather sensors and fleet tracking systems,” said Nishant Bhaskar, UC San Diego Ph.D. student and principal author of the study. “These products can be mistaken for skimmers by existing detection apps.”

BLACK MARKET VALUE

The fuel skimmer study also helps explain how quickly these hidden devices can generate huge profits for the organized gangs that typically deploy them. The researchers found that the skimmers detected by their app collected data from roughly 20-25 payment cards each day — evenly distributed between debit and credit cards (although they note estimates from payment fraud prevention companies and the Secret Service that put the average figure closer to 50-100 cards daily per compromised machine).

The academics also studied court documents which revealed that skimmer scammers often are only able to “cashout” stolen cards — either through selling them on the black market or using them for fraudulent purchases — a little less than half of the time. This can result from the skimmers sometimes incorrectly reading card data, daily withdrawal limits, or fraud alerts at the issuing bank.

“Based on the prior figures, we estimate the range of per-day revenue from a skimmer is $4,253 (25 cards per day, cashout of $362 per card, and 47% cashout success rate), and our high end estimate is $63,638 (100 cards per day, $1,354 cashout per card, and cashout success rate of 47%),” the study notes.

Not a bad haul either way, considering these skimmers typically cost about $25 to produce.

Those earnings estimates assume an even distribution of credit and debit card use among customers of a compromised pump: The more customers pay with a debit card, the more profitable the whole criminal scheme may become. Armed with your PIN and debit card data, skimmer thieves or those who purchase stolen cards can clone your card and pull money out of your account at an ATM.

“Availability of a PIN code with a stolen debit card in particular, can increase its value five-fold on the black market,” the researchers wrote.

This highlights a warning that KrebsOnSecurity has relayed to readers in many previous stories on pump skimming attacks: Using a debit card at the pump can be way riskier than paying with cash or a credit card.

The black market value, impact to consumers and banks, and liability associated with different types of card fraud.

And as the above graphic from the report illustrates, there are different legal protections for fraudulent transactions on debit vs. credit cards. With a credit card, your maximum loss on any transactions you report as fraud is $50; with a debit card, that $50 limit only applies if you report the unauthorized transaction within two days. After that, the maximum consumer liability can increase to $500 within 60 days, and to an unlimited amount after 60 days.

In practice, your bank or debit card issuer may still waive additional liabilities, and many do. But even then, having your checking account emptied of cash while your bank sorts out the situation can still be a huge hassle and create secondary problems (bounced checks, for instance).

Interestingly, this advice against using debit cards at the pump often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

For all its skimmer-skewering prowess, Bluetana will not be released to the public. The researchers said the primary reason for this is highlighted in the core findings of the study.

“There are many legitimate devices near gas stations that look exactly like skimmers do in Bluetooth scans,” said UCSD Assistant Professor Aaron Schulman, in an email to KrebsOnSecurity. “Flagging suspicious devices in Bluetana is only a way of notifying inspectors that they need to gather more data around the gas station to determine if the Bluetooth transmissions appear to be emanating from a device inside of the pumps. If it does, they can then open the pump door and confirm that the signal strength rises, and begin their visual inspection for the skimmer.”

One of the best tips for avoiding fuel card skimmers is to favor filling stations that have updated security features, such as custom keys for each pump, better compartmentalization of individual components within the machine, and tamper protections that physically shut down a pump if the machine is improperly accessed.

How can you spot a gas station with these updated features, you ask? As noted in last summer’s story, How to Avoid Card Skimmers at the Pumps, these newer-model machines typically feature a horizontal card acceptance slot along with a raised metallic keypad. In contrast, older, less secure pumps usually have a vertical card reader and a flat, membrane-based keypad.

Newer, more tamper-resistant fuel pumps include pump-specific key locks, raised metallic keypads, and horizontal card readers.

The researchers will present their work on Bluetana later today at the USENIX Security 2019 conference in Santa Clara, Calif. A copy of their paper is available here (PDF).

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

CryptogramExploiting GDPR to Get Private Information

A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones -- they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner's overnight stays

  • two UK rail companies that provided records of all the journeys she had taken with them over several years

  • a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

CryptogramAttorney General Barr and Encryption

Last month, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how -- an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity" and not "nuclear launch codes." This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been a National Security Agency operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications -- data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law enforcement access -- than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

Worse Than FailureCodeSOD: A Loop in the String

Robert was browsing through a little JavaScript used at his organization, and found this gem of type conversion.

//use only for small numbers
function StringToInteger (str) {
    var int = -1;
    for (var i=0; i<=100; i++) {
        if (i+"" == str) {
            int = i;
            break;
        }
    }
    return int;
}

So, this takes our input str, which is presumably a string, and it starts counting from 0 to 100. i+"" coerces the integer value to a string, which we compare against our string. If it’s a match, we’ll store that value and break out of the loop.

Obviously, this has a glaring flaw: the 100 is hardcoded. So what we really need to do is add a search_low and search_high parameter, so we can write the for loop as i = search_low; i <= search_high; i++ instead. Because that’s the only glaring flaw in this code. I can’t think of any possible better way of converting strings to integers. Not a one.
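
For contrast, here is a minimal sketch of the boring, built-in way to do the conversion (illustrative only; this is not code from Robert's codebase):

// Sketch: let the runtime do the conversion instead of counting to 100.
// Note that parseInt is more forgiving than the loop's exact string match --
// it tolerates leading zeros and trailing junk like "42abc" -- so treat this
// as the general idea rather than a drop-in behavioural clone.
function stringToInteger(str) {
    let n = Number.parseInt(str, 10);  // explicit base 10
    return Number.isNaN(n) ? -1 : n;  // keep the original's -1 convention for "not a number"
}

console.log(stringToInteger("42"));  // 42
console.log(stringToInteger("foo")); // -1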

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


CryptogramPhone Pharming for Ad Fraud

Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high -- here's an article that places losses between $6.5 and $19 billion annually -- and something companies like Google and Facebook would prefer remain unresearched.

Krebs on SecurityPatch Tuesday, August 2019 Edition

Most Microsoft Windows (ab)users probably welcome the monthly ritual of applying security updates about as much as they look forward to going to the dentist: It always seems like you were there just yesterday, and you never quite know how it’s all going to turn out. Fortunately, this month’s patch batch from Redmond is mercifully light, at least compared to last month.

Okay, maybe a trip to the dentist’s office is still preferable. In any case, today is the second Tuesday of the month, which means it’s once again Patch Tuesday (or — depending on your setup and when you’re reading this post — Reboot Wednesday). Microsoft today released patches to fix some 93 vulnerabilities in Windows and related software, 35 of which affect various Server versions of Windows, and another 70 that apply to the Windows 10 operating system.

Although there don’t appear to be any zero-day vulnerabilities fixed this month — i.e. those that get exploited by cybercriminals before an official patch is available — there are several issues that merit attention.

Chief among those are patches to address four moderately terrifying flaws in Microsoft’s Remote Desktop Service, a feature which allows users to remotely access and administer a Windows computer as if they were actually seated in front of the remote computer. Security vendor Qualys says two of these weaknesses can be exploited remotely without any authentication or user interaction.

“According to Microsoft, at least two of these vulnerabilities (CVE-2019-1181 and CVE-2019-1182) can be considered ‘wormable’ and [can be equated] to BlueKeep,” referring to a dangerous bug patched earlier this year that Microsoft warned could be used to spread another WannaCry-like ransomware outbreak. “It is highly likely that at least one of these vulnerabilities will be quickly weaponized, and patching should be prioritized for all Windows systems.”

Fortunately, Remote Desktop is disabled by default in Windows 10, and as such these flaws are more likely to be a threat for enterprises that have enabled the application for various purposes. For those keeping score, this is the fourth time in 2019 Microsoft has had to fix critical security issues with its Remote Desktop service.

For all you Microsoft Edge and Internet Exploiter Explorer users, Microsoft has issued the usual panoply of updates for flaws that could be exploited to install malware after a user merely visits a hacked or booby-trapped Web site. Other equally serious flaws patched in Windows this month could be used to compromise the operating system just by convincing the user to open a malicious file (regardless of which browser the user is running).

As crazy as it may seem, this is the second month in a row that Adobe hasn’t issued a security update for its Flash Player browser plugin, which is bundled in IE/Edge and Chrome (although now hobbled by default in Chrome). However, Adobe did release important updates for its Acrobat and free PDF reader products.

If the tone of this post sounds a wee bit cantankerous, it might be because at least one of the updates I installed last month totally hosed my Windows 10 machine. I consider myself an equal OS abuser, and maintain multiple computers powered by a variety of operating systems, including Windows, Linux and MacOS.

Nevertheless, it is frustrating when being diligent about applying patches introduces so many unfixable problems that you’re forced to completely reinstall the OS and all of the programs that ride on top of it. On the bright side, my newly-refreshed Windows computer is a bit more responsive than it was before crash hell.

So, three words of advice. First off, don’t let Microsoft decide when to apply patches and reboot your computer. On the one hand, it’s nice Microsoft gives us a predictable schedule when it’s going to release patches. On the other, Windows 10 will by default download and install patches whenever it pleases, and then reboot the computer.

Unless you change that setting. Here’s a tutorial on how to do that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Secondly, it doesn’t hurt to wait a few days to apply updates.  Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

Finally, please have some kind of system for backing up your files before applying any updates. You can use third-party software for this, or just the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule. Thankfully, I’m vigilant about backing up my files.

And, as ever, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Planet DebianSteve Kemp: That time I didn't find a kernel bug, or did I?

Recently I saw a post to the linux kernel mailing-list containing a simple fix for a use-after-free bug. The code in question originally read:

    hdr->pkcs7_msg = pkcs7_parse_message(buf + buf_len, sig_len);
    if (IS_ERR(hdr->pkcs7_msg)) {
        kfree(hdr);
        return PTR_ERR(hdr->pkcs7_msg);
    }

Here the bug is obvious once it has been pointed out:

  • A structure is freed.
    • But then it is dereferenced, to provide a return value.

This is the kind of bug that would probably have been obvious to me if I'd happened to read the code myself. However patch submitted so job done? I did have some free time so I figured I'd scan for similar bugs. Writing a trivial perl script to look for similar things didn't take too long, though it is a bit shoddy:

  • Open each file.
  • If we find a line containing "free(.*)" record the line and the thing that was freed.
  • The next time we find a return look to see if the return value uses the thing that was free'd.
    • If so that's a possible bug. Report it.

Of course my code is nasty, but it looked like it immediately paid off. I found this snippet of code in linux-5.2.8/drivers/media/pci/tw68/tw68-video.c:

    if (hdl->error) {
        v4l2_ctrl_handler_free(hdl);
        return hdl->error;
    }

That looks promising:

  • The structure hdl is freed, via a dedicated freeing-function.
  • But then we return the member error from it.

Chasing down the code I found that linux-5.2.8/drivers/media/v4l2-core/v4l2-ctrls.c contains the code for the v4l2_ctrl_handler_free call and while it doesn't actually free the structure - just some members - it does reset the contents of hdl->error to zero.

Ahah! The code I've found looks for an error, and if it was found returns zero, meaning the error is lost. I can fix it, by changing to this:

    if (hdl->error) {
        int err = hdl->error;
        v4l2_ctrl_handler_free(hdl);
        return err;
    }

I did that. Then looked more closely to see if I was missing something. The code I've found lives in the function tw68_video_init1, that function is called only once, and the return value is ignored!

So, that's the story of how I scanned the Linux kernel for use-after-free bugs and contributed nothing to anybody.

Still fun though.

I'll go over my list more carefully later, but nothing else jumped out as being immediately bad.

There is a weird case I spotted in ./drivers/media/platform/s3c-camif/camif-capture.c with a similar pattern. In that case the function involved is s3c_camif_create_subdev which is invoked by ./drivers/media/platform/s3c-camif/camif-core.c:

        ret = s3c_camif_create_subdev(camif);
        if (ret < 0)
                goto err_sd;

So I suspect there is something odd there:

  • If there's an error in s3c_camif_create_subdev
    • Then handler->error will be reset to zero.
    • Which means that return handler->error will return 0.
    • Which means that the s3c_camif_create_subdev call should have returned an error, but won't be recognized as having done so.
    • i.e. "0 < 0" is false.

Of course the error-value is only set if this code is hit:

    hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
                      sizeof(hdl->buckets[0]),
                      GFP_KERNEL | __GFP_ZERO);
    hdl->error = hdl->buckets ? 0 : -ENOMEM;

Which means that the registration of the sub-device fails if there is no memory, and at that point what can you even do?

It's a bug, but it isn't a security bug.

Planet DebianRicardo Mones: When your mail hub password is updated...

don't
 forget
  to
   run
    postmap
     on
      your
       /etc/postfix/sasl_passwd

(repeat 100 times sotto voce or until falling asleep, whatever happens first).

Planet DebianSven Hoexter: Debian/buster on HPE DL360G10 - interfaces change back to ethX

For as yet unknown reasons, some recently installed HPE DL360G10 servers running buster changed the interface names back from the expected "onboard"-based names eno5 and eno6 to ethX after a reboot.

My current workaround is a link file which kind of enforces the onboard scheme.

$ cat /etc/systemd/network/101-onboard-rd.link 
[Link]
NamePolicy=onboard kernel database slot path
MACAddressPolicy=none

The hosts are running the latest buster kernel

Linux foobar 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

A downgrade of the kernel did not change anything. So I currently like to believe this is not related to a kernel change.

I tried to collect some information on one of the broken systems while it was in a broken state:

root@foobar:~# SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/eth0
=== trie on-disk ===
tool version:          241
file size:         9492053 bytes
header size             80 bytes
strings            2069269 bytes
nodes              7422704 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Skipping overridden file '/usr/lib/systemd/network/99-default.link'.
Parsed configuration file /etc/systemd/network/99-default.link
Created link configuration context.
ID_NET_DRIVER=i40e
eth0: No matching link configuration found.
Builtin command 'net_setup_link' fails: No such file or directory
Unload module index
Unloaded link configuration context.

root@foobar:~# udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx48df37944ab0
ID_OUI_FROM_DATABASE=Hewlett Packard Enterprise
ID_NET_NAME_ONBOARD=eno5
ID_NET_NAME_PATH=enp93s0f0

The most interesting hint right now seems to be that /sys/class/net/eth0/name_assign_type is invalid, while on systems before the reboot that breaks it, and after applying the .link file fix, it contains a 4.

Since those hosts were initially installed with buster, there are no remnants of any ethX-related configuration present. If someone has an idea what is going on, write a mail (sven at stormbind dot net), or blog on planet.d.o.

I found a vaguely similar bug report for a Dell PE server in #929622, though that was a change from 4.9 (stretch) to the 4.19 stretch-bpo kernel and the device names were not changed back to the ethX scheme, and Ben found a reason for it inside the kernel. Also the hardware is different using bnxt_en, while I've tg3 and i40e in use.

Cory DoctorowPodcast: Interoperability and Privacy: Squaring the Circle

In my latest podcast (MP3), I read my essay “Interoperability and Privacy: Squaring the Circle,” published today on EFF’s Deeplinks; it’s another in the series of “adversarial interoperability” explainers, this one focused on how privacy and adversarial interoperability relate to each other.

Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that’s not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like “tortious interference with contract.”

In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that’s not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.

Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company’s existing products or services, without seeking permission to do so.

MP3

Worse Than FailureCodeSOD: Nullable Knowledge

You’ve got a decimal value- maybe. It could be nothing at all, and you need to handle that null gracefully. Fortunately for you, C# has “nullable types”, which make this task easy.

Ian P’s co-worker made this straightforward application of nullable types.

public static decimal ValidateDecimal(decimal? value)
{
if (value == null) return 0;
decimal returnValue = 0;
Decimal.TryParse(value.ToString(), out returnValue);
return returnValue;
}

The lack of indentation was in the original.

The obvious facepalm is the Decimal.TryParse call. If our decimal has a value, we could just return it, but no, instead, we convert it to a string then convert that string back into a Decimal.

But the real problem here is someone who doesn’t understand what .NET’s nullable types offer. For starters, one could make the argument that value.HasValue is more readable than value == null, though that’s clearly debatable. That’s not really the problem though.

The purpose of ValidateDecimal is to return the input value, unless the input value was null, in which case we want to return 0. Nullable types have a lovely GetValueOrDefault() method, which returns the value, or a reasonable default. What is the default for any built in numeric type?

0.

This method doesn’t need to exist; it’s already built into the decimal? type. Of course, the built-in method almost certainly doesn’t do a string conversion to get its value, so the one with a string is better, is it knot?

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Krebs on SecuritySEC Investigating Data Leak at First American Financial Corp.

The U.S. Securities and Exchange Commission (SEC) is investigating a security failure on the Web site of real estate title insurance giant First American Financial Corp. that exposed more than 885 million personal and financial records tied to mortgage deals going back to 2003, KrebsOnSecurity has learned.

First American Financial Corp.

In May, KrebsOnSecurity broke the news that the Web site for Santa Ana, Calif.-based First American [NYSE:FAF] exposed some 885 million documents related to real estate closings over the past 16 years, including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts and driver's license images. No authentication was required to view the documents.

The initial tip on that story came from Ben Shoval, a real estate developer based in Seattle. Shoval said he recently received a letter from the SEC’s enforcement division which stated the agency was investigating the data exposure to determine if First American had violated federal securities laws.

In its letter, the SEC asked Shoval to preserve and share any documents or evidence he had related to the data exposure.

“This investigation is a non-public, fact-finding inquiry,” the letter explained. “The investigation does not mean that we have concluded that anyone has violated the law.”

The SEC declined to comment for this story.

Word of the SEC investigation comes weeks after regulators in New York said they were investigating the company in what could turn out to be the first test of the state’s strict new cybersecurity regulation, which requires financial companies to periodically audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful. First American also is now the target of a class action lawsuit that alleges it “failed to implement even rudimentary security measures.”

First American has issued a series of statements over the past few months that seem to downplay the severity of the data exposure, which the company said was the result of a “design defect” in its Web site.

On June 18, First American said a review of system logs by an outside forensic firm, “based on guidance from the company, identified 484 files that likely were accessed by individuals without authorization. The company has reviewed 211 of these files to date and determined that only 14 (or 6.6%) of those files contain non-public personal information. The company is in the process of notifying the affected consumers and will offer them complimentary credit monitoring services.”

In a statement on July 16, First American said its now-completed investigation identified just 32 consumers whose non-public personal information likely was accessed without authorization.

“These 32 consumers have been notified and offered complimentary credit monitoring services,” the company said.

First American has not responded to questions about how long this “design defect” persisted on its site, how far back it maintained access logs, or how far back in those access logs the company’s review extended.

Updated, Aug, 13, 8:40 a.m.: Added “no comment” from the SEC.

Planet DebianRobert McQueen: Flathub, brought to you by…

Over the past 2 years Flathub has evolved from a wild idea at a hackfest to a community of app developers and publishers making over 600 apps available to end-users on dozens of Linux-based OSes. We couldn’t have gotten anything off the ground without the support of the 20 or so generous souls who backed our initial fundraising, and to make the service a reality since then we’ve relied on the contributions of dozens of individuals and organisations such as Codethink, Endless, GNOME, KDE and Red Hat. But for our day to day operations, we depend on the continuous support and generosity of a few companies who provide the services and resources that Flathub uses 24/7 to build and deliver all of these apps. This post is about saying thank you to those companies!

Running the infrastructure

Mythic Beasts Logo

Mythic Beasts is a UK-based “no-nonsense” hosting provider who provide managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth as possible from the repo server and keep up sustained HTTP throughput during update operations. This helped significantly but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one, big, physical machine with directly attached storage because repo update and delta computations are hugely IO intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub, and it’s really hard to plug a USB key into a cloud instance, and know where it is and that it’s physically secure.

Building the apps

Our first build workers were under Alex’s desk, in Christian’s garage, and a VM donated by Scaleway for our first year. We still have several ARM workers donated by Codethink, but at the start of 2018 it became pretty clear within a few months that we were not going to keep up with the growing pace of builds without some more serious iron behind the Buildbot. We also wanted to be able to offer PR and test builds, beta builds, etc — all of which multiplies the workload significantly.

Packet Logo

Thanks to an introduction by the most excellent Jorge Castro and the approval and support of the Linux Foundation’s CNCF Infrastructure Lab, we were able to get access to an “all expenses paid” account at Packet. Packet is a “bare metal” cloud provider — like AWS except you get entire boxes and dedicated switch ports etc to yourself – at a handful of main datacenters around the world with a full range of server, storage and networking equipment, and a larger number of edge facilities for distribution/processing closer to the users. They have an API and a magical provisioning system which means that at the click of a button or one method call you can bring up all manner of machines, configure networking and storage, etc. Packet is clearly a service built by engineers for engineers – they are smart, easy to get hold of on e-mail and chat, share their roadmap publicly and set priorities based on user feedback.

We currently have 4 Huge Boxes (2 Intel, 2 ARM) from Packet which do the majority of the heavy lifting when it comes to building everything that is uploaded, and also use a few other machines there for auxiliary tasks such as caching source downloads and receiving our streamed logs from the CDN. We also used their flexibility to temporarily set up a whole separate test infrastructure (a repo, buildbot, worker and frontend on one box) while we were prototyping recent changes to the Buildbot.

A special thanks to Ed Vielmetti at Packet who has patiently supported our requests for lots of 32-bit compatible ARM machines, and for his support of other Linux desktop projects such as GNOME and the Freedesktop SDK who also benefit hugely from Packet’s resources for build and CI.

Delivering the data

Even with two redundant / load-balancing front end servers and huge amounts of bandwidth, OSTree repos have so many files that if those servers are too far away from the end users, the latency and round trips cause a serious problem with throughput. In the end you can’t distribute something like Flathub from a single physical location – you need to get closer to the users. Fortunately the OSTree repo format is very efficient to distribute via a CDN, as almost all files in the repository are immutable.

Fastly Logo

After a very speedy response to a plea for help on Twitter, Fastly – one of the world’s leading CDNs – generously agreed to donate free use of their CDN service to support Flathub. All traffic to the dl.flathub.org domain is served through the CDN, and automatically gets cached at dozens of points of presence around the world. Their service is frankly really really cool – the configuration and stats are really powerful, unlike any other CDN service I’ve used. Our configuration allows us to collect custom logs which we use to generate our Flathub stats, and to define edge logic in Varnish’s VCL which we use to allow larger files to stream to the end user while they are still being downloaded by the edge node, improving throughput. We also use their API to purge the summary file from their caches worldwide each time the repository updates, so that it can stay cached for longer between updates.

To get a feeling for how well this works, here are some statistics: The Flathub main repo is 929 GB, of which 73 GB are static deltas and 1.9 GB are screenshots. It contains 7280 refs for 640 apps (plus runtimes and extensions) over 4 architectures. Fastly is serving the dl.flathub.org domain fully cached, with a cache hit rate of ~98.7%. Averaging 9.8 million hits and 464 Gb downloaded per hour, Flathub uses between 1-2 Gbps sustained bandwidth depending on the time of day. Here are some nice graphs produced by the Fastly management UI (the numbers are per-hour over the last month):

Graph showing the requests per hour over the past month, split by hits and misses.
Graph showing the data transferred per hour over the past month.

To buy the scale of services and support that Flathub receives from our commercial sponsors would cost tens if not hundreds of thousands of dollars a month. Flathub could not exist without Mythic Beasts, Packet and Fastly‘s support of the free and open source Linux desktop. Thank you!

CryptogramEvaluating the NSA's Telephony Metadata Program

Interesting analysis: "Examining the Anomalies, Explaining the Value: Should the USA FREEDOM Act's Metadata Program be Extended?" by Susan Landau and Asaf Lubin.

Abstract: The telephony metadata program which was authorized under Section 215 of the PATRIOT Act, remains one of the most controversial programs launched by the U.S. Intelligence Community (IC) in the wake of the 9/11 attacks. Under the program major U.S. carriers were ordered to provide NSA with daily Call Detail Records (CDRs) for all communications to, from, or within the United States. The Snowden disclosures and the public controversy that followed led Congress in 2015 to end bulk collection and amend the CDR authorities with the adoption of the USA FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless, three issues emerged around the program. The first concern was over high numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court issued 40 orders for collection, but the NSA collected hundreds of millions of CDRs, and the agency provided little clarification for the high numbers. The second emerged in June 2018 when the NSA announced the purging of three years' worth of CDR records for "technical irregularities." Finally, in March 2019 it was reported that the NSA had decided to completely abandon the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First, we carefully analyze the numbers, showing how forty orders might lead to the collection of several million CDRs, thus offering a model to assist in understanding Intelligence Community transparency reporting across its surveillance programs. Second, we show how the architecture of modern telephone communications might cause collection errors that fit the reported reasons for the 2018 purge. Finally, we show how changes in the terrorist threat environment as well as in the technology and communication methods they employ -- in particular the deployment of asynchronous encrypted IP-based communications -- has made the telephony metadata program far less beneficial over time. We further provide policy recommendations for Congress to increase effective intelligence oversight.

Worse Than FailureInternship of Things

Mindy was pretty excited to start her internship with Initech's Internet-of-Things division. She'd been hearing at every job fair how IoT was still going to be blowing up in a few years, and how important it would be for her career to have some background in it.

It was a pretty standard internship. Mindy went to meetings, shadowed developers, did some light-but-heavily-supervised changes to the website for controlling your thermostat/camera/refrigerator all in one device.

As part of testing, Mindy created a customer account on the QA environment for the site. She chucked a junk password at it, only to get a message: "Your password must be at least 8 characters long, contain at least three digits, not in sequence, four symbols, at least one space, and end with a letter, and not be more than 10 characters."

"Um, that's quite the password rule," Mindy said to her mentor, Bob.

"Well, you know how it is, most people use one password for every site, and we don't want them to do that here. That way, when our database leaks again, it minimizes the harm."

"Right, but it's not like you're storing the passwords anyway, right?" Mindy said. She knew that even leaked hashes could be dangerous, but good salting/hashing would go a long way.

"Of course we are," Bob said. "We're selling web connected thermostats to what can be charitably called 'twelve-o-clock flashers'. You know what those are, right? Every clock in their house is flashing twelve?" Bob sneered. "They can't figure out the site, so we often have to log into their account to fix the things they break."

A few days later, Initech was ready to push a firmware update to all of the Model Q baby monitor cameras. Mindy was invited to watch the process so she could understand their workflow. It started off pretty reasonable: their CI/CD system had a verified build, signed off, ready to deploy.

"So, we've got a deployment farm running in the cloud," Bob explained. "There are thousands of these devices, right? So we start by putting the binary up in an S3 bucket." Bob typed a few commands to upload the binary. "What's really important for our process is that it follows this naming convention. Because the next thing we're going to do is spin up a half dozen EC2 instances- virtual servers in the cloud."

A few more commands later, and then Bob had six sessions open to cloud servers in tmux. "Now, these servers are 'clean instances', so the very first thing I have to do is upload our SSH keys." Bob ran an ssh-copy-id command to copy the SSH key from his computer up to the six cloud VMs.

"Wait, you're using your personal SSH keys?"

"No, that'd be crazy!" Bob said. "There's one global key for every one of our Model Q cameras. We've all got a copy of it on our laptops."

"All… the developers?"

"Everybody on the team," Bob said. "Developers to management."

"On their laptops?"

"Well, we were worried about storing something so sensitive on the network."

Bob continued the process, which involved launching a script that would query a webservice to see which Model Q cameras were online, then sshing into them, having them curl down the latest firmware, and then self-update. "For the first few days, we leave all six VMs running, but once most of them have gotten the update, we'll just leave one cloud service running," Bob explained. "Helps us manage costs."

It's safe to say Mindy learned a lot during her internship. Mostly, she learned, "don't buy anything from Initech."

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Planet DebianWilliam (Bill) Blough: Free Software Activities (July 2019)


Debian

  • Bug 932626: passwordsafe — Non-English locales don't work due to translation files being installed in the wrong directory.

    The fixed versions are:

    • unstable/testing: 1.06+dfsg-2
    • buster: 1.06+dfsg-1+deb10u1 (via 932945)
    • stretch: 1.00+dfsg-1+deb9u1 (via 932944)
  • Bug 932947: file — The --mime-type flag fails on arm64 due to seccomp

    Recently, there was a message on debian-devel about enabling seccomp sandboxing for the file utility. While I knew that passwordsafe uses file to determine some mime type information, testing on my development box (which is amd64-based) didn't show any problems.

However, this was happening around the same time that I was preparing the fix for 932626 as noted above. Lo and behold, when I uploaded the fix, everything went fine except on the arm64 architecture. The build there failed due to the package's test suite failing.

    After doing some troubleshooting on one of the arm64 porterboxes, it was clear that the seccomp change to file was the culprit. I haven't worked with arm64 very much, so I don't know all of the details. But based on my research, it appears that arm64 doesn't implement the access() system call, but uses faccessat() instead. However, in this case, seccomp was allowing calls to access(), but not calls to faccessat(). This led to the difference in behavior between arm64 and the other architectures.

    So I filed the bug to let the maintainer know the details, in hopes that the seccomp filters could be adjusted. However, it seems he took it as the "final straw" with regard to some of the other problems he was hearing about, and decided to revert the seccomp change altogether.

    Once the change was reverted, I requested a rebuild of the failed passwordsafe package on arm64 so it could be rebuilt against the fixed dependency without doing another full upload.

  • I updated django-cas-server in unstable to 1.1.0, which is the latest upstream version. I also did some miscellaneous cleanup/maintenance on the packaging.

  • I attended DebConf19 in Curitiba, Brazil.

    This was my 3rd DebConf, and my first trip to Brazil. Actually, it was my first trip to anywhere in the Southern Hemisphere.

    As usual, DebConf was quite enjoyable. From a technical perspective, there were lots of interesting talks. I learned some new things, and was also exposed to some new (to me) projects and techniques, as well as some new ideas in general. It also gave me some ideas of other ways/places I could potentially contribute to Debian.

    From a social perspective, it was a good opportunity to see and spend time with people that I normally only get to interact with via email or irc. I also enjoyed being able to put faces/voices to names that I only see on mailing lists. Even if I don't know or interact with them much, it really helps my mental picture when I'm reading things they wrote. And of course, I met some new people, too. It was nice to share stories and experiences over food and drinks, or in the hacklabs.

    If any of the DebConf team read this, thanks for your hard work. It was another great DebConf.

Planet DebianSteve McIntyre: DebConf in Brazil again!

Highvoltage and me

I was lucky enough to meet up with my extended Debian family again this year. We went back to Brazil for the first time since 2004, this time in Curitiba. And this time I didn't lose anybody's clothes! :-)

Rhonda modelling our diversity T!

I had a very busy time, as usual - lots of sessions to take part in, and lots of conversations with people from all over. As part of the Community Team (ex-AH Team), I had a lot of things to catch up on too, and a sprint report to send. Despite all that, I even managed to do some technical things too!

I ran sessions about UEFI Secure Boot, the Arm ports and the Community Team. I was meant to be running a session for the web team too, but the dreaded DebConf 'flu took me out for a day. It's traditional - bring hundreds of people together from all over the world, mix them up with too much alcohol and not enough sleep and many people get ill... :-( Once I'm back from vacation, I'll be doing my usual task of sending session summaries to the Debian mailing lists to describe what happened in my sessions.

Maddog showed a group of us round the micro-brewery at Hop'n'Roll which was extra fun. I'm sure I wasn't the only experienced guy there, but it's always nice to listen to geeky people talking about their passion.

Small group at Hop'n'Roll

Of course, I couldn't get to all the sessions I wanted to - there are just too many things going on in DebConf week, and sessions clash at the best of times. So I have a load of videos on my laptop to watch while I'm away. Heartfelt thanks to our always-awesome video team for their efforts to make that possible. And I know that I had at least one follower at home watching the live streams too!

Pepper watching the Arm BoF

Planet DebianDaniel Lange: Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

User ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 signatures removed
User ID "Robert J. Hansen <rob@enigmail.net>": 49704 signatures removed
User ID "Robert J. Hansen <rob@hansen.engineering>": 49701 signatures removed

pub  rsa3072/0x1DCBDC01B44427C7
     created: 2015-07-16  expires: never       usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/0xA83CAE94D3DC3873
     created: 2017-04-05  expires: never       usage: S  
sub  cv25519/0xAA24CC81B8AED08B
     created: 2017-04-05  expires: never       usage: E  
sub  rsa3072/0xDC0F82625FA6AADE
     created: 2015-07-16  expires: never       usage: E  
[ unknown ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unknown ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unknown ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely usable 3835-byte file of the clean public key. If you supply a keyring instead of --no-default-keyring it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to also clean up the keyserver filesystems and the keys that have been abused to turn the pgp key servers into a message board
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.
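
To make that acceptance rule concrete, here is a minimal Java sketch of the idea (the UploadedKey type and the strong-set lookup are purely illustrative assumptions, not GnuPG or SKS data structures):

import java.util.List;
import java.util.Set;

public class StrongSetGate {

    // Hypothetical view of an uploaded key: its own fingerprint plus the
    // fingerprints of the keys whose signatures it carries.
    record UploadedKey(String fingerprint, List<String> signerFingerprints) {}

    // Accept a new key only if at least one of its signatures was made by a key
    // that is already part of the known strong set.
    static boolean acceptKey(UploadedKey key, Set<String> strongSet) {
        return key.signerFingerprints().stream().anyMatch(strongSet::contains);
    }

    public static void main(String[] args) {
        Set<String> strongSet = Set.of("AAAA1111", "BBBB2222");
        UploadedKey vouchedFor = new UploadedKey("CCCC3333", List.of("AAAA1111", "FFFF6666"));
        UploadedKey unknown = new UploadedKey("DDDD4444", List.of("EEEE5555"));
        System.out.println(acceptKey(vouchedFor, strongSet)); // true  -> accept
        System.out.println(acceptKey(unknown, strongSet));    // false -> reject
    }
}

A real implementation would of course also verify the signatures cryptographically before trusting the signer fingerprints.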

Updates

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

  * gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
  * gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
  * gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
  * gpg: New import option "self-sigs-only".
  * gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
  * dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
  * dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
  * dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
  * dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
  * gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.

10.08.2019

Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a Go program that can add thousands of malicious signatures to a GnuPG key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector: public subkeys can be bound to a primary key of choice.

Planet DebianPetter Reinholdtsen: Legal to share more than 16,000 movies listed on IMDB?

The recent announcement from the New York Public Library on its results in identifying books published in the USA that are now in the public domain inspired me to update the scripts I use to track down movies that are in the public domain. This involved updating the script used to extract lists of movies believed to be in the public domain, to work with the latest version of the source web sites. In particular, the new edition of the Retro Film Vault web site now seems to list all the films available from that distributor, bringing the films identified there to more than 12,000 movies, and I was able to connect 46% of these to IMDB titles.

The new total is 16307 IMDB IDs (aka films) in the public domain or creative commons licensed, and unknown status for 31460 movies (possibly duplicates of the 16307).

The complete data set is available from a public git repository, including the scripts used to create it.

Anyway, this is the summary of the 28 collected data sources so far:

 2361 entries (   50 unique) with and 22472 without IMDB title ID in free-movies-archive-org-search.json
 2363 entries (  146 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  299 entries (   32 unique) with and    93 without IMDB title ID in free-movies-cinemovies.json
   88 entries (   52 unique) with and    36 without IMDB title ID in free-movies-creative-commons.json
 3190 entries ( 1532 unique) with and    13 without IMDB title ID in free-movies-fesfilm-xls.json
  620 entries (   24 unique) with and   283 without IMDB title ID in free-movies-fesfilm.json
 1080 entries (  165 unique) with and   651 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   13 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 7410 entries ( 7101 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
 1205 entries (   41 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  163 entries (   22 unique) with and    88 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  103 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (   71 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  248 entries (   85 unique) with and     0 without IMDB title ID in free-movies-manual.json
  158 entries (    4 unique) with and    64 without IMDB title ID in free-movies-mubi.json
   85 entries (    1 unique) with and    23 without IMDB title ID in free-movies-openflix.json
  520 entries (   22 unique) with and   244 without IMDB title ID in free-movies-profilms-pd.json
  343 entries (   14 unique) with and    10 without IMDB title ID in free-movies-publicdomainmovies-info.json
  701 entries (   16 unique) with and   560 without IMDB title ID in free-movies-publicdomainmovies-net.json
   74 entries (   13 unique) with and    60 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   16 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 5506 entries ( 2941 unique) with and  6585 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
  110 entries (    2 unique) with and    29 without IMDB title ID in free-movies-two-movies-net.json
   73 entries (   20 unique) with and   131 without IMDB title ID in free-movies-vodo.json
16307 unique IMDB title IDs in total, 12509 only in one list, 31460 without IMDB title ID

New this time is a list of all the identified IMDB titles, with title, year and running time, provided in free-complete.json. This file also indicates which source was used to conclude that the video is free to distribute.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

,

CryptogramFriday Squid Blogging: Sinuous Asperoteuthis Mangoldae Squid

Great video of the Sinuous Asperoteuthis Mangoldae Squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityiNSYNQ Ransom Attack Began With Phishing Email

A ransomware outbreak that hit QuickBooks cloud hosting firm iNSYNQ in mid-July appears to have started with an email phishing attack that snared an employee working in sales for the company, KrebsOnSecurity has learned. It also looks like the intruders spent roughly ten days rooting around iNSYNQ’s internal network to properly stage things before unleashing the ransomware. iNSYNQ ultimately declined to pay the ransom demand, and it is still working to completely restore customer access to files.

Some of this detail came in a virtual “town hall” meeting held August 8, in which iNSYNQ chief executive Elliot Luchansky briefed customers on how it all went down, and what the company is doing to prevent such outages in the future.

A great many of iNSYNQ’s customers are accountants, and when the company took its network offline on July 16 in response to the ransomware outbreak, some of those customers took to social media to complain that iNSYNQ was stonewalling them.

“We could definitely have been better prepared, and it’s totally unacceptable,” Luchansky told customers. “I take full responsibility for this. People waiting ridiculous amounts of time for a response is unacceptable.”

By way of explaining iNSYNQ’s initial reluctance to share information about the particulars of the attack early on, Luchansky told customers the company had to assume the intruders were watching and listening to everything iNSYNQ was doing to recover operations and data in the wake of the ransomware outbreak.

“That was done strategically for a good reason,” he said. “There were human beings involved with [carrying out] this attack in real time, and we had to assume they were monitoring everything we could say. And that posed risks based on what we did say publicly while the ransom negotiations were going on. It could have been used in a way that would have exposed customers even more. That put us in a really tough bind, because transparency is something we take very seriously. But we decided it was in our customers’ best interests to not do that.”

A paid ad that comes up prominently when one searches for “insynq” in Google.

Luchansky did not say how much the intruders were demanding, but he mentioned two key factors that informed the company’s decision not to pay up.

“It was a very substantial amount, but we had the money wired and were ready to pay it in cryptocurrency in the case that it made sense to do so,” he told customers. “But we also understood [that paying] would put a target on our heads in the future, and even if we actually received the decryption key, that wasn’t really the main issue here. Because of the quick reaction we had, we were able to contain the encryption part” to roughly 50 percent of customer systems, he said.

Luchansky said the intruders seeded its internal network with MegaCortex, a potent new ransomware strain first spotted just a couple of months ago that is being used in targeted attacks on enterprises. He said the attack appears to have been carefully planned out in advance and executed “with human intervention all the way through.”

“They decided they were coming after us,” he said. “It’s one thing to prepare for these sorts of events but it’s an entirely different experience to deal with first hand.”

According to an analysis of MegaCortex published this week by Accenture iDefense, the crooks behind this ransomware strain are targeting businesses — not home users — and demanding ransom payments in the range of two to 600 bitcoins, which is roughly $20,000 to $5.8 million.

“We are working for profit,” reads the ransom note left behind by the latest version of MegaCortex. “The core of this criminal business is to give back your valuable data in the original form (for ransom of course).”

A portion of the ransom note left behind by the latest version of MegaCortex. Image: Accenture iDefense.

Luchansky did not mention in the town hall meeting exactly when the initial phishing attack was thought to have occurred, noting that iNSYNQ is still working with California-based CrowdStrike to gain a more complete picture of the attack.

But Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security, showed KrebsOnSecurity information obtained from monitoring dark web communications which suggested the problem started on July 6, after an employee in iNSYNQ’s sales division fell for a targeted phishing email.

“This shows that even after the initial infection, if companies act promptly they can still detect and stop the ransomware,” Holden said. “For these infections hackers take sometimes days, weeks, or even months to encrypt your data.”

iNSYNQ did not respond to requests for comment on Hold Security’s findings.

Asked whether the company had backups of customer data and — if so — why iNSYNQ decided not to restore from those, Luchansky said there were backups but that some of those were also infected.

“The backup system is backing up the primary system, and that by definition entails some level of integration,” Luchansky explained. “The way our system was architected, the malware had spread into the backups as well, at least a little bit. So [by] just turning the backups back on, there was a good chance the virus would then start to spread through the backup system more. So we had to treat the backups similarly to how we were treating the primary systems.”

Luchansky said their backup system has since been overhauled, and that if a similar attack happened in the future it would take days instead of weeks to recover. However, he declined to get into specifics about exactly what had changed, which is too bad because in every ransomware attack story I’ve written this seems to be the detail most readers are interested in and arguing about.

The CEO added that iNSYNQ also will be partnering with a company that helps firms detect and block targeted phishing attacks, and that it envisioned being able to offer this to its customers at a discounted rate. It wasn’t clear from Luchansky’s responses to questions whether the cloud hosting firm was also considering any kind of employee anti-phishing education and/or testing service.

Luchansky said iNSYNQ was able to restore access to more than 90 percent of customer files by Aug. 2 — roughly two weeks after the ransomware outbreak — and that the company would be offering customers a two month credit as a result of the outage.

Sociological ImagesData Science Needs Social Science

What do college graduates do with a sociology major? We just got an updated look from Phil Cohen this week:

These are all great career fields for our students, but as I was reading the list I realized there is a huge industry missing: data science and analytics. From Netflix to national policy, many interesting and lucrative jobs today are focused on properly observing, understanding, and trying to predict human behavior. With more sociology graduate programs training their students in computational social science, there is a big opportunity to bring those skills to teaching undergraduates as well.

Of course, data science has its challenges. Social scientists have observed that the booming field has some big problems with bias and inequality, but this is sociology’s bread and butter! When we talk about these issues, we usually go straight to very important conversations about equity, inclusion, and justice, and rightfully so; it is easy to design algorithms that seem like they make better decisions, but really just develop their own biases from watching us.

We can also tackle these questions by talking about research methods–another place where sociologists shine! We spend a lot of time thinking about whether our methods for observing people are valid and reliable. Are we just watching talk, or action? Do people change when researchers watch them? Once we get good measures and a strong analytic approach, can we do a better job explaining how and why bias happens to prevent it in the future?

Sociologists are well-positioned to help make sense of big questions in data science, and the field needs them. According to a recent industry report, only 5% of data scientists come out of the social sciences! While other areas of study may provide more of the technical skills to work in analytics, there is only so much that the technology can do before companies and research centers need to start making sense of social behavior. 

Source: Burtch Works Executive Recruiting. 2018. “Salaries of Data Scientists.” Emphasis Mine

So, if students or parents start up the refrain of “what can you do with a sociology major” this fall, consider showing them the social side of data science!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianAndy Simpkins: gov.uk paperwork

[Edit: Removed accusation of non UK hosting – thank you to Richard Mortimer & Philipp Edelmann for pointing out I had incorrectly looked up the domain “householdresponce.com” in place of  “householdresponse.com”.  Learn to spell…]

I live in England, where the government keeps an Electoral Roll, a list of people registered to vote.  This list needs to be maintained, so once a year we are required to update the database.  To make sure we don't forget, we get sent a handy letter through the door looking like this:

Is it a scam?

Well that’s the first page anyway.   Correctly addressed to the “Current Occupier”.   So why am I posting about this?

Phishing emails land in our inbox all the time (hopefully only a few because our spam filters eat the rest).  These are unsolicited emails trying to trick us into doing something; usually they look like something official and warn us about something that we should take action about, for example an email that looks like it has come from your bank warning about suspicious activity in your account.  They then ask you to follow a link to the 'bank's website' where you can log in and confirm whether the activity is genuine – obviously taking you through a 'man in the middle' website that harvests your account credentials.

The government is justifiably concerned about this (as too are banks and other businesses that are impersonated in this way) and so runs media campaigns to educate the public about the dangers of such scams and what to look out for.

So back to the “Household Enquiry” form…

How do I know that this is genuine?  Well, I don't.  I can't easily verify the letter, I can't be sure who sent it, and it arrived through my letterbox unbidden.  Even if I was expecting it, wouldn't the perfect time to send such a scam letter be exactly when the genuine letters are being distributed?

All I can do is read the letter carefully and apply the same rational tests that I would to any unsolicited (e)mail.

1) Does it claim to come from a source I would have dealings with? (Bulk mailing is so cheap that sending to huge numbers of people is still effective even if most of the recipients will know it is a scam because they wouldn't have dealings with the alleged sender.)  Yes: it claims to have been sent by South Cambridgeshire District Council, and they are my district council and would send me this letter.

2) Do all the communication links point to the sender?  No.  Stop: this is probably a scam.

 

Alarm bells should now be ringing – their preferred method of communication is for me to visit the website www.householdresponse.com/southcambs.  Sure, they have a gov.uk website mentioned, they claim to be South Cambridgeshire District Council and they have an email address elections@scambs.gov.uk, but all the fake emails claiming to come from my bank look like they come from my bank as well – the only thing that doesn't is the link they want you to follow.  Just like this letter…

OK, time for a bit of detective work

:~$whois householdresponse.com
 Domain Name: HOUSEHOLDRESPONSE.COM
 Registry Domain ID: 2036860356_DOMAIN_COM-VRSN
 Registrar WHOIS Server: whois.easyspace.com
 Registrar URL: http://www.easyspace.com
 Updated Date: 2018-05-23T05:56:38Z
 Creation Date: 2016-06-22T09:24:15Z
 Registry Expiry Date: 2020-06-22T09:24:15Z
 Registrar: EASYSPACE LIMITED
 Registrar IANA ID: 79
 Registrar Abuse Contact Email: abuse@easyspace.com
 Registrar Abuse Contact Phone: +44.3707555066
 Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
 Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
 Name Server: NS1.NAMECITY.COM
 Name Server: NS2.NAMECITY.COM
 DNSSEC: unsigned
 URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2019-08-09T17:05:57Z <<<
<snip>

Really?  Just a hosting company's details for a domain claiming to be from local government?

:~$nslookup householdresponse.com
<snip>
Name: householdresponse.com
Address: 62.25.101.164

:~$ whois 62.25.101.164
<snip>
% Information related to '62.25.64.0 - 62.25.255.255'

% Abuse contact for '62.25.64.0 - 62.25.255.255' is 'ipabuse@vodafone.co.uk'

inetnum: 62.25.64.0 - 62.25.255.255
netname: UK-VODAFONE-WORLDWIDE-20000329
country: GB
org: ORG-VL225-RIPE
admin-c: GNOC4-RIPE
tech-c: GNOC4-RIPE
status: ALLOCATED PA
mnt-by: RIPE-NCC-HM-MNT
mnt-by: VODAFONE-WORLDWIDE-MNTNER
mnt-lower: VODAFONE-WORLDWIDE-MNTNER
mnt-domains: VODAFONE-WORLDWIDE-MNTNER
mnt-routes: VODAFONE-WORLDWIDE-MNTNER
created: 2017-10-18T09:50:20Z
last-modified: 2017-10-18T09:50:20Z
source: RIPE # Filtered

organisation: ORG-VL225-RIPE
org-name: Vodafone Limited
org-type: LIR
address: Vodafone House, The Connection
address: RG14 2FN
address: Newbury
address: UNITED KINGDOM
phone: +44 1635 33251
admin-c: GSOC-RIPE
tech-c: GSOC-RIPE
abuse-c: AR40377-RIPE
mnt-ref: CW-EUROPE-GSOC
mnt-by: RIPE-NCC-HM-MNT
mnt-by: CW-EUROPE-GSOC
created: 2017-05-11T14:35:11Z
last-modified: 2018-01-03T15:48:36Z
source: RIPE # Filtered

role: Cable and Wireless IP GNOC Munich
remarks: Cable&Wireless Worldwide Hostmaster
address: Smale House
address: London SE1
address: UK
admin-c: DOM12-RIPE
admin-c: DS3356-RIPE
admin-c: EJ343-RIPE
admin-c: FM1414-RIPE
admin-c: MB4
tech-c: AB14382-RIPE
tech-c: MG10145-RIPE
tech-c: DOM12-RIPE
tech-c: JO361-RIPE
tech-c: DS3356-RIPE
tech-c: SA79-RIPE
tech-c: EJ343-RIPE
tech-c: MB4
tech-c: FM1414-RIPE
abuse-mailbox: ipabuse@vodafone.co.uk
nic-hdl: GNOC4-RIPE
mnt-by: CW-EUROPE-GSOC
created: 2004-02-03T16:44:58Z
last-modified: 2017-05-25T12:03:34Z
source: RIPE # Filtered

% Information related to '62.25.64.0/18AS1273'

route: 62.25.64.0/18
descr: Vodafone Hosting
origin: AS1273
mnt-by: ENERGIS-MNT
created: 2019-02-28T08:50:03Z
last-modified: 2019-02-28T08:57:04Z
source: RIPE

% Information related to '62.25.64.0/18AS2529'

route: 62.25.64.0/18
descr: Energis UK
origin: AS2529
mnt-by: ENERGIS-MNT
created: 2014-03-26T16:21:40Z
last-modified: 2014-03-26T16:21:40Z
source: RIPE

Is this a scam…
I only wish it was :-(

A quick search of https://www.scambs.gov.uk/elections/electoral-registration-faqs/ and the very first thing on the webpage is a link to www.householdresponse.com/southcambs…

A phone call to the council, just to confirm that they haven't been hacked, and I am told that yes, this is for real.

OK, let's look at the privacy statement (on the same letter)

Right, a link to a gov.uk website… https://www.scambs.gov.uk/privacynotice

A copy of this page as of 2019-08-09 (because websites have a habit of changing) can be found here:
http://koipond.org.uk/photo/Screenshot_2019-08-09_CustomerPrivacyNotice.png

[Edit
I originally thought that this was being hosted outside the UK (on a US-based server), which would be outside of GDPR.  I am still pissed off that this looks and feels 'spammy' and that the site is being hosted outside of the gov.uk domain, but this is not the righteous rage that I previously felt]

Summary Of Issue

  1. UK Government, District and Local Councils should be exemplars of best practice.  Any correspondence from any part of UK government should only use websites within the gov.uk domain (fraud prevention)

Actions taken

  • 2019-08-09
    • I spoke with South Cambridgeshire District Council and confirmed that this was genuine
    • Spoke with South Cambridgeshire District Council Electoral Services Team and made them aware of both issues (and sent follow up email)
    • Spoke with the ICO and asked for advice.  They will take up the issue if South Cambs do not resolve this within 20 working days.
    • Spoke again with the ICO – even though I had mistakenly believed this was being hosted outside the UK and this is not the case, they are still interested in pushing for a move to a gov.uk domain

 

Planet DebianMike Gabriel: Kudos to the Rspamd developers

I just migrated the first mail server site – a customer's – away from Amavis+SpamAssassin to Rspamd. The main reasons for the migration were speed, and that the setup needed a polish-up anyway. People on site had been complaining about too much SPAM for quite a while. Plus, it is always good to dive into something new. Mission accomplished.

Implemented functionalities:

  • Sophos AV (savdi) antivirus checks backend
  • Clam AV antivirus backend as fallback
  • Auto-Learner CRON Job for SPAM mails published by https://artinvoice.hu
  • Work-around lacking http proxy support

Unfortunately, I could not enable the full scope of Rspamd features, as that specific site I worked on is on a private network, behind a firewall, etc. Some features don't make sense there (e.g. greylisting) or are hard-disabled in Rspamd once it detects that the mail host is on some local network infrastructure (local as in RFC-1918, or the corresponding fd00:: RFC for IPv6 I currently can't remember).

Kudos + Thanks!

Rspamd is just awesome!!! I am really really pleased with the result (and so is the customer, I heard). Thanks to the upstream developers, thanks to the Debian maintainers of the rspamd Debian package. [1]

Credits + Thanks for sharing your Work

The main part of the work had already been documented in a blog post [2] by someone with the nick "zac" (no real name found). Thanks for that!

The Sophos AV integration was a little tricky at the start, but worked out well, after some trial and error, log reading, Rspamd code studies, etc.

About halfway through, one tricky part popped up that could be avoided by the Rspamd upstream maintainers in future releases. As far as I can tell from [3], Rspamd lacks support for retrieving its map files and such (hosted on *.rspamd.com, or other 3rd party providers) via an http proxy server. This was nearly a full blocker for my last project, as the customer's mail gateway is part of a larger infrastructure and hosted inside a double ring of firewalls. The only access to the internet leads over a non-transparent squid proxy server (one which I don't have control over).

To work around this, I set up a transparent https proxy on "localhost", using a neat Python script [4]. Thanks for sharing this script.

I love all the sharing we do in FLOSS

Working on projects like this is just pure fun, and deeply interesting as well. This one has been 98% FLOSS and 100% in the spirit of FLOSS and its culture of sharing. I love this spirit of sharing one's work with the rest of the world, whether someone finds what I have to share useful or not.

I invite everyone to join in with sharing and in fact, for the IT business, I dearly recommend it.

I did not post config snippets here and such (as some of them are really customer specific), but if you stumble over similar issues when setting up your anti-SPAM gateway mail site using Rspamd, feel free to poke me and I'll see how I can help.

light+love
Mike (aka sunweaver at debian.org)

References

Worse Than FailureError'd: Intentionally Obtuse

"Normally I do pretty well on the Super Quiz, but then they decided to do it in Latin," writes Mike S.

 

"Uh oh, this month's AWS costs are going to be so much higher than last month's!" Ben H. writes.

 

Amanda C. wrote, "Oh, neat, Azure has some recommendations...wait...no...'just kidding' I guess?"

 

"Here I never thought that SQL Server log space could go negative, and yet, here we are," Michael writes.

 

"I love the form factor on this motherboard, but I'm not sure what case to buy with it," Todd C. writes, "Perhaps, if it isn't working, I can just give it a little kick?"

 

Maarten C. writes, "Next time, I'll name my spreadsheets with dog puns...maybe that'll make things less ruff."

 


,

CryptogramSupply-Chain Attack against the Electron Development Platform

Electron is a cross-platform development system for many popular communications apps, including Skype, Slack, and WhatsApp. Security vulnerabilities in the update system allow someone to silently inject malicious code into applications. From a news article:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron's JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework -- and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response -- and the vulnerability remains.

While making these changes required administrator access on Linux and MacOS, it only requires local access on Windows. Those modifications can create new event-based "features" that can access the file system, activate a Web cam, and exfiltrate information from systems using the functionality of trusted applications -- including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.

Basically, the Electron ASAR files aren't signed or encrypted, so modifying them is easy.

Note that this attack requires local access to the computer, which means that an attacker that could do this could do much more damaging things as well. But once an app has been modified, it can be distributed to other users. It's not a big deal attack, but it's a vulnerability that should be closed.

CryptogramAT&T Employees Took Bribes to Unlock Smartphones

This wasn't a small operation:

A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

An indictment alleges that "Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T's proprietary locking software that prevented ineligible phones from being removed from AT&T's network," a DOJ announcement yesterday said. "The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars -- paying one co-conspirator $428,500 over the five-year scheme."

In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.

Worse Than FailureCodeSOD: Swimming Downstream

When Java added their streams API, they brought the power and richness of functional programming styles to the JVM, if we ignore all the other non-Java JVM languages that already did this. Snark aside, streams were a great addition to the language, especially if we want to use them absolutely wrong.

Like this code Miles found.

See, every object in the application needs to have a unique identifier. So, for every object, there’s a method much like this one:

/**
 * Get next priceId
 *
 * @return next priceId
 */
public String createPriceId() {
    List<String> ids = this.prices.stream().map(m -> m.getOfferPriceId()).collect(Collectors.toList());
    for (Integer i = 0; i < ids.size(); i++) {
        ids.set(i, ids.get(i).split("PR")[1]);
    }
    try {
        List<Integer> intIds = ids.stream().map(id -> Integer.parseInt(id)).collect(Collectors.toList());
        Integer max = intIds.stream().mapToInt(id -> id).max().orElse(0);
        return "PR" + (max + 1);
    } catch (Exception e) {
        return "PR" + 1;
    }
}

The point of a stream is that you can build a processing pipeline: starting with a list, you can perform a series of operations but only touch each item in the stream once. That, of course, isn’t what we do here.

First, we map the prices to extract the offerPriceId and convert it into a list. Now, this list is a set of strings, so we iterate across that list of IDs, to break the "PR" prefix off. Then, we’ll map that list of IDs again, to parse the strings into integers. Then, we’ll cycle across that new list one more time, to find the max value. Then we can return a new ID.

And if anything goes wrong in this process, we won’t complain. We just return an ID that’s likely incorrect: "PR1". That’ll probably cause an error later, right? They can deal with it then.

Everything here is just wrong. This is the wrong way to use streams: the whole point is that this could have been a single chain of function calls that only needed to iterate across the input data once. It’s also the wrong way to handle exceptions. And it’s also the wrong way to generate IDs.
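
For comparison, here is a minimal sketch of the same logic as a single stream pipeline, assuming a hypothetical Price type with the same getOfferPriceId() accessor as the snippet above (and leaving aside whether IDs should be generated this way at all):

import java.util.List;

public class PriceIdExample {

    // Hypothetical price record standing in for the model class in the article.
    record Price(String offerPriceId) {
        String getOfferPriceId() { return offerPriceId; }
    }

    // One pass over the data: strip the "PR" prefix, parse, take the max, add one.
    static String createPriceId(List<Price> prices) {
        int max = prices.stream()
                .map(Price::getOfferPriceId)
                .map(id -> id.substring("PR".length()))
                .mapToInt(Integer::parseInt)
                .max()
                .orElse(0);
        return "PR" + (max + 1);
    }

    public static void main(String[] args) {
        List<Price> prices = List.of(new Price("PR7"), new Price("PR12"));
        System.out.println(createPriceId(prices)); // PR13
    }
}

A malformed ID would still throw here; whether to propagate that or fall back to something sane is a separate decision, but silently returning "PR1" is not it.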

Worse, a largely copy/pasted version of this code, with the names and prefixes changed, exists in nearly every model class. And these are database model classes, according to Miles, so one has to wonder if there might be a better way to generate IDs…


Valerie AuroraGoth fashion tips for Ehlers-Danlos Syndrome

A woman wearing a dramatic black hooded jacket typing on a laptop
Skingraft hoodie, INC shirt, Fisherman’s Wharf fingerless gloves

My ideal style could perhaps be best described as “goth chic”—a lot of structured black somewhere on the border between couture and business casual—but because I have Ehlers-Danlos Syndrome, I more often end up wearing “sport goth”: a lot of stretchy black layers in washable fabrics with flat shoes. With great effort, I’ve nudged my style back towards “goth chic,” at least on good days. Enough people have asked me about my gear that I figured I’d share what I’ve learned with other EDS goths (or people who just like being comfortable and also wearing a lot of black).

Here are the constraints I’m operating under:

  • Flat shoes with thin soles to prevent ankle sprains and foot and back pain
  • Stretchy/soft shoes without pressure points to prevent blisters on soft skin
  • Can’t show sweat because POTS causes excessive sweating, also I walk a lot
  • Layers because POTS, walking, and San Francisco weather means I need to adjust my temperature a lot
  • Little or no weight on shoulders due to hypermobile shoulders
  • No tight clothes on abdomen due to pain (many EDS folks don’t have this problem but I do)
  • Soft fabric only touching skin due to sensitive easily irritated skin
  • Warm wrists to prevent hands from losing circulation due to Reynaud’s or POTS

On the other hand, I have a few things that make fashion easier for me. For starters, I can afford a four-figure annual clothing budget. I still shop a lot at thrift stores, discount stores like Ross, or discount versions of more expensive stores like Nordstrom Rack but I can afford a few expensive pieces at full price. Many of the items on this page can be found used on Poshmark, eBay, and other online used clothing marketplaces. I also recommend doing the math for “cost per wear” to figure out if you would save money if you wore a more expensive but more durable piece for a longer period of time. I usually keep clothing and shoes for several years and repair as necessary.

I currently fit within the “standard” size ranges of most clothing and shoe brands, but many of the brands I recommend here have a wider range of sizes. I’ve included the size range where relevant.

Finally, as a cis woman with an extremely femme body type, I can wear a wide range of masculine and feminine styles without being hassled in public for being gender-nonconforming (I still get hassled in public for being a woman, yay). Most of the links here are to women’s styles, but many brands also have men’s styles. (None of these brands have unisex styles that I know of.)

Shoes and socks

Shoes are my favorite part of fashion! I spend much more money on shoes than I used to because more expensive shoes are less likely to give me blisters. If I resole/reheel/polish them regularly, they can last for several years instead of a few months, so they cost the same per wear. Functional shoes are notoriously hard for EDS people to find, so the less often I have to search for new shoes, the better. I nearly always wear my shoes until they can no longer be repaired. If this post does nothing other than convince you that it is economical and wise to spend more money on shoes, I have succeeded.

Woman wearing two coats and holding two rolling bags
Via Spiga trench, Mossimo hoodie, VANELi flats, Aimee Kestenberg rolling laptop bag, Travelpro rolling bag

Smartwool black socks – My poor tender feet need cushiony socks that don’t sag or rub. Smartwool socks are expensive but last forever, and you can get them in 100% black so that you can wash them with your black clothes without covering them in little white balls. I wear mostly the men’s Walk Light Crew and City Slicker, with occasional women’s Hide and Seek No Show.

Skechers Cleo flats – These are a line of flats in a stretchy sweater-like material. The heel can be a little scratchy, but I sewed ribbon over the seam and it was fine. The BOBS line of Skechers is also extremely comfortable. Sizes 5 – 11.

VANELi flats – The sportier versions of these shoes are obscenely comfortable and also on the higher end of fashion. I wore my first pair until they had holes in the soles, and then I kept wearing them another year. I’m currently wearing out this pair. You can get them majorly discounted at DSW and similar places. Sizes 5 – 12.

Stuart Weitzman 5050 boots – These over-the-knee boots are the crown jewel of any EDS goth wardrobe. First, they are almost totally flat and roomy in the toe. Second, the elastic in the boot shaft acts like compression socks, helping with POTS. Third, they look amazing. Charlize Theron wore them in “Atomic Blonde” while performing martial arts. Angelina Jolie wears these in real life. The downside is the price, but there is absolutely no reason to pay full price. I regularly find them in Saks Off 5th for 30% off. Also, they last forever: with reheeling, my first pair lasted around three years of heavy use. Stuart Weitzman makes several other flat boots with elastic shafts which are also worth checking out, but they have been making the 5050 for around 25 years so this style should always be available. Sizes 4 – 12, runs about a half size large.

Pants/leggings/skirts

A woman wearing black leggings and a black sweater
Patty Boutik sweater, Demobaza leggings, VANELi flats

Satina high-waisted leggings – I wear these extremely cheap leggings probably five days a week under skirts or dresses. Available in two sizes, S – L and XL – XXXL. If you can wear tight clothing, you might want to check out the Spanx line of leggings (e.g. the Moto Legging) which I would totally wear if I could.

Toad & Co. Women’s Chaka skirt – I wear this skirt probably three days a week. Ridiculously comfortable and only middling expensive. Sizes XS – L.

NYDJ jeans/leggings – These are pushing it for me in terms of tightness, but I can wear them if I’m standing or walking most of the day. Expensive, but they look professional and last forever. Sizes 00 – 28, including petites, and  they run at least a size large.

Demobaza leggings – The leggings made mostly of stretch material are amazingly comfortable, but also obscenely expensive. They also last forever. Sizes XS – L.

Tops

Patty Boutik – This strange little label makes comfortable tops with long long sleeves and long long bodies, and it keeps making the same styles for years. Unfortunately, they tend to sell out of the solid black versions of my favorite tops on a regular basis. I order two or three of my favorite styles whenever they are in stock as they are reasonably cheap. I’ve been wearing the 3/4 sleeve boat neck shirt at least once a week for about 5 years now. Sizes XS – XL, tend to run a size small.

14th and Union – This label makes very simple pieces out of the most comfortable fabrics I’ve ever worn for not very much money. I wear this turtleneck long sleeve tee about once a week. I also like their skirts. Sizes XS to XL, standard and petite.

Macy’s INC – This label is a reliable source of stretchy black clothing at Macy’s prices. It often edges towards club wear but keeps the simplicity I prefer.

Coats

Mossimo hoodie – Ugh, I love this thing. It’s the perfect cheap fashion staple. I often wear it underneath other coats. Not sure about sizes since it is only available on resale sites.

Skingraft Royal Hoodie – A vastly more expensive version of the black hoodie, but still comfortable, stretchy, and washable. And oh so dramatic. Sizes XS – L.

3/4 length hooded black trench coat – Really any brand will do, but I’ve mostly recently worn out a Calvin Klein and am currently wearing a Via Spiga.

Accessories

A woman wearing all black with a fanny pack
Mossimo hoodie, Toad & Co. skirt, T Tahari fanny pack, Satina leggings, VANELi flats

Fingerless gloves – The cheaper, the better! I buy these from the tourist shops at Fisherman’s Wharf in San Francisco for under $10. I am considering these gloves from Demobaza.

Medline folding cane – Another cheap fashion staple for the EDS goth! Sturdy, adjustable, folding black cane with clean sleek lines.

T Tahari Logo Fanny Pack – I stopped being able to carry a purse right about the time fanny packs came back into style! Ross currently has an entire fanny pack section, most of which are under $13. If I’m using a backpack or the rolling laptop bag, I usually keep my wallet, phone, keys, and lipstick in the fanny pack for easy access.

Duluth Child’s Pack, Envelope style – A bit expensive, but another simple fashion staple. I used to carry the larger roll-top canvas backpack until I realized I was packing it full of stuff and aggravating my shoulders. The child’s pack barely fits a small laptop and a few accessories.

Aimee Kestenberg rolling laptop bag – For the days when I need more than I can fit in my tiny backpack and fanny pack. It has a strap to fit on to the handle of a rolling luggage bag, which is great for air travel.

Apple Watch – The easiest way to diagnose POTS! (Look up “poor man’s tilt table test.”) A great way to track your heart rate and your exercise, two things I am very focused on as someone with EDS. When your first watch band wears out, go ahead and buy a random cheap one off the Internet.

That’s my EDS goth fashion tips! If you have more, please share them in the comments.

,

Krebs on SecurityWho Owns Your Wireless Service? Crooks Do.

Incessantly annoying and fraudulent robocalls. Corrupt wireless company employees taking hundreds of thousands of dollars in bribes to unlock and hijack mobile phone service. Wireless providers selling real-time customer location data, despite repeated promises to the contrary. A noticeable uptick in SIM-swapping attacks that lead to multi-million dollar cyberheists.

If you are somehow under the impression that you — the customer — are in control over the security, privacy and integrity of your mobile phone service, think again. And you’d be forgiven if you assumed the major wireless carriers or federal regulators had their hands firmly on the wheel.

No, a series of recent court cases and unfortunate developments highlight the sad reality that the wireless industry today has all but ceded control over this vital national resource to cybercriminals, scammers, corrupt employees and plain old corporate greed.

On Tuesday, Google announced that an unceasing deluge of automated robocalls had doomed a feature of its Google Voice service that sends transcripts of voicemails via text message.

Google said “certain carriers” are blocking the delivery of these messages because all too often the transcripts resulted from unsolicited robocalls, and that as a result the feature would be discontinued by Aug. 9. This is especially rich given that one big reason people use Google Voice in the first place is to screen unwanted communications from robocalls, mainly because the major wireless carriers have shown themselves incapable or else unwilling to do much to stem the tide of robocalls targeting their customers.

AT&T in particular has had a rough month. In July, the Electronic Frontier Foundation (EFF) filed a class action lawsuit on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities — including bounty hunters, car dealerships, landlords and stalkers — to access wireless customers’ real-time locations without authorization.

And on Monday, the U.S. Justice Department revealed that a Pakistani man was arrested and extradited to the United States to face charges of bribing numerous AT&T call-center employees to install malicious software and unauthorized hardware as part of a scheme to fraudulently unlock cell phones.

Ars Technica reports the scam resulted in millions of phones being removed from AT&T service and/or payment plans, and that the accused allegedly paid insiders hundreds of thousands of dollars to assist in the process.

We should all probably be thankful that the defendant in this case wasn’t using his considerable access to aid criminals who specialize in conducting unauthorized SIM swaps, an extraordinarily invasive form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Late last month, a federal judge in New York rejected a request by AT&T to dismiss a $224 million lawsuit over a SIM-swapping incident that led to $24 million in stolen cryptocurrency.

The defendant in that case, 21-year-old Manhattan resident Nicholas Truglia, is alleged to have stolen more than $80 million from victims of SIM swapping, but he is only one of many individuals involved in this incredibly easy, increasingly common and lucrative scheme. The plaintiff in that case alleges that he was SIM-swapped on two different occasions, both allegedly involving crooked or else clueless employees at AT&T wireless stores.

And let’s not forget about all the times various hackers figured out ways to remotely use a carrier’s own internal systems for looking up personal and account information on wireless subscribers.

So what the fresh hell is going on here? And is there any hope that lawmakers or regulators will do anything about these persistent problems? Gigi Sohn, a distinguished fellow at the Georgetown Institute for Technology Law and Policy, said the answer — at least in this administration — is probably a big “no.”

“The takeaway here is the complete and total abdication of any oversight of the mobile wireless industry,” Sohn told KrebsOnSecurity. “Our enforcement agencies aren’t doing anything on these topics right now, and we have a complete and total breakdown of oversight of these incredibly powerful and important companies.”

Aaron Mackey, a staff attorney at the EFF, said that on the location data-sharing issue, federal law already bars the wireless carriers from sharing this with third parties without the expressed consent of consumers.

“What we’ve seen is the Federal Communications Commission (FCC) is well aware of this ongoing behavior about location data sales,” Mackey said. “The FCC has said it’s under investigation, but there has been no public action taken yet and this has been going on for more than a year. The major wireless carriers are not only violating federal law, but they’re also putting people in harm’s way. There are countless stories of folks being able to pretend to be law enforcement and gaining access to information they can use to assault and harass people based on the carriers making location data available to a host of third parties.”

On the issue of illegal SIM swaps, Wired recently ran a column pointing to a solution that many carriers in Africa have implemented which makes it much more difficult for SIM swap thieves to ply their craft.

“The carrier would set up a system to let the bank query phone records for any recent SIM swaps associated with a bank account before they carried out a money transfer,” wrote Wired’s Andy Greenberg in April. “If a SIM swap had occurred in, say, the last two or three days, the transfer would be blocked. Because SIM swap victims can typically see within minutes that their phone has been disabled, that window of time let them report the crime before fraudsters could take advantage.”
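
As a rough illustration of the check described in that quote, here is a minimal Java sketch; the carrierLookup map and the allowTransfer method are hypothetical stand-ins, not any real carrier or banking API:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Optional;

public class SimSwapCheck {

    // Hypothetical stand-in for the carrier's "when was this number last SIM-swapped?" query.
    static final Map<String, Instant> carrierLookup = Map.of(
            "+15551234567", Instant.now().minus(Duration.ofHours(12)),   // swapped recently
            "+15557654321", Instant.now().minus(Duration.ofDays(400)));  // swapped long ago

    // Allow the transfer only if no SIM swap happened within the given window.
    static boolean allowTransfer(String phoneNumber, Duration window) {
        Optional<Instant> lastSwap = Optional.ofNullable(carrierLookup.get(phoneNumber));
        return lastSwap
                .map(swappedAt -> swappedAt.isBefore(Instant.now().minus(window)))
                .orElse(true); // no swap on record: allow
    }

    public static void main(String[] args) {
        Duration window = Duration.ofDays(3); // "the last two or three days" from the article
        System.out.println(allowTransfer("+15551234567", window)); // false -> block the transfer
        System.out.println(allowTransfer("+15557654321", window)); // true  -> allow it
    }
}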

For its part, AT&T says it is now offering a solution to help diminish the fallout from unauthorized SIM swaps, and that the company is planning on publishing a consumer blog on this soon. Here are some excerpts from what they sent on that front:

“Our AT&T Authentication and Verification Service, or AAVS. AAVS offers a new method to help businesses determine that you are, in fact, you,” AT&T said in a statement. “This is how it works. If a business or company builds the AAVS capability into its website or mobile app, it can automatically connect with us when you attempt to log-in. Through that connection, the number and the phone are matched to confirm the log-in. If it detects something fishy, like the SIM card not in the right device, the transaction won’t go through without further authorization.”

“It’s like an automatic background check on your phone’s history, but with no personal information changing hands, and it all happens in a flash without you knowing. Think about how you do business with companies on your mobile device now. You typically log into an online account or a mobile app using a password or fingerprint. Some tasks might require you to receive a PIN from your institution for additional security, but once you have access, you complete your transactions. With AAVS, the process is more secure, and nothing changes for you. By creating an additional layer of security without adding any steps for the consumer, we can take larger strides in helping businesses and their customers better protect their data and prevent fraud. Even if it is designed to go unnoticed, we want you to know that extra layer of protection exists.   In fact, we’re offering it to dozens of financial institutions.”

“We are working with several leading banks to roll out this service to protect their customers accessing online accounts and mobile apps in the coming months, with more to follow. By directly working with those banks, we can help to better protect your information.”

In terms of combating the deluge of robocalls, Sohn says we already have a workable approach to arresting these nuisance calls: It’s an authentication procedure known as “SHAKEN/STIR,” and it is premised on the idea that every phone has a certificate of authenticity attached to it that can be used to validate if the call is indeed originating from the number it appears to be calling from.

Under a SHAKEN/STIR regime, anyone who is spoofing their number (and most of these robocalls are spoofed to appear as though they come from a number that is in the same prefix as yours) gets automatically blocked.

“The FCC could make the carriers provide robocall apps for free to customers, but they’re not,” Sohn said. “The carriers instead are turning around and charging customers extra for this service. There was a fairly strong anti-robocalls bill that passed the House, but it’s now stuck in the legislative graveyard that is the Senate.”

AT&T said it and the other major carriers in the US are adopting SHAKEN/STIR and do not plan to charge for it. The company said it is working on building this feature into its Call Protect app, which is free and is meant to help customers block unwanted calls.

What about the prospects of any kind of major overhaul to the privacy laws in this country that might give consumers more say over who can access their private data and what recourse they may have when companies entrusted with that information screw up?

Sohn said there are few signs that anyone in Congress is seriously championing consumer privacy as a major legislative issue. Most of the nascent efforts to bring privacy laws in the United States into the 21st century, she said, are interminably bogged down on two sticky issues: federal preemption of stronger state laws, and the ability of consumers to bring a private right of civil action in the courts against companies that violate those provisions.

“It’s way past time we had a federal privacy bill,” Sohn said. “Companies like Facebook and others are practically begging for some type of regulatory framework on consumer privacy, yet this congress can’t manage to put something together. To me it’s incredible we don’t even have a discussion draft yet. There’s not even a bill that’s being discussed and debated. That is really pitiful, and the closer we get to elections, the less likely it becomes because nobody wants to do anything that upsets their corporate contributions. And, frankly, that’s shameful.”

Update, Aug. 8, 2:05 p.m. ET: Added statements and responses from AT&T.

CryptogramBrazilian Cell Phone Hack

I know there's a lot of politics associated with this story, but concentrate on the cybersecurity aspect for a moment. The cell phones of a thousand Brazilians, including senior government officials, were hacked -- seemingly by actors much less sophisticated than rival governments.

Brazil's federal police arrested four people for allegedly hacking 1,000 cellphones belonging to various government officials, including that of President Jair Bolsonaro.

Police detective João Vianey Xavier Filho said the group hacked into the messaging apps of around 1,000 different cellphone numbers, but provided little additional information at a news conference in Brasilia on Wednesday. Cellphones used by Bolsonaro were among those attacked by the group, the justice ministry said in a statement on Thursday, adding that the president was informed of the security breach.

[...]

In the court order determining the arrest of the four suspects, Judge Vallisney de Souza Oliveira wrote that the hackers had accessed Moro's Telegram messaging app, along with those of two judges and two federal police officers.

When I say that smartphone security equals national security, this is the kind of thing I am talking about.

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.10: Pure maintenance

A new version 0.4.10 of RQuantLib just got onto CRAN; a Debian upload will follow in due course.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does two things related to the new upstream QuantLib release 1.16. First, it updates the Windows build script in two ways: it uses binaries for the brand new 1.16 release as prepared by Jeroen, and it sets win-builder up for the current and “prospective next version”, also set up by Jeroen. I also updated the Dockerfile used for CI to pick QuantLib 1.16 from Debian’s unstable repo as it is too new to have moved to testing (which the r-base container we build on defaults to). The complete set of changes is listed below:

Changes in RQuantLib version 0.4.10 (2019-08-07)

  • Changes in RQuantLib build system:

    • The src/Makevars.win and tools/winlibs.R file get QuantLib 1.16 for either toolchain (Jeroen in #136).

    • The custom Docker container now downloads QuantLib from Debian unstable to get release 1.16 (from yesterday, no less)

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureCodeSOD: Seven First Dates

Your programming language is PHP, which represents datetimes as milliseconds since the epoch. Your database, on the other hand, represents datetimes as seconds past the epoch. Now, your database driver certainly has methods to handle this, but can you really trust that?

Nancy found some code which simply needs to check: for the past week, how many customers arrived each day?

$customerCount = array();
$result2 = array();
$result3 = array();
$result4 = array();

$min = 20;
$max = 80;

for ( $i = $date; $i < $date + $days7 + $day; $i += $day ) {

	$first_datetime = date('Y-m-d H:i',substr($i - $day,0,-3));
	$second_datetime = date('Y-m-d H:i',substr($i,0,-3));

	$sql = $mydb ->prepare("SELECT 
								COUNT(DISTINCT Customer.ID) 'Customer'
				            FROM Customer
				                WHERE Timestamp BETWEEN %s AND %s",$first_datetime,$second_datetime);
	$output = $mydb->get_row($sql);
	array_push( $customerCount, $output->Customer == null ? 0 : $output->Customer);
}

array_push( $result4, $customerCount );
array_push( $result4, $result2 );
array_push( $result4, $result3 );

return $result4;

If you have a number of milliseconds and you wish to convert it to seconds, you might do something silly and divide by 1,000, but here we have a more obvious solution: substr the last three digits off to create our $first_datetime and $second_datetime.

Using that, we can prepare a separate query for each day, looping across them to populate $customerCount.

Once we’ve collected all the counts in $customerCount, we then push that into $result4. And then we push the empty $result2 into $result4, followed by the equally empty $result3, at which point we can finally return $result4.

There’s no $result1, but it looks like $customerCount was a renamed version of that, just by the sequence of declarations. And then $min and $max are initialized but never used, and from that, it’s very easy to understand what happened here.

The original developer copied some sample code from a tutorial, but they didn’t understand it. They knew they had a goal, and they knew that their goal was similar to the tutorial, so they just blundered about changing things until they got the results they expected.

Nancy threw all this out and replaced it with a GROUP BY query.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Planet DebianJonas Meurer: debian lts report 2019.07

Debian LTS report for July 2019

This month I was allocated 17 hours. I also had 2 hours left over from June, which makes a total of 19 hours. I spent all of them on the following tasks/issues.

  • DLA-1843-1: Fixed CVE-2019-10162 and CVE-2019-10163 in pdns.
  • DLA-1852-1: Fixed CVE-2019-9948 in python3.4. Also found, debugged and fixed several further regressions in the former CVE-2019-9740 patches.
  • Improved testing of LTS uploads: We had some internal discussion in the Debian LTS team on how to improve the overall quality of LTS security uploads by doing more (semi-)automated testing of the packages before uploading them to jessie-security. I tried to summarize the internal discussion, bringing it to the public debian-lts mailinglist. I also did a lot of testing and worked on Jessie support in Salsa-CI. Now that salsa-ci-team/images MR !74 and ci-team/debci MR !89 got merged, we only have to wait for a new debci release in order to enable autopkgtest Jessie support in Salsa-CI. Afterwards, we can use the Salsa-CI pipeline for (semi-)automatic testing of packages targeted at jessie-security.

Worse Than FailureCodeSOD: Bunny Bunny

When you deploy any sort of complicated architecture, like microservices, you also end up needing to deploy some way to route messages between all the various bits and bobs in your application. You could build this yourself, but you’ll usually use an off-the-shelf product, like Kafka or RabbitMQ.

This is the world Tina lives in. They have a microservice-based architecture, glued together with a RabbitMQ server. The various microservices need to connect to the RabbitMQ, and thus, they need to be able to check if that connection was successful.

Now, as you can imagine, that would be a built-in library method for pretty much any MQ client library, but if people used the built-in methods for common tasks, we’d have far fewer articles to write.

Tina’s co-worker solved the “am I connected?” problem thus:

def are_we_connected_to_rabbitmq():
    our_node_ip = get_server_ip_address()
    consumer_connected = False
    response = requests.get("http://{0}:{1}@{2}:15672/api/queues/{3}/{4}".format(
        self.username,
        self.password,
        self.rabbitmq_host,
        self.vhost,
        self.queue))

    if response and response.status_code == 200:
        js_response = json.loads(response.content)
        consumer_details = js_response.get('consumer_details', [])
        for consumer in consumer_details:
            peer_host = consumer.get('channel_details', {}).get(
                'peer_host')
            if peer_host == our_node_ip:
                consumer_connected = True
                break

    return consumer_connected

To check if our queue consumer has successfully connected to the queue, we send an HTTP request to one of RabbitMQ’s restful endpoints to find a list of all of the connected consumers. Then we check to see if any of those consumers has our IP address. If one does, that must be us, so we must be connected!
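
For contrast, the client library can answer this question directly. Here is a minimal sketch using the pika library (the function names and parameters are illustrative assumptions, not the code from Tina's codebase):

import pika

def connect_to_rabbitmq(host, username, password):
    # A failed connection raises AMQPConnectionError, so "did we connect?"
    # falls out of ordinary error handling rather than an HTTP round trip.
    params = pika.ConnectionParameters(
        host=host,
        credentials=pika.PlainCredentials(username, password))
    return pika.BlockingConnection(params)

def are_we_connected_to_rabbitmq(connection):
    # pika tracks connection state itself; no management-API lookup of our
    # own IP address is needed.
    return connection is not None and connection.is_open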

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianLouis-Philippe Véronneau: Paying for The Internet

For a while now, I've been paying for The Internet. Not the internet connection provided by my ISP, mind you, but for the stuff I enjoy online and the services I find useful.

Most of the Internet as we currently know it is funded by ads. I hate ads and I take a vicious pride in blocking them with the help of great projects like uBlock Origin and NoScript. More fundamentally, I believe the web shouldn't be funded via ads:

  • they control your brain (that alone should be enough to ban ads)
  • they create morally wrong economic incentives towards consumerism
  • they create important security risks and make websites gather data on you

A sticker with a feminist anti-ads message

I could go on like this, but I feel those are pretty strong arguments. Feel free to disagree.

So I've started paying. Paying for my emails. Paying for the comics I enjoy online 1. Paying for the few YouTube channels I like. Paying for the newspapers I read.

At the moment, The Internet costs me around 260 USD per year. Luckily for me, I'm privileged enough that it doesn't have a significant impact on my finances. I also pay for a lot of the software I use and enjoy by making patches and spending time working on them. I feel that's a valid way to make The Internet a more sustainable place.

I don't think individual actions like this one have a very profound impact on how things work, but like riding your bike to work or eating locally produced organic food, it opens a window into a possible future. A better future.

Planet DebianDirk Eddelbuettel: #23: Debugging with Docker and Rocker – A Concrete Example helping on macOS

Welcome to the 23rd post in the rationally reasonable R rants series, or R4 for short. Today’s post was motivated by an exchange on the r-devel list earlier in the day, and a few subsequent off-list emails.

Roger Koenker posted a question: how to best debug an issue arising only with gfortran-9, which is difficult to get hold of on his macOS development platform. Some people followed up, and I mentioned that I had good success using Docker, and particularly our Rocker containers—and outlined a quick mini-tutorial (which had one mini-typo lacking the important slash in -w /work). Roger and I followed up over a few more off-list emails, and by and large this worked for him.

So what follows below is a jointly written / edited ‘mini HOWTO’ of how to deploy Docker on macOS for debugging under particular toolchains more easily available on Linux. Windows and Linux use should be very similar, albeit differing in the initial install. In fact, I frequently debug or test in Docker sessions when I do not want to install on my Linux host system. Roger sent one version (which I had also edited) back to the list. What follows is my final version.

Debugging with Docker: Getting Hold of Particular Compilers

Context: The quantreg package was seen exhibiting errors when compiled with gfortran-9. The following shows how to use gfortran-9 on macOS by virtue of Docker. It is written in Roger Koenker’s voice, but authored by Roger and myself.

With extensive help from Dirk Eddelbuettel I have installed docker on my mac mini from

https://hub.docker.com/editions/community/docker-ce-desktop-mac

which installs from a dmg in quite standard fashion. This has allowed me to simulate running R in a Debian environment with gfortran-9 and begin the process of debugging my ancient rqbr.f code.

Some further details:

Step 0: Install Docker and Test

Install Docker for macOS following this Docker guide. Do some initial testing, e.g.
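
For instance, one can check the client version and run the stock hello-world image (one plausible smoke test; any simple container will do):

docker --version
docker run --rm hello-world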

Step 1: Download r-base and test OS

We use the plainest Rocker container rocker/r-base, in the aliased form of the official Docker container for R, i.e. r-base. We first ‘pull’, then test the version and drop into bash as a second test.
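
In rough form (assuming the default image name r-base; --rm and -ti are the usual flags for a throwaway interactive session):

docker pull r-base
docker run --rm -ti r-base R --version
docker run --rm -ti r-base bash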

Step 2: Setup the working directory

We tell Docker to run from the current directory and access the files therein. For the work on the quantreg package this is projects/rq for Roger.
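
One plausible invocation, mounting the current directory as /work and starting there (the exact flags and path used may have differed slightly):

cd ~/projects/rq
docker run --rm -ti -v ${PWD}:/work -w /work r-base bash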

This puts the contents of projects/rq into the /work directory and starts the session in /work (as can be seen from the prompt).

Next, we update the package information inside the container:
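
For example, at the container prompt:

root@90521904fa86:/work# apt-get update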

Step 3: Install gcc-9 and gfortran-9

root@90521904fa86:/work# apt-get install gcc-9 gfortran-9
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
cpp-9 gcc-9-base libasan5 libatomic1 libcc1-0 libgcc-9-dev libgcc1 libgfortran-9-dev
libgfortran5 libgomp1 libitm1 liblsan0 libquadmath0 libstdc++6 libtsan0 libubsan1
Suggested packages:
gcc-9-locales gcc-9-multilib gcc-9-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg
libasan5-dbg liblsan0-dbg libtsan0-dbg libubsan1-dbg libquadmath0-dbg gfortran-9-multilib
gfortran-9-doc libgfortran5-dbg libcoarrays-dev
The following NEW packages will be installed:
cpp-9 gcc-9 gfortran-9 libgcc-9-dev libgfortran-9-dev
The following packages will be upgraded:
gcc-9-base libasan5 libatomic1 libcc1-0 libgcc1 libgfortran5 libgomp1 libitm1 liblsan0
libquadmath0 libstdc++6 libtsan0 libubsan1
13 upgraded, 5 newly installed, 0 to remove and 71 not upgraded.
Need to get 35.6 MB of archives.
After this operation, 107 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libasan5 amd64 9.1.0-10 [390 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libubsan1 amd64 9.1.0-10 [128 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libtsan0 amd64 9.1.0-10 [295 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9-base amd64 9.1.0-10 [190 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libstdc++6 amd64 9.1.0-10 [500 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libquadmath0 amd64 9.1.0-10 [145 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian testing/main amd64 liblsan0 amd64 9.1.0-10 [137 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libitm1 amd64 9.1.0-10 [27.6 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgomp1 amd64 9.1.0-10 [88.1 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran5 amd64 9.1.0-10 [633 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libcc1-0 amd64 9.1.0-10 [47.7 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libatomic1 amd64 9.1.0-10 [9,012 B]
Get:13 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc1 amd64 1:9.1.0-10 [40.5 kB]
Get:14 http://cdn-fastly.deb.debian.org/debian testing/main amd64 cpp-9 amd64 9.1.0-10 [9,667 kB]
Get:15 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc-9-dev amd64 9.1.0-10 [2,346 kB]
Get:16 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9 amd64 9.1.0-10 [9,945 kB]
Get:17 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran-9-dev amd64 9.1.0-10 [676 kB]
Get:18 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gfortran-9 amd64 9.1.0-10 [10.4 MB]
Fetched 35.6 MB in 6s (6,216 kB/s)      
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libasan5_9.1.0-10_amd64.deb ...
Unpacking libasan5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libubsan1_9.1.0-10_amd64.deb ...
Unpacking libubsan1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libtsan0_9.1.0-10_amd64.deb ...
Unpacking libtsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../gcc-9-base_9.1.0-10_amd64.deb ...
Unpacking gcc-9-base:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up gcc-9-base:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libstdc++6_9.1.0-10_amd64.deb ...
Unpacking libstdc++6:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up libstdc++6:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../0-libquadmath0_9.1.0-10_amd64.deb ...
Unpacking libquadmath0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../1-liblsan0_9.1.0-10_amd64.deb ...
Unpacking liblsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../2-libitm1_9.1.0-10_amd64.deb ...
Unpacking libitm1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../3-libgomp1_9.1.0-10_amd64.deb ...
Unpacking libgomp1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../4-libgfortran5_9.1.0-10_amd64.deb ...
Unpacking libgfortran5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../5-libcc1-0_9.1.0-10_amd64.deb ...
Unpacking libcc1-0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../6-libatomic1_9.1.0-10_amd64.deb ...
Unpacking libatomic1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../7-libgcc1_1%3a9.1.0-10_amd64.deb ...
Unpacking libgcc1:amd64 (1:9.1.0-10) over (1:9.1.0-8) ...
Setting up libgcc1:amd64 (1:9.1.0-10) ...
Selecting previously unselected package cpp-9.
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../cpp-9_9.1.0-10_amd64.deb ...
Unpacking cpp-9 (9.1.0-10) ...
Selecting previously unselected package libgcc-9-dev:amd64.
Preparing to unpack .../libgcc-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgcc-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gcc-9.
Preparing to unpack .../gcc-9_9.1.0-10_amd64.deb ...
Unpacking gcc-9 (9.1.0-10) ...
Selecting previously unselected package libgfortran-9-dev:amd64.
Preparing to unpack .../libgfortran-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgfortran-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gfortran-9.
Preparing to unpack .../gfortran-9_9.1.0-10_amd64.deb ...
Unpacking gfortran-9 (9.1.0-10) ...
Setting up libgomp1:amd64 (9.1.0-10) ...
Setting up libasan5:amd64 (9.1.0-10) ...
Setting up libquadmath0:amd64 (9.1.0-10) ...
Setting up libatomic1:amd64 (9.1.0-10) ...
Setting up libgfortran5:amd64 (9.1.0-10) ...
Setting up libubsan1:amd64 (9.1.0-10) ...
Setting up cpp-9 (9.1.0-10) ...
Setting up libcc1-0:amd64 (9.1.0-10) ...
Setting up liblsan0:amd64 (9.1.0-10) ...
Setting up libitm1:amd64 (9.1.0-10) ...
Setting up libtsan0:amd64 (9.1.0-10) ...
Setting up libgcc-9-dev:amd64 (9.1.0-10) ...
Setting up gcc-9 (9.1.0-10) ...
Setting up libgfortran-9-dev:amd64 (9.1.0-10) ...
Setting up gfortran-9 (9.1.0-10) ...
Processing triggers for libc-bin (2.28-10) ...
root@90521904fa86:/work# pwd

Here filenames and versions reflect the Debian repositories as of today, August 5, 2019. While minor details may change at a future point in time, the key fact is that we get the components we desire via a single call, as the Debian system has a well-honed package system.

Step 4: Prepare Package

At this point Roger removed some dependencies from the package quantreg that he knew were not relevant to the debugging problem at hand.

Step 5: Set Compiler Flags

Next, set compiler flags as follows:
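
One common way is a user-level Makevars file inside the container (the exact path, ~/.R/Makevars, is an assumption here; cat with a redirect avoids needing an editor in the slim container):

root@90521904fa86:/work# mkdir -p ~/.R
root@90521904fa86:/work# cat >> ~/.R/Makevars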

adding the values

CC=gcc-9
FC=gfortran-9
F77=gfortran-9

to the file. Alternatively, one can find the settings of CC, FC, CXX, … in /etc/R/Makeconf (which for the Debian package is a softlink to R’s actual Makeconf) and alter them there.

Step 6: Install the Source Package

Now run
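
(for example; the tarball name below is a placeholder for whatever source archive sits in /work)

root@90521904fa86:/work# R CMD INSTALL quantreg_*.tar.gz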

which uses the gfortran-9 compiler, and this version did reproduce the error initially reported by the CRAN maintainers.

Step 7: Debug!

With the tools in place and the bug reproduced, it is (just!) a matter of finding the bug and fixing it.

And that concludes the tutorial.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Cory DoctorowPodcast: “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization

In my latest podcast (MP3), I read my essay “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization, published today on EFF’s Deeplinks; it’s another installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive. This time, I relate the origin story of the “PC compatible” computer, with help from Tom Jennings (inventor of FidoNet!) who played a key role in the story.

All that changed in 1981, when IBM entered the PC market with its first personal computer, which quickly became the de facto standard for PC hardware. There are many reasons that IBM came to dominate the fragmented PC market: they had the name recognition (“No one ever got fired for buying IBM,” as the saying went) and the manufacturing experience to produce reliable products.

Equally important was IBM’s departure from its usual business practice of pursuing advantage by manufacturing entire systems, down to the subcomponents. Instead, IBM decided to go with an “open” design that incorporated the same commodity parts that the existing PC vendors were using, including MS-DOS and Intel’s 8086 chip. To accompany this open hardware, IBM published exhaustive technical documentation that covered every pin on every chip, every way that programmers could interact with IBM’s firmware (analogous to today’s “APIs”), as well as all the non-standard specifications for its proprietary ROM chip, which included things like the addresses where IBM had stored the fonts it bundled with the system.

Once IBM’s PC became the standard, rival hardware manufacturers realized that they had to create systems that were compatible with IBM’s systems. The software vendors were tired of supporting a lot of idiosyncratic hardware configurations, and IT managers didn’t want to have to juggle multiple versions of the software they relied on. Unless non-IBM PCs could run software optimized for IBM’s systems, the market for those systems would dwindle and wither.

MP3

Planet DebianRomain Perier: Free software activities in May, June and July 2019

Hi Planet, it has been a long time since my last post.
Here is an update covering what I have been doing in my free software activities during May, June and July 2019.

May

Only contributions related to Debian were made in May:
  •  linux: Update to 5.1 (including porting of all debian patches to the new release)
  • linux: Update to 5.1.2
  • linux: Update to 5.1.3
  • linux: Update to 5.1.5
  • firmware-nonfree: misc-nonfree: Add GV100 signed firmwares (Closes: #928672)

June

Debian

  • linux: Update to 5.1.7
  • linux: Update to 5.1.8
  • linux: Update to 5.1.10
  • linux: Update to 5.1.11
  • linux: Update to 5.1.15
  • linux: [sparc64] Fix device naming inconsistency between sunhv_console and sunhv_reg (Closes: #926539)
  • raspi3-firmware:  New upstream version 1.20190517
  • raspi3-firmware: New upstream version 1.20190620+1

Kernel Self Protection Project

I have recently joined the Kernel Self Protection Project, which basically intends to harden the mainline Linux kernel as much as possible by adding subsystems that improve security or make internal subsystems more robust against common errors that might lead to security issues.

As a first contribution, Kees Cook asked me to check all NLA_STRING attributes for non-nul-terminated strings. Internal functions for NLA attributes expect standard nul-terminated strings and use standard string functions like strcmp() or equivalent. A few drivers were using non-nul-terminated strings in some cases, which might lead to buffer overflows. I have checked all the NLA_STRING uses in all drivers and forwarded a status report for each of them. Everything was already fixed in linux-next (hopefully).

July

Debian

  • linux: Update to 5.1.16
  • linux: Update to 5.2-rc7 (including porting of all debian patches to the new release)
  • linux: Update to 5.2
  • linux: Update to 5.2.1
  • linux: [rt] Update to 5.2-rt1
  • linux: Update to 5.2.4
  • ethtool: New upstream version 5.2
  • raspi3-firmware: Fixed lintian warnings about the binary blobs for the Raspberry Pi 4
  • raspi3-firmware: New upstream version 1.20190709
  • raspi3-firmware: New upstream version 1.20190718
The following CVEs are for buster-security:
  • linux: [x86] x86/insn-eval: Fix use-after-free access to LDT entry (CVE-2019-13233)
  • linux: [powerpc*] mm/64s/hash: Reallocate context ids on fork (CVE-2019-12817)
  • linux: nfc: Ensure presence of required attributes in the deactivate_target handler (CVE-2019-12984)
  • linux: binder: fix race between munmap() and direct reclaim (CVE-2019-1999)
  • linux: scsi: libsas: fix a race condition when smp task timeout (CVE-2018-20836)
  • linux: Input: gtco - bounds check collection indent level (CVE-2019-13631)

Kernel Self Protection Project

I am currently improving the API of the internal kernel subsystem "tasklet". This is an old API and, like "timer", it has several limitations regarding the way information is passed to the callback handler. A future patch set will be sent upstream; I will probably write a blog post about it.

Planet DebianReproducible Builds: Reproducible Builds in July 2019

Welcome to the July 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

In July’s report, we cover:

  • Front page: Media coverage, upstream news, etc.
  • Distribution work: Shenanigans at DebConf19
  • Software development: Software transparency, yet more diffoscope work, etc.
  • On our mailing list: GNU tools, education and buildinfo files
  • Getting in touch… and how to contribute

If you are interested in contributing to our project, we enthusiastically invite you to visit our Contribute page on our website.


Front page

Nico Alt wrote a detailed and well-researched article titled “Trust is good, control is better” which discusses reproducible builds in F-Droid, the alternative application repository for Android mobile phones. In contrast to the bigger commercial app stores, F-Droid only offers apps that are free and open source software. The post not only demonstrates using diffoscope but talks more generally about how reproducible builds can prevent single developers or other important centralised infrastructure from becoming targets for toolchain-based attacks.

Later in the month, F-Droid’s aforementioned reproducibility status was mentioned on episode 68 of the Late Night Linux podcast. (direct link to 14:12)

Morten (“Foxboron”) Linderud published his academic thesis “Reproducible Builds: break a log, good things come in trees” which investigates and describes how transparency log overlays can provide additional security guarantees for computers automatically producing software packages. The thesis was part of Morten’s studies at the University of Bergen, Norway and is an extension of the work New York University Tandon School of Engineering has been doing with package rebuilder integration in APT.

Mike Hommey posted to his blog about Reproducing the Linux builds of Firefox 68, which leverages the fact that builds shipped by Mozilla should be reproducible from this version onwards. He discusses the problems caused by the builds being optimised with Profile-Guided Optimisation (PGO) but, armed with the now-published profiling data, provides Docker-based instructions on how to reproduce the published builds yourself.

Joel Galenson has been making progress on a reproducible Rust compiler which includes support for a --remap-path-prefix argument, related to the concepts and problems involved in the BUILD_PATH_PREFIX_MAP proposal to fix issues with build paths being embedded in binaries.

Lastly, Alessio Treglia posted to their blog about Cosmos Hub and Reproducible Builds which describes the reproducibility work happening in the Cosmos Hub, a network of interconnected blockchains. Specifically, Alessio talks about work being done on the Gaia development kit for the Hub.



Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Enabling Link Time Optimization (LTO) in this distribution’s “Tumbleweed” branch caused multiple issues due to the number of cores on the build host being added to the CFLAGS variable. This affected, for example, a debuginfo/rpm header, and resulted in CFLAGS appearing in built binaries such as fldigi, gmp, haproxy, etc.

As highlighted in last month’s report, the OpenWrt project (a Linux operating system targeting embedded devices such as wireless network routers) hosted a summit in Hamburg, Germany. Their full summit report and roundup is now available that covers many general aspects within that distribution, including the work on reproducible builds that was done during the event.

Debian

It was an extremely productive time in Debian this month in and around DebConf19, the 20th annual conference for both contributors and users, which was held at the Federal University of Technology in Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28. The conference was preceded by “DebCamp” from the 14th until the 19th, with an additional “Open Day” targeted at the more-general public on the 20th.

There were a number of talks touching on the topic of reproducible builds and secure toolchains throughout the conference, including:

There were naturally countless discussions regarding Reproducible Builds in and around the conference on the questions of tooling, infrastructure and our next steps as a project.

The release of Debian 10 buster has also meant the release cycle for the next release (codenamed “bullseye”) has just begun. As part of this, the Release Team recently announced that Debian will no longer allow binaries built and uploaded by maintainers on their own machines to be part of the upcoming release. This is great news not only for toolchain security in general but also in that it will ensure that all binaries that will form part of this release will likely have a .buildinfo file and thus metadata that could be used by others to reproduce and verify the builds.

Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated if packages had to be manually approved or processed in the so-called “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a potential patch for the issue which is pending review.

David Bremner posted to his blog about “Yet another buildinfo database” that provides a SQL interface for querying .buildinfo attestation documents, particularly focusing on identifying packages that were built with a specific — and possibly buggy — build-dependency. Later at DebConf, David demonstrated his tool live (starting at 36:30).

Ivo de Decker (“ivodd”) scheduled rebuilds of over 600 packages that last experienced an upload to the archive in December 2016 or earlier. This was so that they would be built using a version of the low-level dpkg package build tool that supports the generation of reproducible binary packages. The effect of this on the main archive will be deliberately staggered and thus visible throughout the upcoming weeks, potentially resulting in some of these packages now failing to build.

Joaquin de Andres posted an update regarding the work being done on continuous integration on Debian’s GitLab instance at DebConf19 in which he mentions, inter alia, a tool called atomic-reprotest. This is a relatively new utility to help debug failures logged by our reprotest tool which attempts to test whether a build is reproducible or not. This tool was also mentioned in a subsequent lightning talk.

Chris Lamb filed two bugs to drop the test jobs for both strip-nondeterminism (#932366) and reprotest (#932374) after modifying them to build on the Salsa server’s own continuous integration platform and Holger Levsen shortly resolved them.

Lastly, 63 reviews of Debian packages were added, 72 were updated and 22 were removed this month, adding to our knowledge about identified issues. Chris Lamb added and categorised four new issue types: umask_in_java_jar_file, built_by-in_java_manifest_mf, timestamps_in_manpages_generated_by_lopsubgen and codadef_coda_data_files.


Software development

The goal of Benjamin Hof’s Software Transparency effort is to improve on the cryptographic signatures of the APT package manager by introducing a Merkle tree-based transparency log for package metadata and source code, in a similar vein to certificate transparency. This month, he pushed a number of repositories to our revision control system for further future development and review.

In addition, Bernhard M. Wiedemann updated his (deliberately) unreproducible demonstration project to add support for floating point variations as well as changes in the project’s copyright year.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Neal Gompa, Michael Schröder & Miro Hrončok responded to Fedora’s recent change to rpm-config with some new developments within rpm to fix an unreproducible “Build Date” and reverted a change to the Python interpreter to switch back to unreproducible/time-based compile caches.

Lastly, kpcyrd submitted a pull request for Alpine Linux to add SOURCE_DATE_EPOCH support to the abuild build tool in this operating system.
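
For background, SOURCE_DATE_EPOCH is the environment variable specified by the Reproducible Builds project for pinning embedded timestamps; the core of such support usually amounts to something like the following (a generic Python sketch, not abuild's actual implementation):

import os
import time

def embedded_timestamp():
    # If SOURCE_DATE_EPOCH is set, use it in place of the current time so
    # that rebuilding the same source embeds the same timestamp.
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    return int(sde) if sde is not None else int(time.time())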


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb made the following changes:

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) []
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers. [] but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [].
  • Cease ignoring test failures in stable-backports. []
  • Add missing textual DESCRIPTION headers for .zip and “Mozilla”-optimised .zip files. []
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. []
  • Update some reporting:
    • Re-add “return code” noun to “Command foo exited with X” error messages. []
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. []
    • Skip the extra newline in Output:\nfoo. []
  • Add some explicit return values to appease Pylint, etc. []
  • Also include the python3-tlsh in the Debian test dependencies. []
  • Released and uploaded versions 116, 117, 118, 119 & 120. [][][][][]

In addition, Marc Herbert provided a patch to catch failures to disassemble ELF binaries. []


Project website

Yet more effort was put into our website this month, including:

  • Bernhard M. Wiedemann:
    • Update multiple works to use standard (or at least consistent) terminology. []
    • Document an alternative Python snippet in the SOURCE_DATE_EPOCH examples. []
  • Chris Lamb:
    • Split out our non-fiscal sponsors with a description [] and make them non-display three-in-a-row [].
    • Correct references to 1&1 IONOS (née Profitbricks). []
    • Reduce ambiguity in our environment names. []
    • Recreate the badge image, saving the .svg alongside it. []
    • Update our fiscal sponsors. [][][]
    • Tidy the weekly reports section on the news page [], fixup the typography on the documentation page [] and make all headlines stand out a bit more [].
    • Drop some old CSS files and fonts. []
    • Tidy news page a bit. []
    • Fixup a number of issues in the report template and previous reports. [][][][][][]

Holger Levsen also added explanations on how to install diffoscope on OpenBSD [] and FreeBSD [] to its homepage and Arnout Engelen added a preliminary and work-in-progress idea for a badge or “shield” program for upstream projects. [][][].

A special thank you to Alexander Borkowski [], Georg Faerber [], and John Scott [] for their individual fixes. To err is human; to reproduce, divine.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Niko Tyni provided a patch to use the Perl Sub::Override library for some temporary workarounds for issues in Archive::Zip instead of Monkey::Patch which was due for deprecation. [].

In addition, Chris Lamb made the following changes:

  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. []
  • Support OpenJDK “.jmod” files. []
  • Pass --no-sandbox if necessary to bypass the seccomp-enabled version of file(1), which was causing a huge number of regressions in our testing framework.
  • Don’t just run the tests but build the Debian package instead using Salsa’s centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [][]
  • Update tests:
    • Don’t build release Git tags on salsa.debian.org. []
    • Merge the debian branch into the master branch to simplify testing and deployment [] and update debian/gbp.conf to match [].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. []


Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were performed in the last month:

  • Holger Levsen:
    • Debian-specific changes:
      • Make a large number of adjustments to support the new Debian bullseye distribution and the release of buster. [][][][][][][] [][][][]
      • Fix the colours for the five suites now being built. []
      • Make a number of code improvements to the calculation of our “metapackage” sets including refactoring and changes of email address, etc. [][][][][]
      • Add the “http-proxy” variable to the displayed node info. []
    • Alpine changes:
      • Rebuild the webpages every two hours (instead of twice per hour). []
    • Reproducible tooling:
      • Fix the detection of version number in Arch Linux. []
      • Drop reprotest and strip-nondeterminism jobs as we run that via Salsa CI now. [][]
      • Add a link to current SQL database schema. []
  • Mattia Rizzolo:
    • Make a number of adjustments to support the new Debian bullseye distribution. [][][][]
    • Ensure that our arm64 hosts always trust the Debian archive keyring. []
    • Enable the backports repositories on the arm64 build hosts. []

Holger Levsen [][][] and Mattia Rizzolo [][][] performed the usual node maintenance and lastly, Vagrant Cascadian added support to generate a reproducible-tracker.json metadata file for the next release of Debian (bullseye). []


On the mailing list

Chris Lamb cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread that was originally started on the debian-devel Debian list. In his post, he refers to strip-nondeterminism not being able to accommodate the additional security hardening in file(1) and the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

Matt Bearup wrote about his experience when he generated different checksums for the libgcrypt20 package, which resulted in some pointers, notably that one should be using the equivalent .buildinfo post-build certificate when attempting to reproduce any particular build.

Vagrant Cascadian posted a request for comments regarding a potential proposal to the GNU Tools “Cauldron” gathering to be held in Montréal, Canada during September 2019 and Bernhard M. Wiedemann posed a query about using consistent terms on our webpages and elsewhere.

Lastly, in a thread titled “Reproducible Builds - aiming for bullseye: comments and a purpose” Jathan asked about whether we had considered offering “101”-like beginner sessions to fix packages that are not currently reproducible.



Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

This month’s report was written by Benjamin Hof, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

CryptogramRegulating International Trade in Commercial Spyware

Siena Anstis, Ronald J. Deibert, and John Scott-Railton of Citizen Lab published an editorial calling for regulating the international trade in commercial surveillance systems until we can figure out how to curb human rights abuses.

Any regime of rigorous human rights safeguards that would make a meaningful change to this marketplace would require many elements, for instance, compliance with the U.N. Guiding Principles on Business and Human Rights. Corporate tokenism in this space is unacceptable; companies will have to affirmatively choose human rights concerns over growing profits and hiding behind the veneer of national security. Considering the lies that have emerged from within the surveillance industry, self-reported compliance is insufficient; compliance will have to be independently audited and verified and accept robust measures of outside scrutiny.

The purchase of surveillance technology by law enforcement in any state must be transparent and subject to public debate. Further, its use must comply with frameworks setting out the lawful scope of interference with fundamental rights under international human rights law and applicable national laws, such as the "Necessary and Proportionate" principles on the application of human rights to surveillance. Spyware companies like NSO Group have relied on rubber stamp approvals by government agencies whose permission is required to export their technologies abroad. To prevent abuse, export control systems must instead prioritize a reform agenda that focuses on minimizing the negative human rights impacts of surveillance technology and that ensures -- with clear and immediate consequences for those who fail -- that companies operate in an accountable and transparent environment.

Finally, and critically, states must fulfill their duty to protect individuals against third-party interference with their fundamental rights. With the growth of digital authoritarianism and the alarming consequences that it may hold for the protection of civil liberties around the world, rights-respecting countries need to establish legal regimes that hold companies and states accountable for the deployment of surveillance technology within their borders. Law enforcement and other organizations that seek to protect refugees or other vulnerable persons coming from abroad will also need to take digital threats seriously.

Krebs on SecurityThe Risk of Weak Online Banking Passwords

If you bank online and choose weak or re-used passwords, there’s a decent chance your account could be pilfered by cyberthieves — even if your bank offers multi-factor authentication as part of its login process. This story is about how crooks increasingly are abusing third-party financial aggregation services like Mint, Plaid, Yodlee, YNAB and others to surveil and drain consumer accounts online.

Crooks are constantly probing bank Web sites for customer accounts protected by weak or recycled passwords. Most often, the attacker will use lists of email addresses and passwords stolen en masse from hacked sites and then try those same credentials to see if they permit online access to accounts at a range of banks.

A screenshot of a password-checking tool being used to target Chase Bank customers who re-use passwords from other sites. Image: Hold Security.

From there, thieves can take the list of successful logins and feed them into apps that rely on application programming interfaces (APIs) from one of several personal financial data aggregators, which help users track their balances, budgets and spending across multiple banks.

A number of banks that do offer customers multi-factor authentication — such as a one-time code sent via text message or an app — have chosen to allow these aggregators the ability to view balances and recent transactions without requiring that the aggregator service supply that second factor. That’s according to Brian Costello, vice president of data strategy at Yodlee, one of the largest financial aggregator platforms.

Costello said while some banks have implemented processes which pass through multi-factor authentication (MFA) prompts when consumers wish to link aggregation services, many have not.

“Because we have become something of a known quantity with the banks, we’ve set up turning off MFA with many of them,” Costello said.  “Many of them are substituting coming from a Yodlee IP or agent as a factor because banks have historically been relying on our security posture to help them out.”

Such reconnaissance helps lay the groundwork for further attacks: If the thieves are able to access a bank account via an aggregator service or API, they can view the customer’s balance(s) and decide which customers are worthy of further targeting.

This targeting can occur in at least one of two ways. The first involves spear phishing attacks to gain access to that second authentication factor, which can be made much more convincing once the attackers have access to specific details about the customer’s account — such as recent transactions or account numbers (even partial account numbers).

The second is through an unauthorized SIM swap, a form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

But beyond targeting customers for outright account takeovers, the data available via financial aggregators enables a far more insidious type of fraud: The ability to link the target’s bank account(s) to other accounts that the attackers control.

That’s because PayPal, Zelle, and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits. For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits  — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.

Alex Holden is founder and chief technology officer of Hold Security, a Milwaukee-based security consultancy. Holden and his team closely monitor the cybercrime forums, and he said the company has seen a number of cybercriminals discussing how the financial aggregators are useful for targeting potential victims.

Holden said it’s not uncommon for thieves in these communities to resell access to bank account balance and transaction information to other crooks who specialize in cashing out such information.

“The price for these details is often very cheap, just a fraction of the monetary value in the account, because they’re not selling ‘final’ access to the account,” Holden said. “If the account is active, hackers then can go to the next stage for 2FA phishing or social engineering, or linking the accounts with another.”

Currently, the major aggregators and/or applications that use those platforms store bank logins and interactively log in to consumer accounts to periodically sync transaction data. But most of the financial aggregator platforms are slowly shifting toward using the OAuth standard for logins, which can give banks a greater ability to enforce their own fraud detection and transaction scoring systems when aggregator systems and apps are initially linked to a bank account.

That’s according to Don Cardinal, managing director of the Financial Data Exchange (FDX), which is seeking to unite the financial industry around a common, interoperable, and royalty-free standard for secure consumer and business access to their financial data.

“This is where we’re going,” Cardinal said. “The way it works today, you the aggregator or app stores the credentials encrypted and presents them to the bank. What we’re moving to is [an account linking process] that interactively loads the bank’s Web site, you login there, and the site gives the aggregator an OAuth token. In that token granting process, all the bank’s fraud controls are then direct to the consumer.”

Alissa Knight, a senior analyst with the Aite Group, a financial and technology analyst firm, said such attacks highlight the need to get rid of passwords altogether. But until such time, she said, more consumers should take full advantage of the strongest multi-factor authentication option offered by their bank(s), and consider using a password manager, which helps users pick and remember strong and unique passwords for each Web site.

“This is just more empirical data around the fact that passwords just need to go away,” Knight said. “For now, all the standard precautions we’ve been giving consumers for years still stand: Pick strong passwords, avoid re-using passwords, and get a password manager.”

Some of the most popular password managers include 1Password, Dashlane, LastPass and Keepass. Wired.com recently published a worthwhile writeup which breaks down each of these based on price, features and usability.

CryptogramWanted: Cybersecurity Imagery

Eli Sugarman of the Hewlett Foundation laments the sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google Image search for the term proves the point: It's all white men in hoodies hovering menacingly over keyboards, green "Matrix"-style 1s and 0s, glowing locks and server racks, or some random combination of those elements -- sometimes the hoodie-clad men even wear burglar masks. Each of these images fails to convey anything about either the importance or the complexity of the topic­ -- or the huge stakes for governments, industry and ordinary people alike inherent in topics like encryption, surveillance and cyber conflict.

I agree that this is a problem. It's not something I noticed until recently. I work in words. I think in words. I don't use PowerPoint (or anything similar) when I give presentations. I don't need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I constantly use visuals in my class. I made those same image searches, and I came up with similarly unacceptable results.

But unlike me, Hewlett is doing something about it. You can help: participate in the Cybersecurity Visuals Challenge.

EDITED TO ADD (8/5): News article. Slashdot thread.

Worse Than FailureCodeSOD: A Truly Painful Exchange

Java has a boolean type, and of course it also has a parseBoolean method, which works about how you'd expect. It's worth noting that a string "true" (ignoring capitalization) is the only thing which is considered true, and all other inputs are false. This does mean that you might not always get the results you want, depending on your inputs, so you might need to make your own boolean parser.

Adam H has received the gift of legacy code. In this case, the code was written circa 2002, and the process has been largely untouched since. An outside vendor uploads an Excel spreadsheet to an FTP site. And yes, it must be FTP, as the vendor's tool won't do anything more modern, and it must be Excel because how else do you ship tables of data between two businesses?

The Excel sheet has some columns which are filled with "TRUE" and "FALSE". This means their process needs to parse those values in as booleans. Or does it…

public class BooleanParseUtil {
    private static final String TRUE = "TRUE";
    private static final String FALSE = "FALSE";

    private BooleanParseUtil() {
        //private because class should not be instantiated
    }

    public static String parseBoolean(String paramString) {
        String result = null;
        if (paramString != null) {
            String s = paramString.toUpperCase().trim();
            if (ParseUtilHelper.isPositive(s)) {
                result = TRUE;
            } else if (ParseUtilHelper.isNegative(s)) {
                result = FALSE;
            }
        } else {
            result = FALSE;
        }
        return result;
    }
    //snip
}

Note the signature of parseBoolean: it takes a string and it returns a string. If we trace through the logic: a null input is false, a not-null input that isPositive is "TRUE", one that isNegative is "FALSE", and anything else returns null. I'm actually pretty sure that's a mistake, and is exactly the kind of thing that happens when you follow the "single return rule"- where each method has only one return statement. This likely is a source of heisenbugs and failed processing runs.

But wait a second, isPositive sure sounds like it means "greater than or equal to zero". But that can't be what it means, right? What are isPositive and isNegative actually doing?

public class ParseUtilHelper {
    private static final String TRUE = "TRUE";
    private static final String FALSE = "FALSE";

    private static final Set<String> positiveValues = new HashSet<>(
        Arrays.asList(TRUE, "YES", "ON", "OK", "ENABLED", "ACTIVE",
            "CHECKED", "REPORTING", "ON ALL", "ALLOW")
    );

    private static final Set<String> negativeValues = new HashSet<>(
        Arrays.asList(FALSE, "NO", "OFF", "DISABLED", "INACTIVE", "UNCHECKED",
            "DO NOT DISPLAY", "NOT REPORTING", "N/A", "NONE", "SCHIMELPFENIG")
    );

    private ParseUtilHelper() {
        //private constructor because class should not be instantiated
    }

    public static boolean isPositive(String v) {
        return positiveValues.contains(v);
    }

    public static boolean isNegative(String v) {
        return negativeValues.contains(v) || v.contains("DEFERRED");
    }
    //snip
}

For starters, we redefine constants that exist over in our BooleanParseUtil, which, I mean, maybe we could use different strings for TRUE and FALSE in this object, because that wouldn't be confusing at all.

But the real takeaway is that we have absolutely ALL of the boolean values. TRUE, YES, OK, DO NOT DISPLAY, and even SCHIMELPFENIG, the falsest of false values. That said, I can't help but think there's one missing.

In truth, this is exactly the sort of code that happens when you have a cross-organization data integration task with no schema. And while I'm sure the end users are quite happy to continue doing this in Excel, the only tool they care about using, there are many, many other possible ways to send that data around. I suppose we should just be happy that the process wasn't built using XML? I'm kidding, of course, even XML would be an improvement.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Planet DebianThorsten Alteholz: My Debian Activities in July 2019

FTP master

After the release of Buster I could start with real work in NEW again. Even the heat could not keep me from rejecting something. So this month I accepted 279 packages and rejected 15. The overall number of packages that got accepted was 308.

Debian LTS

This was my sixty-first month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.5 hours. During that time I did LTS uploads of:

  • [DLA 1849-1] zeromq3 security update for one CVE
  • [DLA 1833-2] bzip2 regression update for one patch
  • [DLA 1856-1] patch security update for one CVE
  • [DLA 1859-1] bind9 security update for one CVE
  • [DLA 1864-1] patch security update for one CVE

I am glad that I could finish the bind9 upload this month.
I also started to work on ruby-mini-magick and python2.7. Unfortunately, when building both packages (even without new patches), the test suite fails. So I first have to fix that as well.

Last but not least I did ten days of frontdesk duties. This was more than a week as everybody was at DebConf and I seemed to be the only one at home …

Debian ELTS

This month was the fourteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-2 of bzip2 for an upstream regression
  • ELA-144-1 of patch for one CVE
  • ELA-147-1 of patch for one CVE
  • ELA-148-1 of bind9 for one CVE

I also did some days of frontdesk duties.

Other stuff

This month I reuploaded some Go packages that would not migrate because they were binary uploads.

I also filed rm bugs to remove all alljoyn packages. Upstream is dead, no one is using this software anymore and bugs won’t be fixed.

Planet DebianEmmanuel Kasper: Debian 9 -> 10 Upgrade report

I upgraded my laptop and VPS to Debian 10, as usual in Debian everything worked out of the box, the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container, to test the upgrade path, followed by a config file merge.

I had one major problem, though, connecting to my PHP-based DokuWiki website subsole.org, which displayed a rather unwelcoming screen after the upgrade:

[screenshot of the unwelcoming screen]

I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However, the issue was not so complicated: since the apache2 PHP module was disabled, apache2 was outputting the source code of DokuWiki instead of executing it. As you can see, I don't PHP that often.

A simple

a2enmod php7.3
systemctl restart apache2

fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.
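
A quick way to check whether the module is actually enabled (assuming the a2query helper shipped with the apache2 package is available) would be:

a2query -m php7.3    # prints the module's status, or exits non-zero if it is not enabled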

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the PHP package maintainers.

Planet DebianDebian GSoC Kotlin project blog: Packaging Dependencies Part 2; and plan on how to.

Mapping and packaging dependencies part 1.

Hey all, I had my exams during weeks 8 and 9 so I couldn't update my blog or get much accomplished; but last week was completely free, so I managed to finish packaging all the dependencies from packaging dependencies part 1. Since some of you may not remember how I planned to tackle packaging dependencies, I'll mention it here one more time.

"I split this task into two sub tasks that can be done independently. The 2 subtasks are as follows:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part1.1:package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build ; i.e try to recreate whatever is in it."

This is taken from my last blog, which was specifically on packaging dependencies in part 1. Now I am happy to tell all of you that packaging dependencies for part 1 is complete and all the needed packages are either in the NEW queue or already in the sid archive as of 04 August 2019. I would like to thank ebourg, seamlik and andrewsh for helping me with this.

How to build kotlin 1.3.30 after dependency packaging part 1 and design choices.

Before I go into how to build the project as it is now I'll briefly talk of some of the choices I made while packaging dependencies in part 1 and general things you should know.

Two dependencies in part 1 were Jcabi-aether and sonatype-aether. Both of these are incompatible with maven-3, and they were only used in one single file in the entire dist task graph. Considering the time it would take to migrate these dependencies to maven-3, I chose to patch out the one file that needed both of them; that change is denoted by this commit. Also note that so far we are only trying to build the dist task, which only builds the basic Kotlin compiler; it doesn't build the maven artifacts with poms, nor does it build the kotlin-gradle-plugin. Those things are built and installed in the local maven repository (the .m2 directory in the source project when you invoke debuild) using the install task, which I am planning to tackle once we successfully finish building the dist task. Invoking the install task in our master as of 04 August 2019 will build and install all available maven artifacts into the local maven repo, but this again will not include kotlin-gradle-plugin or the like, since I have removed those subprojects as they aren't needed by the dist task. Keeping them would mean that I have to convert and patch them to Groovy if they are written in .kts, since they are evaluated during the initialization phase.

Now we are ready to build the project. I have written a simple makefile which copies all the needed bootstrap jars and prebuilts to their proper places. All you need to build the project is

1. git clone https://salsa.debian.org/m36-guest/kotlin-1.3.30.git
2. cd kotlin-1.3.30
3. git checkout buildv1
4. debian/pseudoBootstrap bootstrap
5. debuild -b -rfakeroot -us -uc
Note that we only need to do steps 1 through 4 the very first time you build this project. Every time after that, just invoke step 5.

Packaging dependencies part 2.

Packaging dependencies part 2 involves packaging the dependencies in :buildSrc:prepare-deps:intellij-sdk:build. This is the folder that takes up the most space in Kotlin-1.3.30-temp-requirements. The sole purpose of this task is to reduce the jars in this folder and substitute them with jars from the Debian environment. I have managed to map out the jars from these that are needed for the dist task graph, and they are

```
saif@Hope:/srv/chroot/KotlinCh/home/kotlin/kotlin-1.3.30-debian-maintained/buildSrc/prepare-deps/intellij-sdk/repo/kotlin.build.custom.deps/183.5153.4$ ls -R
.:
intellij-core  intellij-core.ivy.xml  intellijUltimate  intellijUltimate.ivy.xml  jps-standalone  jps-standalone.ivy.xml

./intellij-core:
asm-all-7.0.jar  intellij-core.jar  java-compatibility-1.0.1.jar

./intellijUltimate:
lib

./intellijUltimate/lib:
asm-all-7.0.jar  guava-25.1-jre.jar  jna.jar           log4j.jar      openapi.jar    picocontainer-1.2.jar  platform-impl.jar   trove4j.jar
extensions.jar   jdom.jar            jna-platform.jar  lz4-1.3.0.jar  oro-2.0.8.jar  platform-api.jar       streamex-0.6.7.jar  util.jar

./jps-standalone:
jps-model.jar
```

This folder is treated as an ant repository; the code for that is here. Build.gradle files use this via methods like this one, which tells the project to take only the needed jars from the collection. I am planning on replacing this with plain old maven repository resolution using the format compile(groupID:artifactId:version), but we will need the jars to be in our system anyway; at least now we know that this particular file structure can be avoided.

Please note that these jars listed above by me are only needed for the dist task and the ones needed for other subprojects in the original install task can still be found here.

The following are the dependencies needed for part 2. * denotes the ones I am not sure of. Contact me before you attempt to package any of the IntelliJ dependencies, as we only need parts from those and I have a script to tell what we need.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps
3.->intellij-core, open-api -> https://github.com/JetBrains/intellij-community/tree/183.5153
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 ([WIP-Saif])
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml (DONE: here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.-> * platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
11.-> * util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
12.-> * platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform

So if any of you want to help please kindly take on any of these and package them.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

Planet DebianMike Gabriel: MATE 1.22 landed in Debian unstable

Last week, I did a bundle upload of (nearly) all MATE 1.22 related components to Debian unstable. Packages should have been built by now for most of the 24 architectures supported by Debian (I just fixed an FTBFS of mate-settings-daemon on non-Linux host archs). The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Credits

Again a big thanks goes to the packaging team and also to the upstream maintainers of the MATE desktop environment. Martin Wimpress and I worked on most parts of the packaging for the 1.22 release series this time. On the upstream side, a big thanks goes to all developers, esp. Vlad Orlov and Wolfgang Ulbrich for fixing / reviewing many many issues / merge requests. Good work, folks!!! plus Big Thanks!!!

References


light+love,
Mike Gabriel (aka sunweaver)

CryptogramMore on Backdooring (or Not) WhatsApp

Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the relevant section is at the 23:00 mark), and it doesn't talk about client-side scanning of messages. It doesn't talk about messaging apps at all. It discusses using AI techniques to find bad content on Facebook, and the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful content] on familiar grounds. And that is, we have been training our AI models on the server and making inferences on the server when all the data are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some unique integrity challenges. URL masking is one such problem which is very hard to do. We have the traditional way of server-side inference. What is URL masking? Let us imagine that a user sees a link on the app and decides to click on it. When they click on it, Facebook actually logs the URL to crawl it at a later date. But...the publisher can dynamically change the content of the webpage to make it look more legitimate [to Facebook]. But then our users click on the same link, they see something completely different -- oftentimes it is disturbing; oftentimes it violates our policy standards. Of course, this creates a bad experience for our community that we would like to avoid. This and similar integrity problems are best solved with AI on the device.

That might be true, but it also would hand whatever secret-AI sauce Facebook has to every one of its users to reverse engineer -- which means it's probably not going to happen. And it is a dumb idea, for reasons Steve Bellovin has pointed out.

Facebook's first published response was a comment on the Hacker News website from a user named "wcathcart," which Cardozo assures me is Will Cathcart, the vice president of WhatsApp. (I have no reason to doubt his identity, but surely there is a more official news channel that Facebook could have chosen to use if they wanted to.) Cathcart wrote:

We haven't added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

Facebook's second published response was a comment on my original blog post, which has been confirmed to me by the WhatsApp people as authentic. It's more of the same.

So, this was a false alarm. And, to be fair, Alec Muffet called foul on the first Forbes piece:

So, here's my pre-emptive finger wag: Civil Society's pack mentality can make us our own worst enemies. If we go around repeating one man's Germanic conspiracy theory, we may doom ourselves to precisely what we fear. Instead, we should -- we must -- take steps to constructively demand what we actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch for. End-to-end encryption only secures data in transit. The data has to be in the clear on the device where it is created, and it has to be in the clear on the device where it is consumed. Those are the obvious places for an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to correct the record, while at the same time not wanting to write something on their own letterhead (just a couple of comments, so far). I spoke at length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last December from EFF. (Back then, I remember thinking of him -- and the two other new privacy hires -- as basically human warrant canaries. If they ever leave Facebook under non-obvious circumstances, we know that things are bad.) He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this. I am trusting him, while also reminding everyone that Facebook has broken so many privacy promises that they really can't be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road map.

Hacker News thread.

EDITED TO ADD (8/4): SlashDot covered my retraction.

,

Planet DebianAndy Simpkins: Debconf19: Curitiba, Brazil – AV Setup

I write this on Monday whilst sat in the airport in São Paulo awaiting my onward flight back to the UK, and the fun of the change of personnel in Downing Street, something I have fortunately been able to ignore whilst at DebConf. [Edit: and finished writing it the Saturday after getting home, after much sleep]

Arriving on the first Sunday of DebCamp meant that I was one of the first people to arrive; however, most of the video team either arrived at about the same time or had landed before me. We spent most of our daytime during DebCamp setting up for the following week's conference.

Step one was getting a network operational.  We had been offered space for our servers in a university machine room, but chose instead to occupy the two ‘green’ rooms below the main auditorium stage, using one as a makeshift V/NOC and the other as our machine room, as this gave us continuous and easy access [0] to our servers whilst separating us from the fan noise.  I ran additional network cable between the back of the stage and our makeshift machine room; routing the cable around the back of the stage and into the ceiling void to just outside the V/NOC was relatively simple.   Routing into the V/NOC needed a bit of help to get the cable through a small gap we found where some other cables ran through the ‘fire break’.  Getting a cable between the two ‘green rooms’, however, was a PITA.  Many people, including myself, gave up on it before I finally returned to the problem and, with the aid of a fully extended server rail gaffer-taped to a clothing rail to make a 4m long pole, was eventually able to deliver a cable through the 3 floor supports / fire breaks that separated the two rooms (and before someone suggests it, a ‘fish’ wire is what we tried first).   The university were providing us with backbone network, but it did take a couple of meetings to get our video network in its own separate VLAN and to get it to pass traffic unmolested between nodes.

The final network setup (for video, that is – the conference was piggy-backing on the university WiFi network, and there was also a DebConf network in the National Inn) was to make live the fibre links that had been installed prior to our arrival.  Two links had been pulled through so that we could join the ‘Video Confrencia’ room and the ‘Front Desk’ to the rest of the university network; however, when we came to commission them we discovered that the wrong media converters had been supplied: they should have been for single-mode fibre, but multi-mode converters had been delivered.  Nothing that the university IT department couldn’t solve, and indeed they did as soon as we pointed out the mistake.  They provided us with replacement media converters capable of driving a signal down *both* single- and multi-mode fibre, something I have not seen before.

For the rest of the week Paddatrapper and I spent most of our time running cables and setting up the three talk rooms that were to be filmed.  Phls had been able to provide us with details of the venue’s AV system AND scale plans of the three talk rooms; this, along with the photos provided by the local team and Tumbleweed’s visit to the site, enabled us to plan the cable runs right down to the location of power sockets.

I am going to add scale plans & photos to the things that we request for all future DebConfs.  They made planning and setup so much easier and faster.  Of course we still ended up running more cables than we originally expected – we ran Ethernet front to back in all three rooms when we originally intended to only do this in Video Confrencia (the small BoF room).  This was because it turned out that the sockets at different ends of the room were on different packet switches that in turn fed into the university backbone.  We were informed that the backbone is 1Gb/s, which meant that the video LAN would have consumed the entire bandwidth of the backbone with nothing left over.

We have 200Mb/s streams from the OPSIS frame grabbers and a second 200Mb/s output stream from each room.  That equates to exactly 1Gb/s (the Video Confrencia BoF room is small enough that we were always going to run a front/back cable), and that is before any backups of recordings to our server.  As it turns out that wasn’t true, but by then we had already run the cables and got things working…

I won’t blog about the software setup of the servers, our back-end CDN, or the review process – this is not my area of expertise.  You need to thank Olasd, Tumbleweed & Ivo for the on-site system setup and Walter for the review process.  Actually, there are also Carlfk, Ubec, Valhalla and I am sure numerous other people that I am too tired to remember; I apologise for forgetting you…

So back to physical setup.  The main auditorium was operational.  I had re-patched the mixing desk to give a setup as close as possible in all three talk rooms – we are most interested in audio for the stream/recording, so we use the main mix output for this and move the room PA onto a sub-group output.  Unusually for a DebConf, I found that I needed to ensure that there *was* a ground connection at the desk for all output feeds – it appears that there was NO earth in the entire auditorium; well, there was at some point back in time, but it had been systematically removed, either by cutting off the earth pin on power plugs or, unfortunately for us, by cutting and removing cables from any bonding points, behind sockets etc.  Done, probably, because RCDs kept tripping and clearly the problem is that there is an earth present to leak into and not that there is a leak in the first place, or just long cable runs into inductive loads that mean that a different ‘trip curve’ needed to be selected <sigh>.

We still had significant mains hum on the PA system (slightly less than was present before I started the room setup, so nothing I had done).  The venue AV team pointed out that they had a magnetic coupler AND an audio DSP unit in front of the PA amplifier stack – telling me that this was to reduce the hum.  Fortunately for us the venue had 4 equalisers that I could use, one for each of the mics, so I was able to knock out 60Hz and 120Hz and dip higher harmonics; this again made an improvement.  Apparently we were getting the best results in living memory of the on-site AV team, so at this point I stopped tweaking the setup: it was good enough, and we could live with the remaining hum.

The other two talk rooms were pretty much the same setup, only the rooms are smaller.  The exception being that, whilst we do have a small portable PA in the Video Confrencia room, we only use it for audio from the presenter’s laptop – the room was so small there was no point in amplifying presenters…

Right, I could now move on to ‘lighting’.  We were supposed to use the flood ‘work’ lights above the stage, but quite a few of the halogen lamps were blown.  This meant that there were far too many ‘dark’ patches along the stage.  Additionally, the colour temperatures of the different work lights were all over the place, which would cause havoc with white balance; still, we could have lived with this…  I asked about getting the lamps replaced.  Initially I was told no, but once I pointed out the problem to a more senior member of staff they agreed that the lamps could be replaced and that it would be done the following day.  It wasn’t.  I offered that we could replace the lamps but was then told that they would now be doing this as part of a service in a few weeks’ time.  I was however told that instead, if I was prepared to rig them myself, we could use the stage lights on the dimmers.  Win!  This would have been my preferred option all along, and I suspect we were only offered this having started to build a reasonable working relationship with the site AV team.  I was able to sign out a bunch of lamps from the stores and rig them as I saw fit.  I was given a large wooden step ladder, and shown how to access the catwalk.  I could now rig lights where I wanted them.

Two overhead floods and two spots were used to light the lectern from different angles.  Three overhead floods and three focused cans were used to light the discussion table.  I also hung two forward-facing spots to illuminate someone stood at the question mic, and finally 4 cans (2 focus cans and a pair of 1kW par cans sharing the same plug) to add some light to the front 5 or 6 rows of the audience.  The venue AV team repaired the DMX cable to the lighting dimmers and we were good to go…  well, just as soon as I had worked out the DMX addressing / cable patching at the dimmer banks, and after a short time spent reading the instructions for the desk – enough to apply ‘soft patches’ so I could allocate a fader to each dimmer channel we were using.  I then read the instructions a bit further, came back the following day, and programmed appropriate scenes so that the table could be lit using one ‘slider’, the lectern by another, and so on.  JMW came back later in the week and updated the program again to add a timed fade up or down, and we also set a maximum level on the audience lights to stop us from ‘blinding’ people in the first couple of rows (we set the maximum value of that particular scene to be 20% of available intensity).

Lighting in the mini auditorium was from simple overhead ‘domestic’ lamps; I still needed to get some bulbs replaced, and then move / adjust them to best light a speaker stood at the lectern or a discussion panel sat at the table.   Finally, we had no control of lighting in Video Confrencia (about normal for a DebConf).

Later in the week we revisited the hum problem again.  We confirmed that the hum was no longer being emitted out of the desk, so it must have been on the cable run to the stack or in the stack itself.  The hum was still annoying and Kyle wanted to confirm that the DSP at the top of the amp stack was correctly set up – could we improve things?  It took a little persuasion, but eventually we were granted permission, and the password, to access the DSP.  The DSP had not been configured properly at all.  Kyle applied a 60Hz notch filter, and this made some difference.  I suggested a comb filter, which Kyle then applied for 60Hz and 5 or 6 orders of harmonics, and that did the trick (thanks Kyle – I wouldn’t have had a clue how to drive the DSP).  There was no longer any perceivable noise coming out of the left hand speakers, but there was still a noticeable, though much lower, hum from the right.  We removed the input cable to the amp stack and yes, the hum was still there, so we were picking up noise between the amps and the speaker!  A quick check with the lighting dimmers turned off and the noise dropped again.  I started chasing the right hand speaker cables – they run up and over the stage along the catwalk, in the same bundle as all the unearthed lighting AND permanent power cables.  We were inducing mains noise directly onto the speaker cables.  The only fix for this would be to properly screen AND separate the speaker feed cables.  Better yet, send a balanced audio feed, separated from the power cables, to the right hand side of the stage and move the right hand amplifiers to that side of the stage.  Nothing we could do – but something that we could point out to the venue AV team, who, strangely, hadn’t considered this before…

[0] Where continuous access meant “whilst we had access to the site” (the whole campus is closed overnight)

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.6

A shiny new release 0.2.6 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages, which remains less than ideal.

This version updates to CCTZ release 2.3 from April, plus changes accrued since then. It also switches to tinytest which, among other benefits, permits continued testing of the installed package.

Changes in version 0.2.6 (2019-08-03)

  • Synchronized with upstream CCTZ release 2.3 plus commits accrued since then (Dirk in #30).

  • The package now uses tinytest for unit tests (Dirk in #31).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Philippe Mengual (jpmengual)
  • Taowa Munene-Tardif (taowa)
  • Georg Faerber (georg)
  • Kyle Robbertze (paddatrapper)
  • Andy Li (andyli)
  • Michal Arbet (kevko)
  • Sruthi Chandran (srud)
  • Alban Vidal (zordhak)
  • Denis Briand (denis)
  • Jakob Haufe (sur5r)

The following contributors were added as Debian Maintainers in the last two months:

  • Bobby de Vos
  • Jongmin Kim
  • Bastian Germann
  • Francesco Poli

Congratulations!

Planet DebianElana Hashman: My favourite bash alias for git

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
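
For example, reusing the hypothetical pull request number from above:

$ git fetch origin pull/123/merge && git checkout FETCH_HEAD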

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

alias pull='pull'

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
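
If you prefer the detached-HEAD workflow of the pull function above instead of auto-created branches, a minimal GitLab variant might look like this (an untested sketch; the function name is ours):

mrpull() {
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}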

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

,

Krebs on SecurityWhat We Can Learn from the Capital One Hack

On Monday, a former Amazon employee was arrested and charged with stealing more than 100 million consumer applications for credit from Capital One. Since then, many have speculated the breach was perhaps the result of a previously unknown “zero-day” flaw, or an “insider” attack in which the accused took advantage of access surreptitiously obtained from her former employer. But new information indicates the methods she deployed have been well understood for years.

What follows is based on interviews with almost a dozen security experts, including one who is privy to details about the ongoing breach investigation. Because this incident deals with somewhat jargon-laced and esoteric concepts, much of what is described below has been dramatically simplified. Anyone seeking a more technical explanation of the basic concepts referenced here should explore some of the many links included in this story.

According to a source with direct knowledge of the breach investigation, the problem stemmed in part from a misconfigured open-source Web Application Firewall (WAF) that Capital One was using as part of its operations hosted in the cloud with Amazon Web Services (AWS).

Known as “ModSecurity,” this WAF is deployed along with the open-source Apache Web server to provide protections against several classes of vulnerabilities that attackers most commonly use to compromise the security of Web-based applications.

The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform. This resource, known as the “metadata” service, is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access.

In AWS, exactly what those credentials can be used for hinges on the permissions assigned to the resource that is requesting them. In Capital One’s case, the misconfigured WAF for whatever reason was assigned too many permissions, i.e. it was allowed to list all of the files in any buckets of data, and to read the contents of each of those files.

The type of vulnerability exploited by the intruder in the Capital One hack is a well-known method called a “Server Side Request Forgery” (SSRF) attack, in which a server (in this case, CapOne’s WAF) can be tricked into running commands that it should never have been permitted to run, including those that allow it to talk to the metadata service.
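
To make the mechanics concrete, here is a rough sketch against the standard AWS instance metadata endpoint; the vulnerable ?url= parameter and hostname are hypothetical, as the complaint does not spell out the exact request:

# The WAF is tricked into fetching an internal URL on the attacker's behalf.
curl 'https://waf.example.com/proxy?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/'
# The response names the attached IAM role; a follow-up request to
# .../security-credentials/<role-name> returns temporary credentials for that role.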

Evan Johnson, manager of the product security team at Cloudflare, recently penned an easily digestible column on the Capital One hack and the challenges of detecting and blocking SSRF attacks targeting cloud services. Johnson said it’s worth noting that SSRF attacks are not among the dozen or so attack methods for which detection rules are shipped by default in the WAF exploited as part of the Capital One intrusion.

“SSRF has become the most serious vulnerability facing organizations that use public clouds,” Johnson wrote. “The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it. The problem is common and well-known, but hard to prevent and does not have any mitigations built into the AWS platform.”

Johnson said AWS could address this shortcoming by including extra identifying information in any request sent to the metadata service, as Google has already done with its cloud hosting platform. He also acknowledged that doing so could break a lot of backwards compatibility within AWS.

“There’s a lot of specialized knowledge that comes with operating a service within AWS, and to someone without specialized knowledge of AWS, [SSRF attacks are] not something that would show up on any critical configuration guide,” Johnson said in an interview with KrebsOnSecurity.

“You have to learn how EC2 works, understand Amazon’s Identity and Access Management (IAM) system, and how to authenticate with other AWS services,” he continued. “A lot of people using AWS will interface with dozens of AWS services and write software that orchestrates and automates new services, but in the end people really lean into AWS a ton, and with that comes a lot of specialized knowledge that is hard to learn and hard to get right.”

In a statement provided to KrebsOnSecurity, Amazon said it is inaccurate to argue that the Capital One breach was caused by AWS IAM, the instance metadata service, or the AWS WAF in any way.

“The intrusion was caused by a misconfiguration of a web application firewall and not the underlying infrastructure or the location of the infrastructure,” the statement reads. “AWS is constantly delivering services and functionality to anticipate new threats at scale, offering more security capabilities and layers than customers can find anywhere else including within their own datacenters, and when broadly used, properly configured and monitored, offer unmatched security—and the track record for customers over 13+ years in securely using AWS provides unambiguous proof that these layers work.”

Amazon pointed to several (mostly a la carte) services it offers AWS customers to help mitigate many of the threats that were key factors in this breach, including:

Access Advisor, which helps identify and scope down AWS roles that may have more permissions than they need;
GuardDuty, designed to raise alarms when someone is scanning for potentially vulnerable systems or moving unusually large amounts of data to or from unexpected places;
The AWS WAF, which Amazon says can detect common exploitation techniques, including SSRF attacks;
Amazon Macie, designed to automatically discover, classify and protect sensitive data stored in AWS.

William Bengston, formerly a senior security engineer at Netflix, wrote a series of blog posts last year on how Netflix built its own systems for detecting and preventing credential compromises in AWS. Interestingly, Bengston was hired roughly two months ago to be director of cloud security for Capital One. My guess is Capital One now wishes they had somehow managed to lure him away sooner.

Rich Mogull is founder and chief technology officer with DisruptOPS, a firm that helps companies secure their cloud infrastructure. Mogull said one major challenge for companies moving their operations from sprawling, expensive physical data centers to the cloud is that very often the employees responsible for handling that transition are application and software developers who may not be as steeped as they should in security.

“There is a basic skills and knowledge gap that everyone in the industry is fighting to deal with right now,” Mogull said. “For these big companies making that move, they have to learn all this new stuff while maintaining their old stuff. I can get you more secure in the cloud more easily than on-premise at a physical data center, but there’s going to be a transition period as you’re acquiring that new knowledge.”

Image: Capital One

Since news of the Capital One breach broke on Monday, KrebsOnSecurity has received numerous emails and phone calls from security executives who are desperate for more information about how they can avoid falling prey to the missteps that led to this colossal breach (indeed, those requests were part of the impetus behind this story).

Some of those people included executives at big competing banks that haven’t yet taken the plunge into the cloud quite as deeply as Capital One has. But it’s probably not much of a stretch to say they’re all lining up in front of the diving board.

It’s been interesting to watch over the past couple of years how various cloud providers have responded to major outages on their platforms — very often soon after publishing detailed post-mortems on the underlying causes of the outage and what they are doing to prevent such occurrences in the future. In the same vein, it would be wonderful if this kind of public accounting extended to other big companies in the wake of a massive breach.

I’m not holding out much hope that we will get such detail officially from Capital One, which declined to comment on the record and referred me to their statement on the breach and to the Justice Department’s complaint against the hacker. That’s probably to be expected, seeing as the company is already facing a class action lawsuit over the breach and is likely to be targeted by more lawsuits going forward.

But as long as the public and private response to data breaches remains orchestrated primarily by attorneys (which is certainly the case now at most major corporations), everyone else will continue to lack the benefit of being able to learn from and avoid those same mistakes.

CryptogramFriday Squid Blogging: Piglet Squid Video

Really neat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramDisabling Security Cameras with Lasers

There's a really interesting video of protesters in Hong Kong using some sort of laser to disable security cameras. I know nothing more about the technologies involved.

CryptogramFacebook Plans on Backdooring WhatsApp

This article points out that Facebook's planned content moderation scheme will result in an encryption backdoor into WhatsApp:

In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as true wiretapping service.

Facebook's model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Once this is in place, it's easy for the government to demand that Facebook add another filter -- one that searches for communications that they care about -- and alert them when it gets triggered.

Of course alternatives like Signal will exist for those who don't want to be subject to Facebook's content moderation, but what happens when this filtering technology is built into operating systems?

The problem is that if Facebook's model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

I don't think this will happen -- why does AT&T care about content moderation -- but it is something to watch?

EDITED TO ADD (8/2): This story is wrong. Read my correction.

Planet DebianSven Hoexter: From 30 to 230 docker containers per host

I could not find much information on the interwebs about how many containers you can run per host. So here are mine, along with the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports, at that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. The first occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were being executed. But the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with PostgreSQL running on a host with THP (Transparent Huge Pages) enabled. So a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stuck with that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaim and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl-like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
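
For example, a matching entry in /etc/sysfs.conf (the attribute path is given relative to /sys; this simply mirrors the setting above) could look like:

kernel/mm/transparent_hugepage/defrag = defer+madvise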

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue keeping track of ports in use, and we already see collisions appear regularly. Very unscientifically, we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we crossed the 100-containers-per-host mark. The error message we saw was hard to understand, though:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source and understanding that it tries to read the stdout of some other process, which sounded roughly like something would have to open a file at some point.

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and not something bound to the session via a PAM module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in Go. So we have a lot of threads, which in Linux are represented as "Lightweight Processes" (LWP). Every LWP still exists with a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768, we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual upper limit on 64-bit systems is 2^22 (4194304) according to man 5 proc.
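
For reference, the corresponding tunable, e.g. in a file under /etc/sysctl.d/ (the file name is ours):

kernel.pid_max = 500000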

CryptogramHow Privacy Laws Hurt Defendants

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don't have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

Planet DebianVincent Bernat: Securing BGP on the host with origin validation

An increasingly popular design for a datacenter network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate.1 Take a look at “L3 routing to the hypervisor with BGP” for a usage example.

Spine-leaf fabric with two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Some of the servers have a speech balloon showing the IP prefix they want to handle.
BGP on the host with a spine-leaf IP fabric. A BGP session is established over each link and each host advertises its own IP prefixes.

While routing on the host eliminates the security problems related to Ethernet networks, a server may announce any IP prefix. In the above picture, two of them are announcing 2001:db8:cc::/64. This could be a legit use of anycast or a prefix hijack. BGP offers several solutions to improve this aspect and one of them is to reuse the features around the RPKI.

Short introduction to the RPKI

On the Internet, BGP mostly relies on trust. This contributes to various incidents, caused either by operator errors, like the one that affected Cloudflare a few months ago, or by malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.

IP addresses are allocated by five Regional Internet Registries (RIR). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:2

$ bgpq3 -l v6-IMPORT-APPLE -6 -R 48 -m 48 -A -J -E AS-APPLE
policy-options {
 policy-statement v6-IMPORT-APPLE {
replace:
  from {
    route-filter 2403:300::/32 upto /48;
    route-filter 2620:0:1b00::/47 prefix-length-range /48-/48;
    route-filter 2620:0:1b02::/48 exact;
    route-filter 2620:0:1b04::/47 prefix-length-range /48-/48;
    route-filter 2620:149::/32 upto /48;
    route-filter 2a01:b740::/32 upto /48;
    route-filter 2a01:b747::/32 upto /48;
  }
 }
}

The RPKI (RFC 6480) adds public-key cryptography on top of it to sign the authorization for an AS to be the origin of an IP prefix. Such a record is a Route Origination Authorization (ROA). You can browse the databases of these ROAs through the RIPE’s RPKI Validator instance:

Screenshot from an instance of RPKI validator showing the validity of 85.190.88.0/21 for AS 64476
RPKI validator shows one ROA for 85.190.88.0/21

BGP daemons do not have to download the databases or to check digital signatures to validate the received prefixes. Instead, they offload these tasks to a local RPKI validator implementing the “RPKI-to-Router Protocol” (RTR, RFC 6810).

For more details, have a look at “RPKI and BGP: our path to securing Internet Routing.”

Using origin validation in the datacenter

While it is possible to create our own RPKI for use inside the datacenter, we can take a shortcut and use a validator implementing RTR, like GoRTR, and accepting another source of truth. Let’s work on the following topology:

Spine-leaf fabric two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Three of the physical hosts are validators and RTR sessions are established between them and the top-of-the-rack routers—except their own top-of-the-racks.
BGP on the host with prefix validation using RTR. Each server has its own AS number. The leaf routers establish RTR sessions to the validators.

We assume we have a place to maintain a mapping between the private AS numbers used by each host and the allowed prefixes:3

ASN        Allowed prefixes
AS 65005   2001:db8:aa::/64
AS 65006   2001:db8:bb::/64, 2001:db8:11::/64
AS 65007   2001:db8:cc::/64
AS 65008   2001:db8:dd::/64
AS 65009   2001:db8:ee::/64, 2001:db8:11::/64
AS 65010   2001:db8:ff::/64

From this table, we build a JSON file for GoRTR, assuming each host can announce the provided prefixes or longer ones (like 2001:db8:aa::42:d9ff:fefc:287a/128 for AS 65005):

{
  "roas": [
    {
      "prefix": "2001:db8:aa::/64",
      "maxLength": 128,
      "asn": "AS65005"
    }, {
      "…": "…"
    }, {
      "prefix": "2001:db8:ff::/64",
      "maxLength": 128,
      "asn": "AS65010"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65006"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65009"
    }
  ]
}

This file is deployed to all validators and served by a web server. GoRTR is configured to fetch it and update it every 10 minutes:

$ gortr -refresh=600 \
        -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

The refresh time could be lowered but GoRTR can be notified of an update using the SIGHUP signal. Clients are immediately notified of the change.
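For instance, after publishing a new rpki.json, the running process can be told to reload right away (a sketch, assuming gortr runs as a plain process on the validator):

$ pkill -HUP -x gortr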

The next step is to configure the leaf routers to validate the received prefixes using the farm of validators. Most vendors support RTR:

Platform        Over TCP?   Over SSH?
Juniper JunOS   ✔️
Cisco IOS XR    ✔️          ✔️
Cisco IOS XE    ✔️
Cisco IOS       ✔️
Arista EOS
BIRD            ✔️          ✔️
FRR             ✔️          ✔️
GoBGP           ✔️

Configuring JunOS

JunOS only supports plain-text TCP. First, let’s configure the connections to the validation servers:

routing-options {
    validation {
        group RPKI {
            session validator1 {
                hold-time 60;         # session is considered down after 1 minute
                record-lifetime 3600; # cache is kept for 1 hour
                refresh-time 30;      # cache is refreshed every 30 seconds
                port 8282;
            }
            session validator2 { /* OMITTED */ }
            session validator3 { /* OMITTED */ }
        }
    }
}

By default, at most two sessions are randomly established at the same time. This provides a good way to load-balance them among the validators while maintaining good availability. The second step is to define the policy for route validation:

policy-options {
    policy-statement ACCEPT-VALID {
        term valid {
            from {
                protocol bgp;
                validation-database valid;
            }
            then {
                validation-state valid;
                accept;
            }
        }
        term invalid {
            from {
                protocol bgp;
                validation-database invalid;
            }
            then {
                validation-state invalid;
                reject;
            }
        }
    }
    policy-statement REJECT-ALL {
        then reject;
    }
}

The policy statement ACCEPT-VALID turns the validation state of a prefix from unknown to valid if the ROA database says it is valid. It also accepts the route. If the prefix is invalid, the prefix is marked as such and rejected. We have also prepared a REJECT-ALL statement to reject everything else, notably unknown prefixes.

A ROA only certifies the origin of a prefix. A malicious actor can therefore prepend the expected AS number to the AS path to circumvent the validation. For example, AS 65007 could announce 2001:db8:bb::/64, a prefix allocated to AS 65006, by advertising it with the AS path 65007 65006. To avoid that, we define an additional policy statement to reject AS paths with more than one AS:

policy-options {
    as-path EXACTLY-ONE-ASN "^.$";
    policy-statement ONLY-DIRECTLY-CONNECTED {
        term exactly-one-asn {
            from {
                protocol bgp;
                as-path EXACTLY-ONE-ASN;
            }
            then next policy;
        }
        then reject;
    }
}

The last step is to configure the BGP sessions:

protocols {
    bgp {
        group HOSTS {
            local-as 65100;
            type external;
            # export [ … ];
            import [ ONLY-DIRECTLY-CONNECTED ACCEPT-VALID REJECT-ALL ];
            enforce-first-as;
            neighbor 2001:db8:42::a10 {
                peer-as 65005;
            }
            neighbor 2001:db8:42::a12 {
                peer-as 65006;
            }
            neighbor 2001:db8:42::a14 {
                peer-as 65007;
            }
        }
    }
}

The import policy rejects any AS path longer than one AS, accepts any validated prefix and rejects everything else. The enforce-first-as directive is also pretty important: it ensures the first (and, here, only) AS in the AS path matches the peer AS. Without it, a malicious neighbor could inject a prefix using an AS different than its own, defeating our purpose.4

Let’s check the state of the RTR sessions and the database:

> show validation session
Session                                  State   Flaps     Uptime #IPv4/IPv6 records
2001:db8:4242::10                        Up          0   00:16:09 0/9
2001:db8:4242::11                        Up          0   00:16:07 0/9
2001:db8:4242::12                        Connect     0            0/0

> show validation database
RV database for instance master

Prefix                 Origin-AS Session                                 State   Mismatch
2001:db8:11::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::10                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::11                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::10                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::11                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::10                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::11                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::10                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::11                       valid

  IPv4 records: 0
  IPv6 records: 18

Here is an example of an accepted route:

> show route protocol bgp table inet6 extensive all
inet6.0: 11 destinations, 11 routes (8 active, 0 holddown, 3 hidden)
2001:db8:bb::42/128 (1 entry, 0 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd050470
                Next-hop reference count: 4
                Source: 2001:db8:42::a12
                Next hop: 2001:db8:42::a12 via em1.0, selected
                Session Id: 0x0
                State: <Active NotInstall Ext>
                Local AS: 65006 Peer AS: 65000
                Age: 12:11
                Validation State: valid
                Task: BGP_65000.2001:db8:42::a12+179
                AS path: 65006 I
                Accepted
                Localpref: 100
                Router ID: 1.1.1.1

A rejected route would be similar with the reason “rejected by import policy” shown in the details and the validation state would be invalid.

Configuring BIRD

BIRD supports both plain-text TCP and SSH. Let’s configure it to use SSH. We need to generate keypairs for both the leaf router and the validators (they can all share the same keypair). We also have to create a known_hosts file for BIRD:

(validatorX)$ ssh-keygen -qN "" -t rsa -f /etc/gortr/ssh_key
(validatorX)$ echo -n "validatorX:8283 " ; \
              cat /etc/gortr/ssh_key.pub
validatorX:8283 ssh-rsa AAAAB3[…]Rk5TW0=
(leaf1)$ ssh-keygen -qN "" -t rsa -f /etc/bird/ssh_key
(leaf1)$ echo 'validator1:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ echo 'validator2:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ cat /etc/bird/ssh_key.pub
ssh-rsa AAAAB3[…]byQ7s=
(validatorX)$ echo 'ssh-rsa AAAAB3[…]byQ7s=' >> /etc/gortr/authorized_keys

GoRTR needs additional flags to allow connections over SSH:

$ gortr -refresh=600 -verify=false -checktime=false \
      -cache=http://127.0.0.1/rpki.json \
      -ssh.bind=:8283 \
      -ssh.key=/etc/gortr/ssh_key \
      -ssh.method.key=true \
      -ssh.auth.user=rpki \
      -ssh.auth.key.file=/etc/gortr/authorized_keys
INFO[0000] Enabling ssh with the following authentications: password=false, key=true
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

Then, we can configure BIRD to use these RTR servers:

roa6 table ROA6;
template rpki VALIDATOR {
   roa6 { table ROA6; };
   transport ssh {
     user "rpki";
     remote public key "/etc/bird/known_hosts";
     bird private key "/etc/bird/ssh_key";
   };
   refresh keep 30;
   retry keep 30;
   expire keep 3600;
}
protocol rpki VALIDATOR1 from VALIDATOR {
   remote validator1 port 8283;
}
protocol rpki VALIDATOR2 from VALIDATOR {
   remote validator2 port 8283;
}

Unlike JunOS, BIRD doesn’t have a feature to only use a subset of validators. Therefore, we only configure two of them. As a safety measure, if both connections become unavailable, BIRD will keep the ROAs for one hour.

We can query the state of the RTR sessions and the database:

> show protocols all VALIDATOR1
Name       Proto      Table      State  Since         Info
VALIDATOR1 RPKI       ---        up     17:28:56.321  Established
  Cache server:     rpki@validator1:8283
  Status:           Established
  Transport:        SSHv2
  Protocol version: 1
  Session ID:       0
  Serial number:    1
  Last update:      before 25.212 s
  Refresh timer   : 4.787/30
  Retry timer     : ---
  Expire timer    : 3574.787/3600
  No roa4 channel
  Channel roa6
    State:          UP
    Table:          ROA6
    Preference:     100
    Input filter:   ACCEPT
    Output filter:  REJECT
    Routes:         9 imported, 0 exported, 9 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:              9          0          0          0          9
      Import withdraws:            0          0        ---          0          0
      Export updates:              0          0          0        ---          0
      Export withdraws:            0        ---        ---        ---          0

> show route table ROA6
Table ROA6:
    2001:db8:11::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:11::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:aa::/64-128 AS65005  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:bb::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:cc::/64-128 AS65007  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:dd::/64-128 AS65008  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ee::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ff::/64-128 AS65010  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)

Like in the JunOS case, a malicious actor could try to work around the validation by building an AS path where the last AS number is the legitimate one. BIRD is flexible enough to allow us to use any AS to check the IP prefix. Instead of checking the origin AS, we ask it to check the peer AS with this function, without looking at the AS path:

function validated(int peeras) {
   if (roa_check(ROA6, net, peeras) != ROA_VALID) then {
      print "Ignore invalid ROA ", net, " for ASN ", peeras;
      reject;
   }
   accept;
}

The BGP instance is then configured to use the above function as the import policy:

protocol bgp PEER1 {
   local as 65100;
   neighbor 2001:db8:42::a10 as 65005;
   ipv6 {
      import keep filtered;
      import where validated(65005);
      # export …;
   };
}

You can view the rejected routes with show route filtered, but BIRD does not store information about the validation state in the routes. You can also watch the logs:

2019-07-31 17:29:08.491 <INFO> Ignore invalid ROA 2001:db8:bb::40:/126 for ASN 65005

Currently, BIRD does not reevaluate the prefixes when the ROAs are updated. There is work in progress to fix this. If this feature is important to you, have a look at FRR instead: it also supports the RTR protocol and triggers a soft reconfiguration of the BGP sessions when ROAs are updated.


  1. Notably, the data flow and the control plane are separated. A node can remove itself by notifying its peers without losing a single packet. ↩︎

  2. People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. ↩︎

  3. We are using 16-bit AS numbers for readability. Because we need to assign a different AS number for each host in the datacenter, in an actual deployment, we would use 32-bit AS numbers. ↩︎

  4. Cisco routers and FRR enforce the first AS by default. It is a tunable value to allow the use of route servers: they distribute prefixes on behalf of other routers. ↩︎

Worse Than FailureError'd: Choice is but an Illusion

"If you choose not to decide which button to press, you still have made a choice," Rob H. wrote.

 

"If you have a large breed cat, or small dog, the name doesn't matter, it just has to get the job done," writes Bryan.

 

Mike R. wrote, "Thanks Dropbox. Because your survey can't add, I missed out on my chance to win a gift card. Way to go guys..."

 

"There was a magnitude 7.1 earthquake near Ridgecrest, CA on 7/5/2019 at 8:25PM PDT. I visited the USGS earthquakes page, clicked on the earthquake link, and clickedd on the 'Did you feel it?' link, because we DID feel it here in Sacramento, CA, 290 miles away," Ken M. wrote, "Based on what I'm seeing though, I think they may call it a 'bat-quake' instead."

 

Benjamin writes, "Apparently Verizon is trying to cast a spell on me because I used too much data."

 

Daniel writes, "German telekom's customer center site is making iFrames sexy again."

 


Planet DebianJunichi Uekawa: Started wanting to move stuff to docker.

Started wanting to move stuff to docker. Especially around build systems. If things are mutable they will go bad and fixing them is annoying.

,

Cory DoctorowPaul Di Filippo on Radicalized: “Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism”

I was incredibly gratified and excited to read Paul Di Filippo’s Locus review of my latest book, Radicalized; Di Filippo is a superb writer, one of the original Mirrorshades cyberpunks, and he is a superb and insightful literary critic, so when I read his superlative-laden review of my book today, it was an absolute thrill (I haven’t been this excited about a review since Bruce Sterling reviewed Walkaway).


There’s so much to be delighted by in this review, not least a comparison to Rod Serling (!). Below, a couple paras of especial note.

His latest, a collection of four novellas, subtitled “Four Tales of Our Present Moment”, fits the template perfectly, and extends his vision further into a realm where impassioned advocacy, Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism drives a kind of partisan or Cassandran science fiction seen before mostly during the post-WWII atomic bomb panic (think On the Beach) and 1960s New Wave-Age of Aquarius agitation (think Bug Jack Barron). Those earlier troubled eras resonate with our current quandary, but the “present moment” under Doctorow’s microscope – or is that a sniper’s crosshairs? – has its own unique features that he seeks to elucidate. These stories walk a razor’s edge between literature and propaganda, aesthetics and bludgeoning, subtlety and stridency, rant and revelation. The only guaranteed outcome after reading is that no one can be indifferent to them…

…The Radicalized collection strikes me in some sense as an episode of a primo TV anthology series – Night Gallery in the classical mode, or maybe in a more modern version, Philip K. Dick’s Electric Dreams. It gives us polymath Cory Doctorow as talented Rod Serling – himself both a dreamer and a social crusader – telling us that he’s going to show us, as vividly as he can, several nightmares or future hells, but that somehow the human spirit and soul will emerge intact and even triumphant.


Paul Di Filippo Reviews Radicalized by Cory Doctorow [Paul Di Filippo/Locus]

Planet DebianMike Gabriel: My Work on Debian LTS/ELTS (July 2019)

In July 2019, I have worked on the Debian LTS project for 15.75 hours (of 18.5 hours planned) and on the Debian ELTS project for another 12 hours (as planned) as a paid contributor.

LTS Work

  • Upload to jessie-security: libssh2 (DLA 1730-3) [1]
  • Upload to jessie-security: libssh2 (DLA 1730-4) [2]
  • Upload to jessie-security: glib2.0 (DLA 1866-1) [3]
  • Upload to jessie-security: wpa (DLA 1867-1) [4]

The Debian Security package archive only has arch-any buildds attached, so source packages that build at least one arch-all bin:pkg must include the arch-all DEBs from a local build. So, ideally, we upload source + arch-all builds and leave the arch-any builds to the buildds. However, this seems to be problematic when doing the builds using sbuild. So, I spent a little time on...

  • sbuild: Try to understand the mechanism of building arch-all + source packages (i.e. omitting arch-any uploads)... Unfortunately, there is no "-g" option (like in dpkg-buildpackage), and neither does the parameter combination --source --arch-all --no-arch-any result in a source + arch-all build; see the sketch below for the dpkg-buildpackage equivalent. More investigation / communication with the developers of sbuild is required here. To be continued...
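A sketch of that dpkg-buildpackage equivalent, for comparison:

# Build only the source package plus the architecture-independent binaries
# (equivalent to --build=source,all):
$ dpkg-buildpackage -g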

ELTS Work

  • Upload to wheezy-lts: freetype (ELA 149-1) [5]
  • Upload to wheezy-lts: libssh2 (ELA 99-3) [6]

References

Planet DebianGunnar Wolf: Goodbye, pgp.gwolf.org

I started running an SKS keyserver a couple of years ago (don't really remember, but I think it was around 2014). I am, as you probably expect me to be given my lines of work, a believer of the Web-of-Trust model upon which the PGP network is built. I have published a couple of academic papers (Strengthening a Curated Web of Trust in a Geographically Distributed Project, with Gina Gallegos, Cryptologia 2016, and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project’s cryptographic keyring, with Victor González Quiroga, Journal of Internet Services and Applications, 2018) and presented several conferences regarding some aspects of it, mainly in relation to the Debian project.

Even in light of the recent flooding attacks (more info by dkg, Daniel Lange, Michael Altfield, others available; GnuPG task tracker), I still believe in the model. But I have had enough of the implementation's brittleness. I don't know how much to blame SKS and how much to blame myself, but I cannot devote more time to fiddling around to try to get it to work as it should — I was providing an unstable service. Besides, this year I had to rebuild the database three times already due to it getting corrupted... And yesterday I just could not get past segfaults when importing.

So, I have taken the unhappy decision to shut down my service. I have contacted both the SKS mailing list and the servers I was peering with. Due to the narrow scope of a single SKS server, possibly this post is not needed... But it won't hurt, so here it goes.

Planet DebianThomas Goirand: My work during DebCamp / DebConf

Lots of uploads

Grepping my IRC log for the BTS bot output shows that I uploaded roughly 244 times in Curitiba.

Removing Python 2 from OpenStack by uploading OpenStack Stein in Sid

Most of these uploads were uploading OpenStack Stein from Experimental to Sid, with a record-breaking 96 uploads in a single day. As the work for Python 2 removal was done before the Buster release (uploads in Experimental), this effectively removed a lot of Python 2 support.

Removing Python 2 from Django packages

But once that was done, I started uploading some Django packages. Indeed, since Django 2.2 was uploaded to Sid with the removal of Python 2 support, a lot of dangling python-django-* packages needed to be fixed. Not only did Python 2 support need to be removed from them, but often patches were needed to fix at least the unit tests, since Django 2.2 removed a lot of things that had been deprecated for several earlier versions. I went through all of the Django packages we have in Debian, and I believe I fixed most of them. I uploaded Django packages 43 times, fixing 39 packages.

Removing Python 2 support from non-django or OpenStack packages

During the Python BoF at Curitiba, we collectively decided it was time to remove Python 2, and that we’ll try to do as much of that work as possible before Bullseye. Details of this will come from our dear leader p1otr, so I’ll let him write the document and won’t comment (yet) on how we’re going to proceed. Anyway, we already have a “python2-rm” release tracker. After the Python BoF, I then also started removing Python 2 support from a few packages with more generic usage. Hopefully, touching only leaf packages, without breaking things. I’m not sure of the total count of packages that I touched, probably a bit less than a dozen.

Horizon broken in Sid since the beginning of July

Unfortunately, Horizon, the OpenStack dashboard, is currently still broken in Debian Sid. Indeed, since Django 1.11, the login() function in views.py has been deprecated in favor of a LoginView class. And in Django 2.2, the support for the function has been removed. As a consequence, since the 9th of July, when Django 2.2 was uploaded, Horizon’s openstack_auth/views.py is broken. Upstream says they are targeting Django 2.2 for next February. That’s way too late. Hopefully, someone will be able to fix this situation with me (it’s probably a bit too much for my Django skills). Once this is fixed, I’ll be able to work on all the Horizon plugins which are still in Experimental. Note that I already fixed all of Horizon’s reverse dependencies in Sid, but some of the patches need to be upstreamed.

Next work (from home): fixing piuparts

I’ve already written a first attempt at a patch for piuparts, so that it uses Python 3 and not Python 2 anymore. That patch has already been submitted as a merge request on Salsa, though I haven’t had the time to test it yet. What remains to be done is to actually test piuparts with this patch, and to fix debian/control so that it switches to Python 3.

Planet DebianSteve Kemp: Building a computer - part 3

This is part three in my slow journey towards creating a home-brew Z80-based computer. My previous post demonstrated writing some simple code, and getting it running under an emulator. It also described my planned approach:

  • Hookup a Z80 processor to an Arduino Mega.
  • Run code on the Arduino to emulate RAM reads/writes and I/O.
  • Profit, via the learning process.

I expect I'll have to get my hands dirty with a breadboard and naked chips in the near future, but for the moment I decided to start with the least effort. Erturk Kocalar has a website where he sells "shields" (read: expansion boards) which contain a Z80, and which are designed to plug into an Arduino Mega with no fuss. This is a simple design; I've seen a bunch of people demonstrate how to wire it up by hand, for example in this post.

Anyway, I figured I'd order one of those and get started on the easy part, the software. There was some sample code available from Erturk, but it wasn't ideal from my point of view because it mixed driving the Z80 with doing "other stuff". So I abstracted the core code required to interface with the Z80 and packaged it as a simple library.

The end result is that I have a z80 retroshield library which uses an Arduino mega to drive a Z80 with something as simple as this:

#include <z80retroshield.h>


//
// Our program, as hex.
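// (These opcodes write "Hello\n" to I/O port 1, then busy-loop forever.)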
//
unsigned char rom[32] =
{
    0x3e, 0x48, 0xd3, 0x01, 0x3e, 0x65, 0xd3, 0x01, 0x3e, 0x6c, 0xd3, 0x01,
    0xd3, 0x01, 0x3e, 0x6f, 0xd3, 0x01, 0x3e, 0x0a, 0xd3, 0x01, 0xc3, 0x16,
    0x00
};


//
// Our helper-object
//
Z80RetroShield cpu;


//
// RAM I/O function handler.
//
char ram_read(int address)
{
    return (rom[address]) ;
}


// I/O function handler.
void io_write(int address, char byte)
{
    if (address == 1)
        Serial.write(byte);
}


// Setup routine: Called once.
void setup()
{
    Serial.begin(115200);


    //
    // Setup callbacks.
    //
    // We have to setup a RAM-read callback, otherwise the program
    // won't be fetched from RAM and executed.
    //
    cpu.set_ram_read(ram_read);

    //
    // Then we setup a callback to be executed every time an "out (x),y"
    // instruction is encountered.
    //
    cpu.set_io_write(io_write);

    //
    // Configured.
    //
    Serial.println("Z80 configured; launching program.");
}


//
// Loop function: Called forever.
//
void loop()
{
    // Step the CPU.
    cpu.Tick();
}

All the logic of the program is contained in the Arduino sketch, and all the use of pins/RAM/IO is hidden away. As a recap, the Z80 will make requests for memory contents to fetch the instructions it wants to execute. For general-purpose input/output there are two instructions that are used:

IN A, (1)   ; Read a character from STDIN, store in A-register.
OUT (1), A  ; Write the character in A-register to STDOUT

Here 1 is the I/O address, and this is an 8 bit number. At the moment I've just configured the callback such that any write to I/O address 1 is dumped to the serial console.

Anyway I put together a couple of examples of increasing complexity, allowing me to prove that RAM read/writes work, and that I/O reads and writes work.
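For completeness, here is a sketch of what the input side could look like, assuming the library exposes a set_io_read callback symmetric to set_io_write (the name and signature are inferred by analogy, so check the library header before relying on them):

//
// I/O read handler: invoked for "IN A,(n)" instructions.
//
// For I/O address 1 we hand back the next character waiting on the
// Arduino serial port, or 0 if nothing has been typed yet.
//
char io_read(int address)
{
    if (address == 1 && Serial.available())
        return (Serial.read());

    return (0);
}

It would be registered in setup() next to the other callbacks, e.g. cpu.set_io_read(io_read);.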

I guess the next part is where I jump in complexity:

  • I need to wire a physical Z80 to a board.
  • I need to wire a PROM to it.
    • This will contain the program to be executed - hardcoded.
  • I need to provide power, and a clock to make the processor tick.

With a bunch of LEDs I'll have a Z80-system running, but it'll be isolated and hard to program. (Since I'll need to reflash the RAM/ROM-chip).

The next step would be getting it hooked up to a serial-console of some sort. And at that point I'll have a genuinely programmable standalone Z80 system.

Planet DebianSylvain Beucler: Debian LTS - July 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In July, the monthly sponsored hours were split evenly among contributors depending on their max availability - I declared max 30h and got 18.5h.

My time was mostly spent on front-desk duties, as well as improving our scripts & docs.

Current vulnerabilities triage:

  • CVE-2019-13117/libxslt CVE-2019-13118/libxslt: triage (affected, dla-needed)
  • CVE-2019-12781/python-django: triage (affected)
  • CVE-2019-12970/squirrelmail: triage (affected)
  • CVE-2019-13147/audiofile: triage (postponed)
  • CVE-2019-12493/poppler: jessie triage (postponed)
  • CVE-2019-13173/node-fstream: jessie triage (node-* not supported)
  • exiv2: jessie triage (5 CVEs, none to fix - CVE-2019-13108 CVE-2019-13109 CVE-2019-13110 CVE-2019-13112 CVE-2019-13114)
  • CVE-2019-13207/nsd: jessie triage (affected, postponed)
  • CVE-2019-11272/libspring-security-2.0-java: jessie triage (affected, dla-needed)
  • CVE-2019-13312/ffmpeg: (libav) jessie triage (not affected)
  • CVE-2019-13313/libosinfo: jessie triage (affected, postponed)
  • CVE-2019-13290/mupdf: jessie triage (not-affected)
  • CVE-2019-13351/jackd2: jessie triage (affected, postponed)
  • CVE-2019-13345/squid3: jessie triage (2 XSS: 1 unaffected, 1 reflected affected, dla-needed)
  • CVE-2019-11841/golang-go.crypto: jessie triage (affected, dla-needed)
  • Call for triagers for the upcoming weeks

Past undetermined issues triage:

  • libgig: contact maintainer about 17 pending undetermined CVEs
  • libsixel: contact maintainer about 6 pending undetermined CVEs
  • netpbm-free - actually an old Debian-specific fork: contact original reporter for PoCs and attach them to BTS; CVE-2017-2579 and CVE-2017-2580 not-affected, doubts about CVE-2017-2581

Documentation:

Tooling - bin/lts-cve-triage.py:

  • filter out 'undetermined' but explicitly 'ignored' packages (e.g. jasperreports)
  • fix formatting with no-colors output, hint that color output is available
  • display lts' nodsa sub-states
  • upgrade unsupported packages list to jessie

Worse Than FailureCodeSOD: Close to the Point

Lena inherited some C++ code which had issues regarding a timeout. While skimming through the code, one block in particular leapt out. This was production code which had been running in this state for some time.

if((pFile) && (pFile != (FILE *)(0xcdcdcdcd))) {
    fclose(pFile);
    pFile = NULL;
}

The purpose of this code is, as you might gather from the call to fclose, to close a file handle represented by pFile, a pointer to the handle. This code is mostly fine, but there is one big, glaring “hunh?”, and it’s this bit here: (pFile != (FILE *)(0xcdcdcdcd))

(FILE *)(0xcdcdcdcd) casts the number 0xcdcdcdcd to a file pointer: essentially it creates a pointer pointing at memory address 0xcdcdcdcd. If pFile points to that address, we won’t close pFile. Is there a reason for this? Not that Lena could determine from the code. Did the 0xcdcdcdcd come from anywhere specific? Probably a previous developer trying to track down a bug and dumping addresses from the debugger; 0xCD also happens to be the fill pattern Microsoft’s debug heap uses for uninitialized memory, so the check smells like an attempt to guard against an uninitialized pointer instead of fixing the missing initialization. How did it get into production code? How long had it been there? It was impossible to tell. It was also impossible to tell if it was secretly doing something important, so Lena made a note to dig into it later, but focused on solving the timeout bug which had started this endeavor.
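For contrast, a sketch of what the unremarkable version would look like (not code from the article): initialize the pointer, and the plain null check is all you need.

FILE *pFile = NULL;   // never left holding a garbage value such as 0xcdcdcdcd

/* ... later ... */
if (pFile) {
    fclose(pFile);
    pFile = NULL;
}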


Planet DebianKurt Kremitzki: Summer Update for FreeCAD & Debian Science Work

Hello, and welcome to my "summer update" on my free software work on FreeCAD and the Debian Science team. I call it a summer update because it was winter when I last wrote, and quite some time has elapsed since I fell out of the monthly update habit. This is a high-level summary of what I've been working on since March.

FreeCAD 0.18 Release & Debian 10 Full Freeze Timing

/images/freecadsplash.png


The official release date of FreeCAD 0.18 ( release notes ) is March 12, 2019, although the git tag for it wasn't pushed until March 14th. This timing was a bit unfortunate as the full freeze for Debian 10 went into effect March 12th, with a de-facto freeze date of March 2nd due to the 10 day testing migration period. To compound things, since this was my first Debian release as a packaging contributor, I didn't do things quite right such that while I probably could have gotten FreeCAD 0.18 into Debian 10, I didn't. Instead, what's available is a pre-release version from about a month before the release which is missing a few bugfixes and refinements.

On the positive side, this is an impetus for me to learn about Debian Backports, a way to provide non-bugfix updates to Debian Stable users. The 0.18 release line has already had several bugfix releases; I've currently got Debian Testing/Unstable as well as the Ubuntu Stable PPA up-to-date with version 0.18.3. As soon as I'm able, I'll get this version into Debian Backports, too.

FreeCAD PPA Improvements

Another nice improvement I've recently made is migrating the packaging for the Ubuntu Stable and Daily PPAs to Debian's GitLab instance at https://salsa.debian.org/science-team/freecad by creating the ppa/master and ppa/daily branches. Having all the Debian and Ubuntu packaging in one place means that propagating updates has become a matter of git merging and pushing. Once any changes are in place, I simply have to trigger an import and build on Launchpad for the stable releases. For the daily builds, changes are automatically synced and the debian directory from Salsa is combined with the latest synced upstream source from GitHub, so daily builds no longer have to be triggered manually. However, this has uncovered another problem in our process which is being worked on at the FreeCAD forums. (Thread: Finding a solution for the 'version.h' issue)

Science Team Package Updates

/images/bunny.png


The main Science Team packages I've been working on recently have been OpenCASCADE, Netgen, Gmsh, and OpenFOAM.

For OpenCASCADE, I have uploaded the third bugfix release in the 7.3.0 series. Unfortunately, their versioning scheme is a bit unusual, so this version is tagged 7.3.0p3. This is unfortunate because dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 evaluates to false. As such, I've uploaded this package as 7.3.3, with plans to contact upstream to discuss their bugfix release versioning scheme. Currently, version 7.4.0 has an upstream target release date for the end of August, so there will be an opportunity to convince them to release 7.4.1 instead of 7.4.0p1. If you're interested in the changes contained in this upload, you can refer to the upstream git log for more information.
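A quick way to see the problem on any Debian system (a sketch; dpkg exits non-zero because, in its ordering, letters sort before characters like '+', making 7.3.0p3 "older" than 7.3.0):

$ dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 && echo newer || echo not-newer
not-newer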

In collaboration with Nico Schlömer and Anton Gladky, the newest Gmsh, version 4.4.1, has been uploaded to wait in the Debian NEW queue. See the upstream changelog for more information on what's new.

I've also prepared the package for the newest version of Netgen, 6.2.1905. Unfortunately, uploading this is blocked because 6.2.1810 is still in Debian NEW. However, I've tested compiling FreeCAD against Netgen, and I've been able to get the integration with it working again, so once I'm able to do this upload, I'll be able to upload a new and improved FreeCAD with the power of Netgen meshing.

I've also begun working on packaging the latest OpenFOAM release, 1906. I've gotten a little sidetracked, though, as a peculiarity in the way upstream prepares their tarballs seems to be triggering a bug in GNU tar. I should have this one uploaded soon. For a preview of what'll be coming, see the release notes for version 1906.

GitLab CI Experimentation with salsa.debian.org

Some incredibly awesome Debian contributors have set up the ability to use GitLab CI to automate the testing of Debian packages (see documentation.)

I did a bit of experimentation with it. Unfortunately, both OpenCASCADE and FreeCAD exceeded the 2 hour time limit. There's a lot of promise in it for smaller packages, though!

Python 2 Removal in Debian Underway

/images/deadsnakes.jpeg


Per pythonclock.org, Python 2 has less than 5 months until its end-of-life, so the task of removing it for the next version of Debian has begun. For now, it's mainly limited to leaf packages with nothing depending on them. As such, I've uploaded Python 3-only packages for new upstream releases of python-fluids (a fluid dynamics engineering & design library) and python-ulmo (provides clean & simple access to public hydrology and climatology data).

Debian Developer Application

I've finally applied to become a full Debian Developer, which is an exciting prospect. I'll be more able to enact improvements without having to bug, well, mostly Anton, Andreas, and Tobias. (Thanks!) I'm also looking forward to having access to more resources to improve my packages on other architectures, particularly arm64 now that the Raspberry Pi 4 is out and potentially a serious candidate for a low-powered FreeCAD workstation.

The process is slow and calculating, as it should be, so it'll be some time before I'm officially in, but it sure will be cause for celebration.

Google Summer of Code Mentoring

/images/gsoc.png

CC-BY-SA-4.0, Aswinshenoy.


I'm mentoring a Google Summer of Code project for FreeCAD this year! (See forum thread.) My student is quite new to FreeCAD and Debian/Ubuntu, so the first half of the project has involved the relatively deep-end topics of using Debian packaging to distribute bugfixes for FreeCAD and learning by exploring related packages in its ecosystem. In particular, focus was given to OpenCAMLib, since there is a lot of user and developer interest in FreeCAD's potential for generating toolpaths for machining and manufacturing the models created in the program.

Now that he's officially swimming and not sinking, the next phase is working on making development and packaging-related improvements for FreeCAD on Windows, which is in even rougher shape than Debian/Ubuntu, but more his area of familiarity. Stay tuned for the final results!

Thanks to my sponsors

This work is made possible in part by contributions from readers like you! You can send moral support my way via Twitter @thekurtwk. Financial support is also appreciated at any level and possible on several platforms: Patreon, Liberapay, and PayPal.

Planet DebianPaul Wise: FLOSS Activities July 2019

Changes

Issues

Review

Administration

  • apt-xapian-index: migrated repo to Salsa, merged some branches and patches
  • Debian: redirect user support request, answer porterbox access query,
  • Debian wiki: ping team member, re-enable accounts, unblock IP addresses, whitelist domains, whitelist email addresses, send unsubscribe info, redirect support requests
  • Debian QA services: deploy changes
  • Debian PTS: deploy changes
  • Debian derivatives census: disable cron job due to design flaws

Communication

Sponsors

The File::LibMagic, purple-discord, librecaptcha & harmony work was sponsored by my employer. All other work was done on a volunteer basis.

,

Planet DebianJonathan Carter: Free Software Activities (2019-07)

DC19 Group Photo

Group photo above taken at DebConf19 by Aigars Mahinovs.

2019-07-03: Upload calamares-settings-debian (10.0.20-1) (CVE 2019-13179) to debian unstable.

2019-07-05: Upload calamares-settings-debian (10.0.25-1) to debian unstable.

2019-07-06: Debian Buster Live final ISO testing for release, also attended Cape Town buster release party.

2019-07-08: Sponsor package ddupdate (0.6.4-1) for debian unstable (mentors.debian.net request, RFS: #931582)

2019-07-08: Upload package btfs (2.19-1) to debian unstable.

2019-07-08: Upload package calamares (3.2.11-1) to debian unstable.

2019-07-08: Request update for util-linux (BTS: #931613).

2019-07-08: Upload package gnome-shell-extension-dashtodock (66-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-multi-monitors (18-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-system-monitor (38-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-tilix-dropdown (7-1) to debian unstable.

2019-07-08: Upload package python3-aniso8601 (7.0.0-1) to debian unstable.

2019-07-08: Upload package python3-flask-restful (0.3.7-2) to debian unstable.

2019-07-08: Upload package xfce4-screensaver (0.1.6) to debian unstable.

2019-07-09: Sponsor package wordplay (8.0-1) (mentors.debian.net request).

2019-07-09: Sponsor package blastem (0.6.3.2-1) (mentors.debian.net request) (Closes RFS: #931263).

2019-07-09: Upload gnome-shell-extension-workspaces-to-dock (50-1) to debian unstable.

2019-07-09: Upload bundlewrap (3.6.1-2) to debian unstable.

2019-07-09: Upload connectagram (1.2.9-6) to debian unstable.

2019-07-09: Upload fracplanet (0.5.1-5) to debian unstable.

2019-07-09: Upload fractalnow (0.8.2-4) to debian unstable.

2019-07-09: Upload gnome-shell-extension-dash-to-panel (19-2) to debian unstable.

2019-07-09: Upload powerlevel9k (0.6.7-2) to debian unstable.

2019-07-09: Upload speedtest-cli (2.1.1-2) to debian unstable.

2019-07-11: Upload tetzle (2.1.4+dfsg1-2) to debian unstable.

2019-07-11: Review mentors.debian.net package hipercontracer (1.4.1-1).

2019-07-15 – 2019-07-28: Attend DebCamp and DebConf!

My DebConf19 mini-report:

There is really too much to write about that happened at DebConf; I hope to get some time and write separate blog entries on those really soon.

  • Participated in Bursaries BoF, I was chief admin of DebConf bursaries in this cycle. Thanks to everyone who already stepped up to help with next year.
  • Gave a lightning talk titled “Can you install Debian within a lightning talk slot?” where I showed off Calamares on the latest official live media. Spoiler alert: it barely doesn’t fit in the allotted time, something to fix for bullseye!
  • Participated in a panel called “Surprise, you’re a manager!“.
  • Hosted “Debian Live BoF” – we made some improvements for the live images during the buster cycle, but there’s still a lot of work to do so we held a session to cut out our initial work for Debian 11.
  • Got the debbug and missed the day trip, I hope to return to this part of Brazil one day, so much to explore in just the surrounding cities.
  • The talk selection this year was good; there’s a lot that I learned and caught up on that I probably wouldn’t have done if it wasn’t for DebConf. Talks are recorded, so you can catch up on them (http archive, YouTube). PS: If you find something funny, please link it (with time stamp) on the FunnyMoments wiki page (that page is way too bare right now).

Planet DebianChris Lamb: Free software activities in July 2019

Here is my monthly update covering what I have been doing in the free software world during July 2019 (previous month):


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.


This month:

I spent significant amount of time working on our website this month, including:

  • Split out our non-fiscal sponsors with a description [...] and make them non-display three-in-a-row [...].
  • Correct references to "1&1 IONOS" (née Profitbricks). [...]
  • Lets not promote yet more ambiguity in our environment names! [...]
  • Recreate the badge image, saving the .svg alongside it. [...]
  • Update our fiscal sponsors. [...][...][...]
  • Tidy the weekly reports section on the news page [...], fixup the typography on the documentation page [...] and make all headlines stand out a bit more [...].
  • Drop some old CSS files and fonts. [...]
  • Tidy news page a bit. [...]
  • Fixup a number of issues in the report template and previous reports. [...][...][...][...][...][...]

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [...].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) [...]
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers. [...] but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [...].
  • Cease ignoring test failures in stable-backports. [...]
  • Add missing textual DESCRIPTION headers for .zip and "Mozilla"-optimised .zip files. [...]
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. [...]
  • Update some reporting:
    • Re-add "return code" noun to "Command foo exited with X" error messages. [...]
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. [...]
    • Skip the extra newline in Output:\nfoo. [...]
  • Add some explicit return values to appease Pylint, etc. [...]
  • Also include the python3-tlsh in the Debian test dependencies. [...]
  • Released and uploaded versions 116, 117, 118, 119 & 120. [...][...][...][...][...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Support OpenJDK ".jmod" files. [...]
  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. [...]
  • Pass --no-sandbox if necessary to bypass seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don't just run the tests but build the Debian package instead using Salsa's centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [...][...]
  • Update tests:
    • Don't build release Git tags on salsa.debian.org. [...]
    • Merge the debian branch into the master branch to simplify testing and deployment [...] and update debian/gbp.conf to match [...].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. [...]

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS (ELTS) project.


Uploads

I also made "sourceful" uploads to unstable to ensure migration to testing after recent changes that prevent maintainer-supplied packages entering bullseye for bfs (1.5-3), redis (5:5.0.5-2), lastpass-cli (1.3.3-2), python-daiquiri (1.5.0-3) and I finally performed a sponsored upload of elpy (1.29.1+40.gb929013-1).


FTP Team

As a Debian FTP assistant I ACCEPTed 19 packages: aiorwlock, bolt, caja-mediainfo, cflow, cwidget, dgit, fonts-smc-gayathri, gmt, gnuastro, guile-gcrypt, guile-sqlite3, guile-ssh, hepmc3, intel-gmmlib, iptables, mescc-tools, nyacc, python-pdal & scheme-bytestructures. I additionally filed a bug against scheme-bytestructures for having a seemingly-incomplete debian/copyright file. (#932466)

CryptogramAnother Attack Against Driverless Cars

In this piece of research, attackers successfully attack a driverless car system -- Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate cars) -- by following them with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot's sensors.

Boing Boing post.

Planet DebianMichael Prokop: Some useful bits about Linux hardware support and patched Kernel packages

Disclaimer: I started writing this blog post in May 2018, when Debian/stretch was the current stable release of Debian, but published this article in August 2019, so please keep the version information (Debian releases + kernels not being up2date) in mind.

The kernel version of Debian/stretch (4.9.0) didn’t support the RAID controller as present in Lenovo ThinkSystem SN550 blade servers yet. The RAID controller was known to be supported with Ubuntu 18.10 using kernel v4.15 as well as with Grml ISOs using kernel v4.15 and newer. Using a more recent Debian kernel version wasn’t really an option for my customer, as there was no LTS kernel version that could be relied on. Using the kernel version from stretch-backports could have been an option, though it would be our last resort only, since the customer this applied to controls the Debian repositories in use and we’d have to track security issues more closely, test new versions of the kernel on different kinds of hardware more often,… whereas the kernel version from Debian/stable is known to be working fine and is less in flux than the ones from backports. Alright, so it doesn’t support this new hardware model yet, but how to identify the relevant changes in the kernel to have a chance to get it supported in the stable Debian kernel?

Some bits about PCI IDs and related kernel drivers

We start by identifying the relevant hardware:

root@grml ~ # lspci | grep 'LSI.*RAID'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
root@grml ~ # lspci -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)

Which driver gets used for this device?

root@grml ~ # lspci -k -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
        Subsystem: Lenovo ThinkSystem RAID 530-4i Flex Adapter
        Kernel driver in use: megaraid_sas
        Kernel modules: megaraid_sas

So it’s the megaraid_sas driver, let’s check some version information:

root@grml ~ # modinfo megaraid_sas | grep version
version:        07.703.05.00-rc1
srcversion:     442923A12415C892220D5F0
vermagic:       4.15.0-1-grml-amd64 SMP mod_unload modversions

But how does the kernel know which driver should be used for this device? We start by listing further details about the hardware device:

root@grml ~ # lspci -n -s 0000:08:00.0
08:00.0 0104: 1000:001c (rev 01)

The 08:00.0 describes the hardware slot information ([domain:]bus:device.function), the 0104 describes the class (with 0104 being of type RAID bus controller, also see /usr/share/misc/pci.ids by searching for ‘C 01’ -> ‘04’), the (rev 01) obviously describes the revision number. We’re interested in the 1000:001c though. The 1000 identifies the vendor:

% grep '^1000' /usr/share/misc/pci.ids
1000  LSI Logic / Symbios Logic

The `001c` finally identifies the actual model. Having this information available, we can check the mapping of the megaraid_sas driver, using the `modules.alias` file of the kernel:

root@grml ~ # grep -i '1000.*001c' /lib/modules/$(uname -r)/modules.alias
alias pci:v00001000d0000001Csv*sd*bc*sc*i* megaraid_sas
root@grml ~ # modinfo megaraid_sas | grep -i 001c
alias:          pci:v00001000d0000001Csv*sd*bc*sc*i*

Bingo! Now we can check this against the Debian/stretch kernel, which doesn’t support this device yet:

root@stretch:~# modinfo megaraid_sas | grep version
version:        06.811.02.00-rc1
srcversion:     64B34706678212A7A9CC1B1
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
root@stretch:~# modinfo megaraid_sas | grep -i 001c
root@stretch:~#

No match here – bingo²! Now we know for sure that the ID 001c is relevant for us. How do we identify the corresponding change in the Linux kernel though?

The file drivers/scsi/megaraid/megaraid_sas.h of the kernel source lists the PCI device IDs supported by the megaraid_sas driver. Since we know that kernel v4.9 doesn’t support it yet, while it’s supported with v4.15, we can run "git log v4.9..v4.15 drivers/scsi/megaraid/megaraid_sas.h" in the git repository of the kernel to go through the relevant changes. It’s easier to run "git blame drivers/scsi/megaraid/megaraid_sas.h" though – then we’ll stumble upon our ID from before – `0x001C` – right at the top:

[...]
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   59) #define PCI_DEVICE_ID_LSI_VENTURA                 0x0014
754f1bae0f1e3 (Shivasharan S              2017-10-19 02:48:49 -0700   60) #define PCI_DEVICE_ID_LSI_CRUSADER                0x0015
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   61) #define PCI_DEVICE_ID_LSI_HARPOON                 0x0016
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   62) #define PCI_DEVICE_ID_LSI_TOMCAT                  0x0017
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   63) #define PCI_DEVICE_ID_LSI_VENTURA_4PORT               0x001B
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   64) #define PCI_DEVICE_ID_LSI_CRUSADER_4PORT      0x001C
[...]

Alright, the relevant change was commit 45f4f2eb3da3c:

commit 45f4f2eb3da3cbff02c3d77c784c81320c733056
Author: Sasikumar Chandrasekaran […]
Date:   Tue Jan 10 18:20:43 2017 -0500

    scsi: megaraid_sas: Add new pci device Ids for SAS3.5 Generic Megaraid Controllers
    
    This patch contains new pci device ids for SAS3.5 Generic Megaraid Controllers
    
    Signed-off-by: Sasikumar Chandrasekaran […]
    Reviewed-by: Tomas Henzl […]
    Signed-off-by: Martin K. Petersen […]

diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index fdd519c1dd57..cb82195a8be1 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -56,6 +56,11 @@
 #define PCI_DEVICE_ID_LSI_INTRUDER_24          0x00cf
 #define PCI_DEVICE_ID_LSI_CUTLASS_52           0x0052
 #define PCI_DEVICE_ID_LSI_CUTLASS_53           0x0053
+#define PCI_DEVICE_ID_LSI_VENTURA                  0x0014
+#define PCI_DEVICE_ID_LSI_HARPOON                  0x0016
+#define PCI_DEVICE_ID_LSI_TOMCAT                   0x0017
+#define PCI_DEVICE_ID_LSI_VENTURA_4PORT                0x001B
+#define PCI_DEVICE_ID_LSI_CRUSADER_4PORT       0x001C
[...]
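As an aside, the modules.alias lookup from earlier is easy to script if you want to repeat the check for several kernels. Here’s a minimal sketch in Python – the module name and vendor/device ID are the ones from this example, and the function is just a plain text scan of modules.alias, not part of any existing tool:

import os

def module_claims_pci_id(modules_alias_path, module, vendor, device):
    """Return True if `module` has a pci: alias entry matching vendor/device."""
    # Alias lines look like:
    #   alias pci:v00001000d0000001Csv*sd*bc*sc*i* megaraid_sas
    needle = "v0000%04Xd0000%04X" % (vendor, device)
    with open(modules_alias_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3 and parts[0] == "alias" and parts[2] == module:
                if parts[1].startswith("pci:") and needle in parts[1]:
                    return True
    return False

# Check the running kernel for the 1000:001c device from above:
path = "/lib/modules/%s/modules.alias" % os.uname().release
print(module_claims_pci_id(path, "megaraid_sas", 0x1000, 0x001C))

Note that this only handles explicit vendor/device entries; wildcard aliases would need fuzzier matching, but it is good enough for a quick comparison between kernels.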

Custom Debian kernel packages for testing

Now that we identified the relevant change, what’s the easiest way to test it? There’s an easy way to build a custom Debian package, based on the official Debian kernel but including further patch(es), thanks to Ben Hutchings. Make sure to have a Debian system available (I was running this inside an amd64 system, building for amd64), with corresponding deb-src entries in your apt’s sources.list and enough free disk space, then run:

% sudo apt install dpkg-dev build-essential devscripts fakeroot
% apt-get source -t stretch linux
% cd linux-*
% sudo apt-get build-dep linux
% bash debian/bin/test-patches -f amd64 -s none 0001-scsi-megaraid_sas-Add-new-pci-device-Ids-for-SAS3.5-.patch

This generates something like a linux-image-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb for you (next to further Debian packages like linux-headers-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb + linux-image-4.9.0-6-amd64-dbg_4.9.88-1+deb9u1a~test_amd64.deb), ready for installing and testing on the affected system. The Kernel Handbook documents this procedure as well; I just wasn’t aware of the handy `debian/bin/test-patches` script until now.

JFTR: sadly the patch with the additional PCI_DEVICE_ID* entries was not enough (also see #900349); we seem to need further patches from the changes between v4.9 and v4.15. This turned out to be no longer relevant for my customer though, and it’s also working with Debian/buster nowadays.

Worse Than FailureCodeSOD: What a Happy Date

As is the case with pretty much any language these days, Python comes with robust date handling functionality. If you want to know something like the day of the month, datetime.now().day will tell you. Simple, easy, and of course, just an invitation for someone to invent their own.
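For reference, here’s what the built-in accessors look like – a minimal sketch, nothing beyond the standard library:

from datetime import datetime

now = datetime.now()
day = now.day      # day of the month as an int, e.g. 8
hour = now.hour    # hour of the day as an int, e.g. 22
# strftime is still useful, but for formatted output rather than extracting numbers:
label = now.strftime("%Y-%m-%d %H:%M")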

Jan was witness to a little date-time related office politics. This particular political battle started during a code review. Klaus had written some date mangling code, relying heavily on strftime to format dates as strings and then parse them back in as integers. Richard, quite reasonably, pointed out that Klaus was taking the long way around, and maybe Klaus should possibly think about doing it in a simpler fashion.

“So, you don’t understand the code?” Klaus asked.

“No, I understand it,” Richard replied. “But it’s far too complicated. You’re doing a simple task- getting the day of the month! The code should be simple.”

“Ah, so it’s too complicated, so you can’t understand it.”

“Just… write it the simple way. Use the built-in accessor.”

So, Klaus made his revisions, and merged the revised code.

import datetime
# ...
now = datetime.datetime.now()  # Richard
date = now.strftime("%d")  # Richard, this is a string over here
date_int = int(date)  # day number, int("08") = 8, so no problem here
hour = now.hour  # Richard :)))))
hour_int = int(hour)  # int hour, e.g. if it's 22:36 then hour = 22

Richard did not have a big :))))) on his face when he saw that in the master branch.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

TEDStages of Life: Notes from Session 5 of TEDSummit 2019

Yilian Cañizares rocks the TED stage with a jubilant performance of her signature blend of classic jazz and Cuban rhythms. She performs at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The penultimate session of TEDSummit 2019 had a bit of everything — new thoughts on aging, loneliness and happiness as well as breakthrough science, music and even a bit of comedy.

The event: TEDSummit 2019, Session 5: Stages of Life, hosted by Kelly Stoetzel and Alex Moura

When and where: Wednesday, July 24, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Nicola Sturgeon, Sonia Livingstone, Howard Taylor, Sara-Jane Dunn, Fay Bound Alberti, Carl Honoré

Opening: Raconteur Mackenzie Dalrymple telling the story of the Goodman of Ballengeich

Music: Yilian Cañizares and her band, rocking the TED stage with a jubilant performance that blends classic jazz and Cuban rhythms

Comedy: Amidst a head-spinning program of big (and often heavy) ideas, a welcomed break from comedian Omid Djalili, who lightens the session with a little self-deprecation and a few barbed cultural observations

The talks in brief:

“In the world we live in today, with growing divides and inequalities, with disaffection and alienation, it is more important than ever that we … promote a vision of society that has well-being, not just wealth, at its very heart,” says Nicola Sturgeon, First Minister of Scotland. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Nicola Sturgeon, First Minister of Scotland

Big idea: It’s time to challenge the monolithic importance of GDP as a quality-of-life metric — and paint a broader picture that also encompasses well-being.

How? In 2018, Scotland, Iceland and New Zealand established the Wellbeing Economy Governments group to challenge the supremacy of GDP. The leaders of these countries — who are, incidentally, all women — believe policies that promote happiness (including equal pay, childcare and paternity rights) could help decrease alienation in its citizens and, in turn, build resolve to confront global challenges like inequality and climate change.

Quote of the talk: “Growth in GDP should not be pursued at any and all cost … The goal of economic policy should be collective well-being: how happy and healthy a population is, not just how wealthy a population is.”


Sonia Livingstone, social psychologist

Big idea: Parents often view technology as either a beacon of hope or a developmental poison, but the biggest influence on their children’s life choices is how they help them navigate this unavoidable digital landscape. Society as a whole can positively impact these efforts.

How? Sonia Livingstone’s own childhood was relatively analog, but her research has been focused on how families embrace new technology today. Changes abound in the past few decades — whether it’s intensified educational pressures, migration, or rising inequality — yet it’s the digital revolution that remains the focus of our collective apprehension. Livingstone’s research suggests that policing screen time isn’t the answer to raising a well-rounded child, especially at a time when parents are trying to live more democratically with their children by sharing decision-making around activities like gaming and exploring the internet. Leaders and institutions alike can support a positive digital future for children by partnering with parents to guide activities within and outside of the home. Instead of criticizing families for their digital activities, Livingstone thinks we should identify what real-world challenges they’re facing, what options are available to them and how we can support them better.

Quote of the talk: “Screen time advice is causing conflict in the family, and there’s no solid evidence that more screen time increases childhood problems — especially compared with socio-economic or psychological factors. Restricting children breeds resistance, while guiding them builds judgment.”


Howard Taylor, child safety advocate

Big idea: Violence against children is an endemic issue worldwide, with rates of reported incidence increasing in some countries. We are at a historical moment that presents us with a unique opportunity to end the epidemic, and some countries are already leading the way.

How? Howard Taylor draws attention to Sweden and Uganda, two very different countries that share an explicit commitment to ending violence against children. Through high-level political buy-in, data-driven strategy and tactical legislative initiatives, the two countries have already made progress. These solutions and others are all part of INSPIRE, a set of strategies created by an alliance of global organizations as a roadmap to eliminating the problem. If we put in the work, Taylor says, a new normal will emerge: generations whose paths in life will be shaped by what they do — not what was done to them.

Quote of the talk: “What would it really mean if we actually end violence against children? Multiply the social, cultural and economic benefits of this change by every family, every community, village, town, city and country, and suddenly you have a new normal emerging. A generation would grow up without experiencing violence.”


“The first half of this century is going to be transformed by a new software revolution: the living software revolution. Its impact will be so enormous that it will make the first software revolution pale in comparison,” says computational biologist Sara-Jane Dunn. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sara-Jane Dunn, computational biologist

Big idea: In the 20th century, computer scientists inscribed machine-readable instructions on tiny silicon chips, completely revolutionizing our lives and workplaces. Today, a “living software” revolution centered around organisms built from programmable cells is poised to transform medicine, agriculture and energy in ways we can scarcely predict.

How? By studying how embryonic stem cells “decide” to become neurons, lung cells, bone cells or anything else in the body, Sara-Jane Dunn seeks to uncover the biological code that dictates cellular behavior. Using mathematical models, Dunn and her team analyze the expected function of a cellular system to determine the “genetic program” that leads to that result. While they’re still a long way from compiling living software, they’ve taken a crucial early step.

Quote of the talk: “We are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter into the era of an operating system that runs living software.”


Fay Bound Alberti, cultural historian

Big idea: We need to recognize the complexity of loneliness and its ever-transforming history. It’s not just an individual and psychological problem — it’s a social and physical one.

Why? Loneliness is a modern-day epidemic, with a history that’s often recognized solely as a product of the mind. Fay Bound Alberti believes that interpretation is limiting. “We’ve neglected [loneliness’s] physical effects — and loneliness is physical,” she says. She points to how crucial touch, smell, sound, human interaction and even nostalgic memories of sensory experiences are to coping with loneliness, making people feel important, seen and helping to produce endorphins. By reframing our perspective on this feeling of isolation, we can better understand how to heal it.

Quote of talk: “I am suggesting we need to turn to the physical body, we need to understand the physical and emotional experiences of loneliness to be able to tackle a modern epidemic. After all, it’s through our bodies, our sensory bodies, that we engage with the world.”

Fun fact: “Before 1800 there was no word for loneliness in the English language. There was something called: ‘oneliness’ and there were ‘lonely places,’ but both simply meant the state of being alone. There was no corresponding emotional lack and no modern state of loneliness.”


“Whatever age you are: own it — and then go out there and show the world what you can do!” says Carl Honoré. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carl Honoré, writer, thinker and activist

Big idea: Stop the lazy thinking around age and the “cult of youth” — it’s not all downhill from 40.

How? We need to debunk the myths and stereotypes surrounding age — beliefs like “older people can’t learn new things” and “creativity belongs to the young.” There are plenty of trailblazers and changemakers who came into their own later in life, from artists and musicians to physicists and business leaders. Studies show that people who fear and feel bad about aging are more likely to suffer physical effects as if age is an actual affliction rather than just a number. The first step to getting past that is by creating new, more positive societal narratives. Honoré offers a set of simple solutions — the two most important being: check your language and own your age. Embrace aging as an adventure, a process of opening rather than closing doors. We need to feel better about aging in order to age better.

Quote of the talk: “Whatever age you are: own it — and then go out there and show the world what you can do!”

TEDWhat Brexit means for Scotland: A Q&A with First Minister Nicola Sturgeon

First Minister of Scotland Nicola Sturgeon spoke at TEDSummit on Wednesday in Edinburgh about her vision for making collective well-being the main aim of public policy and the economy. (Watch her full talk on TED.com.) That same morning, Boris Johnson assumed office as Prime Minister of the United Kingdom, the latest episode of the Brexit drama that has engulfed UK politics. During the 2016 referendum, Scotland voted against Brexit.

After her talk, Chris Anderson, the Head of TED, joined Sturgeon, who’s been vocally critical of Johnson, to ask a few questions about the current political landscape. Watch their exchange below.


Cory DoctorowHoustonites! Come see Hank Green and me in conversation tomorrow night!

Hank Green and I are doing a double act tomorrow night, July 31, as part of the tour for the paperback of his debut novel, An Absolutely Remarkable Thing. It’s a ticketed event (admission includes a copy of Hank’s book), and we’re presenting at 7PM at Spring Forest Middle School in association with Blue Willow Bookshop. Hope to see you there!

Krebs on SecurityCapital One Data Theft Impacts 106M People

Federal prosecutors this week charged a Seattle woman with stealing data from more than 100 million credit applications made with Capital One Financial Corp. Incredibly, much of this breach played out publicly over several months on social media and other open online platforms. What follows is a closer look at the accused, and what this incident may mean for consumers and businesses.

Paige “erratic” Thompson, in an undated photo posted to her Slack channel.

On July 29, FBI agents arrested Paige A. Thompson on suspicion of downloading nearly 30 GB of Capital One credit application data from a rented cloud data server. Capital One said the incident affected approximately 100 million people in the United States and six million in Canada.

That data included approximately 140,000 Social Security numbers and approximately 80,000 bank account numbers on U.S. consumers, and roughly 1 million Social Insurance Numbers (SINs) for Canadian credit card customers.

“Importantly, no credit card account numbers or log-in credentials were compromised and over 99 percent of Social Security numbers were not compromised,” Capital One said in a statement posted to its site.

“The largest category of information accessed was information on consumers and small businesses as of the time they applied for one of our credit card products from 2005 through early 2019,” the statement continues. “This information included personal information Capital One routinely collects at the time it receives credit card applications, including names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income.”

The FBI says Capital One learned about the theft from a tip sent via email on July 17, which alerted the company that some of its leaked data was being stored out in the open on the software development platform Github. That Github account was for a user named “Netcrave,” which includes the resume and name of one Paige A. Thompson.

The tip that alerted Capital One to its data breach.

The complaint doesn’t explicitly name the cloud hosting provider from which the Capital One credit data was taken, but it does say the accused’s resume states that she worked as a systems engineer at the provider between 2015 and 2016. That resume, available on Gitlab here, reveals Thompson’s most recent employer was Amazon Inc.

Further investigation revealed that Thompson used the nickname “erratic” on Twitter, where she spoke openly over several months about finding huge stores of data intended to be secured on various Amazon instances.

The Twitter user “erratic” posting about tools and processes used to access various Amazon cloud instances.

According to the FBI, Thompson also used a public Meetup group under the same alias, where she invited others to join a Slack channel named “Netcrave Communications.”

KrebsOnSecurity was able to join this open Slack channel Monday evening and review many months of postings apparently made by Erratic about her personal life, interests and online explorations. One of the more interesting posts by Erratic on the Slack channel is a June 27 comment listing various databases she found by hacking into improperly secured Amazon cloud instances.

That posting suggests Erratic may also have located tens of gigabytes of data belonging to other major corporations:

According to Erratic’s posts on Slack, the two items in the list above beginning with “ISRM-WAF” belong to Capital One.

Erratic also posted frequently to Slack about her struggles with gender identity, lack of employment, and persistent suicidal thoughts. In several conversations, Erratic makes references to running a botnet of sorts, although it is unclear how serious those claims were. Specifically, Erratic mentions one botnet involved in cryptojacking, which uses snippets of code installed on Web sites — often surreptitiously — designed to mine cryptocurrencies.

None of Erratic’s postings suggest Thompson sought to profit from selling the data taken from various Amazon cloud instances she was able to access. But it seems likely that at least some of that data could have been obtained by others who may have followed her activities on different social media platforms.

Ray Watson, a cybersecurity researcher at cloud security firm Masergy, said the Capital One incident contains the hallmarks of many other modern data breaches.

“The attacker was a former employee of the web hosting company involved, which is what is often referred to as insider threats,” Watson said. “She allegedly used web application firewall credentials to obtain privilege escalation. Also the use of Tor and an offshore VPN for obfuscation are commonly seen in similar data breaches.”

“The good news, however, is that Capital One Incidence Response was able to move quickly once they were informed of a possible breach via their Responsible Disclosure program, which is something a lot of other companies struggle with,” he continued.

In Capital One’s statement about the breach, company chairman and CEO Richard D. Fairbank said the financial institution fixed the configuration vulnerability that led to the data theft and promptly began working with federal law enforcement.

“Based on our analysis to date, we believe it is unlikely that the information was used for fraud or disseminated by this individual,” Fairbank said. “While I am grateful that the perpetrator has been caught, I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected and I am committed to making it right.”

Capital One says it will notify affected individuals via a variety of channels, and make free credit monitoring and identity protection available to everyone affected.

Bloomberg reports that in court on Monday, Thompson broke down and laid her head on the defense table during the hearing. She is charged with a single count of computer fraud and faces a maximum penalty of five years in prison and a $250,000 fine. Thompson will be held in custody until her bail hearing, which is set for August 1.

A copy of the complaint against Thompson is available here.

Update, 3:38 p.m. ET: I’ve reached out to several companies that appear to be listed in the last screenshot above. Infoblox [an advertiser on this site] responded with the following statement:

“Infoblox is aware of the pending investigation of the Capital One hacking attack, and that Infoblox is among the companies referenced in the suspected hacker’s alleged online communications. Infoblox is continuing to investigate the matter, but at this time there is no indication that Infoblox was in any way involved with the reported Capital One breach. Additionally, there is no indication of an intrusion or data breach involving Infoblox causing any customer data to be exposed.”

CryptogramACLU on the GCHQ Backdoor Proposal

Back in January, two senior GCHQ officials proposed a specific backdoor for communications systems. It was universally derided as unworkable -- by me, as well. Now Jon Callas of the ACLU explains why.

CryptogramAttorney General William Barr on Encryption Policy

Yesterday, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity," and not "nuclear launch codes." This is true, but ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been an NSA operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications: data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law-enforcement access -- than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: More news articles.

EDITED TO ADD (7/28): Gen. Hayden comments.

EDITED TO ADD (7/30): Good response by Robert Graham.

Worse Than FailureThis Process is Nuts

A great man once said "I used to be over by the window, and I could see the squirrels, and they were merry." As pleasing of a sight as that was, what if the squirrels weren't merry?

Grady had an unpleasant experience with bushy-tailed rodents at a former job. Before starting at the Fintech firm as a data scientist, he was assured the Business Intelligence department was very advanced and run by an expert. They needed Grady to manipulate large data sets and implement machine learning to help out Lenny, the resident BI "expert". It quickly became apparent that Lenny didn't put the "Intelligence" in Business Intelligence.

Lenny was a long-term contractor who started the BI initiative from the ground-up. His previous work as a front-end developer led to his decision to use PHP for the ETL process. This one-of-a-kind monstrosity made it as unstable as a house of cards in a hurricane and the resultant data warehouse was more like a data cesspool.

"This here is the best piece of software in the whole company," Lenny boasted. "They tell me you're really smart, so you'll figure out how it works on your own. My work is far too important and advanced for me to be bothered with questions!" Lenny told Grady sternly.

Grady, left to fend for himself, spent weeks stumbling through code with very few comments and no existing documentation. He managed to deduce the main workflow for the ETL and warehouse process and it wasn't pretty. The first part of the ETL process deleted the entire existing data warehouse, allowing for a "fresh start" each day. If an error occurred during the ETL, rather than fail gracefully, the whole process crashed without restoring the data warehouse that was wiped out.

Grady found that the morning ETL run failed more often than not. Since Lenny never bothered to stroll in until 10 AM, the people that depended on data warehouse reports loudly complained to Grady. Having no clue how to fix it, he would tell them to be patient. Lenny would saunter in and start berating him "Seriously? Why haven't you figured out how to fix this yet?!" Lenny would spend an hour doing damage control, then disappear for a 90 minute lunch break.

One day, an email arrived informing everyone that Lenny was no longer with the company after exercising an obscure opt-out clause in his contract. Grady suddenly became the senior-most BI developer and inherited Lenny's trash pile. Determined to find the cause of the errors, he dug into parts of the code Lenny strictly forbade him to enter. Hoping to find any semblance of logging that might help, he scoured for hours.

Grady finally started seeing commands called "WritetoSkype". It sounded absurd, but it almost seemed like Lenny was logging to a Skype channel during the ETL run. Grady created a Skype account and subscribed to LennysETLLogging. All he found there was a bunch of dancing penguin emoticons, written one at a time.

Grady scrolled and scrolled and scrolled some more as thousands of dancing penguins written during the day's run performed for him. He finally reached the bottom and found an emoticon of a squirrel eating an acorn. Looking back at the code, WritetoSkype sent (dancingpenguin) when a step succeeded and (heidy) when a step failed. It was far from useful logging, but Grady now had a clear mission - Exterminate all the squirrels.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Krebs on SecurityNo Jail Time for “WannaCry Hero”

Marcus Hutchins, the “accidental hero” who helped arrest the spread of the global WannaCry ransomware outbreak in 2017, will receive no jail time for his admitted role in authoring and selling malware that helped cyberthieves steal online bank account credentials from victims, a federal judge ruled Friday.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image: twitter.com/malwaretechblog

The British security enthusiast enjoyed instant fame after the U.K. media revealed he’d registered and sinkholed a domain name that researchers later understood served as a hidden “kill switch” inside WannaCry, a fast-spreading, highly destructive strain of ransomware which propagated through a Microsoft Windows exploit developed by and subsequently stolen from the U.S. National Security Agency.

In August 2017, FBI agents arrested then 23-year-old Hutchins on suspicion of authoring and spreading the “Kronos” banking trojan and a related malware tool called UPAS Kit. Hutchins was released shortly after his arrest, but ordered to remain in the United States pending trial.

Many in the security community leaped to his defense at the time, noting that the FBI’s case appeared flimsy and that Hutchins had worked tirelessly through his blog to expose cybercriminals and their malicious tools. Hundreds of people donated to his legal defense fund.

In September 2017, KrebsOnSecurity published research which strongly suggested Hutchins’ dozens of alter egos online had a fairly lengthy history of developing and selling various malware tools and services. In April 2019, Hutchins pleaded guilty to criminal charges of conspiracy and to making, selling or advertising illegal wiretapping devices.

At his sentencing hearing July 26, U.S. District Judge Joseph Peter Stadtmueller said Hutchins’ action in halting the spread of WannaCry was far more consequential than the two malware strains he admitted authoring, and sentenced him to time served plus one year of supervised release. 

Marcy Wheeler, an independent journalist who live-tweeted and blogged about the sentencing hearing last week, observed that prosecutors failed to show convincing evidence of specific financial losses tied to any banking trojan victims, virtually all of whom were overseas — particularly in Hutchins’ home in the U.K.

“When it comes to matter of loss or gain,” Wheeler wrote, quoting Judge Stadtmeuller. “the most striking is comparison between you passing Kronos and WannaCry, if one looks at loss & numbers of infections, over 8B throughout world w/WannaCry, and >120M in UK.”

“This case should never have been prosecuted in the first place,” Wheeler wrote. “And when Hutchins tried to challenge the details of the case — most notably the one largely ceded today, that the government really doesn’t have evidence that 10 computers were damaged by anything Hutchins did — the government doubled down and issued a superseding indictment that, because of the false statements charge, posed a real risk of conviction.”

Hutchins’ conviction means he will no longer be allowed to stay in or visit the United States, although Judge Stadtmeuller reportedly suggested Hutchins should seek a presidential pardon, which would enable him to return and work here.

“Incredibly thankful for the understanding and leniency of the judge, the wonderful character letter you all sent, and everyone who helped me through the past two years, both financially and emotionally,” Hutchins tweeted immediately after the sentencing. “Once t[h]ings settle down I plan to focus on educational blog posts and livestreams again.”

Cory DoctorowPodcast: Adblocking: How About Nah?

In my latest podcast (MP3), I read my essay Adblocking: How About Nah?, published last week on EFF’s Deeplinks; it’s the latest installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive, and how that role is changing now that yesterday’s scrappy startups have become today’s bloated incumbents, determined to prevent anyone from disrupting them they way they disrupted tech in their early days.

At the height of the pop-up wars, it seemed like there was no end in sight: the future of the Web would be one where humans adapted to pop-ups, then pop-ups found new, obnoxious ways to command humans’ attention, which would wane, until pop-ups got even more obnoxious.

But that’s not how it happened. Instead, browser vendors (beginning with Opera) started to ship on-by-default pop-up blockers. What’s more, users—who hated pop-up ads—started to choose browsers that blocked pop-ups, marginalizing holdouts like Microsoft’s Internet Explorer, until they, too, added pop-up blockers.

Chances are, those blockers are in your browser today. But here’s a funny thing: if you turn them off, you won’t see a million pop-up ads that have been lurking unseen for all these years.

Because once pop-up ads became invisible by default to an ever-larger swathe of Internet users, advertisers stopped demanding that publishers serve pop-up ads. The point of pop-ups was to get people’s attention, but something that is never seen in the first place can’t possibly do that.

MP3

Rondam RamblingsFedex: when it absolutely, positively has to get stuck in the system for over two months

I have seen some pretty serious corporate bureaucratic dysfunction over the years, but I think this one takes the cake: on May 23, we shipped a package via Fedex from California to Colorado.  The package required a signature.  It turned out that the person we sent it to had moved, and so was not able to sign for the package, and so it was not delivered. Now, the package has our return address on

Planet DebianCandy Tsai: Outreachy Week 8 – Week 9: Remote or In-Office Working

The Week 9 blog prompt recommended by Outreachy was to write about my career goals. To be honest, this is a really hard topic for me. As long as a career path involves some form of coding, creating and learning new things, I’m willing to take it on. The best situation would be one that also does something good for society. This might be because that “something that I am too passionate about” doesn’t yet exist in my life. For now, I wish I’d still be coding 5 years from now. It’s just that simple. The only thing that I would like to see improvement on is the gender balance in this industry.

As for working environment, I would like to share some thoughts after having experienced both extremes of totally remote work and complete in-office work. There are a lot of articles out there comparing the pros and cons. Here are just my opinions on the time spent not working:

  • Dozing off
  • Socializing

Dozing off

Our concentration time is limited and there will definitely be times when we doze off a bit. Here is just a list of things that I have done before in both places. I think I’m being too honest here.

Office:

  • Browsing random pages
  • Checking useless e-mails
  • Talking to someone else also dozing off
  • Using social apps (e.g. Messenger)

Hoping people don’t think I’m doing these things for the whole day.

Remote:

  • Cook something to eat
  • Laundry or other house chores
  • Watch videos
  • Have a German lesson for an hour

I actually don’t take breaks between meals when working remotely.

In conclusion, I think dozing off in an office really really fits the definition of purely wasting time. You have peer pressure to look productive the whole 8 hours which just simply isn’t human. The things I do when I’m working remotely are actually things done after work from office. So I’ll give a vote for remote here.

Socializing

Office:

I had colleagues that I would love to go out with outside of work when I had an office job. One of the reasons that I stayed in a job is because of my colleagues. They were wonderful people and also great “friends”.

Remote:

The main means of communication is either text or video chat. Usually, they are for “work” purposes. I think my mentors are already kind enough to be there to support me whenever I’m stuck, and I’m grateful for that! I don’t want to make them feel like they need to spend that much time on me. Although this might be different from “real” remote work, I think it probably won’t be too distant from what I’m experiencing right now. I wouldn’t really want to open a video chat specifically to talk about our daily lives.

I would vote for an office environment in this case since you can work and make friends at the same time which is pretty convenient for an introvert like me. If I don’t feel like making new friends, then probably I would choose remote work. I think I probably will change my preference as I get older.

Last but not least, as always my progress report for debci.

Video Report of the Internship

Link: https://youtu.be/89r4HqJL8KE

Week 8

  • Filming and editing my video for sharing the debci project for DebConf 2019
  • Fixing merge requests

Week 9

Worse Than FailureCodeSOD: Some Kind of Magic

We all have our little bits of sloppiness and our bad habits. Most of us have more than one. One place I'm likely to get lazy, especially as I'm feeling my way around a problem, is with magic numbers. I always mean to go back and replace them with a constant, but sometimes there's another fire you need to put out and you just don't get back to it till somebody calls it out in a code review.

Then, of course, there are the folks who go too far. I once got a note complaining that I shouldn't have used 2*PI, but instead should have created a new constant, TAU. I disavow the need for tau, but my critic said magic numbers, like two, were disallowed, so I said "ciao" and tau is there now.

Angela A, who's had some experience with bad constants before, has found a new one.

// Decimal constant for value of 1
static constant float THIRTY = 30.0f;

The decimal constant for the value of 1 is THIRTY.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianRuss Allbery: Review: All the Birds in the Sky

Review: All the Birds in the Sky, by Charlie Jane Anders

Publisher: Tor
Copyright: January 2016
ISBN: 1-4668-7112-1
Format: Kindle
Pages: 315

When Patricia was six years old, she rescued a wounded bird, protected it from her sister, discovered that she could talk to animals, and found her way to the Parliament Tree. There, she was asked the Endless Question, which she didn't know how to answer, and was dumped back into her everyday life. Her magic apparently disappeared again, except not quite entirely.

Laurence liked video games and building things. From schematics he found on the Internet, he built a wrist-watch time machine that could send him two seconds forward into the future. That was his badge of welcome, the thing that marked him as part of the group of cool scientists and engineers, when he managed to sneak away to visit a rocket launch.

Patricia and Laurence meet in junior high school, where both of them are bullied and awkward and otherwise friendless. They strike up an unlikely friendship based on actually listening to each other, Patricia getting Laurence out of endless outdoor adventures arranged by his parents, and the supercomputer Laurence is building in his closet. But it's not clear whether that friendship can survive endless abuse, the attention of an assassin, and their eventual recruitment into a battle between magic and technology of which they're barely aware.

So, first, the world-building in All the Birds in the Sky is subtly brilliant. I had been avoiding this book because I'd gotten the impression it was surreal and weird, which often doesn't work for me. But it's not, and that's due to careful and deft authorial control. This is a book in which two kids are sitting in a shopping mall watching people's feet go by on an escalator and guessing at their profession, and this happens:

The man in black slippers and worn gray socks was an assassin, said Patricia, a member of a secret society of trained killers who stalked their prey, looking for the perfect moment to strike and kill them undetected.

"It's amazing how much you can tell about people from their feet," said Patricia. "Shoes tell the whole story."

"Except us," said Laurence. "Our shoes are totally boring. You can't tell anything about us."

"That's because our parents pick out our shoes," said Patricia. "Just wait until we're grown up. Our shoes will be insane."

In fact, Patricia had been correct about the man in the gray socks and black shoes. His name was Theodolphus Rose, and he was a member of the Nameless Order of Assassins. He had learned 873 ways to murder someone without leaving even a whisper of evidence, and he'd had to kill 419 people to reach the number nine spot in the NOA hierarchy. He would have been very annoyed to learn that his shoes had given him away, because he prided himself on blending with his surroundings.

Anders maintains that tone throughout the book: dry, a little wry, matter-of-fact with a quirked smile, and utterly certain. The oddity of this world is laid out on the page without apologies, clear and comprehensible and orderly even when it's wildly strange. It's very easy as a reader to just start nodding along with magical academies and trans-dimensional experiments because Anders gives you the structure, pacing, and description that you need to build a coherent image.

The background work is worthy of this book's Nebula award. I just wish I'd liked the story better.

The core of my dislike is the characters, although for two very different reasons. Laurence is straight out of YA science fiction: geeky, curious, bullied, desperate to belong to something, loyal, and somewhere between stubborn and indecisive. But below that set of common traits, I never connected with him. He was just... there, doing predictable Laurence things and never surprising me or seeming to grow very much.

Laurence eventually goes to work for the Ten Percent Project, which is trying to send 10% of the population into space because clearly the planet is doomed. The blindness of that goal, and the degree to which the founder of that project resembled Elon Musk, was a bit too real to be funny. I kept waiting for Anders to either make a sharper satirical point or to let Laurence develop his own character outside of the depressing reality of techno-utopianism, but the story stayed finely balanced on that knife edge until it stopped being funny and started being awful.

Patricia, on the other hand, I liked from the very beginning. She's independent, determined, angry, empathetic, principled, and thoughtful, and immediately became the character I was cheering for. And every other major character in this novel is absolutely horrific to her.

The sheer amount of abusive gaslighting Patricia is subjected to in this book made me ill. Everyone from her family to her friends to her fellow magicians demean her, squash her, ignore her, trivialize her, shove her into boxes, try to get her to stop believing in things that happened to her, and twist every bit of natural ambition she has into new forms of prison. Even Laurence participates in this; although he's too clueless to be a major source of it, he's set up as her one port in the storm and then basically abandons her. I started the book feeling sorry for her; by the end of the book, I wanted Patricia to burn her life down with fire and start over with a completely new batch of humans. There's no way that she could do worse.

I want to be clear: I think this is an intentional authorial choice. I think Anders is entirely aware of how awful people are being, and the story of Laurence and Patricia barely managing to keep their heads above water despite them is the story she chose to write. A lot of other people loved it; this is more of a taste mismatch with the book than a structural flaw. But there are only so many paternalistic, abusive assholes passing themselves off as authority figures I can take in one book, and this book flew past my threshold and just kept going. Patricia and Laurence are mostly helpless against these people and have to let their worlds be shaped by them even when they know it's wrong, which makes it so, so much harder to bear.

The place where I think Anders did lose control of the plot, at least a little, is the ending. I can't fairly say that it came out of nowhere, since Anders was dropping hints throughout the book, but I did feel like it robbed the characters of agency in a way that I found emotionally unsatisfying as a reader, particularly since everyone in the book had been trying to take away Patricia's agency from nearly the first page. To have the ending then do the same thing added insult to injury in a way that I couldn't stomach. I can see the levels of symbolism knit together by this choice of endings, but, at least in my opinion, it would have been so much more satisfying, and somewhat redeeming of all the shit that Patricia had to go through, if she had been in firm control of how the symbolism came together.

This one's going to be a matter of taste, I think, and the world-building is truly excellent and much better than I had been expecting. But it's firmly in the "not for me" pile.

Rating: 5 out of 10


CryptogramFriday Squid Blogging: Humbolt Squid in Mexico Are Getting Smaller

The Humbolt squid are getting smaller:

Rawley and the other researchers found a flurry of factors that drove the jumbo squid's demise. The Gulf of California historically cycled between warm-water El Niño conditions and cool-water La Niña phases. The warm El Niño waters were inhospitable to jumbo squid -- more specifically to the squid's prey -- but subsequent La Niñas would allow squid populations to recover. But recent years have seen a drought of La Niñas, resulting in increasingly and more consistently warm waters. Frawley calls it an "oceanographic drought," and says that conditions like these will become more and more common with climate change. "But saying this specific instance is climate change is more than we can claim in the scope of our work," he adds. "I'm not willing to make that connection absolutely."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianKeith Packard: snekboard-0.2

Snekboard v0.2 Update

I've built six prototypes of snekboard version 0.2. They're working great and I'm happy with the design.

New Motor Driver

Having discovered that the TI DRV8838 wasn't up to driving the Lego Power Functions Medium motor (8883) because of its start-up current draw, I went back and reworked the snekboard circuit to use the TI DRV8800 instead. That controller can provide up to 2.8A and doesn't have any trouble with this motor.

The DRV8800 is larger than the DRV8838, so it took a bit of re-wiring to fit them on the circuit board.

New Power Source Selector

In version 0.1, I was using two DFLS130L Schottky diodes to automatically select between the on-board lithium polymer battery and USB to power the board. That "worked", except that there was enough leakage back through them that when the USB connector was unplugged, the battery charge indicator LEDs both lit up, which left me with the choice of disabling those indicators or draining the battery.

To fix that, I found an automatic power selector (with current limit!) part, the TPS2121. This should avoid frying the board when you short the motor controller outputs, although those also have current limiting circuits. Defense in depth!

One issue I found was that this circuit draws current even when the output is disconnected, so I changed the power switch from a SPST to DPST and now control USB and battery power separately.

CircuitPython

I included a W25Q16 2MB NOR flash chip on the board so that it could also run CircuitPython. Before finalizing the design, I thought it might be a good idea to actually get that running.

I've submitted a pull request with the necessary changes. I hope to see that merged at some point, which will allow users to select between CircuitPython and snek.

Smoothing Speed Changes

While the 9V supply on snekboard is designed to supply plenty of current for the motors, if you ask it to suddenly change how much it is producing, it places a huge load on the battery. When this happens, the battery voltage drops below the brown-out value for the SoC and the board resets.

I experimented with how to resolve this by ramping the power up and down in the snek application. That worked great; the motors could easily switch from full speed in one direction to full speed in the other direction.

Instead of having users add code to every snek application, I decided to move this functionality down into the snek implementation. I did this by modifying the PWM and direction pins values in a function called from the timer interrupt. This lets the application continue to run at full speed, while the motor controller slowly adjusts its output. No more resets when switching from full forward to full reverse.
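The idea behind the ramp is simple to sketch. Here it is as (snek-flavoured) Python pseudocode -- names like set_motor_output() are illustrative placeholders, not the actual snek or snekboard API:

RAMP_STEP = 0.05        # how much the output may change per timer tick
current_power = 0.0     # what the motor driver is actually being told
target_power = 0.0      # what the application most recently asked for

def set_target(power):
    global target_power
    target_power = max(-1.0, min(1.0, power))

def set_motor_output(power):
    # Placeholder: on the real board this would program the PWM duty cycle
    # and the direction pin on the motor controller.
    pass

def on_timer_tick():
    # Called periodically; nudge the real output towards the target so the
    # 9V supply never sees an instantaneous full-forward-to-full-reverse jump.
    global current_power
    delta = target_power - current_power
    if abs(delta) <= RAMP_STEP:
        current_power = target_power
    elif delta > 0:
        current_power += RAMP_STEP
    else:
        current_power -= RAMP_STEP
    set_motor_output(current_power)

The application just calls set_target() and keeps running at full speed, while the output follows a bounded slope in the background -- which is essentially what the snek implementation now does from its timer interrupt.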

Future Plans

I've got the six v0.2 prototypes that I'll be able to use for the upcoming class year, but I'm unsure whether there would be enough interest in the broader community to have more of them made. Let me know if you'd be interested in purchasing snekboards; if I get enough responses, I'll look at running them through Crowd Supply or similar.

Planet DebianDirk Eddelbuettel: anytime 0.3.5

A new release of the anytime package is arriving on CRAN. This is the sixteenth release, and comes a good month after the 0.3.4 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release brings a reworked fallback mechanism enabled via the useR=TRUE option. Because Windows remains a challenging platform which, among other more important ailments, also does not provide timezone information, we no longer rely on the RApiDatetime package which exposes parts of the R API. This works everywhere where timezone information is available, but less so on Windows. Instead, we now use Rcpp::Function to call directly back into R. This received a considerable amount of testing, and the package should now work even better when either a timezone is set, or the Windows fallback is used, or both. My thanks to Christoph Sax for patiently testing and helping to debug this, as well as for his two pull requests contributing to this release (even if one of these is now redundant as we no longer use RApiDatetime).

The full list of changes follows.

Changes in anytime version 0.3.5 (2019-07-28)

  • Fix use of Rcpp::Function-accessed Sys.setenv(), name all arguments in call to C++ (Christoph Sax in #95).

  • Relax constraint on Windows testing in several test files (Christoph Sax in #97).

  • Fix an issue related to TZ environment variable setting (Dirk in #101).

  • Change useR=TRUE behaviour by directly calling R via Rcpp (Dirk in #103 fixing #96).

  • Several updates to unit testing files aiming for more robust behaviour across platforms.

  • Updated documentation in manual pages, README and vignette.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJoachim Breitner: Custom firmware for the YQ8003 bicycle light

This blog post is about 18 months late, but better late than never...

The YQ8003

1½ years ago, when I was still a daredevil biking in Philly, I got interested in these fancy strips of LED lights that you put into your bike wheel and that form a stable image when you ride fast enough, both because of the additional visibility and safety, but also because they seem to be fun gadgets.

There are brands like Monkey Lights, but they are pretty expensive, and there are cheaper similar no-name products available, such as the YQ8003, which you can either order from China or hope to find on eBay for around $30 per piece.

The YQ8003 bike light


Sucky software

The hardware is nice: waterproof, easy to install, bright, with a long-lasting battery. But the software, oh my!

You need Windows to load your own pictures onto the device, and the application is really unpleasant to use: you can’t easily save your edits and sequences of images, and so on.

But the software on the device itself (which sports a microcontroller) was also unsatisfying: the transformation it applies to the image assumes that the bar of LEDs goes through the center of the wheel. Obviously that is wrong, as there is the hub. With a small hub the difference is not so bad, but I have rather large hubs (a generator in the front hub, and internal gears in the rear hub), and this makes the image jump back and forth a bit instead of staying stable.

Time to DIY!

So obviously I had to do something about it. At first I planned to just find out how to load my own pictures onto the hardware, using the existing software on the device. So I needed to find out the protocol.

I was running their program on Windows in VirtualBox, and quickly noticed that the USB connection that you use to load your data onto the YQ8003 is actually a serial-over-USB port. I found a sniffer for serial communication and used that to dump what the Windows app sent to the device. That was all pretty hairy, and I only did it once (and deleted the Windows setup soon), but luckily one dump was sufficient.

I did not find out where in the data sent to the light the image was encoded. But I did find that the protocol used to talk to the device is a standard protocol for talking to microcontrollers, something called “STC ISP”. With that information, I could find out that the microcontroller is an STC12LE5A60S2 running at 22MHz with 60KB of flash, and that it is “8051 compatible”, whatever that means.

So this is how I, for the first and so far only time, ventured into microcontroller territory. It was pretty straightforward to get a toolchain to compile programs for this microcontroller (using sdcc) and to upload code to it (using stcgal), and I could talk to my code over the serial port. This is promising!

Reverse engineering

I also quickly found out how the magnet (which the device uses to notice when the wheel has done one rotation) is accessed: It triggers interrupt 0.
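
On an 8051-class part, hooking that up takes only a few lines with sdcc. This is a rough sketch with made-up names, not the YQ8003 firmware: the external interrupt 0 handler records how long the last rotation took, which is what you need to map time within the current rotation to a wheel angle.

    #include <8051.h>      /* sdcc's standard 8051 SFR definitions */
    #include <stdint.h>

    volatile uint16_t ticks;             /* incremented from a timer interrupt (not shown) */
    volatile uint16_t rotation_period;   /* duration of the last full rotation, in ticks */

    /* External interrupt 0: the magnet just passed the sensor. */
    void magnet_isr(void) __interrupt(0)
    {
        rotation_period = ticks;   /* remember how long the last rotation took */
        ticks = 0;                 /* start timing the next one */
    }

    void main(void)
    {
        IT0 = 1;   /* trigger INT0 on a falling edge */
        EX0 = 1;   /* enable external interrupt 0 */
        EA  = 1;   /* enable interrupts globally */
        while (1) {
            /* main loop: ticks / rotation_period gives the current wheel angle */
        }
    }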

But finding out how to actually access the LEDs and make them light up was very tricky. This kind of information is not specific to the microcontroller (STC12LE5A60S2), for which I could find documentation, but really depends on how it is wired up.

I was able to extract, from the serial port communication dump mentioned earlier, the firmware in a way that I could send it back to the microcontroller, so I could always return to a working state. Moreover, I could disassemble that code and try to make sense of it. But I could not make sense of it, i.e. I could not figure out how it actually drives the LEDs.

So if thinking does not help, maybe brute force does? I wrote a program that would take the working firmware and zero out parts of it. Then I would try that firmware and note whether it still worked. This way, my program would zero out ever more of the firmware, until only a few instructions were left that would still make the LEDs light up.

In the end I had, I think, 13 instructions left that made the LEDs light up faintly. Success! Or so I thought … the resulting program was pretty nonsensical. It essentially increments a value and writes another value to the address stored in the first value. So it just spews data all over the address range, wrapping around at the end. No surprise it triggers the LEDs somewhere along the way…

(Still, I published the program to minimize binary data under the name bisect-binary – maybe you’ll find it useful for something.)
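
The core of the zero-out-and-test idea fits on a page of C. What follows is only a naive sketch of the approach, not the actual bisect-binary tool, with the “does it still work?” check left to a human:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Write the candidate firmware out and ask whether it still lights the LEDs. */
    static int still_works(const unsigned char *buf, size_t len)
    {
        FILE *f = fopen("candidate.bin", "wb");
        if (!f) { perror("candidate.bin"); exit(1); }
        fwrite(buf, 1, len, f);
        fclose(f);

        printf("Flash candidate.bin - do the LEDs still light up? [y/n] ");
        int c = getchar();
        int d = c;
        while (d != '\n' && d != EOF)   /* eat the rest of the input line */
            d = getchar();
        return c == 'y';
    }

    int main(void)
    {
        static unsigned char fw[65536], keep[65536];
        FILE *f = fopen("firmware.bin", "rb");
        if (!f) { perror("firmware.bin"); return 1; }
        size_t len = fread(fw, 1, sizeof fw, f);
        fclose(f);

        /* Try to zero ever smaller chunks; keep a change only if the firmware
         * still works without that chunk. */
        for (size_t chunk = len / 2; chunk >= 1; chunk /= 2) {
            for (size_t off = 0; off < len; off += chunk) {
                size_t n = off + chunk > len ? len - off : chunk;
                memcpy(keep, fw + off, n);
                memset(fw + off, 0, n);
                if (!still_works(fw, len))
                    memcpy(fw + off, keep, n);   /* that part was needed: undo */
            }
        }

        FILE *out = fopen("minimal.bin", "wb");
        if (!out) { perror("minimal.bin"); return 1; }
        fwrite(fw, 1, len, out);
        fclose(out);
        return 0;
    }

Done this naively, each pass needs a lot of manual flash-and-check cycles, which is exactly why a tool automating the bookkeeping is worth having.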

I actually don’t remember how I eventually figured out what to do, and which bytes and bits to toggle in which order. Probably more reading, and some advice from people who know more about LEDs on what to look for.

bSpokeLight

With that knowledge I could finally write my own firmware and user application. The part that goes onto the device is written in C and compiled with sdcc. And the part that runs on your computer is a command line application written in Haskell, that takes the pictures and animations you want, applies the necessary transformations (now taking the width of your hub into account!) and embeds that into the compiled C code to produce a firmware file that you can load onto your device using stcgal.
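
The hub correction itself is mostly polar-coordinate bookkeeping. Roughly, and with made-up names rather than the actual bSpokeLight code: LED number i sits at radius hub_radius + i * led_spacing rather than at i * led_spacing, so for each angular slice the source image has to be sampled at that radius.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define NUM_LEDS    32     /* LEDs on the strip (assumed) */
    #define NUM_SLICES  256    /* angular resolution per rotation (assumed) */

    /* Map (LED index, angular slice) to a pixel in a width x height source image,
     * taking the hub into account: LED 0 starts at hub_radius, not at the center. */
    void sample_point(int led, int slice,
                      double hub_radius, double led_spacing,
                      int width, int height,
                      int *px, int *py)
    {
        double r     = hub_radius + led * led_spacing;
        double r_max = hub_radius + (NUM_LEDS - 1) * led_spacing;
        double angle = 2.0 * M_PI * slice / NUM_SLICES;

        /* Scale so the outermost LED reaches the edge of the image. */
        double scale = (width < height ? width : height) / (2.0 * r_max);

        *px = (int)(width  / 2.0 + r * scale * cos(angle));
        *py = (int)(height / 2.0 - r * scale * sin(angle));
    }

Setting hub_radius to zero recovers the original firmware's assumption that the bar passes through the center, which visibly distorts the image when the hub is large.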

It supports images in all common formats, produces 8 colors and can store up to 8 images on the device, which then cycle according to the timing you specify. I dubbed the software bSpokeLight.

The light in action with more lights at the GPN19 (The short shutter speed of the camera prevents the visual effect in the eye that allows you to see the images well here)


It actually supports reading GIF animations, but I found that they are much harder to recognize later, unless I rotate the wheel very fast and you know what to look for. I am not sure if this is a limitation of the hardware (and our eyes), a problem with my code or a problem with the particular animations I have tried. Will need to experiment more.

Can you see the swing dancing couple?


As always, I am sharing the code in the hope that others find it useful as well. Thanks to Haskell, Nix and the iohk-nix project I can easily provide pre-compiled binaries for Windows and Linux, statically compiled for the latter for distribution-independence. Let me know if you try to use it and how that went.

,

CryptogramInsider Logic Bombs

Add to the "not very smart criminals" file:

According to court documents, Tinley provided software services for Siemens' Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley's files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called "logic bombs" that would trigger after a certain date, and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who'd fix the files for a fee.

Worse Than FailureError'd: Nice Day for Golf (in Hades)

"A coworker was looking up what the weather was going to be like for his tee time. He said he’s definitely wearing shorts," writes Angela A.


"I guess whenever a company lists welding in their LinkedIn job posting you know that they're REEAALLY serious about computer hardware," Andrew I. writes.


Chris A. wrote, "It was game, set, and match, but unfortunately, someone struck out."


Bruce C. writes, "I'm not surprised that NULL is missing some deals....that File Not Found person must be getting it all."


"Learning to use Docker with the 'Get Started' tutorials and had to wonder...is there some theme here?" Dave E. wondered.


"Ever type up an email and hit 'send' too early? Well...here's an example," writes Charlie.



,

TEDIt’s not about privacy — it’s about power: Carole Cadwalladr speaks at TEDSummit 2019

Three months after her landmark talk, Carole Cadwalladr is back at TED. In conversation with curator Bruno Giussani, Cadwalladr discusses the latest on her reporting on the Facebook-Cambridge Analytica scandal and what we still don’t know about the transatlantic links between Brexit and the 2016 US presidential election.

“Who has the information, who has the data about you, that is where power now lies,” Cadwalladr says.

Cadwalladr appears in The Great Hack, a documentary by Karim Amer and TED Prize winner Jehane Noujaim that explores how Cambridge Analytica has come to symbolize the dark side of social media. The documentary was screened for TEDSummit participants today. Watch it in select theaters and on Netflix starting July 24.

Learn more about how you can support Cadwalladr’s investigation into data, disinformation and democracy.

TEDNot All Is Broken: Notes from Session 6 of TEDSummit 2019

Raconteur Mackenzie Dalrymple regales the TEDSummit audience with a classic Scottish story. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In the final session of TEDSummit 2019, the themes from the week — our search for belonging and community, our digital future, our inextricable connection to the environment — ring out with clarity and insight. From the mysterious ways our emotions impact our biological hearts, to a tour-de-force talk on the languages we all speak, it’s a fitting close to a week of revelation, laughter, tears and wonder.

The event: TEDSummit 2019, Session 6: Not All Is Broken, hosted by Chris Anderson and Bruno Giussani

When and where: Thursday, July 25, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Johann Hari, Sandeep Jauhar, Anna Piperal, Eli Pariser, Poet Ali

Interlude: Mackenzie Dalrymple sharing the tale of an uncle and nephew competing to become Lord of the Isles

Music: Djazia Satour, blending 1950s Chaabi (a genre of North African folk music) with modern grooves

The talks in brief:

Johann Hari, journalist

Big idea: The cultural narrative and definitions of depression and anxiety need to change.

Why? We need to talk less about chemical imbalances and more about imbalances in the way we live. Johann Hari met with experts around the world, boiling down his research into a surprisingly simple thesis: all humans have physical needs (food, shelter, water) as well as psychological needs (feeling that you belong, that your life has meaning and purpose). Though antidepressant drugs work for some, biology isn’t the whole picture, and any treatment must be paired with a social approach. Our best bet is to listen to the signals of our bodies, instead of dismissing them as signs of weakness or madness. If we take time to investigate our red flags of depression and anxiety — and take the time to reevaluate how we build meaning and purpose, especially through social connections — we can start to heal in a society deemed the loneliest in human history.

Quote of the talk: “If you’re depressed, if you’re anxious — you’re not weak. You’re not crazy. You’re not a machine with broken parts. You’re a human being with unmet needs.”


“Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways,” says cardiologist Sandeep Jauhar. He speaks at TEDSummit: A Community Beyond Borders, July 21-25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sandeep Jauhar, cardiologist

Big Idea: Emotional stress can be a matter of life and death. Let’s factor that into how we care for our hearts.

How? “The heart may not originate our feelings, but it is highly responsive to them,” says Sandeep Jauhar. In his practice as a cardiologist, he has seen extensive evidence of this: grief and fear can cause profound cardiac injury. “Takotsubo cardiomyopathy,” or broken heart syndrome, has been found to occur when the heart weakens after the death of a loved one or the stress of a large-scale natural disaster. It comes with none of the other usual symptoms of heart disease, and it can resolve in just a few weeks. But it can also prove fatal. In response, Jauhar says that we need a new paradigm of care, one that considers the heart as more than “a machine that can be manipulated and controlled” — and recognizes that emotional stress is as important as cholesterol.

Quote of the talk: “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways.”


“In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated,” says e-governance expert Anna Piperal. She speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Anna Piperal, e-governance expert 

Big idea: Bureaucracy can be eradicated by going digital — but we’ll need to build in commitment and trust.

How? Estonia is one of the most digital societies on earth. After gaining independence 30 years ago, and subsequently building itself up from scratch, the country decided not only to digitize existing bureaucracy but also to create an entirely new system. Now citizens can conduct everything online, from running a business to voting and managing their healthcare records, and only need to show up in person for literally three things: to claim their identity card, marry or divorce, or sell a property. Anna Piperal explains how, using a form of blockchain technology, e-Estonia builds trust through the “once-only” principle, through which the state cannot ask for information more than once nor store it in more than one place. The country is working to redefine bureaucracy by making it more efficient, granting citizens full ownership of their data — and serving as a model for the rest of the world to do the same.

Quote of the talk: “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.”


Eli Pariser, CEO of Upworthy

Big idea: We can find ways to make our online spaces civil and safe, much like our best cities.

How? Social media is a chaotic and sometimes dangerous place. With its trolls, criminals and segregated spaces, it’s a lot like New York City in the 1970s. But like New York City, it’s also a vibrant space in which people can innovate and find new ideas. So Eli Pariser asks: What if we design social media like we design cities, taking cues from social scientists and urban planners like Jane Jacobs? Built around empowered communities, one-on-one interactions and public censure for those who act out, platforms could encourage trust and discourse, discourage antisocial behavior and diminish the sense of chaos that leads some to embrace authoritarianism.

Quote of the talk: “If online digital spaces are going to be our new home, let’s make them a comfortable, beautiful place to live — a place we all feel not just included, but actually some ownership of. A place we get to know each other. A place you’d actually want not just to visit, but to bring your kids.”


“Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds,” says Poet Ali. He speaks at at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Poet Ali, architect of human connection

Big idea: You speak far more languages than you realize, with each language representing a gateway to understanding different societies, cultures and experiences.

How? Whether it’s the recognized tongue of your country or profession, or the social norms of your community, every “language” you speak is more than a lexicon of words: it also encompasses feelings like laughter, solidarity, even a sense of being left out. These latter languages are universal, and the more we embrace their commonality — and acknowledge our fluency in them — the more we can empathize with our fellow humans, regardless of our differences.

Quote of the talk: “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds.”

TEDBusiness Unusual: Notes from Session 4 of TEDSummit 2019

ELEW and Marcus Miller blend jazz improvisation with rock in a musical cocktail of “rock-jazz.” They perform at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

To keep pace with our ever-changing world, we need out-of-the-box ideas that are bigger and more imaginative than ever. The speakers and performers from this session explore these possibilities, challenging us to think harder about the notions we’ve come to accept.

The event: TEDSummit 2019, Session 4: Business Unusual, hosted by Whitney Pennington Rodgers and Cloe Shasha

When and where: Wednesday, July 24, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Margaret Heffernan, Bob Langert, Rose Mutiso, Mariana Mazzucato, Diego Prilusky

Music: A virtuosic violin performance by Min Kym, and a closing performance by ELEW featuring Marcus Miller, blending jazz improvisation with rock in a musical cocktail of “rock-jazz.”

The talks in brief:

“The more we let machines think for us, the less we can think for ourselves,” says Margaret Heffernan. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Margaret Heffernan, entrepreneur, former CEO and writer 

Big idea: The more we rely on technology to make us efficient, the fewer skills we have to confront the unexpected. That’s why we must start practicing “just-in-case” management — anticipating the events (climate catastrophes, epidemics, financial crises) that will almost certainly happen but are ambiguous in timing, scale and specifics. 

Why? In our complex, unpredictable world, changes can occur out of the blue and have outsize impacts. When governments, businesses and individuals prioritize efficiency above all else, it keeps them from responding quickly, effectively and creatively. That’s why we all need to focus on cultivating what Heffernan calls our “unpredictable, messy human skills.” These include exercising our social abilities to build strong relationships and coalitions; humility to admit we don’t have all the answers; imagination to dream up never-before-seen solutions; and bravery to keep experimenting.

Quote of the talk: “The harder, deeper truth is that the future is uncharted, that we can’t map it until we get there. But that’s OK because we have so much capacity for imagination — if we use it. We have deep talents for inventiveness and exploration — if we apply them. We are brave enough to invent things we’ve never seen before. Lose these skills and we are adrift. But hone and develop them, and we can make any future we choose.”


Bob Langert, sustainability expert and VP of sustainability at McDonald’s

Big idea: Adversaries can be your best allies.

How? Three simple steps: reach out, listen and learn. As a “corporate suit” (his words), Bob Langert collaborates with his company’s strongest critics to find business-friendly solutions for society. Instead of denying and pushing back, he tries to embrace their perspectives and suggestions. He encourages others in positions of power to do the same, driven by this mindset: assume the best intentions of your critics; focus on the truth, the science and facts; and be open and transparent in order to turn critics into allies. The worst-case scenario? You’ll become better, your organization will become better — and you might make some friends along the way.

Fun fact: After working with NGOs in the 1990s, McDonald’s reduced 300 million pounds of waste over 10 years.


“When we talk about providing energy for growth, it is not just about innovating the technology: it’s the slow and hard work of improving governance, institutions and a broader macro-environment,” says Rose Mutiso. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Rose Mutiso, energy scientist

Big Idea: In order to grow out of poverty, African countries need a steady supply of abundant and affordable electricity.

Why? Energy poverty, or the lack of access to electricity and other basic energy services, affects nearly two-thirds of Sub-Saharan Africa. As the region’s population continues to grow, we have the opportunity to build a new energy system — from scratch — to grow with it, says Rose Mutiso. It starts with naming the systemic holes that current solutions (solar, LED and battery technology) overlook: we don’t have a clear consensus on what energy poverty is; there’s too much reliance on quick fixes; and we’re misdirecting our climate change concerns. What we need, Mutiso says, is nuanced, large-scale solutions with a diverse range of energy sources. For instance, the region has significant hydroelectric potential, yet less than 10 percent of this potential is currently being utilized. If we work hard to find new solutions to our energy deficits now, everybody benefits.

Quote of talk: “Countries cannot grow out of poverty without access to a steady supply of abundant, affordable and reliable energy to power these productive sectors — what I call energy for growth.”


Mariana Mazzucato, economist and policy influencer

Big idea: We’ve forgotten how to tell the difference between the value extractors in the C-suites and finance sectors and the value producers, the workers and taxpayers who actually fuel innovation and productivity. And recently we’ve neglected the importance of even asking what the difference between the two is.

How? Economists must redefine and recognize true value creators, envisioning a system that rewards them just as much as CEOs, investors and bankers. We need to rethink how we value education, childcare and other “free” services — which don’t have a price but clearly contribute to sustaining our economies. We need to make sure that our entire society not only shares risks but also rewards.

Quote of the talk: “[During the bank bailouts] we didn’t hear the taxpayers bragging that they were value creators. But, obviously, having bailed out the biggest ‘value-creating’ productive companies, perhaps they should have.”


Diego Prilusky demos his immersive storytelling technology, bringing Grease to the TED stage. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Diego Prilusky, video pioneer

Big idea: Get ready for the next revolution in visual storytelling: volumetric video, which aims to do nothing less than recreate reality as a cinematic experience.

How? Movies have been around for more than 100 years, but we’re still making (and watching) them in basically the same way. Can movies exist beyond the flat screen? Yes, says Diego Prilusky, but we’ll first need to completely rethink how they’re made. With his team at Intel Studios, Prilusky is pioneering volumetric video, a data-intensive medium powered by hundreds of sensors that capture light and motion from every possible direction. The result is like being inside a movie, which you could explore from different perspectives (or even through a character’s own eyes). In a live tech demo, Prilusky takes us inside a reshoot of an iconic dance number from the 1978 hit Grease. As actors twirl and sing “You’re the One That I Want,” he positions and repositions his perspective on the scene — moving around, in front of and in between the performers. Film buffs can rest easy, though: the aim isn’t to replace traditional movies, he says, but to empower creators to tell stories in new ways, across multiple vantage points.

Quote of the talk: “We’re opening the gates for new possibilities of immersive storytelling.”

TEDThe Big Rethink: Notes from Session 3 of TEDSummit 2019

Marco Tempest and his quadcopters perform a mind-bending display that feels equal parts science and magic at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In an incredible session, speakers and performers laid out the biggest problems facing the world — from political and economic catastrophe to rising violence and deepfakes — and some new thinking on solutions.

The event: TEDSummit 2019, Session 3: The Big Rethink, hosted by Corey Hajim and Cyndi Stivers

When and where: Tuesday, July 23, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: George Monbiot, Nick Hanauer, Raghuram Rajan, Marco Tempest, Rachel Kleinfeld, Danielle Citron, Patrick Chappatte

Music: KT Tunstall sharing how she found her signature sound and playing her hits “Miniature Disasters,” “Black Horse and the Cherry Tree” and “Suddenly I See.”

The talks in brief:

“We are a society of altruists, but we are governed by psychopaths,” says George Monbiot. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

George Monbiot, investigative journalist and self-described “professional troublemaker”

Big idea: To get out of the political mess we’re in, we need a new story that captures the minds of people across fault lines.

Why? “Welcome to neoliberalism, the zombie doctrine that never seems to die,” says George Monbiot. We have been induced by politicians and economists into accepting an ideology of extreme competition and individualism, weakening the social bonds that make our lives worth living. And despite the 2008 financial crisis, which exposed the blatant shortcomings of neoliberalism, it still dominates our lives. Why? We haven’t yet produced a new story to replace it — a new narrative to help us make sense of the present and guide the future. So, Monbiot proposes his own: the “politics of belonging,” founded on the belief that most people are fundamentally altruistic, empathetic and socially minded. If we can tap into our fundamental urge to cooperate — namely, by building generous, inclusive communities around the shared sphere of the commons — we can build a better world. With a new story to light the way, we just might make it there.

Quote of the talk: “We are a society of altruists, but we are governed by psychopaths.”


Nick Hanauer, entrepreneur and venture capitalist.

Big idea: Economics has ceased to be a rational science in the service of the “greater good” of society. It’s time to ditch neoliberal economics and create tools that address inequality and injustice.

How? Today, under the banner of unfettered growth through lower taxes, fewer regulations, and lower wages, economics has become a tool that enforces the growing gap between the rich and poor. Nick Hanauer thinks that we must recognize that our society functions not because it’s a ruthless competition between its economically fittest members but because cooperation between people and institutions produces innovation. Competition shouldn’t be between the powerful at the expense of everyone else but between ideas battling it out in a well-managed marketplace in which everyone can participate.

Quote of the talk: “Successful economies are not jungles, they’re gardens — which is to say that markets, like gardens, must be tended … Unconstrained by social norms or democratic regulation, markets inevitably create more problems than they solve.”


Raghuram Rajan shares his idea for “inclusive localism” — giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption — at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Raghuram Rajan, economist and former Governor of the Reserve Bank of India

Big idea: As markets grow and governments focus on solving economic problems from the top-down, small communities and neighborhoods are losing their voices — and their livelihoods. But if nations lack the tools to address local problems, it’s time to turn to grass-roots communities for solutions.

How? Raghuram Rajan believes that nations must exercise “inclusive localism”: giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption. As local leaders step forward, citizens become active, and communities receive needed resources from philanthropists and through economic incentives, neighborhoods will thrive and rebuild their social fabric.

Quote of the talk: “What we really need [are] bottom-up policies devised by the community itself to repair the links between the local community and the national — as well as thriving international — economies.”


Marco Tempest, cyber illusionist

Big idea: Illusions that set our imaginations soaring are created when magic and science come together.

Why? “Is it possible to create illusions in a world where technology makes anything possible?” asks techno-magician Marco Tempest, as he interacts with his group of small flying machines called quadcopters. The drones dance around him, reacting buoyantly to his gestures and making it easy to anthropomorphize or attribute personality traits. Tempest’s buzzing buddies swerve, hover and pause, moving in formation as he orchestrates them. His mind-bending display will have you asking yourself: Was that science or magic? Maybe it’s both.

Quote to remember: “Magicians are interesting, their illusions accomplish what technology cannot, but what happens when the technology of today seems almost magical?”


Rachel Kleinfeld, democracy advisor and author

Big idea: It’s possible to quell violence — in the wider world and in our own backyards — with democracy and a lot of political TLC.

How? Compassion-concentrated action. We need to dispel the idea that some people deserve violence because of where they live, the communities they’re a part of or their socio-economic background. Kleinfeld calls this particular, inequality-based vein of violence “privilege violence,” explaining how it evolves in stages and the ways we can eradicate it. By deprogramming how we view violence and its origins and victims, we can move forward and build safer, more secure societies.

Quote of the talk: “The most important thing we can do is abandon the notion that some lives are just worth less than others.”


“Not only do we believe fakes, we are starting to doubt the truth,” says Danielle Citron, revealing the threat deepfakes pose to the truth and democracy. She speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Danielle Citron, professor of law and deepfake scholar

Big idea: Deepfakes — machine learning technology used to manipulate or fabricate audio and video content — can cause significant harm to individuals and society. We need a comprehensive legislative and educational approach to the problem.

How? The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence against minorities or to defame politicians and journalists — is becoming ubiquitous. With tools being made more accessible and their products more realistic, what becomes of that key ingredient for democratic processes: the truth? As Danielle Citron points out, “Not only do we believe fakes, we are starting to doubt the truth.” The fix, she suggests, cannot be merely technological. Legislation worldwide must be tailored to fighting digital impersonations that invade privacy and ruin lives. Educational initiatives are needed to teach the media how to identify fakes, persuade law enforcement that the perpetrators are worth prosecuting and convince the public at large that the future of democracy really is at stake.

Quote of the talk: “Technologists expect that advances in AI will soon make it impossible to distinguish a fake video and a real one. How can truths emerge in a deepfake ridden ‘marketplace of ideas?’ Will we take the path of least resistance and just believe what we want to believe, truth be damned?”


“Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance,” says editorial cartoonist Patrick Chappatte. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Patrick Chappatte, editorial cartoonist and graphic journalist

Big idea: We need humor like we need the air we breathe. We shouldn’t risk compromising our freedom of speech by censoring ourselves in the name of political correctness.

How? Our social media-saturated world is both a blessing and a curse for political cartoonists like Patrick Chappatte, whose satirical work can go viral while also making them, and the publications they work for, a target. Be it a prison sentence, firing or the outright dissolution of cartoon features in newspapers, editorial cartoonists worldwide are increasingly penalized for their art. Chappatte emphasizes the importance of the art form in political discourse by guiding us through 20 years of editorial cartoons that are equal parts humorous and caustic. In an age where social media platforms often provide places for fury instead of debate, he suggests that traditional media shouldn’t shy away from these online kingdoms, and neither should we. Now is the time to resist preventative self-censorship; if we don’t, we risk waking up in a sanitized world without freedom of expression.

Quote of the talk: “Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance.”

TEDAnthropo Impact: Notes from Session 2 of TEDSummit 2019

Radio Science Orchestra performs the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Session 2 of TEDSummit 2019 is all about impact: the actions we can take to solve humanity’s toughest challenges. Speakers and performers explore the perils — from melting glaciers to air pollution — along with some potential fixes — like ocean-going seaweed farms and radical proposals for how we can build the future.

The event: TEDSummit 2019, Session 2: Anthropo Impact, hosted by David Biello and Chee Pearlman

When and where: Monday, July 22, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Tshering Tobgay, María Neira, Tim Flannery, Kelly Wanser, Anthony Veneziale, Nicola Jones, Marwa Al-Sabouni, Ma Yansong

Music: Radio Science Orchestra, performing the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing (and the 100th anniversary of the theremin’s invention)

… and something completely different: Improv maestro Anthony Veneziale, delivering a made-up-on-the-spot TED Talk based on a deck of slides he’d never seen and an audience-suggested topic: “the power of potatoes.” The result was … surprisingly profound.

The talks in brief:

Tshering Tobgay, politician, environmentalist and former Prime Minister of Bhutan

Big idea: We must save the Hindu Kush Himalayan glaciers from melting — or else face dire, irreversible consequences for one-fifth of the global population.

Why? The Hindu Kush Himalayan glaciers are the pulse of the planet: their rivers alone supply water to 1.6 billion people, and their melting would massively impact the 240 million people across eight countries within their reach. Think in extremes — more intense rains, flash floods and landslides along with unimaginable destruction and millions of climate refugees. Tshering Tobgay telegraphs the future we’re headed towards unless we act fast, calling for a new intergovernmental agency: the Third Pole Council. This council would be tasked with monitoring the glaciers’ health, implementing policies to protect them and, by proxy, the billions who depend on them.

Fun fact: The Hindu Kush Himalayan glaciers are the world’s third-largest repository of ice (after the North and South poles). They’re known as the “Third Pole” and the “Water Towers of Asia.”


Air pollution isn’t just bad for the environment — it’s also bad for our brains, says María Neira. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

María Neira, public health leader

Big idea: Air pollution isn’t just bad for our lungs — it’s bad for our brains, too.

Why? Globally, poor air quality causes seven million premature deaths per year. And all this pollution isn’t just affecting our lungs, says María Neira. An emerging field of research is shedding light on the link between air pollution and our central nervous systems. The fine particulate matter in air pollution travels through our bloodstreams to our major organs, including the brain — which can slow down neurological development in kids and speed up cognitive decline in adults. In short: air pollution is making us less intelligent. We all have a role to play in curbing air pollution — and we can start by reducing traffic in cities, investing in clean energy and changing the way we consume.

Quote of the talk: “We need to exercise our rights and put pressure on politicians to make sure they will tackle the causes of air pollution. This is the first thing we need to do to protect our health and our beautiful brains.”


Tim Flannery, environmentalist, explorer and professor

Big idea: Seaweed could help us draw down atmospheric carbon and curb global warming.

How? You know the story: the blanket of CO2 above our heads is driving adverse climate changes and will continue to do so until we get it out of the air (a process known as “drawdown”). Tim Flannery thinks seaweed could help: it grows fast, is made out of productive, photosynthetic tissue and, when sunk more than a kilometer deep into the ocean, can lock up carbon long-term. If we cover nine percent of the ocean surface in seaweed farms, for instance, we could sequester the same amount of CO2 we currently put into the atmosphere. There’s still a lot to figure out, Flannery notes — like how growing seaweed at scale on the ocean surface will affect biodiversity down below — but the drawdown potential is too great to allow uncertainty to stymie progress.

Fun fact: Seaweed is the most ancient multicellular life known, with more genetic diversity than all other multicellular life combined.


Could cloud brightening help curb global warming? Kelly Wanser speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. Photo: Bret Hartman / TED

Kelly Wanser, geoengineering expert and executive director of SilverLining

Big idea: The practice of cloud brightening — seeding clouds with sea salt or other particulates to reflect sunshine back into space — could partially offset global warming, giving us crucial time while we figure out game-changing, long-term solutions.

How: Starting in 2020, new global regulations will require ships to cut emissions by 85 percent. This is a good thing, right? Not entirely, says Kelly Wanser. It turns out that when particulate emissions (like those from ships) mix with clouds, they make the clouds brighter — enabling them to reflect sunshine into space and temporarily cool our climate. (Think of it as the ibuprofen for our fevered climate.) Wanser’s team and others are coming up with experiments to see if cloud brightening proves safe and effective; some scientists believe increasing the atmosphere’s reflectivity by one or two percent could offset the two degrees Celsius of warming that’s been forecast for Earth. As with other climate interventions, there’s much yet to learn, but the potential benefits make those efforts worth it.

An encouraging fact: The global community has rallied to pull off this kind of atmospheric intervention in the past, with the 1989 Montreal Protocol.


Nicola Jones, science journalist

Big idea: Noise in our oceans — from boat motors to seismic surveys — is an acute threat to underwater life. Unless we quiet down, we will irreparably damage marine ecosystems and may even drive some species to extinction.

How? We usually think of noise pollution as a problem in big cities on dry land. But ocean noise may be the culprit behind marine disruptions like whale strandings, fish kills and drops in plankton populations. Fortunately, compared to other climate change solutions, it’s relatively quick and easy to dial down our noise levels and keep our oceans quiet. Better ship propellor design, speed limits near harbors and quieter methods for oil and gas prospecting will all help humans restore peace and quiet to our neighbors in the sea.

Quote of the talk: “Sonar can be as loud as, or nearly as loud as, an underwater volcano. A supertanker can be as loud as the call of a blue whale.”


TED curator Chee Pearlman (left) speaks with architect Marwa Al-Sabouni at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Marwa Al-Sabouni, architect, interviewed by TED curator Chee Pearlman

Big idea: Architecture can exacerbate the social disruptions that lead to armed conflict.

How? Since the time of the French Mandate, officials in Syria have shrunk the communal spaces that traditionally united citizens of varying backgrounds. This contributed to a sense of alienation and rootlessness — a volatile cocktail that built conditions for unrest and, eventually, war. Marwa Al-Sabouni, a resident of Homs, Syria, saw firsthand how this unraveled social fabric helped reduce the city to rubble during the civil war. Now, she’s taking part in the city’s slow reconstruction — conducted by citizens with little or no government aid. As she explains in her book The Battle for Home, architects have the power (and the responsibility) to connect a city’s residents to a shared urban identity, rather than to opposing sectarian groups.

Quote of the talk: “Syria had a very unfortunate destiny, but it should be a lesson for the rest of the world: to take notice of how our cities are making us very alienated from each other, and from the place we used to call home.”


“Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit,” says Ma Yansong. He speaks at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Ma Yansong, architect and artist

Big Idea: By creating architecture that blends with nature, we can break free from the “matchbox” sameness of many city buildings.

How? Ma Yansong paints a vivid image of what happens when nature collides with architecture — from a pair of curvy skyscrapers that “dance” with each other to buildings that burst out of a village’s mountains like contour lines. Yansong embraces the shapes of nature — which never repeat themselves, he notes — and the randomness of hand-sketched designs, creating a kind of “emotional scenery.” When we think beyond the boxy geometry of modern cities, he says, the results can be breathtaking.

Quote of talk: “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit.”

TED10 years of TED Fellows: Notes from the Fellows Session of TEDSummit 2019

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

The event: TEDSummit 2019, Fellows Session, hosted by Shoham Arad and Lily Whitsitt

When and where: Monday, July 22, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Carl Joshua Ncube, Suzanne Lee, Sonaar Luthra, Jon Lowenstein, Alicia Eggert, Lauren Sallan, Laura Boykin

Opening: A quick, witty performance from Carl Joshua Ncube, one of Zimbabwe’s best-known comedians, who uses humor to approach culturally taboo topics from his home country.

Music: An opening from visual artist and cellist Paul Rucker of the hauntingly beautiful “Criminalization of Survival,” a piece he created to explore issues related to mass incarceration, racially motivated violence, police brutality and the impact of slavery in the US.

And a dynamic closing from hip-hop artist and filmmaker Blitz Bazawule and his band, who tell stories of the polyphonic African diaspora.

The talks in brief:

Laura Boykin, computational biologist at the University of Western Australia

Big idea: If we’re going to solve the world’s toughest challenges — like food scarcity for millions of people living in extreme poverty — science needs to be more diverse and inclusive. 

How? Collaborating with smallholder farmers in sub-Saharan Africa, Laura Boykin uses genomics and supercomputing to help control whiteflies and viruses, which cause devastation to cassava crops. Cassava is a staple food that feeds more than 500 million people in East Africa and 800 million people globally. Boykin’s work transforms farmers’ lives, taking them from being unable to feed their families to having enough crops to sell and enough income to thrive. 

Quote of the talk: “I never dreamt the best science I would ever do would be sitting on a blanket under a tree in East Africa, using the highest tech genomics gadgets. Our team imagined a world where farmers could detect crop viruses in three hours instead of six months — and then we did it.”


Lauren Sallan, paleobiologist at the University of Pennsylvania

Big idea: Paleontology is about so much more than dinosaurs.

How? The history of life on earth is rich, varied and … entirely too focused on dinosaurs, according to Lauren Sallan. The fossil record shows that earth has a dramatic past, with four mass extinctions occurring before dinosaurs even came along. From fish with fingers to galloping crocodiles and armored squid, the variety of life that has lived on our changing planet can teach us more about how we got here, and what the future holds, if we take the time to look.

Quote of the talk: “We have learned a lot about dinosaurs, but there’s so much left to learn from the other 99.9 percent of things that have ever lived, and that’s paleontology.”


“If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem,” says Suzanne Lee. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Suzanne Lee, designer, biofabricator

Big idea: What if we could grow bricks, furniture and even ready-made fabric for clothes?

How? Suzanne Lee is a fashion designer turned biofabrication pioneer who is part of a global community of innovators figuring out how to grow their own materials. By utilizing living microbial organisms like bacteria and fungi, we can replace plastic, cement and other waste-generating materials with alternatives that can help reduce pollution.

Quote of the talk: “If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem.”


Sonaar Luthra, founder and CEO of Water Canary

Big idea: We need to get better at monitoring the world’s water supplies — and we need to do it fast.

How? Building a global weather service for water would help governments, businesses and communities manage 21st-century water risk. Sonaar Luthra’s company Water Canary aims to develop technologies that more efficiently monitor water quality and availability around the world, avoiding the unforecasted shortages that happen now. Businesses and governments must also invest more in water, he says, and the largest polluters and misusers of water must be held accountable.

Quote of the talk: “It is in the public interest to measure and to share everything we can discover and learn about the risks we face in water. Reality doesn’t exist until it’s measured. It doesn’t just take technology to measure it — it takes our collective will.”


Jon Lowenstein shares photos from the migrant journey in Latin America at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Jon Lowenstein, documentary photographer, filmmaker and visual artist

Big idea: We need to care about the humanity of migrants in order to understand the desperate journeys they’re making across borders.

How? For the past two decades, Jon Lowenstein has captured the experiences of undocumented Latin Americans living in the United States to show the real stories of the men and women who make up the largest transnational migration in world history. Lowenstein specializes in long-term, in-depth documentary explorations that confront power, poverty and violence. 

Quote of the talk: “With these photographs, I place you squarely in the middle of these moments and ask you to think about [the people in them] as if you knew them. This body of work is a historical document — a time capsule — that can teach us not only about migration, but about society and ourselves.”


Alicia Eggert’s art asks us to recognize where we are now as individuals and as a society, and to identify where we want to be in the future. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Alicia Eggert, interdisciplinary artist

Big idea: A brighter, more equitable future depends upon our ability to imagine it.  

How? Alicia Eggert creates art that explores how light travels across space and time, revealing the relationship between reality and possibility. Her work has been installed on rooftops in Philadelphia, bridges in Amsterdam and uninhabited islands in Maine. Like navigational signs, Eggert’s artwork asks us to recognize where we are now as individuals and as a society, to identify where we want to be in the future — and to imagine the routes we can take to get there.

Quote of the talk: “Signs often help to orient us in the world by telling us where we are now and what’s happening in the present moment. But they can also help us zoom out, shift our perspective and get a sense of the bigger picture.”

TEDWeaving Community: Notes from Session 1 of TEDSummit 2019

Hosts Bruno Giussani and Helen Walters open Session 1: Weaving Community on July 21, 2019, Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The stage is set for TEDSummit 2019: A Community Beyond Borders! During the opening session, speakers and performers explored themes of competition, political engagement and longing — and celebrated the TED communities (representing 84 countries) gathered in Edinburgh, Scotland to forge TED’s next chapter.

The event: TEDSummit 2019, Session 1: Weaving Community, hosted by Bruno Giussani and Helen Walters

When and where: Sunday, July 21, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Pico Iyer, Jochen Wegner, Hajer Sharief, Mariana Lin, Carole Cadwalladr, Susan Cain with Min Kym

Opening: A warm Scottish welcome from raconteur Mackenzie Dalrymple

Music: Findlay Napier and Gillian Frame performing selections from The Ledger, a series of Scottish folk songs

The talks in brief:

“Seeming happiness can stand in the way of true joy even more than misery does,” says writer Pico Iyer. (Photo: Ryan Lash / TED)

Pico Iyer, novelist and nonfiction author

Big idea: The opposite of winning isn’t losing; it’s failing to see the larger picture.

Why? As a child in England, Iyer believed the point of competition was to win, to vanquish one’s opponent. Now, some 50 years later and a resident of Japan, he’s realized that competition can be “more like an act of love.” A few times a week, he plays ping-pong at his local health club. Games are played as doubles, and partners are changed every five minutes. As a result, nobody ends up winning — or losing — for long. Iyer has found liberation and wisdom in this approach. Just as in a choir, he says, “Your only job is to play your small part perfectly, to hit your notes with feeling and by so doing help to create a beautiful harmony that’s much greater than the sum of its parts.”

Quote of the talk: “Seeming happiness can stand in the way of true joy even more than misery does.”


Jochen Wegner, journalist and editor of Zeit Online

Big idea: The spectrum of belief is as multifaceted as humanity itself. As social media segments us according to our interests, and as algorithms deliver us increasingly homogenous content that reinforces our beliefs, we become resistant to any ideas — or even facts — that contradict our worldview. The more we sequester ourselves, the more divided we become. How can we learn to bridge our differences?

How? Inspired by research showing that one-on-one conversations are a powerful tool for helping people learn to trust each other, Zeit Online built Germany Talks, a “Tinder for politics” that facilitates “political arguments” and face-to-face meetings between users in an attempt to bridge their points-of-view on issues ranging from immigration to same-sex marriage. With Germany Talks (and now My Country Talks and Europe Talks) Zeit has facilitated conversations between thousands of Europeans from 33 countries.

Quote of the talk: “What matters here is not the numbers, obviously. What matters here is whenever two people meet to talk in person for hours, without anyone else listening, they change — and so do our societies. They change, little by little, discussion by discussion.”


“The systems we have nowadays for political decision-making are not from the people for the people — they have been established by the few, for the few,” says activist Hajer Sharief. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Hajer Sharief, activist and cofounder of the Together We Build It Foundation

Big Idea: People of all genders, ages, races, beliefs and socioeconomic statuses should participate in politics.

Why? Hajer Sharief’s native Libya is recovering from 40 years of authoritarian rule and civil war. She sheds light on the way politics are involved in every aspect of life: “By not participating in it, you are literally allowing other people to decide what you can eat, wear, if you can have access to healthcare, free education, how much tax you pay, when can you retire, what is your pension,” she says. “Other people are also deciding whether your race is enough to consider you a criminal, or if your religion or nationality are enough to put you on a terrorist list.” When Sharief was growing up, her family held weekly meetings to discuss family issues, abiding by certain rules to ensure everyone was respectful and felt free to voice their thoughts. She recounts a meeting that went badly for her 10-year-old self, resulting in her boycotting them altogether for many years — until an issue came about which forced her to participate again. Rejoining the meetings was a political assertion, and it helped her realize an important lesson: you are never too young to use your voice — but you need to be present for it to work.

Quote of talk: “Politics is not only activism — it’s awareness, it’s keeping ourselves informed, it’s caring for facts. When it’s possible, it is casting a vote. Politics is the tool through which we structure ourselves as groups and societies.”


Mariana Lin, AI character designer and principal writer for Siri

Big idea: Let’s inject AI personalities with the essence of life: creativity, weirdness, curiosity, fun.

Why? Tech companies are going in two different directions when it comes to creating AI personas: they’re either building systems that are safe, flat, stripped of quirks and humor — or, worse, they’re building ones that are fully customizable, programmed to say just what you want to hear, just how you like to hear it. While this might sound nice at first, we’re losing part of what makes us human in the process: the friction and discomfort of relating with others, the hard work of building trusting relationships. Mariana Lin calls for tech companies to try harder to truly bring AI to life — in all its messy, complicated, uncomfortable glory. For starters, she says, companies can hire a diverse range of writers, creatives, artists and social thinkers to work on AI teams. If the people creating these personalities are as diverse as the people using it — from poets and philosophers to bankers and beekeepers — then the future of AI looks bright.

Quote of the talk: “If we do away with the discomfort of relating with others not exactly like us, with views not exactly like ours — we do away with what makes us human.”


In 2018, Carole Cadwalladr exposed Cambridge Analytica’s attempt to influence the UK Brexit vote and the 2016 US presidential election via personal data on Facebook. She’s still working to sound the alarm. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carole Cadwalladr, investigative journalist, interviewed by TED curator Bruno Giussani

Big idea: Companies that collect and hoard our information, like Facebook, have become unthinkably powerful global players — perhaps more powerful than governments. It’s time for the public to hold them accountable.

How? Tech companies with offices in different countries must obey the laws of those nations. It’s up to leaders to make sure those laws are enforced — and it’s up to citizens to pressure lawmakers to further tighten protections. Despite legal and personal threats from her adversaries, Carole Cadwalladr continues to explore the ways in which corporations and politicians manipulate data to consolidate their power.

Quote to remember: “In Britain, Brexit is this thing which is reported on as this British phenomenon, that’s all about what’s happening in Westminster. The fact that actually we are part of something which is happening globally — this rise of populism and authoritarianism — that’s just completely overlooked. These transatlantic links between what is going on in Trump’s America are very, very closely linked to what is going on in Britain.”


Susan Cain meditates on how the feeling of longing can guide us to a deeper understanding of ourselves, accompanied by Min Kym on violin, at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Susan Cain, quiet revolutionary, with violinist Min Kym

Big idea: Life is steeped in sublime magic that you can tap into, opening a whole world filled with passion and delight.

How? By forgoing constant positivity for a state of mind more exquisite and fleeting — a place where light (joy) and darkness (sorrow) meet, known to us all as longing. Susan Cain weaves her journey in search of the sublime with the splendid sounds of Min Kym on violin, sharing how the feeling of yearning connects us to each other and helps us to better understand what moves us deep down.

Quote of the talk: “Follow your longing where it’s telling you to go, and may it carry you straight to the beating heart of the perfect and beautiful world.”

Krebs on SecurityThe Unsexy Threat to Election Security

Much has been written about the need to further secure our elections, from ensuring the integrity of voting machines to combating fake news. But according to a report quietly issued by a California grand jury this week, more attention needs to be paid to securing social media and email accounts used by election officials at the state and local level.

California has a civil grand jury system designed to serve as an independent oversight of local government functions, and each county impanels jurors to perform this service annually. On Wednesday, a grand jury from San Mateo County in northern California released a report which envisions the havoc that might be wrought on the election process if malicious hackers were able to hijack social media and/or email accounts and disseminate false voting instructions or phony election results.

“Imagine that a hacker hijacks one of the County’s official social media accounts and uses it to report false results on election night and that local news outlets then redistribute those fraudulent election results to the public,” the report reads.

“Such a scenario could cause great confusion and erode public confidence in our elections, even if the vote itself is actually secure,” the report continues. “Alternatively, imagine that a hacker hijacks the County’s elections website before an election and circulates false voting instructions designed to frustrate the efforts of some voters to participate in the election. In that case, the interference could affect the election outcome, or at least call the results into question.”

In San Mateo County, the office of the Assessor-County Clerk-Recorder and Elections (ACRE) is responsible for carrying out elections and announcing local results. The ACRE sends election information to some 43,000 registered voters who’ve subscribed to receive sample ballots and voter information, and its Web site publishes voter eligibility information along with instructions on how and where to cast ballots.

The report notes that concerns about the security of these channels are hardly theoretical: In 2010, intruders hijacked ACRE’s election results Web page, and in 2016, cyber thieves successfully breached several county employee email accounts in a spear-phishing attack.

In the wake of the 2016 attack, San Mateo County instituted two-factor authentication for its email accounts — requiring each user to log in with a password and a one-time code sent via text message to their mobile device. However, the county uses its own Twitter, Facebook, Instagram and YouTube accounts to share election information, and these accounts are not currently secured by two-factor authentication, the report found.

“The Grand Jury finds that the security protections against hijacking of ACRE’s website, email, and social media accounts are not adequate to protect against the current cyber threats. These vulnerabilities expose the public to potential disinformation by hackers who could hijack an ACRE online communication platform to mislead voters before an election or sow confusion afterward. Public confidence is at stake, even if the vote itself is secure.”

The jury recommended the county take full advantage of the most secure two-factor authentication now offered by all of these social media platforms: The use of a FIDO physical security key, a small hardware device which allows the user to complete the login process simply by inserting the USB device and pressing a button. The key works without the need for any special software drivers [full disclosure: Yubico, a major manufacturer of security keys, is currently an advertiser on this site.]

Additionally, the report urges election officials to migrate away from one-time codes sent via text message, as these can be intercepted via man-in-the-middle (MitM) and SIM-swapping attacks.  MitM attacks use counterfeit login pages to steal credentials and one-time codes.

An unauthorized SIM swap is an increasingly rampant form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Samy Tarazi is a sergeant with the sheriff’s office in nearby Santa Clara County and a supervisor with the REACT Task Force, a team of law enforcement officers that has been tracking down individuals perpetrating SIM swapping attacks. Tarazi said he fully expects SIM swapping to emerge as a real threat to state and local election workers, as well as to staff and volunteers working for candidates.

“I wouldn’t be surprised if some major candidate or their staff has an email or social media account with tons of important stuff on there [whose password] can be reset with just a text message,” Tarazi told KrebsOnSecurity. “I hope that doesn’t happen, but politicians are regular people who use the same tools we use.”

A copy of the San Mateo County grand jury report is available here (PDF).

CryptogramSoftware Developers and Security

According to a survey: "68% of the security professionals surveyed believe it's a programmer's job to write secure code, but they also think less than half of developers can spot security holes." And that's a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, "It's a mess, no standardization, most of my work has never had a security scan."

Another problem is that many companies don't seem to take security seriously enough. Nearly 44% of those surveyed reported that they're not judged on their security vulnerabilities.

Worse Than FailureCodeSOD: Null Thought

These days, it almost seems like most of the developers writing code for the Java Virtual Machine aren’t doing it in Java. It’s honestly an interesting thing for programming language development, as more and more folks put together languages with fundamentally different principles from Java that still work on the JVM.

Like Kotlin. Kotlin blends functional programming styles with object-oriented principles and syntactic features built around writing more compact, concise code than equivalent Java. And it’s not even limited to Java- it can compile down into JavaScript or even target LLVM.

And since you can write bad code in any language, you can write bad code in Kotlin. Keith inherited a Kotlin-developed Android app.

In Kotlin, if you wanted to execute some code specifically if a previous step failed, you might use a try/catch exception handler. It’s built into Kotlin. But maybe you want to do some sort of error handling in your pipeline of function calls. So maybe you want something which looks more like:

response.code
    .wasSuccess()
    .takeIf { !it }
    ?.run { doErrorThing() }

wasSuccess in this example returns a boolean. The takeIf checks to see if the return value was false- if it wasn’t, takeIf returns null, and the run call doesn’t execute (the question mark is Kotlin’s safe-call operator).

Kotlin has a weird relationship with nulls, and unless you’re explicit about where you expect nulls, it is going to complain at you. Which is why Keith got annoyed at this block:

/**
     * Handles error and returns NULL if an error was found, true if everything was good
     */
    inline fun Int.wasSuccessOrNull() : Boolean? {
        return if (handleConnectionErrors(this))
            null
        else
            true
    }
    /**
     * Return true if any connection issues were found, false if everything okay
     */
    fun handleConnectionErrors(errorCode: Int) : Boolean {
        return when (errorCode)
        {
            Error.EXPIRED_TOKEN -> { requiresSignIn.value = true;  true}
            Error.NO_CONNECTION -> { connectionIssue.value = true; true}
            Error.INACTIVE_ACCOUNT -> { inactiveAccountIssue.value = true; true}
            Error.BAD_GATEWAY -> { badGatewayIssue.value = true;  true}
            Error.SERVICE_UNAVAILABLE -> { badGatewayIssue.value = true;  true}
            else -> {
                if (badGatewayIssue.value == true) {
                    badGatewayIssue.value = false
                }
                noErrors.value = true
                false
            }
        }
    }

wasSuccessOrNull returns true if the status code is successful; otherwise it returns… null? Why a null? Just so that a nullable ?.run… call can be used? It’s a weird choice. If we’re just going to return non-true/false values from our boolean methods, there are other options we could use.
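
For contrast, here is one alternative shape, sketched under the assumption that the status codes behave like HTTP statuses (the app’s real Error constants would slot in instead); every name below is invented for illustration and does not come from Keith’s codebase:

sealed class ConnectionResult {
    object Success : ConnectionResult()
    data class Failure(val errorCode: Int) : ConnectionResult()
}

// Hypothetical mapping; the real app would branch on its Error constants here.
fun Int.toConnectionResult(): ConnectionResult =
    if (this in 200..299) ConnectionResult.Success
    else ConnectionResult.Failure(this)

fun doErrorThing(errorCode: Int) {
    // report the failure, flip whatever UI state needs flipping, etc.
}

fun handle(code: Int) {
    when (val result = code.toConnectionResult()) {
        is ConnectionResult.Success -> Unit // nothing to do on success
        is ConnectionResult.Failure -> doErrorThing(result.errorCode)
    }
}

A when over a sealed class makes the compiler insist that every outcome is handled, which is roughly the guarantee the Boolean?/null trick was reaching for.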

But honestly, handleConnectionErrors, which it calls, is even more worrisome. If an error did occur, this causes a side effect. Each error condition sets a variable outside of the scope of this function. Presumably these are class members, but who knows? It could just as easily be globals.

If the error code isn’t an error, we explicitly clear the badGatewayIssue, but we don’t clear any of the other errors. Presumably that does happen, somewhere, but again, who knows? This is the kind of code where trying to guess at what works and what doesn’t is a bad idea.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Krebs on SecurityNeo-Nazi SWATters Target Dozens of Journalists

Nearly three dozen journalists at a broad range of major publications have been targeted by a far-right group that maintains a Deep Web database listing the personal information of people who threaten their views. This group specializes in encouraging others to harass those targeted by their ire, and has claimed responsibility for dozens of bomb threats and “swatting” incidents, where police are tricked into visiting potentially deadly force on the target’s address.

At issue is a site called the “Doxbin,” which hosts the names, addresses, phone numbers and often known IP addresses, Social Security numbers, dates of birth and other sensitive information on hundreds of people — and in some cases the personal information of the target’s friends and family.

A significant number of the 400+ entries on the Doxbin are for journalists (32 at last count, including Yours Truly), although the curators of Doxbin have targeted everyone from federal judges to executives at major corporations. In January 2019, the group behind Doxbin claimed responsibility for doxing and swatting a top Facebook executive.

At least two of the journalists listed on the Doxbin have been swatted in the past six months, including Pulitzer prize winning columnist Leonard G. Pitts Jr.

In some cases, as in the entries for reporters from CNN, Politico, ProPublica and Vox, no reason is mentioned for their inclusion. But in many others, the explanation seems connected to stories the journalist has published dealing with race or the anti-fascist (antifa) movement.

“Anti-white race/politics writer,” reads the note next to Pitts’ entry in the Doxbin.

Many of those listed on the site soon find themselves on the receiving end of extended threats and harassment. Carey Holzman, a computer technician who runs a YouTube channel on repairing and modding computers, was swatted in January, at about the same time his personal information showed up on the Doxbin.

More recently, his tormentors started calling his mobile phone at all hours of the night, threatening to hire a hit man to kill him. They even promised to have drugs ordered off the Dark Web and sent to his home, as part of a plan to get him arrested for drug possession.

“They said they were going to send me three grams of cocaine,” Holzman told KrebsOnSecurity.

Sure enough, earlier this month a small vial of white powder arrived via the U.S. Postal Service. Holzman said he didn’t open the vial, but instead handed it over to the local police for testing.

On the bright side, Holzman said, he is now on a first-name basis with some of the local police, which isn’t a bad idea for anyone who is being threatened with swatting attacks.

“When I told one officer who came out to my house that they threatened to send me drugs, he said ‘Okay, well just let me know when the cocaine arrives,'” Holzman recalled. “It was pretty funny because the other responding officer approached us and only caught the last thing his partner said, and suddenly looked at the other officer with deadly seriousness.”

The Doxbin is tied to an open IRC chat channel in which the core members discuss alt-right and racist tropes, doxing and swatting people, and posting videos or audio news recordings of their attacks.

The individual who appears to maintain the Doxbin is a fixture of this IRC channel, and he’s stated that he also was responsible for maintaining SiegeCulture, a white supremacist Web site that glorifies the writings of neo-Nazi James Mason.

Mason’s various written works call on followers to start a violent race war in the United States. Those works have become the de facto bible for the Atomwaffen Division, an extremist group whose members are suspected of having committed multiple murders in the U.S. since 2017.

Courtney Radsch, advocacy director at the nonprofit Committee to Protect Journalists, said lists that single out journalists for harassment unfortunately are not uncommon.

“We saw in the Ukraine, for example, there were lists of journalists compiled that led to harassment and threats against reporters there,” Radsch said. “We saw it in Malta where there were reports that the prime minister was part of a secret Facebook group used to coordinate harassment campaigns against a journalist who was later murdered. And we’ve seen the American government — the Customs and Border Protection — compiling lists of reporters and activists who’ve been singled out for questioning.”

Radsch said when CPJ became aware that the personal information of several journalists was listed on a doxing site, they reached out and provided information on relevant safety resources.

“It does seem that some of these campaigns by extremist groups are being coordinated in secret chat groups or dark web forums, where they can talk about the messaging before they bring it out into the public sphere,” she said.

In some ways, the Doxbin represents a far more extreme version of Exposed[.]su, a site erected briefly in 2013 by a gang of online hoodlums that doxed and swatted celebrities and public figures. The core members of that group were later arrested and charged with various crimes — including numerous swatting attacks.

One of the men in that group — convicted serial swatter and stalker Mir Islam — was arrested last year in the Philippines and charged with murder after he and an associate allegedly dumped the body of a friend in a local river.

Swatting attacks can quickly turn deadly. In March 2019, 26-year-old serial swatter Tyler Barriss was sentenced to 20 years in prison for making a phony emergency call to police in late 2017 that led to the shooting death of an innocent Kansas resident.

My hope is that law enforcement officials can shut down this Doxbin gang before someone else gets killed.

Sociological ImagesHappy Birthday, SocImages!

This month, Sociological Images turns twelve! It has been a busy year with some big changes backstage, so today I’m rounding up a dozen of our top posts as we look forward to a new academic year.

The biggest news is that the blog has a new home. It still lives on my computer (and The Society Pages’ network), but that home has moved east as I start as an assistant professor at UMass Boston Sociology. It’s a great department with wonderful colleagues who share a commitment to publicly-oriented scholarship, and I am excited to see what we can build in Boston! 

This year, readers loved the recent discovery that many of the players on the US Women’s National Team were sociology majors and a look at the sociology of streetwear. We covered high-class hoaxes in the wake of the Fyre Festival documentaries, looked at who gets to win board games on TV, and followed the spooky side of science for the 200th anniversary of Frankenstein. Gender reveal parties were literally booming, unfortunately.

We also had a bunch of stellar guest posts this year, tackling all kinds of big questions like why people freaked out about fast food at the White House, why Green Book was a weird Oscar win, why people sometimes collect racist memorabilia, and why we often avoid reading the news. My personal favorites included a research roundup on women’s expertise and a look at the boom in bisexual identification in the United States. Please keep sending in guest posts! I want to feature your work. Guidelines are here, and you can always reach out via email or Twitter DM.

Finally, big thanks to all of you who read the blog actively, pass along posts to friends and family, and bring it into your classes. We keep this blog running on a zero-dollar budget, Creative Commons licensing, and a heavy dose of the sociological imagination that comes with your support. Happy reading!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: A Long Conversion

Let’s talk a little bit about .NET’s TryParse method. Many types, especially the built-in numerics, support it alongside Parse. The key difference between Parse and TryParse is that TryParse bakes the exception-handling logic in: instead of using exceptions to tell you whether it can parse or not, it returns a boolean value.

If, for example, you wanted to take an input, and either store it as an integer in a database, or store a null, you might do something like this:

int result;
if (int.TryParse(data, out result)) {
  rowData[column] = result;
} else {
  rowData[column] = DBNull.Value;
}

There are certainly better, cleaner ways to handle this. Russell F. has a co-worker who found a much uglier way.

private void BuildIntColumns(string data, DataRow rowData, int startIndex, int length, string columnName, FileInfo file, string tableName)
{
    if (data.Trim().Length > startIndex)
    {
        try
        {
            int resultOut;

            if (data.Substring(startIndex, length).Trim() == "" || string.IsNullOrEmpty(data.Substring(startIndex, length).Trim()))
            {
                rowData[columnName] = DBNull.Value;
            }
            else if (int.TryParse(data.Substring(startIndex, length).Trim(), out resultOut) == false)
            {
                rowData[columnName] = DBNull.Value;
            }
            else
            {
                rowData[columnName] = Convert.ToInt32(data.Substring(startIndex, length).Trim());
            }
        }
        catch (Exception e)
        {
            rowData[columnName] = DBNull.Value;
            SaveErrorData(file, data, e.Message, tableName);
        }
    }
}

private void BuildLongColumns(string data, DataRow rowData, int startIndex, int length, string columnName, FileInfo file, string tableName)
{
    if (data.Trim().Length > startIndex)
    {
        try
        {
            int resultOut;

            if (data.Substring(startIndex, length).Trim() == "" || string.IsNullOrEmpty(data.Substring(startIndex, length).Trim()))
            {
                rowData[columnName] = DBNull.Value;
            }
            else if (int.TryParse(data.Substring(startIndex, length).Trim(), out resultOut) == false)
            {
                rowData[columnName] = DBNull.Value;
            }
            else
            {
                rowData[columnName] = Convert.ToInt64(data.Substring(startIndex, length).Trim());
            }
        }
        catch (Exception e)
        {
            rowData[columnName] = DBNull.Value;
            SaveErrorData(file, data, e.Message, tableName);
        }
    }
}

Here’s a case where the developer knows that methods like int.TryParse and string.IsNullOrEmpty exist, but they don’t understand them. More worrying, every operation has to be on a Substring for some reason, which implies that they’re processing strings which contain multiple fields of data. Presumably that means there’s a mainframe with fixed-width records someplace in the backend, but certainly splitting while converting fields is a bad practice.

For bonus points, compare the BuildIntColumns with BuildLongColumns. There’s an extremely subtle bug in the BuildLongColumns- specifically, they still do an int.TryParse, but this isn’t an int, it’s a long. If you actually tried to feed it a long integer, it would consider it invalid.
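
For comparison, here is a minimal sketch of what the long variant could look like once the guard and the conversion agree on the type; the class and method names are invented, and the original’s error-logging path is omitted:

using System;
using System.Data;

static class ColumnBuilders
{
    // Sketch only, not the original author's fix: a single long.TryParse means
    // values beyond int.MaxValue are accepted, and blanks or junk become DBNull.
    public static void SetLongColumn(string data, DataRow rowData,
                                     int startIndex, int length, string columnName)
    {
        if (data.Length < startIndex + length)   // slice would run off the end
        {
            rowData[columnName] = DBNull.Value;
            return;
        }

        string slice = data.Substring(startIndex, length).Trim();
        rowData[columnName] = long.TryParse(slice, out long value)
            ? (object)value
            : DBNull.Value;
    }
}

Hoisting the parse into one place also removes the repeated Substring(...).Trim() calls that the original makes on every branch.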

Russell adds: “I found this in an old file – no clue who wrote it or how it came to exist (I guess I could go through the commit logs, but that sounds like a lot of work).”

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

TEDA new mission to mobilize 2 million women in US politics … and more TED news

TED2019 may be past, but the TED community is busy as ever. Below, a few highlights.

Amplifying 2 million women across the U.S. Activist Ai-jen Poo, Black Lives Matter co-founder Alicia Garza and Planned Parenthood past president Cecile Richards have joined forces to launch Supermajority, which aims to train 2 million women in the United States to become activists and political leaders. To scale, the political hub plans to partner with local nonprofits across the country; as a first step, the co-founders will embark on a nationwide listening tour this summer. (Watch Poo’s, Garza’s and Richards’ TED Talks.)

Sneaker reseller set to break billion-dollar record. Sneakerheads, rejoice! StockX, the sneaker-reselling digital marketplace led by data expert Josh Luber, will soon become the first company of its kind with a billion-dollar valuation, thanks to a new round of venture funding.  StockX — a platform where collectible and limited-edition sneakers are bought and exchanged through real-time bidding — is an evolution of Campless, Luber’s site that collected data on rare sneakers. In an interview with The New York Times, Luber said that StockX pulls in around $2 million in gross sales every day. (Watch Luber’s TED Talk.)

A move to protect iconic African-American photo archives. Investment expert Mellody Hobson and her husband, filmmaker George Lucas, filed a motion to acquire the rich photo archives of iconic African-American lifestyle magazines Ebony and Jet. The archives are owned by the recently bankrupt Johnson Publishing Company; Hobson and Lucas intend to gain control over them through their company, Capital Holdings V. The collections include over 5 million photos of notable events and people in African American history, particularly during the Civil Rights Movement. In a statement, Capital Holdings V said: “The Johnson Publishing archives are an essential part of American history and have been critical in telling the extraordinary stories of African-American culture for decades. We want to be sure the archives are protected for generations to come.” (Watch Hobson’s TED Talk.)

10 TED speakers chosen for the TIME100. TIME’s annual round-up of the 100 most influential people in the world includes climate activist Greta Thunberg, primatologist and environmentalist Jane Goodall, astrophysicist Sheperd Doeleman and educational entrepreneur Fred Swaniker — also Nancy Pelosi, the Pope, Leana Wen, Michelle Obama, Gayle King (who interviewed Serena Williams and now co-hosts CBS This Morning, home to a TED segment), and Jeanne Gang. Thunberg was honored for her work igniting climate change activism among teenagers across the world; Goodall for her extraordinary life work of research into the natural world and her steadfast environmentalism; Doeleman for his contribution to the Harvard team of astronomers who took the first photo of a black hole; and Swaniker for the work he’s done to educate and cultivate the next generation of African leaders. Bonus: TIME100 luminaries are introduced in short, sharp essays, and this year many of them came from TEDsters including JR, Shonda Rhimes, Bill Gates, Jennifer Doudna, Dolores Huerta, Hans Ulrich Obrist, Tarana Burke, Kai-Fu Lee, Ian Bremmer, Stacey Abrams, Madeleine Albright, Anna Deavere Smith and Margarethe Vestager. (Watch Thunberg’s, Goodall’s, Doeleman’s, Pelosi’s, Pope Francis’, Wen’s, Obama’s, King’s, Gang’s and Swaniker’s TED Talks.)

Meet Sports Illustrated’s first hijab-wearing model. Model and activist Halima Aden will be the first hijab-wearing model featured in Sports Illustrated’s annual swimsuit issue, debuting May 8. Aden will wear two custom burkinis, modestly designed swimsuits. “Being in Sports Illustrated is so much bigger than me,” Aden said in a statement, “It’s sending a message to my community and the world that women of all different backgrounds, looks, upbringings can stand together and be celebrated.” (Watch Aden’s TED Talk.)

Scotland’s post-surgical deaths drop by a third, and checklists are to thank. A study indicated a 37 percent decrease in post-surgical deaths in Scotland since 2008, which it attributed to the implementation of a safety checklist. The 19-item list created by the World Health Organization is supposed to encourage teamwork and communication during operations. The death rate fell to 0.46 per 100 procedures between 2000 and 2014, analysis of 6.8 million operations showed. Dr. Atul Gawande, who introduced the checklist and co-authored the study, published in the British Journal of Surgery, said to the BBC: “Scotland’s health system is to be congratulated for a multi-year effort that has produced some of the largest population-wide reductions in surgical deaths ever documented.” (Watch Gawande’s TED Talk.) — BG

And finally … After the actor Luke Perry died unexpectedly of a stroke in February, he was buried according to his wishes: on his Tennessee family farm, wearing a suit embedded with spores that will help his body decompose naturally and return to the earth. His Infinity Burial Suit was made by Coeio, led by designer, artist and TED Fellow Jae Rhim Lee. Back in 2011, Lee demo’ed the mushroom burial suit onstage at TEDGlobal; now she’s focused on testing and creating suits for more people. On April 13, Lee spoke at Perry’s memorial service, held at Warner Bros. Studios in Burbank; Perry’s daughter revealed his story in a thoughtful Instagram post this past weekend. (Watch Lee’s TED Talk.) — EM

CryptogramScience Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn't new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it "embarrassing." I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats -- which contributes to overall fear and overestimation of the risks. And that doesn't help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Worse Than FailureCodeSOD: Break my Validation

Linda inherited an inner-platform front-end framework. It was the kind of UI framework with an average file size of 1,000 lines of code, and an average test coverage of 0%.

Like most UI frameworks, it had a system for doing client side validation. Like most inner-platform UI frameworks, the validation system was fragile, confusing, and impossible to understand.

This code illustrates some of the problems:

/**
 * Modify a validator key, e.g change minValue or disable required
 *
 * @param fieldName
 * @param validatorKey - of the specific validator
 * @param key - the key to change
 * @param value - the value to set
 */
modifyValidatorValue: function (fieldName, validatorKey, key, value) {

	if (!helper.isNullOrUndefined(fieldName)) {
		// Iterate over fields
		for (var i in this.fields) {
			if (this.fields.hasOwnProperty(i)) {
				var field = this.fields[i];
				if (field.name === fieldName) {
					if (!helper.isNullOrUndefined(validatorKey)) {
						if (field.hasOwnProperty('validators')) {
							// Iterate over validators
							for (var j in field.validators) {
								if (field.validators.hasOwnProperty(j)) {
									var validator = field.validators[j];
									if (validator.key === validatorKey) {
										if (!helper.isNullOrUndefined(key) && !helper.isNullOrUndefined(value)) {
											if (validator.hasOwnProperty(key)) {
												validator[key] = value;
											}
										}
										break;
									}
								}
							}
						}
					}
					break;
				}
			}
		}
	}

}

What this code needs to do is find the field for a given name, check the list of validators for that field, and update a value on that validator.

Normally, in JavaScript, you’d do this by using an object/dictionary and accessing things directly by their keys. This, instead, iterates across all the fields on the object and all the validators on that field.
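
As a point of comparison, here is a minimal sketch of that keyed approach; fieldsByName and validatorsByKey are invented names standing in for whatever keyed storage the framework could have used, and helper.isNullOrUndefined is the same utility the original relies on:

// Sketch only: with fields and validators keyed by name, the update is a
// couple of property lookups instead of two nested loops.
modifyValidatorValue: function (fieldName, validatorKey, key, value) {
    var field = this.fieldsByName && this.fieldsByName[fieldName];
    var validator = field && field.validatorsByKey && field.validatorsByKey[validatorKey];

    if (validator && validator.hasOwnProperty(key) && !helper.isNullOrUndefined(value)) {
        validator[key] = value;
    }
}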

It’s smart code, though, as the developer knew that once they found the fields in question, they could exit the loops, so they added a few breaks to exit. I think those breaks are in the right place. To be sure, I’d need to count curly braces, and I’m worried that I don’t have enough fingers and toes to count them all in this case.

According to the git log, this code was added in exactly this form, and hasn’t been touched since.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cory DoctorowPodcast: Adversarial Interoperability is Judo for Network Effects

In my latest podcast (MP3), I read my essay SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects, published last week on EFF’s Deeplinks; it’s a further exploration of the idea of “adversarial interoperability” and the role it has played in fighting monopolies and preserving competition, and how we could use it to restore competition today.

In tech, “network effects” can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn’t just have to be better than Facebook, it has to be so much better than Facebook that it’s worth using, even though all the people you want to talk to are still on Facebook. That’s a tall order.

Adversarial interoperability is judo for network effects, using incumbents’ dominance against them. To see how that works, let’s look at a historical example of adversarial interoperability’s role in helping to unseat a monopolist’s dominance.

The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn’t open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn’t play nice with Macs.

MP3

Krebs on SecurityWhat You Should Know About the Equifax Data Breach Settlement

Big-three credit bureau Equifax has reportedly agreed to pay at least $650 million to settle lawsuits stemming from a 2017 breach that let intruders steal personal and financial data on roughly 148 million Americans. Here’s a brief primer that attempts to break down what this settlement means for you, and what it says about the value of your identity.

Q: What happened?

A: If the terms of the settlement are approved by a court, the Federal Trade Commission says Equifax will be required to spend up to $425 million helping consumers who can demonstrate they were financially harmed by the breach. The company also will provide up to 10 years of free credit monitoring to those who had their data exposed.

Q: What about the rest of the money in the settlement?

A: An as-yet undisclosed amount will go to pay lawyers’ fees for the plaintiffs.

Q: $650 million seems like a lot. Is that some kind of record?

A: If not, it’s pretty close. The New York Times reported earlier today that it was thought to be the largest settlement ever paid by a company over a data breach, but that statement doesn’t appear anywhere in their current story.

Q: Hang on…148 million affected consumers…out of that $425 million pot that comes to just $2.87 per victim, right?

A: That’s one way of looking at it. But as always, the devil is in the details. You won’t see a penny or any other benefit unless you do something about it, and how much you end up costing the company (within certain limits) is up to you.

The Times reports that the proposed settlement assumes that only around seven million people will sign up for their credit monitoring offers. “If more do, Equifax’s costs for providing it could rise meaningfully,” the story observes.

Q: Okay. What can I do?

A: You can visit www.equifaxbreachsettlement.com, although none of this will be official or on offer until a court approves the settlement.

Q: Uh, that doesn’t look like Equifax’s site…

A: Good eyes! It’s not. It’s run by a third party. But we should probably just be grateful for that; given Equifax’s total dumpster fire of a public response to the breach, the company has shown itself incapable of operating (let alone securing) a properly functioning Web site.

Q: What can I get out of this?

A: In a nutshell, affected consumers are eligible to apply for one or more remedies, including:

Free credit monitoring: At least three years of credit monitoring via all three major bureaus simultaneously, including Equifax, Experian and Trans Union. The settlement also envisions up to six more years of single bureau monitoring through Experian. Or, if you don’t want to take advantage of the credit monitoring offers, you can opt instead for a $125 cash payment. You can’t get both.

Reimbursement: …For the time you spent remedying identity theft or misuse of your personal information caused by the breach, or purchasing credit monitoring or credit reports. This is capped at 20 total hours at $25 per hour ($500). Total cash reimbursement payment will not exceed $20,000 per consumer.

Help with ongoing identity theft issues: Up to seven years of “free assisted identity restoration services.” Again, the existing breach settlement page is light on specifics there.

Q: Does this cover my kids/dependents, too?

A: The FTC says if you were a minor in May 2017 (when Equifax first learned of the breach), you are eligible for a total of 18 years of free credit monitoring.

Q: How do I take advantage of any of these?

A: You can’t yet. The settlement has to be approved first. The settlement Web site says to check back again later. In addition to checking the breach settlement site periodically, consumers can sign up with the FTC to receive email updates about this settlement.

Update: The eligibility site is now active, at this link.

The settlement site said consumers also can call 1-833-759-2982 for more information. Press #2 on your phone’s keypad if you want to skip the 1-minute preamble and get straight into the queue to speak with a real person.

KrebsOnSecurity dialed in to ask for more details on the “free assisted identity restoration services,” and the person who took my call said they’d need some basic information about me in order to proceed: my name, address and phone number. I gave him a number and a name, and after checking with someone he came back and said the restoration services would be offered by Equifax, but confirmed that affected consumers would still have to apply for them.

He added that the Equifaxbreachsettlement.com site will soon include a feature that lets visitors check to see if they’re eligible, but also confirmed that just checking eligibility won’t entitle one to any of the above benefits: Consumers will still need to file a claim through the site (when it’s available to do so).

ANALYSIS

We’ll see how this unfolds, but I’ll be amazed if anything related to taking advantage of this settlement is painless. I still can’t even get a free copy of my credit report from Equifax, as I’m entitled to under the law for free each year. I’ve even requested a copy by mail, according to their instructions. So far nothing.

But let’s say for the sake of argument that our questioner is basically right — that this settlement breaks down to about $3 worth of flesh extracted from Equifax for each affected person. The thing is, this figure probably is less than what Equifax makes selling your credit history to potential creditors each year.

In a 2017 story about the Equifax breach, I quoted financial fraud expert Avivah Litan saying the credit bureaus make about $1 every time they sell your credit file to a potential creditor (or identity thief posing as you). According to recent stats from the New York Federal Reserve, there were around 145 million hard credit pulls in the fourth quarter of 2018 (it’s not known how many of those were legitimate or desired).

But there is something you can do to stop Equifax and the other bureaus from profiting this way: Freeze your credit files with them.

A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you. And it’s now free for all Americans.

This post explains in detail what’s involved in freezing your files; how to place, thaw or remove a freeze; the limitations of a freeze and potential side effects; and alternatives to freezes.

What’s wrong with just using credit monitoring, you might ask? These services do not prevent thieves from using your identity to open new lines of credit, and from damaging your good name for years to come in the process. The most you can hope for is that credit monitoring services will alert you soon after an ID thief does steal your identity.

If past experience is any teacher, anyone with a freeze on their credit file will need to briefly thaw their file at Equifax before successfully signing up for the service when it’s offered. Since a law mandating free freezes across the land went into effect, all three bureaus have made it significantly easier to place and lift security freezes.

Probably too easy, in fact. Especially for people who had freezes in place before Equifax revamped its freeze portal. Those folks were issued a numeric PIN to lift, thaw or remove a freeze, but Equifax no longer lets those users do any of those things online with just the PIN.

These days, that PIN doesn’t play a role in any freeze or thaw process. To create an account at the MyEquifax portal, one need only supply name, address, Social Security number, date of birth, any phone number  (all data points exposed in the Equifax breach, and in any case widely available for sale in the cybercrime underground) and answer 4 multiple-guess questions whose answers are often available in public records or on social media.

And so this is yet another reason why you should freeze your credit: If you don’t sign up as you at MyEquifax, someone else might do it for you.

What else can you do in the meantime? Be wary of any phone calls or emails you didn’t sign up for that invoke this data breach settlement and ask you to provide personal and/or financial information.

And if you haven’t done so lately, go get a free copy of your credit report from annualcreditreport.com; by law all Americans are entitled to a free report from each of the major bureaus annually. You can opt for one report, or all three at once. Either way, make sure to read the report(s) closely and dispute anything that looks amiss.

It has long been my opinion that the big three bureaus are massively stifling innovation and offering consumers so little choice or say in the bargain that’s being made on the backs of their hard work, integrity and honesty. The real question is, if someone or something eventually serves to dis-intermediate the big three and throw the doors wide open to competition, what would the net effect for consumers?

Obviously, there is no way to know for sure, but a company that truly offered to pay consumers anywhere near what their data is actually worth would probably wipe these digital dinosaurs from the face of the earth.

That is, if the banks could get on board. After all, the banks and their various fingers are what drive the credit industry. And these giants don’t move very nimbly. They’re massively hard to turn, even on the simplest changes. And they’re not known for quickly warming to an entirely new model of doing business (i.e. huge cost investments).

My hometown Sen. Mark Warner (D-Va.) seems to suggest the $650 million settlement was about half what it should be.

“Americans don’t choose to have companies like Equifax collecting their data – by the nature of their business models, credit bureaus collect your personal information whether you want them to or not. In light of that, the penalties for failing to secure that data should be appropriately steep. While I’m happy to see that customers who have been harmed as a result of Equifax’s shoddy cybersecurity practices will see some compensation, we need structural reforms and increased oversight of credit reporting agencies in order to make sure that this never happens again.”

Sen. Warner sponsored a bill along with Sen. Elizabeth Warren (D-Ma.) called “The Data Breach Prevention and Compensation Act,” which calls for “robust compensation to consumers for stolen data; mandatory penalties on credit reporting agencies (CRAs) for data breaches; and giving the FTC more direct supervisory authority over data security at CRAs.”

“Had the bill been in effect prior to the 2017 Equifax breach, the company would have had to pay at least $1.5 billion for their failure to protect Americans’ personal information,” Warner’s statement concludes.

Update, 4:44 pm: Added statement from Sen. Warner.

CryptogramHackers Expose Russian FSB Cyberattack Projects

More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus -- a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).

  • Nautilus-S -- a project for deanonymizing Tor traffic with the help of rogue Tor servers.

  • Reward -- a project to covertly penetrate P2P networks, like the one used for torrents.

  • Mentor -- a project to monitor and search email communications on the servers of Russian companies.

  • Hope -- a project to investigate the topology of the Russian internet and how it connects to other countries' network.

  • Tax-3 -- a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state's IT networks.

BBC Russia, who received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.

Worse Than FailureAn Indispensible Guru

(Image: a simple budgeting spreadsheet)

Business Intelligence is the oxymoron that makes modern capitalism possible. In order for a company the size of a Fortune 500 to operate, key people have to know key numbers: how the finances are doing, what sales looks like, whether they're trending on target to meet their business goals or above or below that mystical number.

Once upon a time, Initech had a single person in charge of their Business Intelligence reports. When that person left for greener pastures, the company had a problem. They had no idea how he'd done what he did, just that he'd gotten numbers to people who'd needed them on time every month. There was no documentation about how he'd generated the numbers, nothing to pass on to his successor. They were entirely in the dark.

Recognizing the weakness of having a single point of failure, they set up a small team to create and manage the BI reporting duties and to provide continuity in the event that somebody else were to leave. This new team consisted of four people: Charles, a senior engineer; Liam, his junior sidekick; and two business folks who could provide context around what numbers were needed where and when.

Charles knew Excel. Intimately. Charles could make Excel do frankly astonishing things. Our submitter has worked in IT for three decades, and yet has never seen the like: spreadsheets so chock-full with array formulae, vlookups, hlookups, database functions, macros, and all manner of cascading sheets that they were virtually unreadable. Granted, Charles also had Microsoft Access. However, to Charles, the only thing Access was useful for was the initial downloading of all the raw data from the IBM AS/400 mainframe. Everything else was done in Excel.

Nobody doubted the accuracy of Charles' reports. However, actually running a report involved getting Excel primed and ready to go, hitting the "manual recalculate" button, then sitting back and waiting 45 minutes for the various formulae and macros to do all the things they did. On a good day, Charles could run five, maybe six reports. On a bad day? Three, at best.

By contrast, Liam was very much the "junior" role. He was younger, and did not have the experience that Charles did. That said, Liam was a smart cookie. He took one look at the spreadsheet monstrosity and knew it was a sledgehammer to crack a walnut. Unfortunately, he was the junior member of the engineering half of the team. His objections were taken as evidence of his inexperience, not his intelligence, and his suggestions were generally overlooked.

Eventually, Charles also left for bigger and brighter things, and Liam inherited all of his reports. Almost before the door had stopped swinging, Liam solicited our submitter's assistance in recreating just one of Charles' reports in Access. This took a combined four days; it mostly consisted of the submitter asking "So, Sheet 1, cell A1 ... where does that number come from?", and Liam pointing out the six other sheets they needed to pivot, fold, spindle, and mutilate in order to calculate the number. "Right, so, Sheet 1, cell A2 ... where does that one come from?" ...

Finally, it was done, and the replacement was ready to test. They agreed to run the existing report alongside the new one, so they could determine that the new reports were producing the same output as the old ones. Liam pressed "manual recalculate" while our submitter did the honors of running the new Access report. Thirty seconds later, the Access report gave up and spat out numbers.

"Damn," our submitter muttered. "Something's wrong, it must have died or aborted or something."

"I dunno," replied Liam, "those numbers do look kinda right."

Forty minutes later, when Excel finally finished running its version, lo and behold the outputs were identical. The new report was simply three orders of magnitude faster than the old one.

Enthused by this success, they not only converted all the other reports to run in Access, but also expanded them to run Region- and Area- level variants, essentially running the report about 54 times in the same time it took the original report to run once. They also set up an automatic distribution process so that the reports were emailed out to the appropriate department heads and sales managers. Management was happy; business was happy; developers were happy.

"Why didn't we do this sooner?" was the constant refrain from all involved.

Liam was able to give our submitter the real skinny: "Charles used the long run times to prove how complex the reports were, and therefore, how irreplaceable he was. 'Job security,' he used to call it."

To this day, Charles' LinkedIn profile shows that he was basically running Initech. Liam's has a little more humility about the whole matter. Which just goes to show you shouldn't undersell your achievements in your resume. On paper, Charles still looks like a genius who single-handedly solved all the BI issues in the whole company.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

TEDGetting ready for TEDSummit 2019: Photo gallery

TEDSummit banners are hung at the entrance of the Edinburgh Convention Centre, our home for the week. (Photo: Bret Hartman / TED)

TEDSummit 2019 officially kicks off today! Members of the TED community from 84 countries — TEDx’ers, TED Translators, TED Fellows, TED-Ed Educators, past speakers and more — have gathered in Edinburgh, Scotland to dream up what’s next for TED. Over the next week, the community will share adventures around the city, more than 100 Discovery Sessions and, of course, seven sessions of TED Talks.

Below, check out some photo highlights from the lead-up to TEDSummit and pre-conference activities. (And view our full photostream here.)

It takes a small (and mighty) army to get the theater ready for TED Talks.

(Photo: Bret Hartman / TED)

(Photo: Ryan Lash / TED)

(Photo: Bret Hartman / TED)

TED Translators get the week started with a trip to Edinburgh Castle, complete with high tea in the Queen Anne Tea Room, and a welcome reception.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

A bit of Scottish rain couldn’t stop the TED Fellows from enjoying a hike up Arthur’s Seat. Weather wasn’t a problem at a welcome dinner.

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

TEDx’ers kick off the week with workshops, panel discussions and a welcome reception.

(Photo: Dian Lofton / TED)

(Photo: Dian Lofton / TED)

(Photo: Ryan Lash / TED)

It’s all sun and blue skies for the speaker community’s trip to Edinburgh Castle and reception at the Playfair Library.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

Cheers to an amazing week ahead!

(Photo: Ryan Lash / TED)

TEDA first glimpse at the TEDSummit 2019 speaker lineup

At TEDSummit 2019, more than 1,000 members of the TED community will gather for five days of performances, workshops, brainstorming, outdoor activities, future-focused discussions and, of course, an eclectic program of TED Talks — curated by TED Global curator Bruno Giussani, pictured above. (Photo: Marla Aufmuth / TED)

With TEDSummit 2019 just two months away, it’s time to unveil the first group of speakers that will take to the stage in Edinburgh, Scotland, from July 21-25.

Three years ago, more than 1,000 members of the TED global community convened in Banff, Canada, for the first-ever TEDSummit. We talked about the fracturing state of the world, the impact of technology and the accelerating urgency of climate change. And we drew wisdom and inspiration from the speakers — and from each other.

These themes are equally pressing today, and we’ll bring them to the stage in novel, more developed ways in Edinburgh. We’ll also address a wide range of additional topics that demand attention — looking not only for analysis but also antidotes and solutions. To catalyze this process, half of the TEDSummit conference program will take place outside the theatre, as experts host an array of Discovery Sessions in the form of hands-on workshops, activities, debates and conversations.

Check out a glimpse of the lineup of speakers who will share their future-focused ideas below. Some are past TED speakers returning to give new talks; others will step onto the red circle for the first time. All will help us understand the world we currently live in.

Here we go! (More will be added in the coming weeks):

Anna Piperal, digital country expert

Bob Langert, corporate changemaker

Carl Honoré, author

Carole Cadwalladr, investigative journalist

Diego Prilusky, immersive media technologist

Eli Pariser, organizer and author

Fay Bound Alberti, historian

George Monbiot, thinker and author

Hajer Sharief, youth inclusion activist

Howard Taylor, child safety advocate

Jochen Wegner, editor and dialogue creator

Kelly Wanser, geoengineering expert

Ma Yansong, architect

Marco Tempest, technology magician

Margaret Heffernan, business thinker

María Neira, global public health official

Mariana Lin, AI personalities writer

Mariana Mazzucato, economist

Marwa Al-Sabouni, architect

Nick Hanauer, capitalism redesigner

Nicola Jones, science writer

Nicola Sturgeon, First Minister of Scotland

Omid Djalili, comedian

Patrick Chappatte, editorial cartoonist

Pico Iyer, global author

Poet Ali, philosopher and poet

Rachel Kleinfeld, violence scholar

Raghuram Rajan, former central banker

Rose Mutiso, energy for Africa activist

Sandeep Jauhar, cardiologist

Sara-Jane Dunn, computational biologist

Sheperd Doeleman, black hole scientist

Sonia Livingstone, social psychologist

Susan Cain, quiet revolutionary

Tim Flannery, carbon-negative tech scholar

Tshering Tobgay, former Prime Minister of Bhutan

 

With them, a number of artists will also join us at TEDSummit, including:

Djazia Satour, singer

ELEW, pianist and DJ

KT Tunstall, singer and songwriter

Min Kym, virtuoso violinist

Radio Science Orchestra, space-music orchestra

Yilian Cañizares, singer and songwriter

 

Registration for TEDSummit is open for active members of our various communities: TED conference members, Fellows, past TED speakers, TEDx organizers, Educators, Partners, Translators and more. If you’re part of one of these communities and would like to attend, please visit the TEDSummit website.

TED7 things you can do in Edinburgh and nowhere else

Edinburgh, Scotland will host TEDSummit this summer, from July 21-25. The city was selected because of its special blend of history, culture and beauty, and for its significance to the TED community (TEDGlobal 2011, 2012 and 2013 were all held there). We asked longtime TEDster Ellen Maloney to share some of her favorite activities that showcase Edinburgh’s unique flavor.

 

Edinburgh has no shortage of sights, from the Castle that dominates the skyline to Arthur’s Seat, an extinct volcano with hiking trails offering panoramic views of the city. Having lived here for most of my adult life, I am still discovering captivating and quirky places to explore. You probably won’t find the sites listed below on the typical “top things to do in Edinburgh” rundowns, but I recommend them to anyone coming for TEDSummit 2019 who loves the idea of experiencing this lovely city through a different lens.

St. Cecilia’s Hall and Music Museum

Originally built in 1762 by the University of Edinburgh’s Music Society, this was Scotland’s first venue intentionally built to be a concert hall. Its Music Museum has an impressive collection of musical instruments from around the globe, and it’s claimed to be the only place in the world where you can listen to 18th-century instruments played in an 18th-century setting — some of its ancient harpsichords are indeed playable. Learn how keyboards were once status symbols, and how technology has changed the devices that humans use to make sounds. The museum is open to the public, and the hall regularly hosts concerts and other events.

Innocent Railway Tunnel

This 19th-century former railway tunnel runs beneath the city for 1,696 feet (about 520 meters). One of the first railway tunnels in the United Kingdom and the first public railway tunnel in Scotland, it was in use from 1831 until 1968. Today it’s open to walkers and cyclists and connects to a lovely outdoor cycleway. The origin of its name is a mystery, but one theory is that it alludes to the fact that no fatal accidents occurred during its construction. Visitors, however, will find that walking through the tunnel doesn’t feel quite so benign — it’s cold and the wind whistles through.

The Library of Mistakes

This free library is dedicated to one subject and one subject only: the human behavior and historical patterns that led to world-shaking financial mistakes. It contains research materials, photos and relics that tell the stories of the bad decisions that shaped our world. Yes, you can read about well-known wrongdoers such as Charles Ponzi, but there are plenty of lesser-known schemes and people to discover. For instance, you can learn about the story behind the line “bought and sold for English gold” from the poem by Scotsman Robert Burns. While the library is free and open to the public, viewing is strictly by appointment, so you’ll need to book ahead.

Blair Street Vaults

Just off the Royal Mile is Blair Street, which leads to an underground world of 19 cavernous vaults. These lie beneath the bridge that was built in 1788 to connect the Southside of the city with the university area. The archways were once home to a bustling marketplace of cobblers, milliners and other vendors. But the space was taken over by less salubrious forces. Its darkness made it an attractive place for anyone who didn’t want to be seen, including thieves and 19th-century murderers William Burke and William Hare, who hid corpses there — there was a convenient opening that led directly to the medical school where they sold the bodies for dissection. Sometime in the 19th century, the vaults were declared too dangerous for use and the entryway was bricked up. Today they can be visited by tour. Be warned: paranormal activity has been reported there.

Sanctuary Stones and Holyrood Abbey

At the foot of the Royal Mile lies Abbey Strand, which leads down to the gates of Holyrood Palace (the Queen’s primary royal residence in Scotland). Look carefully on the road at Abbey Strand, and you will see three stones marked with a golden “S” on them. These stones mark part of what used to be a five-mile radius known as Abbey Sanctuary, where criminals could seek refuge from civil law under the auspices of Holyrood Abbey. In the 16th century, when land came under royal control, sanctuary was reserved for financial debtors. In 1880, a change in law meant debtors could no longer be jailed, so the sanctuary was no longer needed. As you walk the Royal Mile, be sure to appreciate these remnants of Scotland’s history. The Abbey, now a scenic ruin, can be accessed through Holyrood Palace.

White Stuff fitting rooms

This may look like an ordinary store — and yes, you can purchase clothes, home goods and gifts here —  until you head upstairs to the 10 fitting rooms. Open the door to your cubicle and instead of the usual unflattering mirror and bad lighting, you’ll find individually themed rooms. From a 1940s kitchen pantry stocked with cans of gravy and marrowfat peas to a room filled with cuddly toys, these are fitting rooms that you’ll actually want to spend time in (there is room for you to try on clothes). Most of the rooms were designed by AMD Interior Architects, but a few were winning designs from a school competition. The crafty should take a break in the “meet and make” area where they can enjoy arts and crafts while sipping tea from vintage teacups.

Jupiter Artland

Just 10 miles outside of Edinburgh, Jupiter Artland is a sculpture park set among hundreds of acres of gardens and woodlands. It’s located on the grounds of Bonnington House, a 17th-century Jacobean Manor house. While visitors are provided with a map of different artworks, there is no set route to follow. Turn left, turn right, go backwards, go forwards. Look out for the peacocks and geese. Be amazed, be delighted, be stunned. A visit to Jupiter Artland is a mini-adventure in itself.

TEDSummit is a celebration of the different communities and people that make up TED and help spread its world-changing ideas. Learn more about TEDSummit 2019. And to find even more to do in Edinburgh and Scotland, visit Scotland.org.

 

,

CryptogramFriday Squid Blogging: Squid Mural

Large squid mural in the Bushwick neighborhood of Brooklyn.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityQuickBooks Cloud Hosting Firm iNSYNQ Hit in Ransomware Attack

Cloud hosting provider iNSYNQ says it is trying to recover from a ransomware attack that shut down its network and has left customers unable to access their accounting data for the past three days. Unfortunately for iNSYNQ, the company appears to be turning a deaf ear to the increasingly anxious cries from its users for more information about the incident.

A message from iNSYNQ to customers.

Gig Harbor, Wash.-based iNSYNQ specializes in providing cloud-based QuickBooks accounting software and services. In a statement posted to its status page, iNSYNQ said it experienced a ransomware attack on July 16, and took its network offline in a bid to contain the spread of the malware.

“The attack impacted data belonging to certain iNSYNQ clients, rendering such data inaccessible,” the company said. “As soon as iNSYNQ discovered the attack, iNSYNQ took steps to contain it. This included turning off some servers in the iNSYNQ environment.”

iNSYNQ said it has engaged outside cybersecurity assistance to determine whether any customer data was accessed without authorization, but that so far it has no estimate for when those files might be available again to customers.

Meanwhile, iNSYNQ’s customers — many of them accountants who manage financial data for a number of their own clients — have taken to Twitter to vent their frustration over a lack of updates since that initial message to users.

In response, the company appears to have simply deleted or deactivated its Twitter account (a cached copy from June 2019 is available here). Several customers venting about the outage on Twitter also accused the company of unpublishing negative comments about the incident from its Facebook page.

Some of those customers also said iNSYNQ initially blamed the outage on an alleged problem with U.S.-based nationwide cable ISP giant Comcast. Meanwhile, competing cloud hosting providers have been piling on to the tweetstorms about the iNSYNQ outage by marketing their own services, claiming they would never subject their customers to a three-day outage.

iNSYNQ has not yet responded to requests for comment.

Update, 4:35 p.m. ET: I just heard from iNSYNQ’s CEO Elliot Luchansky, who shared the following:

While we have continually updated our website and have emailed customers once if not twice daily during this malware attack, I acknowledge we’ve had to keep the detail fairly minimal.

Unfortunately, and as I’m sure you’re familiar with, the lack of detailed information we’ve shared has been purposeful and in an effort to protect our customers and their data- we’re in a behind the scenes trench warfare doing everything we possibly can to secure and restore our system and customer data and backups. I understand why our customers are frustrated, and we want more than anything to share every piece of information that we have.

Our customers and their businesses are our number one priority right now. Our team is working around the clock to secure and restore access to all impacted data, and we believe we have an end in sight in the near future.

You know as well as we that no one is 100% impervious to this – businesses large and small, governments and individuals are susceptible. iNSYNQ and our customers were the victims of a malware attack that’s a totally new variant that hadn’t been detected before, confirmed by the experienced and knowledgeable cybersecurity team we’ve employed.

Original story: There is no question that a ransomware infestation at any business — let alone a cloud data provider — can quickly turn into an all-hands-on-deck, hair-on-fire emergency that diverts all attention to fixing the problem as soon as possible.

But that is no excuse for leaving customers in the dark, and for not providing frequent and transparent updates about what the victim organization is doing to remediate the matter. Particularly when the cloud provider in question posts constantly to its blog about how companies can minimize their risk from such incidents by trusting it with their data.

Ransomware victims perhaps in the toughest spot include those providing cloud data hosting and software-as-a-service offerings, as these businesses are completely unable to serve their customers while a ransomware infestation is active.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files.

In practice, however, many cybersecurity consulting firms are quietly urging their customers that paying up is the fastest route back to business-as-usual. It’s not hard to see why: Having customer data ransomed or stolen can send many customers scrambling to find new providers. As a result, the temptation to simply pay up may become stronger with each passing day.

That’s exactly what happened in February, when cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

KrebsOnSecurity will endeavor to update this story as more details become available. Any iNSYNQ customer affected by the outage is welcome to contact this author via Twitter (my direct messages are open to all) or at krebsonsecurity @ gmail.com.

Cory DoctorowAppearance on the Jim Rutt Podcast

Jim Rutt — former chairman of the Santa Fe Institute and ex-Network Solutions CEO — just launched his new podcast, and included me in the first season! (MP3) It was a characteristically wide-ranging, interdisciplinary kind of interview, covering competition and adversarial interoperability, technological self-determination and human rights, conspiracy theories and corruption. There’s a full transcript here.

CryptogramJohn Paul Stevens Was a Cryptographer

I didn't know that Supreme Court Justice John Paul Stevens "was also a cryptographer for the Navy during World War II." He was a proponent of individual privacy.

Worse Than FailureError'd: The Parameter was NOT Set

"Spotted this in front of a retro-looking record player in an Italian tech shop. I don't think anybody had any idea how to categorize it so they just left it up to the system," Marco D. writes.

 

George C. wrote, "Never thought it would come to this, but it looks like LinkedIn can't keep up with all of my connections!"

 

"Apparently opening a particular SharePoint link using anything else other than Internet Explorer made Excel absolutely lose its mind," wrote Arno P.

 

Dima R. writes, "OH! My bad, Edge, I only tried to access a file:// URL while I was offline."

 

"This display at the Vancouver airport doesn't have a lot of fans," Bruce R. wrote.

 

"Woo hoo! This is what I'm talking about! A septillion percentage gain!!" John G. writes.

 

,

Sociological ImagesCrowding Out Crime

Buzzfeed News recently ran a story about reputation management companies using fake online personas to help their clients cover up convictions for fraud. These firms buy up domains and create personal websites for a crowd of fake professionals (stock photo headshots and all) who share the same name as the client. The idea is that search results for the client’s name will return these websites instead, hiding any news about white collar crime.

In a sea of profiles with the same name, how do you vet a new hire? Image source: anon617, Flickr CC

This is a fascinating response to a big trend in criminal justice where private companies are hosting mugshots, criminal histories, and other personal information online. Sociologist Sarah Lageson studies these sites, and her research shows that these databases are often unregulated, inaccurate, and hard to correct. The result is more inequality as people struggle to fix their digital history and often have to pay private firms to clean up these records. This makes it harder to get a job, or even just to move into a new neighborhood.

The Buzzfeed story shows how this pattern flips for wealthy clients, whose money goes toward making information about their past difficult to find and difficult to trust. Beyond the criminal justice world, this is an important point about the sociology of deception and “fake news.” The goal is not necessarily to fool people with outright deception, but to create just enough uncertainty so that it isn’t worth the effort to figure out whether the information you have is correct. The time and money that come with social class make it easier to navigate uncertainty, and we need to talk about how those class inequalities can also create a motive to keep things complicated in public policy, the legal system, and other large bureaucracies.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramIdentity Theft on the Job Market

Identity theft is getting more subtle: "My job application was withdrawn by someone pretending to be me":

When Mr Fearn applied for a job at the company he didn't hear back.

He said the recruitment team said they'd get back to him by Friday, but they never did.

At first, he assumed he was unsuccessful, but after emailing his contact there, it turned out someone had created a Gmail account in his name and asked the company to withdraw his application.

Mr Fearn said the talent assistant told him they were confused because he had apparently emailed them to withdraw his application on Wednesday.

"They forwarded the email, which was sent from an account using my name."

He said he felt "really shocked and violated" to find out that someone had created an email account in his name just to tarnish his chances of getting a role.

This is about as low-tech as it gets. It's trivially simple for me to open a new Gmail account using a random first and last name. But because people innately trust email, it works.

Worse Than FailureThe Hardware Virus

Jen was a few weeks into her new helpdesk job. Unlike past jobs, she started getting her own support tickets quickly—but a more veteran employee, Stanley, had been tasked with showing her the ropes. He also got notification of Jen's tickets, and they worked on them together. A new ticket had just come in, asking for someone to replace the DVI cable that'd gone missing from Conference Room 3. Such cables were the means by which coworkers connected their laptops to projectors for presentations.

Easy enough. Jen left her cube to head for the hardware "closet"—really, more of a room crammed full of cables, peripherals, and computer parts. On a dusty shelf in a remote corner, she spotted what she was looking for. The coiled cable was a bit grimy with age, but looked serviceable. She picked it up and headed to Stanley's cube, leaning against the threshold when she got there.

"That ticket that just came in? I found the cable they want. I'll go walk it down." Jen held it up and waggled it.

Stanley was seated, facing away from her at first. He swiveled to face her, eyed the cable, then went pale. "Where did you find that?"

"In the closet. What, is it—?"

"I thought they'd been purged." Stanley beckoned her forward. "Get in here!"

Jen inched deeper into the cube. As soon as he could reach it, Stanley snatched the cable out of her hand, threw it into the trash can sitting on the floor beside him, and dumped out his full mug of coffee on it for good measure.

"What the hell are you doing?" Jen blurted.

Stanley looked up at her desperately. "Have you used it already?"

"Uh, no?"

"Thank the gods!" He collapsed back in his swivel chair with relief, then feebly kicked at the trash can. The contents sloshed around inside, but the bin remained upright.

"What's this about?" Jen demanded. "What's wrong with the cable?"

Under the harsh office lighting, Stanley seemed to have aged thirty years. He motioned for Jen to take the empty chair across from his. Once she'd sat down, he continued nervously and quietly. "I don't know if you'll believe me. The powers-that-be would be angry if word were to spread. But, you've seen it. You very nearly fell victim to it. I must relate the tale, no matter how vile."

Jen frowned. "Of what?"

Stanley hesitated. "I need more coffee."

He picked up his mug and walked out, literally leaving Jen at the edge of her seat. She managed to sit back, but her mind was restless, wondering just what had her mentor so upset.

Eventually, Stanley returned with a fresh mug of coffee. Once he'd returned to his chair, he placed the mug on his desk and seemed to forget all about it. With clear reluctance, he focused on Jen. "I don't know where to start. The beginning, I suppose. It fell upon us from out of nowhere. Some say it's the spawn of a Sales meeting; others blame a code review gone horribly wrong. In the end, it matters little. It came alive and spread like fire, leaving destruction and chaos in its wake."

Jen's heart thumped with apprehension. "What? What came alive?"

Stanley's voice dropped to a whisper. "The hardware virus."

"Hardware virus?" Jen repeated, eyes wide.

Stanley glared. "You're going to tell me there's no such thing, but I tell you, I've seen it! The DVI cables ..."

He trailed off helplessly, reclining in his chair. When he straightened and resumed, his demeanor was calmer, but weary.

"At some godforsaken point in space and time, a single pin on one of our DVI cables was irrevocably bent. This was the source of the contagion," he explained. "Whenever the cable was plugged into a laptop, it cracked the plastic composing the laptop's DVI port, contorting it in a way that resisted all mortal attempt at repair. Any time another DVI cable was plugged into that laptop, its pin was bent in just the same way as with the original cable.

"That was how it spread. Cable infected laptop, laptop infected cable, all with vicious speed. There was no hope for the infected. We ... we were forced to round up and replace every single victim. I was knee-deep in the carnage, Jen. I see it in my nightmares. The waste, the despair, the endless reimaging!"

Stanley buried his head in his hands. It was a while before he raised his haunted gaze again. "I don't know how long it took, but it ran its course; the support tickets stopped coming in. Our superiors consider the matter resolved ... but I've never been able to let my guard down." He glanced warily at the trash can, then made eye contact with Jen. "Take no chances with any DVI cables you find within this building. Buy your own, and keep them with you at all times. If you see any more of those—" he pointed an accusing finger at the bin "—don't go near them, don't try taking a paperclip to them. There's everything to lose, and nothing to gain. Do you understand?"

Unable to manage words, Jen nodded instead.

"Good." The haunted expression vanished in favor of grim determination. Stanley stood, then rummaged through a desk drawer loaded with office supplies. He handed Jen a pair of scissors, and armed himself with a brassy letter opener.

"Our job now is to track down the missing cable that resulted in your support ticket," he continued. "If we're lucky, someone's absent-mindedly walked off with it. If we're not, we may find that this is step one in the virus' plan to re-invade. Off we go!"

Jen's mind reeled, but she sprang to her feet and followed Stanley out of the cubicle, telling herself to be ready for anything.

,

Krebs on SecurityParty Like a Russian, Carder’s Edition

“It takes a certain kind of man with a certain reputation
To alleviate the cash from a whole entire nation…”

KrebsOnSecurity has seen some creative yet truly bizarre ads for dodgy services in the cybercrime underground, but the following animated advertisement for a popular credit card fraud shop likely takes the cake.

The name of this particular card shop won’t be mentioned here, and its various domain names featured in the video have been pixelated so as not to further promote the online store in question.

But points for knowing your customers, and understanding how to push emotional buttons among a clientele that mostly views America’s financial system as one giant ATM that never seems to run out of cash.

WARNING: Some viewers may find this video disturbing. Also, it is almost certainly Not Safe for Work.

The above commercial is vaguely reminiscent of the slick ads produced for and promoted by convicted Ukrainian credit card fraudster Vladislav “BadB” Horohorin, who was sentenced in 2013 to serve 88 months in prison for his role in the theft of more than $9 million from RBS Worldpay, an Atlanta-based credit card processor. (In February 2017, Horohorin was released and deported from the United States. He now works as a private cybersecurity consultant).

The clip above is loosely based on the 2016 music video, “Party Like a Russian,” produced by British singer-songwriter Robbie Williams.

Tip of the hat to Alex Holden of Hold Security for finding and sharing this video.

TEDApply to be a TED2020 Fellow

Since launching the TED Fellows program ten years ago, we’ve gotten to know and support some of the brightest, most ambitious thinkers, change-makers and culture-shakers from nearly every discipline and corner of the world. The numbers speak for themselves:

  • 472 Fellows covering a vast array of disciplines, from astrophysics to the arts
  • 96 countries represented
  • More than 1.3 million views per TED Talk given by Fellows (on average)
  • At least 90 new businesses and 46 nonprofits fostered within the program

Whether it’s discovering new galaxies, leading social movements or making waves in environmental conservation, with the support of TED, our Fellows are dedicated to making the world a better place through their innovative work. And you could be one of them.

Apply now to be a TED Fellow by August 27, 2019.

What’s in it for you?

  • The opportunity to give a talk on the TED mainstage
  • Career coaching and speaker training
  • Mentorship, professional development and public relations guidance
  • The opportunity to be part of a diverse, collaborative community of more than 450 thought leaders
  • Participation in the global TED2020 conference in Vancouver, BC

What are the requirements?

  • An idea worth spreading!
  • A completed online application consisting of general biographical information, short essays on your work and three references (It’s short, fun, and it’ll make you think…)
  • You must be at least 18 years old to apply.
  • You must be fluent in English.
  • You must be available to be in Vancouver, BC from April 17 to April 25, 2020.

What do you have to lose?

The deadline to apply is August 27, 2019 at 11:59pm UTC. To learn more about the TED Fellows program and apply, head here. Don’t wait until the last minute! We do not accept late applications. Really.

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is full of errors

How many mistakes should one accept in a book before it is pulled from sale? In the normal course, when a book is accepted for publication by a recognised publishing company, there are experienced editors who go through the text, correct it and ensure that there are no major bloopers.

Then there are fact-checkers who ensure that what is stated within the book is, at least, mostly aligned with public versions of events from reliable sources.

In the case of The Rise and Fall of the Tamil Tigers, a third-rate book that is being sold by some outlets online, neither of these exercises has been carried out. And it shows.

If the author, Damian Tangram, had voiced his views or even put the entire book online as a free offering, that would be fine. He is entitled to his opinion. But when he is trying to trick people into buying what is a very poor-quality book, then warnings are in order.

Here are just a few of the screw-ups in the first 14 pages (the book is 375 pages!):

In the foreword, the words “Civil War” are capitalised. This is incorrect and would be right only if the civil war were exclusive to Sri Lanka. This is not the case; there are numerous civil wars occurring around the world.

Next, the foreword claims the war started in 1985. This, again, is incorrect. It began in July 1983. The next claim is that this war “had its origins in the post-war political exploitation of socially divisive policies.” Really? Post-war means after the war – this conflict must be the first in the world to begin after it was over!

There is a further line indicating that the author does not know how to measure time: “After spanning three decades…” A decade is 10 years, three decades would be 30 years. The war lasted a little less than 26 years – July 23, 1983 to May 19, 2009.

Again, in the foreword, the author claims that the Liberation Tigers of Tamil Eelam “grew from being a small despot insurgency to the most dangerous and effective terrorist organizations the world has ever seen.” The LTTE was started by Velupillai Pirapaharan in the 1970s. By 1983, it was already a well-organised fighting force. Further, the English is all wonky here, the word should be “organization”, not the plural “organizations”.

And this is just the first paragraph of the book!

The second paragraph of the foreword claims about the year 2006: “Just when things could not be worse Sri Lanka was plunged into all-out war.” The war started much earlier and was in a brief hiatus. The final effort to eliminate the LTTE began on April 25, 2006. And a comma would be handy there.

Then again, the book claims in the foreword that the only person who refused to compromise in the conflict had been Pirapaharan. This is incorrect as the government was also equally stubborn until 2002.

To go on, the foreword says the book gives “an example of how a terrorist organisation like the LTTE can proliferate and spread its murderous ambitions”. The book suffers from numerous generalisations of this kind, all of which are standout examples of malapropism. And one’s ambitions grow, one does not “spread ambitions”.

Again, and we are still in the foreword, the book says the LTTE “was a force that lasted for more than twenty-five years…” Given that it took shape in the 1970s, this is again incorrect.

Next, there is a section titled “About this Book”. Again, misplaced capitalisation of the word “Book”. The author says he visited Sri Lanka for the first time in 1989 soon after he “met and married wife….” Great use of butler English, that. Additionally, he could not have married his wife; the woman in question became his wife only after he married her.

That year, he claims the “most frightening organization” was the JVP or Janata Vimukti Peramuna or People’s Liberation Front. Two years later, when he returned for a visit, the JVP had been defeated but “the enemy to peace was the LTTE”. This is incorrect as the LTTE did not offer any let-up while the JVP was engaging the Sri Lankan army.

Of the Tigers he says, “the power that they had acquired over those short years had turned them into a mythical unstoppable force.” This is incorrect; the Tigers became a force to be reckoned with many years earlier. They did not undergo any major evolution between 1989 and 1991.

The author’s only connection to Sri Lanka is through marrying a Sri Lankan woman. This, plus his visits, he claims give him a “close connection” to the island!

So we go on: “I returned to Sri Lankan several times…” The word is Lanka, not Lankan. More proof of a lack of editing, if any is needed by now.

“Lives were being lost; freedoms restricted and the economy being crushed under a financial burden.” The use of that semi-colon illustrates Tangram’s level of ignorance of English. Factually, this is all stating the bleeding obvious as all these fallouts of the war had begun much earlier.

The author claims that one generation started the war, a second continued to fight and a third was about to grow up and be thrown into a conflict. How three generations can come and go in the space of 26 years is a mystery and more evidence that this man just flings words about and hopes that they make sense.

More in this same section: “To know Sri Lanka without war was once an impossible dream…” Rubbish, I lived in Sri Lanka from 1957 till 1972 and I knew peace most of the time.

Ending this section is another screw-up: “I returned to Sri Lanka in 2012, after the war had ended, to witness the one thing I had not seen in over 25 years: Peace.” Leaving aside the wrong capitalisation of the word “peace”, since the author’s first visit was in 1989, how does 2012 make it “over 25 years”? By any calculation, that comes to 23 years. This is a ruse used throughout the book to give the impression that the author has a long connection to Sri Lanka when in reality he is just an opportunist trying to turn some bogus observations about a conflict he knows nothing about into a cash cow.

And so far I have covered hardly three full pages!!!

Let’s have a brief look at Ch-1 (one presumes that means Chapter 1) which is titled “Understanding Sri Lanka” with a sub-heading “Introduction Understanding Sri Lanka: The impossible puzzle”. (If it is impossible as claimed, how does the author claim he can explain it?)

So we begin: “…there is very little information being proliferated into the general media about the nation of Sri Lanka.” The author obviously does not own a dictionary and is unaware how the word “proliferated” should be used.

There are several strange conglomerations of words which mean nothing; for example, take this: “Without referring to a map most people would struggle to name any other city than Colombo. Even the name of the island may reflect some kind of echo of when it changed from being called Ceylon to when it became Sri Lanka.” Apart from all the missing punctuation, and the mixing up of the order of words, what the hell does this mean? Echo?

On the next page, the book says: “At the bottom corner of India is the small teardrop-shaped island of Sri Lankan.” That sentence could have done without the last “n”. Once again, no editor. Only Tangram the great.

The word Sinhalese is spelt that way; there is nobody who spells it “Singhalese”. But since the author is unable to read Sinhala, the local language, he makes errors of this kind over and over again. Again, common convention for the usage of numbers in print dictates that one to nine be spelt out and any higher number be used as a figure. The author is blissfully unaware of this too.

The percentage of Sinhalese-speakers is given as “about 70%” when the actual figure is 74.9%. And then in another illustration of his sloppiness, the author writes “The next largest groups are the Tamils who make up about 15% of the population.” The Tamils are not a single group, being made up of plantation Tamils who were brought in by the British from India to work in the tea estates (4.2%) and the local Tamils (11.2%) who have been there much longer.

He then refers to a group whom he calls Burgers – which is something sold in a fast-food outlet. The Sri Lankan ethnic group is called Burghers, who are the product of inter-marriages between Sinhalese and Portuguese, British or Dutch invaders. There is a reference made to a group of indigenous people, whom the author calls “Vedthas.” Later, on the same page, he calls these people Veddhas. This is not the first time that it is clear that he could not be bothered to spell-check this bogus tome.

There’s more: the “Singhalese” (the author’s spelling) are claimed to be of “Arian” origin. The word is Aryan. Then there is a claim that the Veddhas are related to the “Australian Indigenous Aborigines”. One has yet to hear of any non-Indigenous Aborigines. Redundant words are one thing at which Tangram excels.

There is reference to some king of Sri Lanka known as King Dutigama. The man’s name was Dutugemunu. But then what’s the difference, eh? We might as well have called him Charlie Chaplin!

Referring to the religious groups in Sri Lanka, Tangram writes: “Hinduism also has a long history in Sri Lanka with Kovils…” The word is temples, unless one is writing in the vernacular. He claims Buddhists make up 80%; the correct figure is 70.2%.

Then referring to the Bo tree under which Gautama Buddha is claimed to have found enlightenment, Tangram claims it is more than 2000 years old and the oldest cultivated tree alive today. He does not know about the Bristlecone pine trees that date back more than 4700 years. Or the redwoods that carbon dating has shown to be more than 3000 years old.

This brings me to page 14 and I have crossed 1500 words! The entire book would probably take me a week to cover. But this number of errors should serve to prove my point: this book should not be sold. It is a fraud on the public.

,

Rondam RamblingsThe Trouble with Many Worlds

Ten years ago I wrote an essay entitled "The Trouble with Shadow Photons" describing a problem with the dramatic narrative of what is commonly called the "many-worlds" interpretation of quantum mechanics (but which was originally and IMHO more appropriately called the "relative state" interpretation) as presented by David Deutsch in his (otherwise excellent) book, "The Fabric of Reality."  At the

,

Sam VargheseWhatever happened to the ABC’s story of the century?

In the first three weeks of June last year, the ABC’s Sarah Ferguson presented a three-part saga on the channel’s Four Corners program, which the ABC claimed was the “story of the century”.

It was a rehashing of all the claims against US President Donald Trump, which the American TV stations had gone over with a fine-toothed comb but which Ferguson seemed convinced still had something to chew over.

At the time, a special counsel, former FBI chief Robert Mueller, was conducting an investigation into claims that Trump colluded with Russia to win the presidential election.

Earlier this year, Mueller announced the results of his probe: zilch. Zero. Nada. Nothing. A big cipher.

Given that Ferguson echoed all the same claims by interviewing a number of rather dubious individuals, one would think that it was time for a mea culpa – that is, if one had even a semblance of integrity, a shred of honesty in one’s being.

But Ferguson seems to have disappeared off the face of the earth. The ABC has been silent about it too. Given that she and her entourage spent the better part of six weeks traipsing the streets and corridors of power in the US and the UK, considerable funds would have been spent.

This, by an organisation that is always weeping about its budget cuts. One would think that such a publicly-funded organisation would be a little more circumspect and not allow anyone to indulge in such an exercise of vanity.

If Ferguson had unearthed even one morsel of truth, one titbit of information that the American media had not found, then one would not be writing this. But she did nothing of the sort; she just raked over all the old bones.

One hears Ferguson is now preparing a program on the antics that the government indulged in last year by dumping its leader, Malcolm Turnbull. This issue has also been done to death and there has already been a two-part investigation by Sky News presenter David Speers, a fine reporter. There has been one book published, by the former political aide Niki Savva, and more are due.

It looks like Ferguson will again be acting in the manner of a dog that returns to its own vomit. She appears to have cultivated considerable skill in this art.

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is a third-rate book. Don’t waste your money buying it

How do you evaluate a book before buying? If it were from a traditional bookshop, then one scans some pages at least. The art master in my secondary school told his students of a method he had: read page 15 or 16, then flip to page 150 and read that. If the book interests you, then buy it.

But when it’s online buying, what happens? Not every book you buy is from a known author and many online booksellers do not offer the chance to flip through even a few pages. At times, this ends with the buyer getting a dud.

One book I bought recently proved to be a dud. I am interested in the outcome of the civil war in Sri Lanka where I grew up. Given that, I picked up the first book about the ending of the war, written in 2011 by Australian Gordon Weiss, a former UN official. This is an excellent account of the whole conflict, one that also gives a considerable portion of the history of the island and the events that led to the rise of tensions between the Sinhalese and the Tamils.

Prior to that, I picked up a number of other books, including the only biography of the Tamil leader, Velupillai Pirapaharan. Many of the books I picked up are written by Indians and thus the standard of English is not as good as that in Weiss’s book. But the material in all books is of a uniformly high standard.

Recently, I bought a book titled The Rise and Fall of the Tamil Tigers that gave its publication date as 2018 and claimed to tell the story of the war in its entirety. The reason I bought it was to see if it bridged the gap between 2011, when Weiss’s book was published, and 2018, when the current book came out.

But it turned out to be a scam. I am not sure why the bookseller, The Book Depository, stocks this volume, given its shocking quality.

The blurb about the book runs thus: “This book offers an accurate and easy to follow explanation of how the Tamil Tigers, who are officially known as the Liberation Tigers of Tamil Eelam (LTTE), was defeated. Who were the major players in this conflict? What were the critical strategic decisions that worked? What were the strategic mistakes and their consequences? What actually happened on the battlefield? How did Sri Lanka become the only nation in modern history to completely defeat a terrorist organisation? The mind-blowing events of the Sri Lankan civil war are documented in this book to show the truth of how the LTTE terrorist organisation was defeated. The defeat of a terrorist organisation on the battlefield was so unprecedented that it has rewritten the narrative in the fight against terrorism.”

Nothing could be further from the truth.

The book is published by the author himself, an Australian named Damian Tangram, who appears to have no connection to Sri Lanka apart from the fact that he is married to a Sri Lankan woman.

It is extremely badly written, has obviously not been edited and has not even been subjected to a spell-checker before being printed. This can be gauged by the fact that the same word is spelt in different ways on the same page.

Capital letters are spewed all over the pages and even an eighth-grade student would not write rubbish of this kind.

In many parts of the book, government propaganda is republished verbatim and it all looks like a cheap attempt to make some money by taking up a subject that would be of interest, and then producing a low-grade tome.

Some of the sources it quotes are highly dubious, one of them being a Singapore-based so-called terrorism expert Rohan Gunaratne who has been unmasked as a fraud on many occasions.

The reactions of the book sellers — I bought it through Abe Books which groups together a number of sellers from whom one can choose; I chose The Book Depository — were quite disconcerting. When the abysmal quality of the book was brought to their notice, both thought I wanted my money back. I wanted them to remove it from sale so that nobody else would get cheated the way I was.

After some back and forth, and both companies refusing to understand that the book is a fraud, I gave up.

MELong-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs; the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second-hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today, apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it) there’s nothing more than I need from a tablet. Strangely I even get good battery life out of it, I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as a SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. The overall expense is about $1.66 per day, as I’m on a $20 per month pre-paid plan with Aldi Mobile.
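
To make the arithmetic explicit, here is a minimal Python sketch of the per-day cost calculation; the prices are the ones quoted above, while the ownership periods are rough assumptions taken from this post rather than exact figures.

# A rough sketch of the per-day cost arithmetic. Prices come from the
# figures quoted above; the ownership periods are assumed approximations.

def per_day(total_cost, years):
    """Average daily cost of hardware owned for a given number of years."""
    return total_cost / (years * 365.25)

costs = {
    "phone (Nexus 6P, ~3.1 years)": per_day(704, 3.1),
    "laptops (X1 Carbon + T420, ~8 years total)": per_day(289 + 306, 8),
    "tablet (Galaxy Note 10.1, ~5.2 years)": per_day(579, 5.2),
    "future phone (refurbished, ~2 years)": per_day(400, 2),
    "pre-paid plan ($20/month)": 20 * 12 / 365.25,
}

for item, cost in costs.items():
    print(f"{item}: ${cost:.2f} per day")

Rounding the hardware figures to about $1 per day and adding the roughly 66 cents per day for the plan gives the $1.66 per day figure above.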

Saving Money

A laptop is very important to me; the amounts of money that I’m spending don’t reflect that. But it seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet; 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

,

Sam VargheseMethinks Israel Folau is acting like a hypocrite

The case of Israel Folau has been a polarising one in Australia with some supporting the rugby union player’s airing of his Christian beliefs and others loudly opposed. In the end, it turns out that Folau may be guilty of one of the sins of which he accuses others: hypocrisy.

Last year, Folau made a post on Instagram saying adulterers, drunkards, fornicators, homosexuals and the like would all go to hell if they did not repent and come to Jesus. In this, he was merely stating what the Bible says about these kinds of people. He was cautioned about such posts by his employer, Rugby Australia. Whether he signed any agreement about not putting up similar posts in the future is unknown.

A second similar post this year resulted in a fairly big outcry among the media and those who champion the gay cause. Folau had a number of meetings with his employers and was finally shown the door. He was on a four-year $4 million contract so he has lost a considerable amount of cash. The Australian team has lost a lot too, as he was by far the best player and the World Cup rugby tournament is in September this year. The main sponsor of the team is Qantas and the chief executive, Alan Joyce, is gay. There have been accusations that Joyce has been a pivotal force in pushing for Folau’s sacking.

Soon after this, Folau announced that he was suing Rugby Australia and sought to raise $3 million for defending himself. His campaign on GoFundMe had reached about $750,000 when it was pulled down by the site. But the Christian lobby started another fund for Folau and it has now raised well beyond a million dollars.

Now Folau has the right to hold his own religious beliefs. He is also free to state them openly. But in this he goes against the very Bible he quotes, for Christians are told to practise their faith quietly, and not in the manner of the scribes and Pharisees of Jesus’ time, people who took great care to show outwardly that they were religious – though in private they were as worldly and non-religious as anyone else. In short, they were hypocrites.

Christians were told to behave in this manner and promised that their God would reward them openly. In other words, a Christian is expected to influence others not by talking loudly and flaunting his/her faith, but by impressing others by one’s behaviour and attitudes. Folau’s flaunting of his faith appears to go against this admonishment.

Then again, Folau’s seeking of money to fund his court case is a very worldly act. First of all, Christians are told not to go to court against their neighbours but rather to settle things peacefully. Even if Folau decided to violate this teaching, he has plenty of properties and did not need to take money from others. If he wanted a court battle, then he could have used his own money. This is un-Christian in the extreme.

Folau’s supporters cite the admonishment by Jesus that his followers should go to all corners of the world and preach the gospel. That is an admonishment given to pastors and leaders of the flock. The rest of us are told to influence others by the way we live.

Folau is the one who set himself up as one who acts according to the Christian faith and left himself open to be judged by the same creed. If all his actions had been in keeping with the faith, then one would have no quarrel with him. But when one chooses Christianity when it is convenient, and goes the way of the world when it suits, then there is only one word to describe it: hypocrisy.

Hypocrites were one category of people who attracted a huge amount of criticism from Jesus Christ during his earthly sojourn. Israel Folau should muse on this.