Planet Russell


Planet Debian: Jonathan Carter: DebConf 20 Sessions

DebConf20 is happening from 23 August to 29 August. The full schedule is available on the DebConf20 website.

I’m preparing (or helping to prepare) 3 sessions for this DebConf. I wish I had the time for more, but with my current time constraints, even preparing for these sessions took some careful planning!

Bits from the DPL

Time: Aug 24 (Mon): 16:00 UTC.

The traditional DebConf talk from the DPL, where we take a look at the state of the Debian project and where we’re heading. This talk is pre-recorded, but there will be a few minutes after the talk for questions.

https://debconf20.debconf.org/talks/9-bits-from-the-dpl/

Leadership in Debian BOF/Panel

Time: Aug 27 (Thu): 18:00 UTC.

In this session, we will host a panel of people who hold (or who have held) leadership positions within Debian.

We’ll go through a few questions for the panel and then continue with open questions and discussion.

https://debconf20.debconf.org/talks/46-leadership-in-debian-bofpanel/

Local Teams

Time: Aug 29 (Sat): 19:00 UTC.

We already have a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

In this BoF, we’ll discuss the possibility of setting up a local group support team or a new delegation that will keep track of local teams, manage budgets and get new local teams bootstrapped.

https://debconf20.debconf.org/talks/50-local-teams/

Cryptogram: DiceKeys

DiceKeys is a physical mechanism for creating and storing a 192-bit key. The idea is that you roll a special set of twenty-five dice, put them into a plastic jig, and then use an app to convert those dice into a key. You can then use that key for a variety of purposes, and regenerate it from the dice if you need to.

This week Stuart Schechter, a computer scientist at the University of California, Berkeley, is launching DiceKeys, a simple kit for physically generating a single super-secure key that can serve as the basis for creating all the most important passwords in your life for years or even decades to come. With little more than a plastic contraption that looks a bit like a Boggle set and an accompanying web app to scan the resulting dice roll, DiceKeys creates a highly random, mathematically unguessable key. You can then use that key to derive master passwords for password managers, as the seed to create a U2F key for two-factor authentication, or even as the secret key for cryptocurrency wallets. Perhaps most importantly, the box of dice is designed to serve as a permanent, offline key to regenerate that master password, crypto key, or U2F token if it gets lost, forgotten, or broken.

[...]

Schechter is also building a separate app that will integrate with DiceKeys to allow users to write a DiceKeys-generated key to their U2F two-factor authentication token. Currently the app works only with the open-source SoloKey U2F token, but Schechter hopes to expand it to be compatible with more commonly used U2F tokens before DiceKeys ship out. The same API that allows that integration with his U2F token app will also allow cryptocurrency wallet developers to integrate their wallets with DiceKeys, so that with a compatible wallet app, DiceKeys can generate the cryptographic key that protects your crypto coins too.

Here's the DiceKeys website and app. Here's a short video demo. Here's a longer SOUPS talk.

Preorder a set here.

Note: I am an adviser on the project.

Another news article. Slashdot thread. Hacker News thread. Reddit thread.

Planet Debian: Sven Hoexter: google cloud buster images without python 2

Fun in the morning, we realized that the Debian Cloud image builds dropped python 2 and that propagated to the Google provided Debian/buster images. So in case you use something like ansible, and so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3k.

We just try to suffer through it for now, and set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html
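
For reference, here is a minimal sketch of the relevant ansible.cfg stanza (assuming the setting sits in the [defaults] section, as in a stock ansible.cfg):

[defaults]
# let ansible discover the remote interpreter (python3 on current images) instead of assuming /usr/bin/python
interpreter_python = auto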

Another lesson to learn here: The GCE Debian stable images are not stable. Blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.

Worse Than Failure: CodeSOD: Sudon't

There are a few WTFs in today's story. Let's get the first one out of the way: Jan S downloaded a shell script and ran it as root, without reading it. Now, let's be fair, that's honestly a pretty mild WTF; we've all done something similar, and popular software tools still tell you to install them with a curl … | sh, and then sudo themselves extra permissions in the script.

The software being installed in this case is a tool for accessing Bitlocker encrypted drives from Linux. And the real WTF for this one is the install script, which we'll dig into in a moment. This is not, however, some small scale open source project thrown together by hobbyists, but instead released by Initech's "Data Recovery" business. In this case, this is the open source core of a larger data recovery product- if you're willing to muck around with low level commands and configs, you can do it for free, but if you want a vaguely usable UI, get ready to pony up $40.

With that in mind, let's take a look at the script. We're going to do this in chunks, because nearly everything is wrong. You might think I'm exaggerating, but here's the first two lines of the script:

#!/bin/bash
home_dir="/home/"${USER}"/initech.bitlocker"

That is not how you find out the user's home directory. We'll usually use ${HOME}, or since the shebang tells us this is definitely bash, we could just use ~. Jan also points out that while a username probably shouldn't have a space, it's possible, and since the ${USER} isn't in quotes, this breaks in that case.
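
For comparison, a corrected version of those two lines could look like this sketch (not part of the original script):

#!/bin/bash
home_dir="${HOME}/initech.bitlocker"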

echo ${home_dir}
install_dir=$1
if [ ! -d "${install_dir}" ]; then
install_dir=${home_dir}
if [ ! -d "${install_dir}" ]; then
echo "create dir : "${install_dir}
mkdir ${install_dir}

Who wants indentation in their scripts? And if a script supports arguments, should we tell the user about it? Of course not! Just check to see if they supplied an argument, and if they did, we'll treat that as the install directory.

As a bonus, the mkdir line protects people like Jan who run this script as root, at least if their home directory is /root, which is common. When it tries to mkdir /home/root/initech.bitlocker, the script fails there.

echo "Install software to ${install_dir}" cp -rf ./* ${install_dir}"/"

Once again, the person who wrote this script doesn't seem to understand what the double quotes in Bash are for, but the real magic is the next segment:

echo "Copy runtime environment ..." sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/ sudo cp -f ./libssl.so.1.0.0 /usr/lib64 sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/ sudo cp -f ./libssl.so.1.0.0 /usr/lib64

Did you have libssl already installed in your system? Well now you have this version! Hope that's okay for you. We like our version of libssl and libcrypto so much we're copying them into your library directories twice. They probably meant to copy libcrypto and libssl to both lib and lib64, but messed up.

Well, that is assuming you already have a lib64 directory, because if you don't, you now have a lib64 file which contains the data from libssl.so.1.0.0.
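
What the author presumably meant (setting aside whether overwriting the system's libssl and libcrypto is ever a good idea) is sketched below; note this is my reconstruction, not the vendor's code:

sudo cp -f ./libcrypto.so.1.0.0 ./libssl.so.1.0.0 /usr/lib/
sudo cp -f ./libcrypto.so.1.0.0 ./libssl.so.1.0.0 /usr/lib64/

With two source files, cp also insists that the target is a directory, so a missing /usr/lib64 produces an error instead of a stray file named lib64.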

This is the installer for a piece of software which has been released as part of a product that Initech wants to sell, and they don't successfully install it.

sudo ln -s ${install_dir}/mount.bitlocker /usr/bin/mount.bitlocker
sudo ln -s ${install_dir}/bitlocker.creator /usr/bin/create.bitlocker
sudo ln -s ${install_dir}/activate.sh /usr/bin/initech.bitlocker.active
sudo ln -s ${install_dir}/initech.mount.sh /usr/bin/initech.bitlocker.mount
sudo ln -s ${install_dir}/initech.bitlocker.sh /usr/bin/initech.bitlocker

Hey, here's an install step with no critical mistakes, assuming that no other package or tool has tried to claim those names in /usr/bin, which is probably true (Jan actually checked this using dpkg -S … to see if any packages wanted to use that path).

source /etc/os-release
case $ID in
debian|ubuntu|devuan)
    echo "Installing dependent package - curl ..."
    sudo apt-get install curl -y
    echo "Installing dependent package - openssl ..."
    sudo apt-get install openssl -y
    echo "Installing dependent package - fuse ..."
    sudo apt-get install fuse -y
    echo "Installing dependent package - gksu ..."
    sudo apt-get install gksu -y
    ;;

Here's the first branch of our case. They've learned to indent. They've chosen to slap the -y flag on all the apt-get commands, which means the user isn't going to get a choice about installing these packages, which is mildly annoying. It's also worth noting that sourcing /etc/os-release can be considered harmful, but clearly "not doing harm" isn't high on this script's agenda.

centos|fedora|rhel)
    yumdnf="yum"
    if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
    yumdnf="dnf"
    fi
    echo "Installing dependent package - curl ..."
    sudo $yumdnf install -y curl
    echo "Installing dependent package - openssl ..."
    sudo $yumdnf install -y openssl
    echo "Installing dependent package - fuse ..."
    sudo $yumdnf install -y fuse3-libs.x86_64
    ;;

So, maybe they just don't think if supports additional indentation? They indent the case fine. I'm not sure what their thinking is.

Speaking of if, look closely at that version check: test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0.

Now, this is almost clever. If your Linux version number uses decimal values, like 18.04, you can't do a simple if [ "$VERSION_ID" -ge 22 ]…: you'd get an integer expression expected error. So using bc does make sense…ish. It would be good to check if, y'know, bc were actually installed (it probably is, but you don't know), and it might be better to actually think about the purpose of the check.

They don't actually care what version of Redhat Linux you're running. What they're checking is whether your version uses yum for package management, or its successor dnf. A more reliable check would be to simply see if dnf is a valid command, and if not, fall back to yum, as sketched below.
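
A sketch of that simpler check (again, my suggestion rather than anything in the script):

yumdnf="yum"
if command -v dnf >/dev/null 2>&1; then
    yumdnf="dnf"
fi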

Let's finish out the case statement:

*)
    exit 1
    ;;
esac

So if your system doesn't use an apt based package manager or a yum/dnf based package manager, this just bails at this point. No error message, just an error number. You know it failed, and you don't know why, and it failed after copying a bunch of crap around your system.
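
A sketch of a fallback branch that at least tells the user why it's giving up (my suggestion, not what the script does):

*)
    echo "Error: unsupported distribution '${ID}', cannot install dependencies." >&2
    exit 1
    ;;
esac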

So first it mostly installs itself, then it checks to see if it can successfully install all of its dependencies. And if it fails, does it clean up the changes it made? You better believe it doesn't!

echo "" echo "Initech BitLocker Loader has been installed to "${install_dir}" successfully." echo "Run initech.bitlocker --help to learn more about Initech BitLocker Loader"

This is a pretty optimistic statement, and while yes, it has theoretically been installed to ${install_dir}, assuming that we've gotten this far, it's really installed to your /usr/bin directory.

The real extra super-special puzzle to me is that it interfaces with your package manager to install dependencies. But it also installs its own versions of libcrypto and libssl, which don't come from your package manager. Ignoring the fact that it probably installs them into the wrong places, it seems bad. Suspicious, bad, and troubling.

Jan didn't send us the uninstall script, and honestly, I assume there isn't one. But if there is one, you know it probably tries to do rm -rf /${SOME_VAR_THAT_MIGHT_BE_EMPTY} somewhere in there. Which, in consideration, is probably the safest way to uninstall this software anyway.


Planet Debian: Ulrike Uhlig: Code reviews: from nitpicking to cooperation

After we gave our talk at DebConf 20, Doing things together, there were 5 minutes left for the live Q&A. Pollo asked a question that I think is interesting and deserves a longer answer: How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

I find it useful to first disentangle what code reviews are good for, how we do them, why we do them that way, and how we can potentially improve processes.

What are code reviews good for?

Code review and peer review are great methods for cooperation aiming at:

  • Ensuring that the code works as intended
  • Ensuring that the task was fully accomplished and no detail left out
  • Ensuring that no security issues have been introduced
  • Making sure the code fits the practices of the team and is understandable and maintainable by others
  • Sharing insights, transferring knowledge between code author and code reviewer
  • Helping the code author to learn to write better code

Looking at this list, the last point seems to be more like a nice side effect of all the other points. :)

How do code reviews happen in our communities?

It seems to be a common assumption that code reviews are—and have to be—picky and perfectionist. To me, this does not actually seem to be a necessity to accomplish the above mentioned goals. We might want to work with precision—a quality which is different from perfection. Perfection can hardly be a goal: perfection does not exist.

Perfectionist dynamics can lead to failing to call something "good enough" or "done". Sometimes, a disproportionate amount of time is invested in writing (several) code reviews for minor issues. In some cases, strong perfectionist dynamics of a reviewer can create a feeling of never being good enough along with a loss of self esteem for otherwise skilled code authors.

When do we cross the line?

When going from cooperation, precision, and learning to write better code, to nitpicking, we are crossing a line: nitpicking means to pedantically search for others' faults. For example, I once got one of my Git commits at work criticized merely for its commit message that was said to be "ugly" because I "use[d] the same word twice" in it.

When we are nitpicking, we might not give feedback in an appreciative, cooperative way, we become fault finders instead. From there it's a short way to operating on the level of blame.

Are you nitpicking to help or are you nitpicking to prove something? Motivations matter.

How can we improve code reviewing?

When we did something wrong, we can do better next time. When we are told that we are wrong, the underlying assumption is that we cannot change (See Brené Brown, The difference between blame and shame). We can learn to go beyond blame.

Negative feedback rarely leads to improvement if the environment in which it happens lacks general appreciation and confirmation. We can learn to give helpful feedback. It might be harder to create an appreciative environment in which negative feedback is a possibility for growth. One can think of it like a relationship: in a healthy relationship we can tell each other when something does not work and work it out—because we regularly experience that we respect, value, and support each other.

To be able to work precisely, we need guidelines, tools, and time. It's not possible to work with precision if we are in a hurry, burnt out, or working under a permanent state of exception. The same is true for receiving picky feedback.

On DebConf's IRC channel, after our talk, marvil07 said: On picky code reviews, something that I find useful is automation on code reviews; i.e. when a bot is stating a list of indentation/style errors it feels less personal, and also saves time to humans to provide more insightful changes. Indeed, we can set up routines that do automatic fault checking (linting). We can set up coding guidelines. We can define what we call "done" or "good enough".

We can negotiate with each other how we would like code to be reviewed. For example, one could agree that a particularly perfectionist reviewer should point out only functional faults. They can spare their time and refrain from writing lengthy reviews about minor esthetic issues that have never made it into a guideline. If necessary, author and reviewer can talk about what can be improved in the long term during a retrospective. Or, on the contrary, one could explicitly ask for a particularly detailed review including all sorts of esthetic issues to learn the best practices of a team applied to one's own code.

In summary: let's not lose sight of what code reviews are good for, let's have a clear definition of "done", let's not confuse precision with perfection, let's create appreciative work environments, and negotiate with each other how reviews are made.

I'm sure you will come up with more ideas. Please do not hesitate to share them!

Planet Debian: Norbert Preining: Social Equality and Free Software – BoF at DebConf20

Shortly after yesterday’s start of the Debian Conference 2020, I had the honor to participate in a BoF on social equality in free software, led by the OSI vice president and head of the FOSSASIA community, Hong Phuc Dang. The group of discussants consisted of OSS representatives from a wide variety of countries (India, Indonesia, China, Hong Kong, Germany, Vietnam, Singapore, Japan).

After a short introduction by Hong Phuc we turned to a self-introduction and “what is equality for me” round. This brought up already a wide variety of issues that need to be addressed if we want to counter inequality in free software (culture differences, language barriers, internet connection, access to services, onboarding difficulties, political restrictions, …).

Unfortunately, on-air time was rather restricted, but even after the DebConf related streaming time slot was finished, we continued discussing problems and possible approaches for another two hours. We have agreed to continue our collaboration and meetings in the hope that we, in particular the FOSSASIA community, can support those in need to counter inequality.

Concluding, I have to say I am very happy to be part of the FOSSASIA community – where real diversity is lived and everyone strives for and tries to increase social equality. In the DebConf IRC chat I was asked why at FOSSASIA we have about a 50:50 ratio between women and men, in contrast to the usual 10:90 predominant in most software communities including Debian. For me this boils down to many reasons, one being competent female leadership: Hong Phuc is inspiring and competent to a degree I haven’t seen in anyone else. Another reason is of course that software development is, especially in developing countries, one of the few “escape pods” for any gender, and thus fully embraced by normally underrepresented groups. Finally, but this is a typical chicken-egg problem, the FOSSASIA community is not doing any specific gender politics, but simply remains open and friendly to everyone. I think Debian, and in particular the diversity movement in Debian, can learn a lot from the FOSSASIA community. In the end, we are all striving for more equality in our projects and in the realm of free software as a whole!

Thanks again for all the participants for the very inspiring discussion, and I am looking forward to our next meetings!

Planet Debian: Arnaud Rebillout: Send emails from your terminal with msmtp

In this tutorial, we'll configure everything needed to send emails from the terminal. We'll use msmtp, a lightweight SMTP client. For the sake of the example, we'll use a GMail account, but any other email provider can do. Your OS is expected to be Debian, as usual on this blog, although it doesn't really matter. We will also see how to store the credentials for the email account in the system keyring. And finally, we'll go the extra mile, and see how to configure various command-line utilities so that they automatically use msmtp to send emails. Even better, we'll make msmtp the default email sender, to actually avoid configuring these utilities one by one.

Prerequisites

Strong prerequisites (if you don't recognize yourself here, you probably landed on the wrong page):

  • You run Linux on your computer (let's assume a Debian-like distro).
  • You want to send emails from your terminal.

Weak prerequisites (if your setup doesn't match those points exactly, that's fine, you can still read on):

  • Your email account is a GMail one.
  • Your desktop environment is GNOME.

GMail account setup

For a GMail account, there's a bit of configuration to do. For other email providers, I have no idea, maybe you can just skip this part, or maybe you will have to go through a similar procedure.

If you want an external program (msmtp in this case) to talk to the GMail servers on your behalf, and send emails, you can't just use your usual GMail password. Instead, GMail requires you to generate so-called app passwords, one for each application that needs to access your GMail account.

This approach has several advantages:

  • it will basically work, GMail won't block you because it thinks that you're trying to sign in from an unknown device, a weird location or whatever.
  • your main GMail password remains secret, you won't have to write it down in any configuration file or anywhere else.
  • you can change your main GMail password, no breakage, apps will still work as each of them use their own passwords.
  • you can revoke an app password anytime, without impacting anything else.

So app passwords are a good idea, it just requires a bit of work to set it up. Let's see what it takes.

First, 2-Step Verification must be enabled on your GMail account. Visit https://myaccount.google.com/security, and if that's not the case, enable it. You'll need to authorize all of your devices (computer(s), phone(s) and so on), and it can be a bit tedious, granted. But you only have to do it once in a lifetime, and after it's done, you're left with a more secure account, so it's not that bad, right?

Enabling the 2-Step Verification will unlock the feature we need: App passwords. Visit https://myaccount.google.com/apppasswords, and under "Signing in to Google", click "App passwords", and generate one. An app password is a 16-character string, something like qwertyuiopqwerty. It's supposed to be used from only one place, i.e. from ONE application that is installed on ONE device. That's why it's common to give it a name of the form application@device, so in our case it could be msmtp@laptop, but really it's free form, choose whatever name suits you, as long as it makes sense to you.

So let's give a name to this app password, write it down for now, and we're done with the GMail config.

Send your first email

Time to get started with msmtp.

First things first: installation is trivial:

sudo apt install msmtp

Let's try to send an email. At this point, we have not created any configuration file for msmtp yet, so we have to provide every detail on the command line.

# Write a dummy email
cat << EOF > message.txt
From: YOUR_LOGIN@gmail.com
To: SOMEONE_ELSE@SOMEWHERE_ELSE.com
Subject: Cafe Sua Da

Iced-coffee with condensed milk
EOF

# Send it
cat message.txt | msmtp \
    --auth=on --tls=on \
    --host smtp.gmail.com \
    --port 587 \
    --user YOUR_LOGIN \
    --read-envelope-from \
    --read-recipients

# msmtp prompts you for your password:
# this is where goes the app password!

Obviously, in this example you should replace the uppercase words with the real thing, that is, your email login, and real email addresses.

Also, let me insist, you must enter the app password that was generated previously, not your real GMail password.

And it should work already, this email should have been sent and received by now.

So let me explain quickly what happened here.

In the file message.txt, we provided From: (the email address of the person sending the email) and To: (the destination email address). Then we asked msmtp to re-use those values to set the envelope of the email with --read-envelope-from and --read-recipients.

What about the other parameters?

  • --auth=on because we want to authenticate with the server.
  • --tls=on because we want to make sure that the communication with the server is encrypted.
  • --host and --port tells where to find the server. If you don't use GMail, adjust that accordingly.
  • --user is obviously your GMail username.

For more details, you should refer to the msmtp documentation.

Write a configuration file

So we could send an email, that's cool already.

However the command to do that was a bit long, and we don't want to juggle with all these arguments every time we send an email. So let's write down all of that into a configuration file.

msmtp supports two locations: ~/.msmtprc and ~/.config/msmtp/config, at your preference. In this tutorial we'll use ~/.msmtprc for brevity:

cat << 'EOF' > ~/.msmtprc
defaults
tls on

account gmail
auth on
host smtp.gmail.com
port 587
user YOUR_LOGIN
from YOUR_LOGIN@gmail.com

account default : gmail
EOF

And for a quick explanation:

  • under defaults are the default values for all the following accounts.
  • under account are the settings specific to this account, until another account line is found.
  • finally, the last line defines which account is the default.

All in all it's pretty simple, and it's becoming easier to send an email:

# Write a dummy email. Note that the
# header 'From:' is no longer needed,
# it's already in '~/.msmtprc'.
cat << 'EOF' > message.txt
To: SOMEONE_ELSE@SOMEWHERE_ELSE.com
Subject: Flat White

The milky way for coffee
EOF

# Send it
cat message.txt | msmtp \
    --account default \
    --read-recipients

Actually, --account default is not needed, as it's the default anyway if you don't provide a --account argument. Furthermore --read-recipients can be shortened as -t. So we can make it real short now:

msmtp -t < message.txt

At this point, life is good! Except for one thing maybe: we still have to type the password every time we send an email. Surely it must be possible to avoid that annoyance...

Store your password in the system keyring

For this part, we'll make use of the libsecret tool to store the password in the system keyring via the Secret Service API. It means that your desktop environment should implement the Secret Service specification, which is the case for both GNOME and KDE.

Note that GNOME provides Seahorse to have a look at your secrets, KDE has the KDE Wallet. There's also KeePassXC, which I have only heard of but never used. I guess it can be your password manager of choice if you use neither GNOME nor KDE.

For those running an up-to-date Debian unstable, you should have msmtp >= 1.8.11-2, and you're all good to go. For those having an older version than that however, you will have to install the package msmtp-gnome in order to have msmtp built with libsecret support. Note that this package depends on seahorse, hence it pulls in a good part of the GNOME stack when you install it. For those not running GNOME, that's unfortunate. All of this was discussed and fixed in #962689.

Alright! So let's just make sure that the libsecret tools are installed:

sudo apt install libsecret-tools

And now we can store our password in the system keyring with this command:

secret-tool store --label msmtp \
    host smtp.gmail.com \
    service smtp \
    user YOUR_LOGIN
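
To double-check from the terminal that the secret was stored, you can query it back using the same attributes (note that this prints the password in clear text):

secret-tool lookup host smtp.gmail.com service smtp user YOUR_LOGIN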

If this looks a bit too magic, and you want something more visual, you can actually fire a GUI like seahorse (for GNOME users), or kwalletmanager5 (for KDE users), and then you will see what passwords are stored in there.

Here's a screenshot of Seahorse, with a msmtp password stored:

seahorse with msmtp password

Let's try to send an email again:

msmtp -t < message.txt

No need for a password anymore, msmtp got it from the system keyring!

For more details on how msmtp handles the passwords, and to see what other methods are supported, refer to the extensive documentation.

Use-cases and integration

Let's go over a few use-cases, situations where you might end up sending emails from the command-line, and what configuration is required to make it work with msmtp.

Git Send-Email

Sending emails with git is a common workflow for some projects, like the Linux kernel. How does git send-email actually send emails? From the git-send-email manual page:

the built-in default is to search for sendmail in /usr/sbin, /usr/lib and $PATH if such program is available

It is possible to override this default though:

--smtp-server=
[...] Alternatively it can specify a full pathname of a sendmail-like program instead; the program must support the -i option.

So in order to use msmtp here, you'd add a snippet like that to your ~/.gitconfig file:

[sendemail]
    smtpserver = /usr/bin/msmtp

For a full guide, you can also refer to https://git-send-email.io.

Debian developer tools

Tools like bts or reportbug are also good examples of command-line tools that need to send emails.

From the bts manual page:

--sendmail=SENDMAILCMD
Specify the sendmail command [...] Default is /usr/sbin/sendmail.

So if you want bts to send emails with msmtp instead of sendmail, you must use bts --sendmail='/usr/bin/msmtp -t'.

Note that bts also loads settings from the file /etc/devscripts.conf and ~/.devscripts, so you could also set BTS_SENDMAIL_COMMAND='/usr/bin/msmtp -t' in one of those files.

From the reportbug manual page:

--mta=MTA
Specify an alternate MTA, instead of /usr/sbin/sendmail (the default).

In order to use msmtp here, you'd write reportbug --mta=/usr/bin/msmtp.

Note that reportbug reads its settings from /etc/reportbug.conf and ~/.reportbugrc, so you could as well set mta /usr/bin/msmtp in one of those files.

So who is this sendmail again?

By now, you probably noticed that sendmail seems to be considered the default tool for the job, the "traditional" command that has been around for ages.

Rather than configuring every tool to use something else than sendmail, wouldn't it be simpler to actually replace sendmail by msmtp? Like, create a symlink that points to msmtp, something like ln -sr /usr/bin/msmtp /usr/sbin/sendmail? So that msmtp acts as a drop-in replacement for sendmail, and there's nothing else to configure?

Answer is yes, kind of. Actually, the first msmtp feature that is listed on the homepage is "Sendmail compatible interface (command line options and exit codes)". Meaning that msmtp is a drop-in replacement for sendmail, that seems to be the intent.

However, you should refrain from creating or modifying anything in /usr, as it's the territory of the package manager, apt. Any change in /usr might be overwritten by apt the next time you run an upgrade or install new packages.

In the case of msmtp, there is actually a package named msmtp-mta that will create this symlink for you. So if you really want a definitive replacement for sendmail, there you go:

sudo apt install msmtp-mta

From this point, sendmail is now a symlink /usr/sbin/sendmail → /usr/bin/msmtp, and there's no need to configure git, bts, reportbug or any other tool that would rely on sendmail. Everything should work "out of the box".

Conclusion

I hope that you enjoyed reading this article! If you have any comment, feel free to send me a short email, preferably from your terminal!


Planet Debian: Jonathan Dowland: out of control

Chemical Brothers — Out Of Control (21 Minutes of Madness remix)

I picked this up last year. It was issued to promote the 20th anniversary re-issue of the parent album "Surrender". I remember liking this song back when it came out. At that time I didn't know who the guest singer was — Bernard Sumner — and if I had, it wouldn't have meant anything to me.

This is a pretty good mix. There's nothing "extra" in the mix, really, it's the same elements as the original 7 minute version, for 21 minutes this time, with perhaps some more production elements (more dubby stuff) but it doesn't seem to overstay its welcome.

Planet Debian: Enrico Zini: Doing things /together/

Here are the slides of mine and Ulrike's talk Doing things /together/.

Our thoughts about cooperation aspects of doing things together.

Sometimes in Debian we do work together with others, and sometimes we are a number of people who work alone, and happen to all upload their work in the same place.

In times when we have needed to take important decisions together, this distinction has become crucial, and some of us might have found that we were not as good at cooperation as we would have thought.

This talk is intended for everyone who is part of a larger community. We will show concepts and tools that we think could help understand and shape cooperation.

A recording of the talk should be available in the next day, and I'll replace this phrase with a video link once it's available.

The slides have extensive notes: you can use ViewNotes in LibreOffice Impress to see them.

Here are the Inkscape sources for the graphs:

Here are links to resources quoted in the talk:

In the Q&A, pollo asked:

How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

Ulrike wrote a more detailed answer: Code reviews: from nitpicking to cooperation

Planet Debian: Vincent Bernat: Zero-Touch Provisioning for Cisco IOS

The official documentation on how to automatically upgrade and configure a Cisco switch running IOS on first boot, like a Cisco Catalyst 2960-X Series switch, is scarce on details. This note explains how to configure the ISC DHCP Server for this purpose.


When booting for the first time, Cisco IOS sends a DHCP request on all ports:

Dynamic Host Configuration Protocol (Discover)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 0.0.0.0
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Discover)
    Option: (57) Maximum DHCP Message Size
    Option: (61) Client identifier
        Length: 25
        Type: 0
        Client Identifier: cisco-b414.896c.12c0-Vl1
    Option: (55) Parameter Request List
        Length: 12
        Parameter Request List Item: (1) Subnet Mask
        Parameter Request List Item: (66) TFTP Server Name
        Parameter Request List Item: (6) Domain Name Server
        Parameter Request List Item: (15) Domain Name
        Parameter Request List Item: (44) NetBIOS over TCP/IP Name Server
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (67) Bootfile name
        Parameter Request List Item: (12) Host Name
        Parameter Request List Item: (33) Static Route
        Parameter Request List Item: (150) TFTP Server Address
        Parameter Request List Item: (43) Vendor-Specific Information
        Parameter Request List Item: (125) V-I Vendor-specific Information
    Option: (255) End

It requests a number of options, including the Bootfile name option 67, the TFTP server address option 150 and the Vendor-Identifying Vendor-Specific Information Option 125—or VIVSO. Option 67 provides the name of the configuration file located on the TFTP server identified by option 150. Option 125 includes the name of the file describing the Cisco IOS image to use to upgrade the switch. This file only contains the name of the tarball embedding the image.1

Configuring the ISC DHCP Server to answer with the TFTP server address and the name of the configuration file is simple enough:

filename "ob2-p2.example.com";
option tftp-server-address 172.16.15.253;

However, if you want to also provide the image for upgrade, you have to specify a hexadecimal-encoded string:2

option vivso 00:00:00:09:24:05:22:63:32:39:36:30:2d:6c:61:6e:62:61:73:65:6b:39:2d:74:61:72:2e:31:35:30:2d:32:2e:53:45:31:31:2e:74:78:74;
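
Footnote 2 below breaks down the fields; for the curious, here is a small shell sketch (mine, not from Cisco's documentation) that reproduces the string from the image-description filename:

f="c2960-lanbasek9-tar.150-2.SE11.txt"
# enterprise number 9 (Cisco), enclosed length, sub-option 5, sub-option length, then the filename bytes
printf '00:00:00:09:%02x:05:%02x' $(( ${#f} + 2 )) "${#f}"
printf ':%02x' $(printf '%s' "$f" | od -An -tu1)
echo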

Having a large hexadecimal-encoded string inside a configuration file is quite unsatisfying. Instead, the ISC DHCP Server allows you to express this information in a more readable way using the option space statement:

# Create option space for Cisco and encapsulate it in VIVSO/vendor space
option space cisco code width 1 length width 1;
option cisco.auto-update-image code 5 = text;
option vendor.cisco code 9 = encapsulate cisco;

# Image description for Cisco IOS ZTP
option cisco.auto-update-image = "c2960-lanbasek9-tar.150-2.SE11.txt";

# Workaround for VIVSO option 125 not being sent
option vivso.iana code 0 = string;
option vivso.iana = 01:01:01;

Without the workaround mentioned in the last block, the ISC DHCP Server would not send back option 125. With such a configuration, it returns the following answer, including a harmless additional enterprise 0 encapsulated into option 125:

Dynamic Host Configuration Protocol (Offer)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address: 0.0.0.0
    Your (client) IP address: 172.16.15.6
    Next server IP address: 0.0.0.0
    Relay agent IP address: 0.0.0.0
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name: ob2-p2.example.com
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Offer)
    Option: (54) DHCP Server Identifier (172.16.15.252)
    Option: (51) IP Address Lease Time
    Option: (1) Subnet Mask (255.255.248.0)
    Option: (6) Domain Name Server
    Option: (3) Router
    Option: (150) TFTP Server Address
        Length: 4
        TFTP Server Address: 172.16.15.252
    Option: (125) V-I Vendor-specific Information
        Length: 49
        Enterprise: Reserved (0)
        Enterprise: ciscoSystems (9)
            Length: 36
            Option 125 Suboption: 5
                Length: 34
                Data: 63323936302d6c616e626173656b392d7461722e3135302d…
    Option: (255) End

  1. The reason for this indirection still puzzles me. I suppose it could be because updating the image name directly in option 125 is quite a hassle. ↩︎

  2. It contains the following information:

    • 0x00000009: Cisco’s Enterprise Number,
    • 0x24: length of the enclosed data,
    • 0x05: Cisco’s auto-update sub-option,
    • 0x22: length of the sub-option data, and
    • filename of the image description (c2960-lanbasek9-tar.150-2.SE11.txt).

    ↩︎

Planet Debian: Philipp Kern: Self-service buildd givebacks now use Salsa auth

As client certificates are on the way out and Debian's SSO solution is effectively not maintained any longer, I switched self-service buildd givebacks over to Salsa authentication. It lives again at https://buildd.debian.org/auth/giveback.cgi. For authorization you still need to be in the "debian" group for now, i.e. be a regular Debian member.

For convenience the package status web interface now features an additional column "Actions" with generated "giveback" links.

Please remember to file bugs if you give builds back because of flakiness of the package rather than the infrastructure and resist the temptation to use this excessively to let your package migrate. We do not want to end up with packages that require multiple givebacks to actually build in stable, as that would hold up both security and stable updates needlessly and complicate development.


Planet Debian: Norbert Preining: Converting html to mp4

Such an obvious problem: convert a piece of html/js/css, often with animations, to a video (mp4 or similar). We were confronted with exactly this problem for the TUG 2020 online conference. Searching the internet mostly turned up web services, some of them rather expensive. In the end (below I will give a short history) it turned out to be rather simple.

The key is to use timesnap, a tool to take screenshots from web pages. It is actively maintained, and internally uses puppeteer, which in turn runs the Google Chrome browser headless. This also means that rendering quality is very high.

So having an html file available, with all the necessary assets, either online or local, one simply creates enough single screenshots per second so that they can be assembled later on into a video with ffmpeg.

In our case, we wanted our leaders to last 10secs before the actual presentation video starts. I decided to render at 30fps, which left me with the simple invocation:

timesnap Leader.html --viewport=1920,1080 --fps=30 --duration=10 --output-pattern="leader-%03d.png"

followed by conversion of the various png images to an mp4:

ffmpeg -r 30 -f image2 -s 1920x1080 -i leader-%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p leader.mp4

The -r is the fps, so it needs to agree with the --fps above; likewise the --viewport and -s values should agree (the sketch below keeps them in shell variables so they can't drift apart). -crf is the video quality, and -pix_fmt the pixel format.
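
A tiny wrapper sketch (same file names and values as above) that keeps the values which must agree in shell variables:

W=1920; H=1080; FPS=30
timesnap Leader.html --viewport=${W},${H} --fps=${FPS} --duration=10 --output-pattern="leader-%03d.png"
ffmpeg -r ${FPS} -f image2 -s ${W}x${H} -i leader-%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p leader.mp4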

With that very simple and quick invocation a nice leader video was ready!

History

It was actually more complicated than normal. For similar problems, it usually takes me about 5min of googling and a bit of scripting, but this time it was a long way. Simply searching for “convert html to mp4” doesn't turn up much besides web services, often paid ones. At some point I came up with the idea to use Electron, which led me to Electron Recorder, which looked promising, but didn't work.

A bit more searching led me to PhantomJS, which is not developed anymore, but there was some explanation how to dump frames using phantomjs and merge them using ffmpeg, very similar to the above. Unfortunately, the rendering of the html page by phantomjs was broken, and thus not usable.

Thus I ventured off into searching for alternatives to PhantomJS, which brought me to puppeteer, and from there it wasn't a long way to timesnap.

It is still surprising to me that such a basic task is not well documented, so hopefully this page helps some users.

Planet Debian: Jelmer Vernooij: Debian Janitor: > 60,000 Lintian Issues Automatically Fixed

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Scheduling Lintian Fixes

To determine which packages to process, the Janitor looks at the import of lintian output across the archive that is available in UDD [1]. It will prioritize the packages with the most (and most severe) issues that it has fixers for.

Once a package is selected, it will clone the packaging repository and run lintian-brush on it. Lintian-brush provides a framework for applying a set of “fixers” to a package. It runs each fixer in a pristine version of the repository, and handles most of the heavy lifting.

The Inner Workings of a Fixer

Each fixer is just an executable which gets run in a clean checkout of the package, and can make changes there. Most of the fixers are written in Python or shell, but they can be in any language.

The contract for fixers is pretty simple:

  • If the fixer exits with non-zero, the changes are reverted and the fixer is considered to have failed
  • If it exits with zero and made changes, then it should write a summary of its changes to standard out

If a fixer is uncertain about the changes it has made, it should report so on standard output using a pseudo-header. By default, lintian-brush will discard any changes with uncertainty but if you are running it locally you can still apply them by specifying --uncertain.

The summary message on standard out will be used for the commit message and (possibly) the changelog message, if the package doesn’t use gbp dch.

Example Fixer

Let’s look at an example. The package priority “extra” has been deprecated since Debian Policy 4.0.1 (released August 2017) – see Policy 2.5 "Priorities". Instead, most packages should use the “optional” priority.

Lintian will warn when a package uses the deprecated “extra” value for the “Priority” - the associated tag is priority-extra-is-replaced-by-priority-optional. Lintian-brush has a fixer script that can automatically replace “extra” with “optional”.

On systems that have lintian-brush installed, the source for the fixer lives in /usr/share/lintian-brush/fixers/priority-extra-is-replaced-by-priority-optional.py, but here is a copy of it for reference:

#!/usr/bin/python3

from debmutate.control import ControlEditor
from lintian_brush.fixer import report_result, fixed_lintian_tag

with ControlEditor() as updater:
    for para in updater.paragraphs:
        if para.get("Priority") == "extra":
            para["Priority"] = "optional"
            fixed_lintian_tag(
                para, 'priority-extra-is-replaced-by-priority-optional')

report_result("Change priority extra to priority optional.")

This fixer is written in Python and uses the debmutate library to easily modify control files while preserving formatting — or back out if it is not possible to preserve formatting.

All the current fixers come with tests, e.g. for this particular fixer the tests can be found here: https://salsa.debian.org/jelmer/lintian-brush/-/tree/master/tests/priority-extra-is-replaced-by-priority-optional.

For more details on writing new fixers, see the README for lintian-brush.

For more details on debugging them, see the manual page.

Successes by fixer

Here is a list of the fixers currently available, with the number of successful merges/pushes per fixer:

Lintian Tag Previously merged/pushed Ready but not yet merged/pushed
uses-debhelper-compat-file 4906 4161
upstream-metadata-file-is-missing 4281 3841
package-uses-old-debhelper-compat-version 4256 3617
upstream-metadata-missing-bug-tracking 2438 2995
out-of-date-standards-version 2062 2936
upstream-metadata-missing-repository 1936 2987
trailing-whitespace 1720 2295
insecure-copyright-format-uri 1791 1093
package-uses-deprecated-debhelper-compat-version 1391 1287
vcs-obsolete-in-debian-infrastructure 872 782
homepage-field-uses-insecure-uri 527 1111
vcs-field-not-canonical 850 655
debian-changelog-has-wrong-day-of-week 224 376
debian-watch-uses-insecure-uri 314 242
useless-autoreconf-build-depends 112 428
priority-extra-is-replaced-by-priority-optional 315 194
debian-rules-contains-unnecessary-get-orig-source-target 35 428
tab-in-license-text 125 320
debian-changelog-line-too-long 186 190
debian-rules-sets-dpkg-architecture-variable 69 166
debian-rules-uses-unnecessary-dh-argument 42 182
package-lacks-versioned-build-depends-on-debhelper 125 95
unversioned-copyright-format-uri 43 136
package-needs-versioned-debhelper-build-depends 127 50
binary-control-field-duplicates-source 34 134
renamed-tag 73 69
vcs-field-uses-insecure-uri 14 109
uses-deprecated-adttmp 13 91
debug-symbol-migration-possibly-complete 12 88
copyright-refers-to-symlink-license 51 48
debian-control-has-unusual-field-spacing 33 66
old-source-override-location 32 62
out-of-date-copyright-format 20 62
public-upstream-key-not-minimal 43 30
older-source-format 17 54
custom-compression-in-debian-source-options 12 57
copyright-refers-to-versionless-license-file 29 39
tab-in-licence-text 33 31
global-files-wildcard-not-first-paragraph-in-dep5-copyright 28 33
out-of-date-copyright-format-uri 9 50
field-name-typo-dep5-copyright 29 29
copyright-does-not-refer-to-common-license-file 13 42
debhelper-but-no-misc-depends 9 45
debian-watch-file-is-missing 11 41
debian-control-has-obsolete-dbg-package 8 40
possible-missing-colon-in-closes 31 13
unnecessary-testsuite-autopkgtest-field 32 9
missing-debian-source-format 7 33
debhelper-tools-from-autotools-dev-are-deprecated 9 29
vcs-field-mismatch 8 29
debian-changelog-file-contains-obsolete-user-emacs-setting 33 0
patch-file-present-but-not-mentioned-in-series 24 9
copyright-refers-to-versionless-license-file 22 9
debian-control-has-empty-field 25 6
missing-build-dependency-for-dh-addon 10 20
obsolete-field-in-dep5-copyright 15 13
xs-testsuite-field-in-debian-control 20 7
ancient-python-version-field 13 12
unnecessary-team-upload 19 5
misspelled-closes-bug 6 16
field-name-typo-in-dep5-copyright 1 20
transitional-package-not-oldlibs-optional 4 17
maintainer-script-without-set-e 9 11
dh-clean-k-is-deprecated 4 14
no-dh-sequencer 14 4
missing-vcs-browser-field 5 12
space-in-std-shortname-in-dep5-copyright 6 10
xc-package-type-in-debian-control 4 11
debian-rules-missing-recommended-target 4 10
desktop-entry-contains-encoding-key 1 13
build-depends-on-obsolete-package 4 9
license-file-listed-in-debian-copyright 1 12
missing-built-using-field-for-golang-package 9 4
unused-license-paragraph-in-dep5-copyright 4 7
missing-build-dependency-for-dh_command 6 4
comma-separated-files-in-dep5-copyright 3 6
systemd-service-file-refers-to-var-run 4 5
copyright-not-using-common-license-for-apache2 3 5
debian-tests-control-autodep8-is-obsolete 2 6
dh-quilt-addon-but-quilt-source-format 2 6
no-homepage-field 3 5
font-packge-not-multi-arch-foreign 1 6
homepage-in-binary-package 1 4
vcs-field-bitrotted 1 3
built-using-field-on-arch-all-package 2 1
copyright-should-refer-to-common-license-file-for-apache-2 1 2
debian-pyversions-is-obsolete 3 0
debian-watch-file-uses-deprecated-githubredir 1 1
executable-desktop-file 1 1
skip-systemd-native-flag-missing-pre-depends 1 1
vcs-field-uses-not-recommended-uri-format 1 1
init.d-script-needs-depends-on-lsb-base 1 0
maintainer-also-in-uploaders 1 0
public-upstream-keys-in-multiple-locations 1 0
wrong-debian-qa-group-name 1 0
Total 29656 32209

Footnotes

[1] Temporarily unavailable due to Debian bug #960156 – but the Janitor is relying on historical data

For more information about the Janitor's lintian-fixes efforts, see the landing page


Cryptogram: Friday Squid Blogging: Rhode Island's State Appetizer Is Calamari

Rhode Island has an official state appetizer, and it's calamari. Who knew?

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: FBI, CISA Echo Warnings on ‘Vishing’ Threat

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on Thursday issued a joint alert to warn about the growing threat from voice phishing or “vishing” attacks targeting companies. The advisory came less than 24 hours after KrebsOnSecurity published an in-depth look at a crime group offering a service that people can hire to steal VPN credentials and other sensitive data from employees working remotely during the Coronavirus pandemic.

“The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification,” the alert reads. “In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting — with the end goal of monetizing the access.”

As noted in Wednesday’s story, the agencies said the phishing sites set up by the attackers tend to include hyphens, the target company’s name, and certain words — such as “support,” “ticket,” and “employee.” The perpetrators focus on social engineering new hires at the targeted company, and impersonate staff at the target company’s IT helpdesk.

The joint FBI/CISA alert (PDF) says the vishing gang also compiles dossiers on employees at the specific companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research. From the alert:

“Actors first began using unattributed Voice over Internet Protocol (VoIP) numbers to call targeted employees on their personal cellphones, and later began incorporating spoofed numbers of other offices and employees in the victim company. The actors used social engineering techniques and, in some cases, posed as members of the victim company’s IT help desk, using their knowledge of the employee’s personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee.”

“The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA [2-factor authentication] or OTP [one-time passwords]. The actor logged the information provided by the employee and used it in real-time to gain access to corporate tools using the employee’s account.”

The alert notes that in some cases the unsuspecting employees approved the 2FA or OTP prompt, either accidentally or believing it was the result of the earlier access granted to the help desk impersonator. In other cases, the attackers were able to intercept the one-time codes by targeting the employee with SIM swapping, which involves social engineering people at mobile phone companies into giving them control of the target’s phone number.

The agencies said crooks use the vished VPN credentials to mine the victim company databases for their customers’ personal information to leverage in other attacks.

“The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed,” the alert reads. “The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cashout scheme.”

The advisory includes a number of suggestions that companies can implement to help mitigate the threat from these vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network, where a second factor is used to authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.

Cryptogram: Yet Another Biometric: Bioacoustic Signatures

Sound waves through the body are unique enough to be a biometric:

"Modeling allowed us to infer what structures or material features of the human body actually differentiated people," explains Joo Yong Sim, one of the ETRI researchers who conducted the study. "For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum."

[...]

Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

"We were very surprised that people's bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly," says Sim. "These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day."

It's not great. A 97% accuracy rate is worse than fingerprints and iris scans, and while the researchers were able to reproduce the biometric a month later, it almost certainly changes as we age, gain and lose weight, and so on. Still, interesting.

Worse Than FailureError'd: Just a Suggestion

"Sure thing Google, I guess I'll change my language to... let's see...Ah, how about English?" writes Peter G.

 

Marcus K. wrote, "Breaking news: tt tttt tt,ttt!"

 

Tim Y. writes, "Nothing makes my day more than someone accidentally leaving testing mode enabled (and yes, the test number went through!)"

 

"I guess even thinning brows and psoriasis can turn political these days," Lawrence W. wrote.

 

Strahd I. writes, "It was evident at the time that King Georges VI should have asked for a V12 instead."

 

"Well, gee, ZDNet, why do you think I enabled this setting in the first place?" Jeroen V. writes.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianReproducible Builds (diffoscope): diffoscope 157 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 157. This version includes the following changes:

[ Chris Lamb ]

* Try ostensibly "data" files named .pgp against pgpdump to determine whether
  they are PGP files. (Closes: reproducible-builds/diffoscope#211)
* Don't raise an exception when we encounter XML files with "<!ENTITY>"
  declarations inside the DTD, or when a DTD or entity references an external
  resource. (Closes: reproducible-builds/diffoscope#212)
* Temporarily drop gnumeric from Build-Depends as it has been removed from
  testing due to Python 2.x deprecation. (Closes: #968742)
* Codebase changes:
  - Add support for multiple file extension matching; we previously supported
    only a single extension to match.
  - Move generation of debian/tests/control.tmp to an external script.
  - Move to our assert_diff helper entirely in the PGP tests.
  - Drop some unnecessary control flow, unnecessary dictionary comprehensions
    and some unused imports found via pylint.
* Include the filename in the "... not identified by any comparator"
  logging message.

You can find out more by visiting the project homepage.

,

Planet DebianBits from Debian: Lenovo, Infomaniak, Google and Amazon Web Services (AWS), Platinum Sponsors of DebConf20

We are very pleased to announce that Lenovo, Infomaniak, Google and Amazon Web Services (AWS), have committed to supporting DebConf20 as Platinum sponsors.

lenovologo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

infomaniaklogo

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

Googlelogo

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

AWSlogo

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

With these commitments as Platinum Sponsors, Lenovo, Infomaniak, Google and Amazon Web Services are contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much for your support of DebConf20!

Participating in DebConf20 online

The 21st Debian Conference is being held Online, due to COVID-19, from August 23rd to 29th, 2020. There are 7 days of activities, running from 10:00 to 01:00 UTC. Visit the DebConf20 website at https://debconf20.debconf.org to learn about the complete schedule, watch the live streaming and join the different communication channels for participating in the conference.

Planet DebianRitesh Raj Sarraf: LUKS Headless Laptop

As we grow old, so do our computing machines. And just as we don't decommission ourselves, the same should go for our machines. They should be semi-retired, delegating major tasks to newer machines while they can still serve some less demanding work: File Servers, UPnP Servers et cetera.

It is common on Debian-installer-based derivatives, and elsewhere too, to use block encryption on Linux. With machines from this decade, I think we've always had CPU extensions for encryption.

So, as would be the usual case, all my laptops are block encrypted. But as they reach the retirement phase of their life, serving as a headless boss, it becomes cumbersome to keep feeding them a password, along with all the logistics involved in doing so. As such, I wanted to get rid of the password prompt altogether.

Then, there’s also the case of bad/faulty hardware, many of which mostly can temporarily fix their functionality when reset, which usually is to reboot the machine. I still recollect words of my Linux Guru - Dhiren Raj Bhandari - that many of the unexplainable errors can be resolved by just rebooting the machine. This was more than 20 years ago in the prime era of Microsoft Windows OS and the context back then was quite different, but yes, some bits of that saying still apply today.

So I wanted my laptop, which had LUKS set up for 2 disks, to go password-less now. I stumbled across a slightly dated article where the author achieved similar results with keyscript. So the thing was doable.

To my delight, Debian cryptsetup has the best setup and documentation in place to do it by just adding keyfiles:

rrs@lenovo:~$ dd if=/dev/random of=sda7.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00540209 s, 94.8 kB/s
19:19 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ dd if=/dev/random of=sdb1.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00536747 s, 95.4 kB/s
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
[sudo] password for rrs: 
Enter any existing passphrase: 
No key available with this passphrase.
19:20 ♒♒♒    ☹ 😟=> 2  

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
Enter any existing passphrase: 
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sdb1 sdb1.key 
Enter any existing passphrase: 
19:21 ♒♒♒   ☺ 😄    

and there is nice integration with crypttab and the initramfs hooks to ensure your keys propagate to the initramfs:

rrs@lenovo:~$ cat /etc/cryptsetup-initramfs/conf-hook 
#
# Configuration file for the cryptroot initramfs hook.
#

#
# KEYFILE_PATTERN: ...
#
# The value of this variable is interpreted as a shell pattern.
# Matching key files from the crypttab(5) are included in the initramfs
# image.  The associated devices can then be unlocked without manual
# intervention.  (For instance if /etc/crypttab lists two key files
# /etc/keys/{root,swap}.key, you can set KEYFILE_PATTERN="/etc/keys/*.key"
# to add them to the initrd.)
#
# If KEYFILE_PATTERN if null or unset (default) then no key file is
# copied to the initramfs image.
#
# Note that the glob(7) is not expanded for crypttab(5) entries with a
# 'keyscript=' option.  In that case, the field is not treated as a file
# name but given as argument to the keyscript.
#
# WARNING: If the initramfs image is to include private key material,
# you'll want to create it with a restrictive umask in order to keep
# non-privileged users at bay.  For instance, set UMASK=0077 in
# /etc/initramfs-tools/initramfs.conf
#

KEYFILE_PATTERN="/etc/luks/sd*.key"
19:44 ♒♒♒   ☺ 😄    
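
Not shown in the transcript above is where those key files end up and how the devices are mapped to them. As a minimal sketch only (the UUIDs and mapping names below are placeholders, not values from my system), the key files are moved somewhere matching KEYFILE_PATTERN and referenced from /etc/crypttab, after which the initramfs is rebuilt:

sudo install -d -m 0700 /etc/luks
sudo install -m 0600 sda7.key sdb1.key /etc/luks/

# /etc/crypttab  (target  source  keyfile  options)
sda7_crypt  UUID=<uuid-of-sda7>  /etc/luks/sda7.key  luks
sdb1_crypt  UUID=<uuid-of-sdb1>  /etc/luks/sdb1.key  luks

sudo update-initramfs -u -k all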

The whole thing took me around 20-25 minutes, including drafting this post. From Retired Head and Password Prompt to Headless and Password-less. The beauty of Debian and FOSS.

CryptogramCopying a Key by Listening to It in Action

Researchers are using recordings of keys being used in locks to create copies.

Once they have a key-insertion audio file, SpiKey's inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock's pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key's inter-ridge distances and what locksmiths call the "bitting depth" of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.

The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. "Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door," says Ramesh.

Worse Than FailureCodeSOD: A Backwards For

Aurelia is working on a project where some of the code comes from a client. In this case, it appears that the client has very good reasons for hiring an outside vendor to actually build the application.

Imagine you have some Java code which needs to take an array of integers and iterate across them in reverse, to concatenate a string. Oh, and you need to add one to each item as you do this.

You might be thinking about some combination of a map/reverse/String.join operation, or maybe a for loop with an i-- type decrementer, something like the sketch below.
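
As a rough illustration of that conventional approach, here is a minimal sketch that reuses the getResults() helper from the original code and assumes the same requirements (iterate in reverse, add one to each item, join with commas), using StringBuilder rather than the original's StringBuffer:

public String getResultString(int numResults) {
	int[] results = getResults(numResults);          // same helper the original code relies on
	StringBuilder sb = new StringBuilder();
	for (int i = results.length - 1; i >= 0; i--) {  // walk the array backwards
		if (sb.length() > 0) {
			sb.append(",");
		}
		sb.append(results[i] + 1);                   // add one to each item
	}
	return sb.toString();
}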

I’m almost certain you aren’t thinking about this.

public String getResultString(int numResults) {
	StringBuffer sb = null;
	
	for (int result[] = getResults(numResults); numResults-- > 0;) {
		int i = result[numResults];
		if( i == 0){
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}else{
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}
	}
	return sb.toString();
}

I really, really want you to look at that for loop: for (int result[] = getResults(numResults); numResults-- > 0;)

Just look at that. It’s… not wrong. It’s not… bad. It’s just written by an alien wearing a human skin suit. Our initializer actually populates the array we’re going to iterate across. Our bounds check also contains the decrement operation. We don’t have a decrement clause.

Then, if i == 0 we’ll do the exact same thing as if i isn’t 0, since our if and else branches contain the same code.

Increment i, and store the result in j. Why we don’t use the ++i or some other variation to be in-line with our weird for loop, I don’t know. Maybe they were done showing off.

Then, if our StringBuffer is null, we create one, otherwise we append a ",". This is one solution to the concatenator’s comma problem. Again, it’s not wrong, it’s just… unusual.

But this brings us to the thing which is actually, objectively, honestly bad. The indenting.

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);

Look at that last line. Does that make you angry? Look more closely. Look for the curly brackets. Oh, you don’t see any? Very briefly, when I was looking at this code, I thought, “Wait, does this discard the first item?” No, it just eschews brackets and then indents wrong to make sure we’re nice and confused when we look at the code.

It should read:

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
                        sb.append(j);
[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

CryptogramUsing Disinformation to Cause a Blackout

Interesting paper: "How weaponizing disinformation can bring down a city's power grid":

Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow-through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.

I'm not sure the attack is practical, but it's an interesting idea.

Krebs on SecurityVoice Phishers Targeting Corporate VPNs

The COVID-19 epidemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.

According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.

And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.

“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”

TARGET: NEW HIRES

A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.

The employee phishing page bofaticket[.]com. Image: urlscan.io

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.

Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.

“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”

SPEAR VISHING

The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.

Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.

Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.

But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.

A phishing page (helpdesk-att[.]com) targeting AT&T employees. Image: urlscan.io

Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.

And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.

Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.

“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.

NOW YOU SEE IT, NOW YOU DON’T

All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.

“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.

More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.

This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.

“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”

A phishing page (github-ticket[.]com) aimed at siphoning credentials for a target organization’s access to the software development platform Github. Image: urlscan.io

SCHOOL OF HACKS

Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.

Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.

Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.

“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”

A phishing page (vzw-employee[.]com) targeting employees of Verizon. Image: DomainTools

PROPER ADULT MONEY-LAUNDERING

While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.

“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”

For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.

Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.

“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”

WHAT CAN COMPANIES DO?

Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.

Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.

A U2F device made by Yubikey, plugged into the USB port on a computer.

One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.

In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.

Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer Mac OS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].

Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.

“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”

Kevin RuddWashington Post: China’s thirst for coal is economically shortsighted and environmentally reckless

First published in the Washington Post on 19 August 2020

Carbon emissions have fallen in recent months as economies have been shut down and put into hibernation. But whether the world will emerge from the pandemic in a stronger or weaker position to tackle the climate crisis rests overwhelmingly on the decisions that China will take.

China, as part of its plans to restart its economy, has already approved the construction of new coal-fired power plants accounting for some 17 gigawatts of capacity this year, sending a collective shiver down the spines of environmentalists. This is more coal plants than it approved in the previous two years combined, and the total capacity now under development in China is larger than the remaining fleet operating in the United States.

At the same time, China has touted investments in so-called “new infrastructure,” such as electric-vehicle charging stations and rail upgrades, as integral to its economic recovery. But frankly, none of this will matter much if these new coal-fired power plants are built.

To be fair, the decisions to proceed with these coal projects largely rest in the hands of China’s provincial and regional governments and not in Beijing. However, this does not mean the central government has no power, nor that it won’t wear the reputational damage if the plants become a reality.

First, it is hard to see how China could meet one of its own commitments under the 2015 Paris climate agreement to peak its emissions by 2030 if these new plants are built. The pledge relies on China retiring much of its existing and relatively young coal fleet, which has been operational only for an average of 14 years. Bringing yet more coal capacity online now is therefore either economically shortsighted or environmentally reckless.

It would also put at risk the world’s collective long-term goal under the Paris agreement to keep temperature increases within 1.5 degrees Celsius, which the Intergovernmental Panel on Climate Change has said requires halving of global emissions between 2018 and 2030 and reaching net-zero emissions by the middle of the century.

It also is completely contrary to China’s own domestic interests, including President Xi Jinping’s desire to grow the economy, improve energy security and clean up the environment (or, as he says, to “make our skies blue again”).

But perhaps most importantly for the geopolitical hard heads in Beijing, it also risks unravelling the goodwill China has built up in recent years for staying the course on the fight against climate change in the face of the Trump administration’s retreat. This will especially be the case in the eyes of many vulnerable developing countries, including the world’s lowest-lying island nations that could face even greater risks if these plants are built.

For his part, former vice president Joe Biden has already got China’s thirst for coal in his sights. He speaks of the need for the United States to focus on how China is “exporting more dirty coal” through its support of carbon-intensive projects in its Belt and Road Initiative. Studies have found a Chinese role in more than 100 gigawatts of additional coal plants under construction across Asia and Africa, and even in Eastern Europe. It is hard to see how the first few months of a Biden administration would not make this an increasingly uncomfortable reality for Beijing at precisely the time the world would be welcoming with open arms the return of U.S. climate leadership.

As a new paper published by the Asia Society Policy Institute highlights, China’s decisions on coal will also be among the most closely watched as it finalizes its next five-year plan, due out in 2021, as well as its mid-century decarbonization strategy and enhancements to its Paris targets ahead of the 2021 United Nations Climate Change Conference in Glasgow, Scotland. And although China may also have an enormously positive story to tell — continuing to lead the world in the deployment of renewable energy in 2019 — it is China’s decisions on coal that will loom large.

(Photo: Gwendolyn Stansbury/IFPRI)

The post Washington Post: China’s thirst for coal is economically shortsighted and environmentally reckless appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Shallow Perspective

There are times where someone writes code which does nothing. There are times where someone writes code which does something, but nothing useful. This is one of those times.

Ray H was going through some JS code, and found this “useful” method.

mapRowData (data) {
  if (isNullOrUndefined(data)) return null;
  return data.map(x => x);
}

Technically, this isn’t a “do nothing” method. It converts undefined values to null, and it returns a shallow copy of an array, assuming that you passed in an array.

The fact that it can return a null value or an array is one of those little nuisances that we accept, but probably should code around (without more context, it’s probably fine if this returned an empty array on bad inputs, for example).
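
As a minimal sketch only, and assuming the codebase's existing isNullOrUndefined() helper, that defensive variant might read:

mapRowData (data) {
  if (isNullOrUndefined(data)) return [];   // normalize bad inputs to an empty array
  return [...data];                         // still a shallow copy, like the original
}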

But Ray adds: “Where this is used, it could just use the array data directly and get the same result.” Yes, it’s used in a handful of places, and in each of those places, there’s no functional difference between the original array and the shallow copy.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Rondam RamblingsHere we go again

Here is a snapshot of the current map of temporary flight restrictions (TFRs) issued by the FAA across the western U.S. Almost every one of those red shapes is a major fire burning. Compare that to a similar snapshot taken two years ago at about this same time of year. The regularity of these extreme heat and fire events is starting to get really scary.

,

LongNowA Tribute to Michael McElligott, creator of “Conversations at The Interval”

It is with great sadness that we share the news that our dear friend and colleague Michael McElligott is in hospice care. We want to take this moment to appreciate all that Michael has done for Long Now.

Most of the Long Now community knows Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. 

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds for the construction of The Interval, run social media, and design and produce the Conversations at The Interval lecture series.

For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

You can watch a playlist of all of Michael’s Interval talks here.

Planet DebianLisandro Damián Nicanor Pérez Meyer: Stepping down as Qt 6 maintainers

After quite some time maintaining Qt in Debian both Dmitry Shachnev and I decided to not maintain Qt 6 when it's published (expected in December 2020, see https://wiki.qt.io/Qt_6.0_Release). We will do our best to keep the Qt 5 codebase up and running.

We **love** Qt, but it's a huge codebase and requires time and build power, both things that we are currently lacking, so we decided it's time for us to step down and pass the torch. And a new major version seems the right point to do that.

We will be happy to review and/or sponsor other people's work or even occasionally do uploads, but we can't promise to do it regularly.

Some things we think potential Qt 6 maintainers should be familiar with are, of course, C++ packaging (especially symbols files) and CMake, as Qt 6 will be built with it.

We also encourage prospective maintainers to remove the source's -everywhere-src suffixes and just keep the base names as source package names: qtbase6, qtdeclarative6, etc.

It has been an interesting ride all these years, we really hope you enjoyed using Qt.

Thanks for everything,

Dmitry and Lisandro.

Note 20200818 12:12 ARST: I was asked if the move has anything to do with code quality or licensing. The answer is a huge no, Qt is a **great** project which we love. As stated before it's mostly about lack of free time to properly maintain it.

 

Planet DebianMolly de Blanc: Updates

We are currently working on a second draft of the Declaration of Digital Autonomy. We’re also working on some next steps, which I hadn’t really thought about existing before. Videos from GUADEC and HOPE are now online. We’ll be speaking at DebConf on August 29th.

I’ll be starting school soon, so I expect a lot of the content of what I’ll be writing (as well as the style) to shift a bit to reflect what I’m studying and how I’m expected to write for my program.

Kevin RuddMonocle 24 Radio: The Big Interview

INTERVIEW AUDIO
MONOCLE 24 RADIO
‘THE BIG INTERVIEW’
RECORDED LATE 2019
BROADCAST AUGUST 2020

The post Monocle 24 Radio: The Big Interview appeared first on Kevin Rudd.

CryptogramVaccine for Emotet Malware

Interesting story of a vaccine for the Emotet malware:

Through trial and error and thanks to subsequent Emotet updates that refined how the new persistence mechanism worked, Quinn was able to put together a tiny PowerShell script that exploited the registry key mechanism to crash Emotet itself.

The script, cleverly named EmoCrash, effectively scanned a user's computer and generated a correct -- but malformed -- Emotet registry key.

When Quinn tried to purposely infect a clean computer with Emotet, the malformed registry key triggered a buffer overflow in Emotet's code and crashed the malware, effectively preventing users from getting infected.

When Quinn ran EmoCrash on computers already infected with Emotet, the script would replace the good registry key with the malformed one, and when Emotet would re-check the registry key, the malware would crash as well, preventing infected hosts from communicating with the Emotet command-and-control server.

[...]

The Binary Defense team quickly realized that news about this discovery needed to be kept under complete secrecy, to prevent the Emotet gang from fixing its code, but they understood EmoCrash also needed to make its way into the hands of companies across the world.

Compared to many of today's major cybersecurity firms, all of which have decades of history behind them, Binary Defense was founded in 2014, and despite being one of the industry's up-and-comers, it doesn't yet have the influence and connections to get this done without news of its discovery leaking, either by accident or because of a jealous rival.

To get this done, Binary Defense worked with Team CYMRU, a company that has a decades-long history of organizing and participating in botnet takedowns.

Working behind the scenes, Team CYMRU made sure that EmoCrash made its way into the hands of national Computer Emergency Response Teams (CERTs), which then spread it to the companies in their respective jurisdictions.

According to James Shank, Chief Architect for Team CYMRU, the company has contacts with more than 125 national and regional CERT teams, and also manages a mailing list through which it distributes sensitive information to more than 6,000 members. Furthermore, Team CYMRU also runs a biweekly group dedicated to dealing with Emotet's latest shenanigans.

This broad and well-orchestrated effort has helped EmoCrash make its way around the globe over the course of the past six months.

[...]

Either by accident or by figuring out there was something wrong in its persistence mechanism, the Emotet gang did, eventually, change its entire persistence mechanism on Aug. 6 -- exactly six months after Quinn made his initial discovery.

EmoCrash may not be useful to anyone anymore, but for six months, this tiny PowerShell script helped organizations stay ahead of malware operations -- a truly rare sight in today's cyber-security field.

Kevin RuddABC Late Night Live: US-China Relations

INTERVIEW AUDIO
RADIO INTERVIEW
ABC
LATE NIGHT LIVE
17 AUGUST 2020

Main topic: Foreign Affairs article ‘Beware the Guns of August — in Asia’

 

Image: The USS Ronald Reagan steams through the San Bernardino Strait, July 3, 2020, crossing from the Philippine Sea into the South China Sea. (Navy Petty Officer 3rd Class Jason Tarleton)

The post ABC Late Night Live: US-China Relations appeared first on Kevin Rudd.

Planet DebianJonathan Dowland: Come Together

Primal Scream — Come Together

This one rarely returns to its proper place, instead living in the small pile of records permanently next to my turntable. I'm a late convert to Primal Scream: I first heard the 10 minute Andrew Weatherall mix of Come Together on Tom Robinson's 6Music show. It's a remarkable record, more so to think that it's quite hard, in isolation, to actually hear Primal Scream's contribution. This is very much Weatherall's track, and (to me, at least) it does a great job of encapsulating the house music explosion of the time.

It's interesting to hear Terry Farley's mix, partially because the band's contribution is more evident, so you can get a glimpse of the material that Weatherall had to work with.

RIP Andrew Weatherall, 1963-2020.

Worse Than FailureCodeSOD: Carbon Copy

I avoid writing software that needs to send emails. It's just annoying code to build, interfacing with mailservers is shockingly frustrating, and honestly, users don't tend to like the emails that my software tends to send. Once upon a time, it was a system which would tell them it was time to calibrate a scale, and the business requirements were basically "spam them like three times a day the week a scale comes due," which shockingly everyone hated.

But Krista inherited some code that sends email. The previous developer was a "senior", but probably could have had a little more supervision and maybe some mentoring on the C# language.

One commit added this method, for sending emails:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

That's not so bad, as these things go, though one has to wonder about parameters like fileName1 and fileName2. Do they only ever send exactly two files? Well, maybe when this method was written, but a few commits later, an overloaded version gets added:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

And then, a few commits later, someone decided that they needed to send four files, sometimes.

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3, String fileName4)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            if (File.Exists(fileName4)) mailMsg.Attachments.Add(new Attachment(fileName4));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

Each time someone discovered a new case where they wanted to include a different number of attachments, the previous developer copy/pasted the same code, with minor revisions.

Krista wrote a single version which used a params array, which replaced all of these versions (and any other possible versions), without changing the calling semantics.
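
Krista's actual version isn't shown, but a minimal sketch of what such a consolidated, params-based method could look like, reusing the helpers (WriteToLog, send) and types that appear in the snippets above, might be:

private void SendEmail(ExportData exportData, String subject, params String[] fileNames)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
            return;
        }
        using (MailMessage mailMsg = new MailMessage())
        {
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                foreach (string address in exportData.EmailCC.Split(';'))
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            // Any number of attachments, including zero, without another overload.
            foreach (string fileName in fileNames)
            {
                if (File.Exists(fileName)) mailMsg.Attachments.Add(new Attachment(fileName));
            }
            send(mailMsg);
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

Existing call sites such as SendEmail(exportData, subject, file1, file2) keep compiling unchanged, because C# packs the trailing arguments into the params array.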

Though the real WTF is probably still forcing the BodyEncoding to be ASCII at this point in time. There's a whole lot of assumptions about your dataset in that, which are probably not true, or at least not reliably true.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Rondam RamblingsIrit Gat, Ph.D. 25 November 1966 - 11 August 2020

With a heavy heart I bear witness to the untimely passing of Dr. Irit Gat last Tuesday at the age of 53.  Irit was the Dean of Behavioral and Social Sciences at Antelope Valley College in Lancaster, California.  She was also my younger sister.  She died peacefully of natural causes. I am going to miss her.  A lot.  I'm going to miss her smile.  I'm going to miss the way she said "Hey bro" when we

Planet DebianIan Jackson: Doctrinal obstructiveness in Free Software

Any software system has underlying design principles, and any software project has process rules. But I seem to be seeing more often a pathological pattern where abstract and shakily-grounded broad principles, and even contrived and sophistic objections, are used to block sensible changes.

Today I will go through an example in detail, before ending with a plea:

PostgreSQL query planner, WITH [MATERIALIZED] optimisation fence

Background history

PostgreSQL has a sophisticated query planner which usually gets the right answer. For good reasons, the pgsql project has resisted providing lots of knobs to control query planning. But there are a few ways to influence the query planner, for when the programmer knows more than the planner.

One of these is the use of a WITH common table expression. In pgsql versions prior to 12, the planner would first make a plan for the WITH clause; and then, it would make a plan for the second half, counting the WITH clause's likely output as a given. So WITH acts as an "optimisation fence".

This was documented in the manual - not entirely clearly, but a careful reading of the docs reveals this behaviour:

The WITH query will generally be evaluated as written, without suppression of rows that the parent query might discard afterwards.

Users (authors of applications which use PostgreSQL) have been using this technique for a long time.

New behaviour in PostgreSQL 12

In PostgreSQL 12 upstream were able to make the query planner more sophisticated. In particular, it is now often capable of looking "into" the WITH common table expression. Much of the time this will make things better and faster.

But if WITH was being used for its side-effect as an optimisation fence, this change will break things: queries that ran very quickly in earlier versions might now run very slowly. Helpfully, pgsql 12 still has a way to specify an optimisation fence: specifying WITH ... AS MATERIALIZED in the query.
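
As a minimal sketch (the table and column names here are invented purely for illustration), the fenced form on PostgreSQL 12 and later looks like this:

WITH recent_orders AS MATERIALIZED (
    SELECT customer_id, total
    FROM orders
    WHERE created_at > now() - interval '7 days'
)
SELECT customer_id, sum(total) AS weekly_total
FROM recent_orders
GROUP BY customer_id;

On versions before 12, the AS MATERIALIZED keywords are a syntax error, which is precisely the upgrade problem discussed below.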

So far so good.

Upgrade path for existing users of WITH fence

But what about the upgrade path for existing users of the WITH fence behaviour? Such users will have to update their queries to add AS MATERIALIZED. This is a small change. Having to update a query like this is part of routine software maintenance and not in itself very objectionable. However, this change cannot be made in advance because pgsql versions prior to 12 will reject the new syntax.

So the users are in a bit of a bind. The old query syntax can be unusably slow with the new database and the new syntax is rejected by the old database. Upgrading both the database and the application, in lockstep, is a flag day upgrade, which every good sysadmin will want to avoid.

A solution to this problem

Colin Watson proposed a very simple solution: make the earlier PostgreSQL versions accept the new MATERIALIZED syntax. This is correct since the new syntax specifies precisely the actual behaviour of the old databases. It has no deleterious effect on any users of older pgsql versions. It makes it possible to add the new syntax to the application, before doing the database upgrade, decoupling the two upgrades.

Colin Watson even provided an implementation of this proposal.

The solution is rejected by upstream

Unfortunately upstream did not accept this idea. You can read the whole thread yourself if you like. But in summary, the objections were (italic indicates literal quotes):

  • New features don't gain a backpatch. This is a project policy. Of course this is not a new feature, and if it is, an exception should be made. This was explained clearly in the thread.
  • I'm not sure the "we don't want to upgrade application code at the same time as the database" is really tenable. This is quite an astonishing statement, particularly given the multiple users who said they wanted to do precisely that.
  • I think we could find cases where we caused worse breaks between major versions. Paraphrasing: "We've done worse things in the past so we should do this bad thing too". WTF?
  • One disadvantage is that this will increase confusion for users, who'll get used to the behavior on 12, and then they'll get confused on older releases. This seems highly contrived. Surely the number of people who are likely to suffer this confusion is tiny. Providing the new syntax in old versions (including of course the appropriate changes to the docs everywhere) might well make such confusion less rather than more likely.
  • [Poster is concerned about] 11.6 and up allowing a syntax that 11.0-11.5 don't. People are likely to write code relying on this and then be surprised when it doesn't work on a slightly older server. And, similarly: we'll then have a lot more behavior differences between minor releases. Again this seems a contrived and unconvincing objection. As that first poster even notes: Still, is that so much different from cases where we fix a bug that prevented some statement from working? No, it isn't.
  • if we started looking we'd find many changes every year that we could justify partially or completely back-porting on similar grounds ... we'll certainly screw it up sometimes. This is a slippery slope argument. But there is no slippery slope: in particular, the proposed change does not change any of the substantive database logic, and the upstream developers will hardly have any difficulty rejecting future more risky backport proposals.
  • if you insist on using the same code with pre-12 and post-12 releases, this should be achievable (at least in most cases) by using the "offset 0" trick. What? First I had heard of it but this in fact turns out to be true! Read more about this, below...

I find these extremely unconvincing, even taken together. Many of them are very unattractive things to hear one's upstream saying.

At best they are knee-jerk and inflexible application of very general principles. The authors of these objections seem to have lost sight of the fact that these principles have a purpose. When these kind of software principles work against their purposes, they should be revised, or exceptions made.

At worst, it looks like a collective effort to find reasons - any reasons, no matter how bad - not to make this change.

The OFFSET 0 trick

One of the responses in the thread mentions OFFSET 0. As part of writing the queries in the Xen Project CI system, and preparing for our system upgrade, I had carefully read the relevant pgsql documentation. This OFFSET 0 trick was new to me.

But, now that I know the answer, it is easy to provide the right search terms and find, for example, this answer on stackmumble. Apparently adding a no-op OFFSET 0 to the subquery defeats the pgsql 12 query planner's ability to see into the subquery.

I think OFFSET 0 is the better approach since it's more obviously a hack showing that something weird is going on, and it's unlikely we'll ever change the optimiser behaviour around OFFSET 0 ... wheras hopefully CTEs will become inlineable at some point [CTEs became inlineable by default in PostgreSQL 12].

So in fact there is a syntax for an optimisation fence that is accepted by both earlier and later PostgreSQL versions. It's even recommended by pgsql devs. It's just not documented, and is described by pgsql developers as a "hack". Astonishingly, the fact that it is a "hack" is given as a reason to use it!
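
Using the same invented example as above, the variant that is accepted by, and acts as a fence on, both old and new releases would be something like the following (a sketch only; whether the fence holds in every case is exactly the kind of undocumented behaviour being complained about here):

WITH recent_orders AS (
    SELECT customer_id, total
    FROM orders
    WHERE created_at > now() - interval '7 days'
    OFFSET 0
)
SELECT customer_id, sum(total) AS weekly_total
FROM recent_orders
GROUP BY customer_id;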

Well, I have therefore deployed this "hack". No doubt it will stay in our codebase indefinitely.

Please don't be like that!

I could come up with a lot more examples of other projects that have exhibited similar arrogance. It is becoming a plague! But every example is contentious, and I don't really feel I need to annoy a dozen separate Free Software communities. So I won't make a laundry list of obstructiveness.

If you are an upstream software developer, or a distributor of software to users (eg, a distro maintainer), you have a lot of practical power. In theory it is Free Software so your users could just change it themselves. But for a user or downstream, carrying a patch is often an unsustainable amount of work and risk. Most of us have patches we would love to be running, but which we haven't even written because simply running a nonstandard build is too difficult, no matter how technically excellent our delta.

As an upstream, it is very easy to get into a mindset of defending your code's existing behaviour, and to turn your project's guidelines into inflexible rules. Constant exposure to users who make silly mistakes, and rudely ask for absurd changes, can lead to core project members feeling embattled.

But there is no need for an upstream to feel embattled! You have the vast majority of the power over the software, and over your project communication fora. Use that power consciously, for good.

I can't say that arrogance will hurt you in the short term. Users of software with obstructive upstreams do not have many good immediate options. But we do have longer-term choices: we can choose which software to use, and we can choose whether to try to help improve the software we use.

After reading Colin's experience, I am less likely to try to help improve the experience of other PostgreSQL users by contributing upstream. It doesn't seem like there would be any point. Indeed, instead of helping the PostgreSQL community I am now using them as an example of bad practice. I'm only half sorry about that.




CryptogramRobocall Results from a Telephony Honeypot

A group of researchers set up a telephony honeypot and tracked robocall behavior:

NCSU researchers said they ran 66,606 telephone lines between March 2019 and January 2020, during which time they said they received 1,481,201 unsolicited calls -- even though they never made their phone numbers public via any source.

The research team said they usually received an unsolicited call every 8.42 days, but most of the robocall traffic came in sudden surges they called "storms" that happened at regular intervals, suggesting that robocallers operated using a tactic of short-burst and well-organized campaigns.

In total, the NCSU team said it tracked 650 storms over 11 months, with most storms being of the same size.

Research paper. USENIX talk. Slashdot thread.

Planet DebianNorbert Preining: KDE Apps 20.08 now available for Debian

KDE Apps bundle 20.08 has been released recently, and some of the packages are already updated in Debian/unstable. I have updated also all my packages to 20.08 and they are now available for x86_64, i586, and hopefully aarch64 (some issues remaining here still).

With the new release 20.08 I have also switched to versioned app repositories, so you need to update the apt sources directive. The new one is

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2008/Debian_Unstable/ ./

and similar for Testing.

Packages from the “other” repo that depend on apps, in particular Digikam, are currently being rebuilt and will be co-installable soon.

Just to make sure, here is the full set of repositories I use on my computers:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2008/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

Enjoy.

Worse Than FailureCodeSOD: Perls Can Change

Tomiko* inherited some web-scraping/indexing code from Dennis. The code started out just scanning candidate profiles for certain keywords, but grew, mutated, and eventually turned into something that also needed to download their CVs.

Now, Dennis was, as Tomiko puts it, "an interesting engineer". "Any agreed upon standard, he would aggressively oppose, and this can be seen in this code."

"This code" also happens to be in Perl, the "best" language for developers who don't like standards. And, it also happens to be connected to this infrastructure.

So let's start with the code, because this is the rare CodeSOD where the code itself isn't the WTF:

foreach my $n (0 .. @{$lines} - 1) {
    next if index($lines->[$n], 'RT::Spider::Deepweb::Controller::Factory->make(') == -1;

    # Don't let other cv_id survive.
    $lines->[$n] =~ s/,\s*cv_id\s*=>[^,)]+//;
    $lines->[$n] =~ s/,\s*cv_type\s*=>[^,)]+// if defined $cv_type;

    # Insert the new options.
    $lines->[$n] =~ s/\)/$opt)/;
}

Okay, so it's a pretty standard for-each loop. We skip lines if they contain… wait, that looks like a Perl expression- RT::Spider::Deepweb::Controller::Factory->make('? Well, let's hold onto that thought, but keep trucking on.

Next, we do a few find-and-replace operations to ensure that we Don't let other cv_id survive. I'm not really sure what exactly that's supposed to mean, but Tomiko says, "Dennis never wrote a single meaningful comment".

Well, the regexes are pretty standard character-salad expressions; ugly, but harmless. If you take this code in isolation, it's not good, but it doesn't look terrible. Except, there's that next if line. Why are we checking to see if the input data contains a Perl expression?

Because our input data is a Perl script. Dennis was… efficient. He already had code that would download the candidate profiles. Instead of adding new code to download CVs, or refactoring the existing code so that it was generic enough to download both, Dennis decided to load the profile code into memory, scan it with regexes, and then eval it.

As Tomiko says: "You can't get more Perl than that."


Krebs on SecurityMicrosoft Put Off Fixing Zero Day for 2 Years

A security flaw in the way Microsoft Windows guards users against malicious files was actively exploited in malware attacks for two years before last week, when Microsoft finally issued a software update to correct the problem.

One of the 120 security holes Microsoft fixed on Aug. 11’s Patch Tuesday was CVE-2020-1464, a problem with the way every supported version of Windows validates digital signatures for computer programs.

Code signing is the method of using a certificate-based digital signature to sign executable files and scripts in order to verify the author’s identity and ensure that the code has not been changed or corrupted since it was signed by the author.

Microsoft said an attacker could use this “spoofing vulnerability” to bypass security features intended to prevent improperly signed files from being loaded. Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.

In fact, CVE-2020-1464 was first spotted in attacks used in the wild back in August 2018. And several researchers informed Microsoft about the weakness over the past 18 months.

Bernardo Quintero is the manager at VirusTotal, a service owned by Google that scans any submitted files against dozens of antivirus services and displays the results. On Jan. 15, 2019, Quintero published a blog post outlining how Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer files (those ending in .MSI) signed by any software developer.

Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.

“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows,” Quintero wrote.

But according to Quintero, while Microsoft’s security team validated his findings, the company chose not to address the problem at the time.

“Microsoft has decided that it will not be fixing this issue in the current versions of Windows and agreed we are able to blog about this case and our findings publicly,” his blog post concluded.

Tal Be’ery, founder of Zengo, and Peleg Hadar, senior security researcher at SafeBreach Labs, penned a blog post on Sunday that pointed to a file uploaded to VirusTotal in August 2018 that abused the spoofing weakness, which has been dubbed GlueBall. The last time that August 2018 file was scanned at VirusTotal (Aug 14, 2020), it was detected as a malicious Java trojan by 28 of 59 antivirus programs.

More recently, others would likewise call attention to malware that abused the security weakness, including this post in June 2020 from the Security-in-bits blog.

Image: Securityinbits.com

Be’ery said the way Microsoft has handled the vulnerability report seems rather strange.

“It was very clear to everyone involved, Microsoft included, that GlueBall is indeed a valid vulnerability exploited in the wild,” he wrote. “Therefore, it is not clear why it was only patched now and not two years ago.”

Asked to comment on why it waited two years to patch a flaw that was actively being exploited to compromise the security of Windows computers, Microsoft dodged the question, saying Windows users who have applied the latest security updates are protected from this attack.

“A security update was released in August,” Microsoft said in a written statement sent to KrebsOnSecurity. “Customers who apply the update, or have automatic updates enabled, will be protected. We continue to encourage customers to turn on automatic updates to help ensure they are protected.”

Update, 12:45 a.m. ET: Corrected attribution on the June 2020 blog article about GlueBall exploits in the wild.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 13)

Here’s part thirteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet DebianArnaud Rebillout: Modify Vim syntax files for your taste

In this short how-to, we'll see how to make small modifications to a Vim syntax file, in order to change how a particular file format is highlighted. We'll go for a simple use-case: modify the Markdown syntax file, so that H1 and H2 headings (titles and subtitles, if you prefer) are displayed in bold. Of course, this won't be exactly as easy as expected, but no worries, we'll succeed in the end.

The calling

Let's start with a screenshot: how Vim displays Markdown files for me, someone who uses the GNOME terminal with the Solarized light theme.

Vim - Markdown file with original highlighting

I'm mostly happy with that, except for one or two little details. I'd like to have the titles displayed in bold, for example, so that they're easier to spot when I skim through a Markdown file. It seems like a simple thing to ask, so I hope there can be a simple solution.

The first steps

Let's learn the basics.

In the Vim world, the rules to highlight file formats are defined in the directory /usr/share/vim/vim82/syntax (I bet you'll have to adjust this path depending on the version of Vim that is installed on your system).

And so, for the Markdown file format, the rules are defined in the file /usr/share/vim/vim82/syntax/markdown.vim.

The first thing we could do is to have a look at this file, try to make sense of it, and maybe start to make some modifications.

But wait a moment. You should know that modifying a system file is not a great idea. First because your changes will be lost as soon as an update kicks in and the package manager replaces this file by a new version. Second, because you will quickly forget what files you modified, and what were your modifications, and if you do that too much, you might experience what is called "maintenance headache" in the long run.

So maybe you DO NOT modify this file, and instead you copy it into your personal Vim folder, more precisely into ~/.vim/syntax. Create this directory if it does not exist:

mkdir -p ~/.vim/syntax
cp /usr/share/vim/vim82/syntax/markdown.vim ~/.vim/syntax

The file in your personal folder takes precedence over the system file of the same name in /usr/share/vim/vim82/syntax/: it is a replacement for the existing syntax file. And so from now on, Vim uses the file ~/.vim/syntax/markdown.vim, and this is where we can make our modifications.

(And by the way, this is explained in the Vim faq-24.12)

And so, it's already nice to know all of that, but wait, there's even better.

There is another location of interest, and it is ~/.vim/after/syntax. You can drop syntax files in this directory, and these files are treated as additions to the existing syntax. So if you only want to make slight modifications, that's the way to go.

(And by the way, this is explained in the Vim faq-24.11)

So let's forget about a syntax replacement in ~/.vim/syntax/markdown.vim, and instead let's go for some syntax additions in ~/.vim/after/syntax/markdown.vim.

mkdir -p ~/.vim/after/syntax
touch ~/.vim/after/syntax/markdown.vim

Now, let's answer the initial question: how do we modify the highlighting rules for Markdown files, so that the titles are displayed in bold? First, we have to find the rules that define the highlighting for titles. Here they are, from the file /usr/share/vim/vim82/syntax/markdown.vim:

hi def link markdownH1 htmlH1
hi def link markdownH2 htmlH2
hi def link markdownH3 htmlH3
...

You should know that H1 means Heading 1, and so on, and so we want to make H1 and H2 bold. What we can see here is that the headings in the Markdown files are highlighted like the headings in HTML files, and this is obviously defined in the file /usr/share/vim/vim82/syntax/html.vim. So let's have a look into this file:

hi def link htmlH1 Title
hi def link htmlH2 htmlH1
hi def link htmlH3 htmlH2
...

Let's keep digging a bit. Where is Title defined? For those using the default color scheme like me, this is defined straight in the Vim source code, in the file src/highlight.c.

CENT("Title term=bold ctermfg=DarkMagenta",
     "Title term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta"),

And for those using custom color schemes, it might be defined in a file under /usr/share/vim/vim82/colors/.

Alright, so how do we override that? We can just define this kind of rules in our syntax additions file at ~/.vim/after/syntax/markdown.vim:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
hi markdownHxBold  term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold

As you can see, the only addition we made, compared to what's defined in src/highlight.c, is cterm=bold. And that's already enough to achieve the initial goal, make the titles (ie. H1 and H2) bold. The result can be seen in the following screenshot:

Vim - Markdown file with modified highlighting

The rabbit hole

So we could stop right here, and life would be easy and good.

However, with this solution there's still something that is not perfect. We use the color DarkMagenta as defined in the default color scheme. What I didn't mention however, is that this is applicable for a light background. If you have a dark background though, dark magenta won't be easy to read.

Actually, if you look a bit more into src/highlight.c, you will see that the default color scheme comes in two variants, one for a light background, and one for a dark background.

And so the definition for Title for a dark background is as follows:

CENT("Title term=bold ctermfg=LightMagenta",
     "Title term=bold ctermfg=LightMagenta gui=bold guifg=Magenta"),

Hmmm, so how do we do that in our syntax file? How can we support both light and dark background, so that the color is right in both cases?

After a bit of research, and after looking at other syntax files, it seems that the solution is to check for the value of the background option, and so our syntax file becomes:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

In case you wonder, in Vim script you prefix Vim options with &, and so you get the value of the background option by writing &background. You can learn this kind of things in the Vim scripting cheatsheet.

And so, it's easy enough, except for one thing: it doesn't work. The headings always show up in DarkMagenta, even for a dark background.

This is why I called this paragraph "the rabbit hole", by the way.

So... Well after trying a few things, I noticed that in order to make it work, I would have to reload the syntax files with :syntax on.

At this point, the most likely explanation is that the background option is not set yet when the syntax files are loaded at startup, hence it needs to be reloaded manually afterward.

And after muuuuuuch research, I found out that it's actually possible to set a hook for when an option is modified. Meaning, it's possible to execute a function when the background option is modified. Quite cool actually.

And so, there it goes in my ~/.vimrc:

" Reload syntax when the background changes 
autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif

For humans, this line reads as:

  1. when the background option is modified -- autocmd OptionSet background
  2. check if the syntax is on -- if exists("g:syntax_on")
  3. if that's the case, reload it -- syntax on

With that in place, my Markdown syntax overrides work for both dark and light background. Champagne!

The happy end

To finish, let me share my actual additions to the markdown.vim syntax. It makes H1 and H2 bold, along with their delimiters, and it also colors the inline code and the code blocks.

" H1 and H2 headings -> bold
hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
" Heading delimiters (eg '#') and rules (eg '----', '====') -> bold
hi link markdownHeadingDelimiter markdownHxBold
hi link markdownRule markdownHxBold
" Code blocks and inline code -> highlighted
hi link markdownCode htmlH1

" The following test requires this addition to your vimrc:
" autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

And here's how it looks with a light background:

Vim - Markdown file with final highlighting (light)

And a dark background:

Vim - Markdown file with final highlighting (dark)

That's all. These are very small changes compared to the highlighting from the original syntax file, and now that we understand how it's supposed to be done, it doesn't take much effort to achieve.

It's just that finding the workaround to make it work for both light and dark background took forever, and leaves the usual, unanswered question: bug or feature?

,

Planet DebianEnrico Zini: Historical links

Saint Guinefort was a dog who lived in France in the 13th century, worshipped through history as a saint until less than a century ago. The recurrence is soon, on the 22nd of August.

Many think the Middle Ages were about superstition and were generally a bad period. Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages tells a different, fascinating story.

Another fascinating Middle Ages story is that of Christine de Pizan, author of The Book of the City of Ladies. This is a very good lecture about her (in Italian): Come pensava una donna nel Medioevo? 2 - Christine de Pizan. You can read some of her books at the Memory of the World library.

If you understand Italian, Alessandro Barbero gives fascinating lectures. You can find them indexed in a timeline, or on a map.

Still from around the Middle Ages, we get playing cards: see Playing Cards Around the World and Through the Ages.

If you want to go have a look in person, and you overshoot with your time machine, here's a convenient route planner for antique Roman roads.

View all historical links that I have shared.

Planet DebianBits from Debian: Debian turns 27!

Today is Debian's 27th anniversary. We recently wrote about some ideas to celebrate DebianDay; you can join the party or organise something yourselves :-)

Today is also an opportunity for you to start or resume your contributions to Debian. For example, you can scratch your creative itch and suggest a wallpaper to be part of the artwork for the next release, have a look at the DebConf20 schedule and register to participate online (August 23rd to 29th, 2020), or put a Debian live image on a DVD or USB stick and give it to someone near you who hasn't discovered Debian yet.

Our favorite operating system is the result of all the work we do together. Thanks to everybody who has contributed in these 27 years, and happy birthday Debian!

Planet DebianAndrej Shadura: Useful FFmpeg commands for video editing

As a response to Antonio Terceiro’s blog post, I’m publishing some FFmpeg commands I’ve been using recently.

Embedding subtitles

Sometimes you have a video with subtitles in multiple languages and you don’t want to clutter the directory with a lot of similarly-named files — or maybe you want to be able to easily transfer the video and subtitles at once. In this case, it may be useful to embed the subtitles directly into the video container file.

ffmpeg -i video.mp4 -i video.eng.srt -map 0:v -map 0:a -c copy -map 1 \
        -c:s:0 mov_text -metadata:s:s:0 language="eng" video-out.mp4

This command recodes the subtitle file into a format appropriate for the MP4 container and embeds it with a metadata element telling the video player what language it is in. You can add multiple subtitles at once, or you can also transcode the audio to AAC while doing so (I found that a lot of Android devices can’t play Ogg Vorbis streams):

ffmpeg -i video.mp4 -i video.deu.srt -i video.eng.srt -map 0:v -map 0:a \
        -c:v copy -c:a aac -map 1 -c:s:0 mov_text -metadata:s:s:0 language="deu" \
                           -map 2 -c:s:1 mov_text -metadata:s:s:1 language="eng" video-out.mp4

‘Hard’ subtitles

Sometimes you need to play the video with subtitles on devices not supporting them. In that case, it may be useful to ‘hardcode’ the subtitles directly into the video stream:

ffmpeg -i video.mp4 -vf subtitles=video.eng.srt video-out.mp4

Unfortunately, if you also want to apply more transformations to the video, it starts getting tricky, the -vf option is no longer enough:

ffmpeg -i video.mp4 -i overlay.jpg -filter:a "volume=10" \
        -filter_complex '[0:v][1:v]overlay[outv];[outv]subtitles=video.eng.srt' \
                        video-out.mp4

This command adds an overlay to the video stream (in my case I overlaid a full frame over the original video offering some explanations), increases the volume ten times and adds hard subtitles.

P.S. You can see the practical application of the above in this video with a head of one of the electoral commissions in Belarus forcing the members of the staff to manipulate the voting results. I transcribed the video in both Russian and English and encoded the English subtitles into the video.

LongNowKathryn Cooper’s Wildlife Movement Photography

Amazing wildlife photography by Kathryn Cooper reveals the brushwork of birds and their flocks through the sky, hidden by the quickness of the human eye.

“Staple Newk” by Kathryn Cooper.

Ever since Eadweard Muybridge’s pioneering photography of animal locomotion in 01877 and 01878 (including the notorious “horse shot by pistol” frames from an era less concerned with animal experiments), the trend has been to unpack our lived experience of movement into serial, successive frames. The movie camera smears one pace layer out across another, lets the eye scrub over one small moment.

“UFO” by Kathryn Cooper.

In contrast, time-lapse and long exposure camerawork implodes the arc of moments, an integral calculus that gathers the entire gesture. Cooper’s flock photography is less the autopsy of high-speed video and more the graceful ensō drawn by a zen master.

Learn More

Planet DebianSteinar H. Gunderson: Numbering Scrabble leaves

I've toyed a bit with Quackle, a Scrabble AI, recently. I don't have any particular use for it, but it's a fun exercise in optimization; unlike chess AIs, where everything is hyper-optimized and tuned to death, it seems Scrabble AIs still have some low-hanging fruit to pick, so I've been sending some patches.

One interesting sub-problem is that of looking up superleaves. A leave in Scrabble (not a leaf!) is what remains on your rack after you play a word, and that needs to be taken into account. If, for instance, you lay a (single-letter) word and are left with ACEHLP, that is great, because those tiles go really well together with almost everything else (as well as each other) and will give you a high chance of playing a bingo later. But if you're left with IIIIU, your next move will likely be to exchange tiles, so that's a low-scoring leave. (Of course, you'll never be able to choose between those two specific leaves, but you could easily have to choose between e.g. ERS and JUU. The former is great, the latter is hard to work with.)

Superleaves come from a table (I don't know how they were calculated in the first place), and there are roughly a million of them for English. Great! We can just stick them into a hash table, done deal. Back when I worked in Google, someone was only half-joking when they said “if you're at a Google interview and don't know the answer to the interview question, it's probably hash table”… but can we do better? In particular, std::unordered_map (Quackle is C++) isn't fantastic in most implementations, especially since we'd like to stay within the L2 cache if possible, so perhaps we could just replace the entire thing with a flat array? Also, well, it's an interesting problem in its own right.

For the array, we'd need a way (not involving a hash table!) to give each leave a position in the array, and calculate that position quickly. Note that this is distinct from enumerating all possible leaves, which is easy with some recursion; this is numbering one given leave.

So what we are after is a minimal perfect hash function for leaves, except just reimplementing one of those algorithms didn't appeal to me. Let's sum up some requirements:

  • We'd like to map each leave into a unique integer between 0 and 914,623 inclusive. (We could probably accept some holes if need be.) No two different leaves can map to the same integer, but we don't care about which goes where.
  • There are 27 different tiles; the 26 English letters and the blank.
  • There are 100 tiles in all; the tile distribution is known ahead of time. This means you cannot have e.g. a leave CCCCD, because there are only two Cs in the bag.
  • Leaves are 1 to 6 letters long.
  • Order does not matter; QU and UQ are the same.
  • Computation must be about as fast as computing the hash of the string.
  • Any tables involved should be small (think a couple hundred bytes) and fast to precompute, as we'd like to adapt the algorithm to any language and tile distribution.

For simplicity right now, let's say we fix the length of the leaves to six letters; it's trivial just to have six tables and append them, so this takes away some complexity and none of the generality.

Second, we can convert our “ordering does not matter” specification into imposing order on the rack. Simply sort the leave before the rest of the algorithm runs; Quackle already does this. In a sense, we've chosen a canonical representation for each leave, and forbid all others.

Even so, I fiddled with this problem for a while, and finding the right angle of attack wasn't immediately easy. My first thought was that this should be easy with some combinatorics; the first tile (call it a) can have a value 0..26, then the second one (b) can be perhaps 10..26 if the first tile is J, so that leave would be a + 27 * (b - 10) + 27 * 17 * (c - ...) + … it doesn't really work out. You end up with something that's uniquely decodable, which shows that there are no collisions, but there are many integers that don't map to anything, so you get an array that is way too large. (If you see it as a compression problem, some leaves get shorter bit strings than others since e.g. having the leave start with W means there's only one possible rack WWXYYZ, which isn't what we want here.)

I also considered “shapes” of racks, e.g. for three-letter racks, you can have either three different ones (1–1–1), one duplicated and then a different one (2–1), the other way around (1–2) or three equals (3). But it didn't seem to be going anywhere either, and the annoying fact that there's a max on each tile doesn't make the combinatorics solution any easier.

The insight that helped me eventually was a trivial one: If you can count, you can place! If you're in a queue in the supermarket, and you can see that there are five people in front of you, you know you're number six in the queue. This requires precise counting of subsets, though, and a good way of ordering those subsets. (We could probably do with overcounting in certain situations, but let's not go there.) So let's start with counting how many leaves there are (even though I already mentioned there are 914,624 :-) ) in all.

First, I'm going to turn the problem a bit on its head. Remember how we turned the “ordering doesn't matter” rule into a constraint; now we'll be doing the same thing for the “all Cs are the same, but there are only two of them” constraint. We'll pretend we've numbered all the tiles with some invisible ink; our C tiles are now called C1 and C2, our A tiles are A1..A9 and so on. We impose the rule that you cannot have an (n+1) tile in your leave unless you also have the n tile; so, you cannot have C2 unless you have C1, you cannot have U3 without both U1 and U2, and so on. (This means you cannot have A7, A8 or A9 at all, since leaves are not long enough, but we won't be using that.) Again, we've gone towards a more canonical representation, so now our job is to see how many ways we can pick out six tiles out of 100 with our two constraints.

This just screams dynamic programming, and the recursion is not difficult in this case. We'll create a function called N(T, L) that counts “how many leaves of length L can we create, if tile number T is the first tile?”. For L=1, the answer is obviously 1, so N(T, 1) = 1. For L > 1, we can define it recursively; for every T' > T (remember ordering!) that doesn't interfere with our canonicality constraint, we get N(T', L - 1) leaves, so just sum up those. Legal values for T' are easy to figure out; we can pick the one next to T (e.g. if T was E2, T' can be E3) and apart from that, only F1, G1, H1 and so on.

So the sum over all possible N(T, 6) (where T is the tile number for A1, B1, C1, etc.) will give us the number of different 6-letter leaves; or N(-1, 7) if you want. This is 737,311 six-letter leaves, 148,150 five-letter leaves and so on down to 27 one-letter leaves; in total, 914,624 if we allow any length.

Great, so that allowed us to count. Now for the numbering problem, which is fairly similar. The idea is that we'll impose a restriction on the leaves in the form of which tiles you're allowed to start on (which, due to the ordering, restricts which ones you're allowed to use), and then gradually loosen up that restriction. More restrictive leaves come first; the most restrictive choice is to start with W1, which gives us WWXYYZ as the most restricted leave, which is number 0. There's only one possibility with W (we can count that using our function N!), so we know that if we start with V1, our position has to be 1 or greater! And if we start with U1, we can call on our function again and see that there are 12 leaves that start with V1 or W1, so our position has to be at least 12.

This really gives the rest of the algorithm away. First, convert the leave into tile numbers (e.g. AEER becomes A1, E1, E2, R1, which are tiles number 0, 18, 19, 71). Then, for the first tile (T=0), count out how many leaves can be made without allowing that tile, by summing N(T', 4) for all legal T' > 0. (The sums can be easily precomputed in a table.) Now we know where the numbering of the A... leaves starts, so we just need to figure out where the E.. sub-leaves start. So for the second tile, count how many sub-leaves can be made without allowing that tile, by summing N(T', 3) for all legal T' > 18. And so on. It's just one addition and a small table lookup per tile, which is fairly fast. Mission accomplished!
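To make the counting and numbering concrete, here is a rough Python sketch of the scheme described above. Quackle itself is C++, and the tile table, the names and the detail that the blank sorts before A are my own assumptions for illustration, not Quackle's actual code:

from functools import lru_cache

# Standard English tile distribution; '?' is the blank and, in this sketch,
# sorts before A so that WWXYYZ is the single most restricted 6-tile leave.
TILES = {'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3, 'H': 2,
         'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6, 'O': 8, 'P': 2,
         'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4, 'V': 2, 'W': 2, 'X': 1,
         'Y': 2, 'Z': 1, '?': 2}

LETTERS = sorted(TILES)                                  # '?' < 'A' < ... < 'Z'
TILE_IDS = [(l, n) for l in LETTERS for n in range(TILES[l])]   # "invisible ink" numbering
FIRST = {l: TILE_IDS.index((l, 0)) for l in LETTERS}     # first copy of each letter

def successors(t):
    """Tiles allowed to follow tile t in a canonical leave."""
    if t >= 0:
        letter, copy = TILE_IDS[t]
        if copy + 1 < TILES[letter]:                     # next copy of the same letter
            yield t + 1
        start = LETTERS.index(letter) + 1
    else:                                                # virtual start: any letter's first copy
        start = 0
    for l in LETTERS[start:]:                            # first copy of every later letter
        yield FIRST[l]

@lru_cache(maxsize=None)
def count(t, length):
    """N(t, length): canonical leaves of this length whose lowest tile is t."""
    if length == 1:
        return 1
    return sum(count(s, length - 1) for s in successors(t))

def rank(leave):
    """Position of a leave among all canonical leaves of the same length."""
    tiles, seen = [], {}
    for letter in sorted(leave):
        tiles.append(TILE_IDS.index((letter, seen.get(letter, 0))))
        seen[letter] = seen.get(letter, 0) + 1
    pos, prev = 0, -1
    for i, t in enumerate(tiles):
        remaining = len(tiles) - i
        # every more restrictive (higher-numbered) choice at this position comes first
        pos += sum(count(s, remaining) for s in successors(prev) if s > t)
        prev = t
    return pos

print(sum(count(FIRST[l], 6) for l in LETTERS))   # should print 737311
print(rank('WWXYYZ'))                             # 0: the most restricted leave

In the real thing you would of course precompute the partial sums of count() per position into small tables, as described above, instead of summing on the fly.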

Oh, and the most important optimization? Don't do the lookup twice…

Planet DebianGunnar Wolf: DebConf20 talk recorded

Following Antonio Terceiro’s post on tips for using ffmpeg for editing video, I will also share a bit of my experience producing my video for my session in DebConf20.

I recorded my talk today. As Terceiro mentioned, even though I’m used to speaking in front of my webcam (i.e. for my classes and some smaller conferences I’ve worked on during the COVID lockdown), it does feel a bit weird to present a live talk to… nobody :-|

OK, one step back. Why are we doing this? Because our hardworking friends of the DebConf20 video team recommended so. In order to minimize connectivity issues from the variety of speakers throughout the world, we were requested to pre-record the exposition part of our talks, send them to the video team (deadline: today 2020-08-16, in case you still owe yours!), and make sure to be present at the end of the talk for the Q&A session. Of course, for a 45 minute talk, I prepared a 30 minute presentation, saving time for said Q&A session.

Anyway, I used the excellent OBS Studio live video mixing/editing program (of course, Debian packages are available). This allowed me to set up several predefined views (combinations and layouts of the presentation, webcam, and maybe some other sources) and professionally and elegantly switch between them on the fly.

I am still a newbie with OBS, but I surely see it becoming a part of my day to day streaming. Of course, my setup still was obvious (me looking right every now and then to see or control OBS, as I work on a dual-monitor setup…)

Anyway, the experience was very good, much smoother and faster than what I usually have to do when editing video. But just as I was finishing thanking the (future) audience and closing the recording… I had to tell the camera, “oh, fuck!”

The button labeled “Start Recording”… Had not been pressed. So, did I just lose 30 minutes of my life, plus a half-decent delivered talk? No, fortunately not. I had previously been playing with OBS, and configured some things. The button I did press was “Start Streaming”.

So, my talk (swearing included, of course) was dutifully streamed over to my YouTube channel. It seems up to five people got a sneak preview as to what will my DebConf participation be (of course, I’ve de-listed the video). I pulled it with the always-handy youtube-dl, edited out my curses using kdenlive, and pushed it to the DebConf video server.

Oh, make sure you follow the advice for recording presentations. It has all the relevant advice, the settings you should use, and much more welcome information if you are new to this.

So… Next week, DebConf20! Be there or be square!

,

Planet DebianAntonio Terceiro: Useful ffmpeg commands for editing video

For DebConf20, we are recommending that speakers pre-record the presentation part of their talks, and will have live Q&A. We had a smaller online MiniDebConf a couple of months ago, where for instance I had connectivity issues during my talk, so even though it feels too artificial, I guess pre-recording can greatly decrease the likelihood of a given talk going bad.

Paul Gevers and I submitted a short 20 min talk giving an update on autopkgtest, ci.debian.net and friends. We will provide the latest updates on autopkgtest, autodep8, debci, ci.debian.net, and its integration with the Debian testing migration software, britney.

We agreed on a split of the content, each one recorded their part, and I offered to join them together. The logical chaining of the topics is such that we can't just concatenate the recordings, so we need to interlace our parts.

So I set out to do a full video editing work. I have done this before, although in a simpler way, for one of the MiniDebconfs we held in Curitiba. In that case, it was just cutting the noise at the beginning and the end of the recording, and adding beginning and finish screens with sponsors logos etc.

The first issue I noticed was that both our recordings had a decent amount of audio noise. To extract the audio track from the videos, I resorted to How can I extract audio from video with ffmpeg? on Stack Overflow:

ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac

I then edited the audio with Audacity. I passed a noise reduction filter a couple of times, then applied a compressor filter on mine to amplify my recording, as Paul's already had a good volume. And those are my more advanced audio editing skills, which I acquired doing my own podcast.

I now realized I could have just muted the audio tracks from the original clip and align the noise-free audio with it, but I ended up creating new video files with the clean audio. Another member of the Stack Overflow family came to the rescue, in How to merge audio and video file in ffmpeg. To replace the audio stream, we can do something like this:

ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 output.mp4

Paul's recording had a 4:3 aspect ratio, while the requested format is 16:9. This late in the game, there was zero chance I would request him to redo the recording. So I decided to add those black bars on the side to make it the right aspect when showing full screen. And yet again the quickest answer I could find came from the Stack Overflow empire: ffmpeg: pillarbox 4:3 to 16:9:

ffmpeg -i "input43.mkv" -vf "scale=640x480,setsar=1,pad=854:480:107:0" [etc..]

The final editing was done with pitivi, which is what I have used before. I'm a very basic user, but I could do what I needed. It was basically splitting the clips at the right places, inserting the slides as images and aligning them with the video, and making most our video appear small in the corner when presenting the slides.

P.S.: all the command lines presented here are examples, basically copied from the linked Q&As, and have to be adapted to your actual input and output formats.

Planet DebianSylvain Beucler: Planet upgrade

planet.gnu.org logo

The system running planet.gnu.org was upgraded/reinstalled to Debian 10 "buster" :)
Documentation was updated.

Let me know if you notice any issue - planet@gnu.org.

For the next upgrade, we'll have to decide whether to take over Planet Venus and upgrade it to Python 3, or migrate to another Planet software.
Suggestions/help welcome :)

Planet DebianJelmer Vernooij: Debian Janitor: 8,200 changes landed so far

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The bot has been submitting merge requests for about seven months now. The rollout has happened gradually across the Debian archive, and the bot is now enabled for all packages maintained on Salsa, GitLab, GitHub and Launchpad.

There are currently over 1,000 open merge requests, and close to 3,400 merge requests have been merged so far. Direct pushes are enabled for a number of large Debian teams, with about 5,000 direct pushes to date. That covers about 11,000 lintian tags of varying severities (about 75 different varieties) fixed across Debian.

Graphs: Janitor pushes over time; Janitor merges over time

For more information about the Janitor's lintian-fixes efforts, see the landing page

,

CryptogramFriday Squid Blogging: Editing the Squid Genome

Scientists have edited the genome of the Doryteuthis pealeii squid with CRISPR. A first.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityMedical Debt Collection Firm R1 RCM Hit in Ransomware Attack

R1 RCM Inc. [NASDAQ:RCM], one of the nation’s largest medical debt collection companies, has been hit in a ransomware attack.

Formerly known as Accretive Health Inc., Chicago-based R1 RCM brought in revenues of $1.18 billion in 2019. The company has more than 19,000 employees and contracts with at least 750 healthcare organizations nationwide.

R1 RCM acknowledged taking down its systems in response to a ransomware attack, but otherwise declined to comment for this story.

The “RCM” portion of its name refers to “revenue cycle management,” an industry which tracks profits throughout the life cycle of each patient, including patient registration, insurance and benefit verification, medical treatment documentation, and bill preparation and collection from patients.

The company has access to a wealth of personal, financial and medical information on tens of millions of patients, including names, dates of birth, Social Security numbers, billing information and medical diagnostic data.

It’s unclear when the intruders first breached R1’s networks, but the ransomware was unleashed more than a week ago, right around the time the company was set to release its 2nd quarter financial results for 2020.

R1 RCM declined to discuss the strain of ransomware it is battling or how it was compromised. Sources close to the investigation tell KrebsOnSecurity the malware is known as Defray.

Defray was first spotted in 2017, and its purveyors have a history of specifically targeting companies in the healthcare space. According to Trend Micro, Defray usually is spread via booby-trapped Microsoft Office documents sent via email.

“The phishing emails the authors use are well-crafted,” Trend Micro wrote. For example, in an attack targeting a hospital, the phishing email was made to look like it came from a hospital IT manager, with the malicious files disguised as patient reports.

Email security company Proofpoint says the Defray ransomware is somewhat unusual in that it is typically deployed in small, targeted attacks as opposed to large-scale “spray and pray” email malware campaigns.

“It appears that Defray may be for the personal use of specific threat actors, making its continued distribution in small, targeted attacks more likely,” Proofpoint observed.

A recent report (PDF) from Corvus Insurance notes that ransomware attacks on companies in the healthcare industry have slowed in recent months, with some malware groups even dubiously pledging they would refrain from targeting these firms during the COVID-19 pandemic. But Corvus says that trend is likely to reverse in the second half of 2020 as the United States moves cautiously toward reopening.

Corvus found that while services that scan and filter incoming email for malicious threats can catch many ransomware lures, an estimated 75 percent of healthcare companies do not use this technology.

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

CryptogramDrovorub Malware

The NSA and FBI have jointly disclosed Drovorub, a Russian malware suite that targets Linux.

Detailed advisory. Fact sheet. News articles. Reddit thread.

Planet DebianRussell Coker: Jitsi on Debian

I’ve just setup an instance of the Jitsi video-conference software for my local LUG. Here is an overview of how to set it up on Debian.

Firstly create a new virtual machine to run it. Jitsi is complex and has lots of inter-dependencies. Its packages want to help you by dragging in other packages and configuring them. This is great if you have a blank slate to start with, but if you already have one component installed and running then it can break things. It wants to configure the Prosody Jabber server and a web server and my first attempt at an install failed when it tried to reconfigure the running instances of Prosody and Apache.

Here’s the upstream install docs [1]. They cover everything fairly well, but I’ll document the configuration I wanted (basic public server with password required to create a meeting).

Basic Installation

The first thing to do is to get a short DNS name like j.example.com. People will type that every time they connect and will thank you for making it short.

Using Certbot for certificates is best. It seems that you need them for j.example.com and auth.j.example.com.

apt install curl certbot
/usr/bin/letsencrypt certonly --standalone -d j.example.com,auth.j.example.com -m you@example.com
curl https://download.jitsi.org/jitsi-key.gpg.key | gpg --dearmor > /etc/apt/jitsi-keyring.gpg
echo "deb [signed-by=/etc/apt/jitsi-keyring.gpg] https://download.jitsi.org stable/" > /etc/apt/sources.list.d/jitsi-stable.list
apt-get update
apt-get -y install jitsi-meet

When apt installs jitsi-meet and its dependencies you get asked many questions for configuring things. Most of it works well.

If you get the nginx certificate wrong or don’t have the full chain then phone clients will abort connections for no apparent reason, it seems that you need to edit /etc/nginx/sites-enabled/j.example.com.conf to use the following ssl configuration:

ssl_certificate /etc/letsencrypt/live/j.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/j.example.com/privkey.pem;

Then you have to edit /etc/prosody/conf.d/j.example.com.cfg.lua to use the following ssl configuration:

key = "/etc/letsencrypt/live/j.example.com/privkey.pem";
certificate = "/etc/letsencrypt/live/j.example.com/fullchain.pem";

It seems that you need to have an /etc/hosts entry with the public IP address of your server and the names “j.example.com j auth.j.example.com”. Jitsi also appears to use the names “speakerstats.j.example.com conferenceduration.j.example.com lobby.j.example.com conference.j.example.com internal.auth.j.example.com” but they aren’t required for a basic setup, I guess you could add them to /etc/hosts to avoid the possibility of strange errors due to it not finding an internal host name. There are optional features of Jitsi which require some of these names, but so far I’ve only used the basic functionality.
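As a hypothetical example, using a placeholder public address, such an entry could look like this:

203.0.113.10    j.example.com j auth.j.example.com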

Access Control

This section describes how to restrict conference creation to authenticated users.

The secure-domain document [2] shows how to restrict access, but I’ll summarise the basics.

Edit /etc/prosody/conf.avail/j.example.com.cfg.lua and use the following line in the main VirtualHost section:

        authentication = "internal_hashed"

Then add the following section:

VirtualHost "guest.j.example.com"
        authentication = "anonymous"
        c2s_require_encryption = false
        modules_enabled = {
            "turncredentials";
        }

Edit /etc/jitsi/meet/j.example.com-config.js and add the following line:

        anonymousdomain: 'guest.j.example.com',

Edit /etc/jitsi/jicofo/sip-communicator.properties and add the following line:

org.jitsi.jicofo.auth.URL=XMPP:j.example.com

Then run commands like the following to create new users who can create rooms:

prosodyctl register admin j.example.com

Then restart most things (Prosody at least, maybe parts of Jitsi too), I rebooted the VM.

Now only the accounts you created on the Prosody server will be able to create new meetings. You should be able to add, delete, and change passwords for users via prosodyctl while it’s running once you have set this up.
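For example, something like the following should add a user, change a password, and remove a user again (a sketch; check the prosodyctl(1) man page on your version for the exact sub-commands):

prosodyctl adduser someuser@j.example.com
prosodyctl passwd someuser@j.example.com
prosodyctl deluser someuser@j.example.com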

Conclusion

Once I gave up on the idea of running Jitsi on the same server as anything else it wasn’t particularly difficult to set up. Some bits were a little fiddly and hopefully this post will be a useful resource for people who have trouble understanding the documentation. Generally it’s not difficult to install if it is the only thing running on a VM.

Planet DebianMarkus Koschany: My Free Software Activities in July 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in August) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month GCC 10 became the new default compiler for Debian 11 and compilation errors are now release critical. The change affected dozens of games in the archive but fortunately most of them are rather easy to fix and a quick workaround is available. I uploaded several packages with patches from Reiner Herrmann including blastem, freegish, gngb, phlipple, xaos, xboard, gamazons and freesweep. I could add to this list atomix, teg, neverball and biniax2. I am quite confident we can fix the rest of those FTBFS bugs before the freeze.
  • Finally freeorion 0.4.10 was released last month. Among new gameplay changes and bug fixes, freeorion’s Python 2 code was ported to Python 3.
  • Due to the ongoing Python 2 removal pygame-sdl2 in unstable could no longer be built from source and I had to upload the new Python 3 version from experimental. This in turn breaks renpy, a framework for developing visual-novel type games. At the moment it is uncertain if there will be a Python 3 version of renpy for Debian 11 in time while this issue is still being worked on upstream.
  • I uploaded a new upstream release of mgba, a Game Boy Advance emulator, for Ryan Tandy.

Debian Java

Misc

  • I fixed the GCC 10 FTBFS in iftop and packaged a new upstream release of osmo, a lean and lightweight personal organizer.
  • New versions of privacybadger, binaryen, wabt and most importantly ublock-origin are also available now. Since the new binary packages webext-ublock-origin-firefox and webext-ublock-origin-chromium were finally accepted into the archive, I am planning to package version 1.29.0 now.

Debian LTS

This was my 53rd month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2278-2. Issued a regression update for squid3. It was discovered that the patch for CVE-2019-12523 interrupted the communication between squid and icap or ecap services. The setup is most commonly used with clamav or similar antivirus scanners. I debugged the problem and created a new patch to address the error. In this process I also updated the patch for CVE-2019-12529 to use more code from Debian’s cryptographic nettle library. I also enabled the test suite by default now and corrected a failing test.
  • I have been working on fixing CVE-2020-15049 in squid3. The upstream patch for the 4.x series appears to be simple but to completely address the underlying problem, squid3 requires a backport of the new HttpHeader parsing code which has improved a lot over the last couple of years. The patch is complete but requires more testing. A new update will follow soon.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 „Jessie“. This was my 26th month and I have been paid to work 13.25 hours on ELTS.

  • ELA-242-1. Issued a security update for tomcat7 fixing 1 CVE.
  • ELA-243-1. Issued a security update for tomcat8 fixing 1 CVE.
  • ELA-253-1. Issued a security update for imagemagick fixing 18 CVE.
  • ELA-254-1. Issued a security update for libssh fixing 1 CVE.

Thanks for reading and see you next time.

LongNowHow to Be in Time

Photograph: Scott Thrift.

“We already have timepieces that show us how to be on time. These are timepieces that show us how to be in time.”

– Scott Thrift

Slow clocks are growing in popularity, perhaps as a tonic for or revolt against the historical trend of ever-faster timekeeping mechanisms.

Given that bell tower clocks were originally used to keep monastic observances of the sacred hours, it seems appropriate to restore some human agency in timing and give kairos back some of the territory it lost to the minute and second hands so long ago…

Scott Thrift’s three conceptual timepieces measure with only one hand each, counting 24-hour, one-month, and one-year cycles with each revolution. Not quite 10,000 years, but it’s a consumer-grade start.

“Right now we’re living in the long-term effects of short-term thinking. I don’t think it’s possible really for us to commonly think long term if the way that we tell time is with a short-term device that just shows the seconds, minutes, and hours. We’re precluded to seeing things in the short term.”

– Scott Thrift

Planet DebianJonathan Carter: bashtop, now in buster-backports

Recently, I discovered bashtop, yet another fancy top-like utility that’s mostly written in bash (it uses some python3-psutil and shells out to other common system utilities). I like its use of high-colour graphics and despite being written in bash, it’s not as resource heavy as I would have expected and also quite snappy (even on a raspberry pi). While writing this post, I also discovered that the author of bashtop ported it to Python and that the python version is called bpytop (hmm, doesn’t quite have the same ring to it), which is even faster and less resource intensive than the bash version (although I haven’t tried that yet, I guess I will soon…).

I set out to package it, but someone beat me to it, but since I’m also on the backports team these days, I went ahead and backported it for buster. So if you have backports enabled, you can now install it using “apt install bashtop -t buster-backports”.

Dylan Aïssi, who packaged bashtop in Debian, has already filed an ITP for bpytop, so we’ll soon have yet another top-like tool in our collection :-)

Planet DebianSven Hoexter: Retrieve WLAN PSK via nmcli

Note to myself so I do not have to figure that out every few months when I have to dig out a WLAN PSK from my existing configuration.

Step 1: Figure out the UUID of the network:

$ nmcli con show
NAME                  UUID                                  TYPE      DEVICE          
br-59d010130b86       d8672d3d-7cf6-484f-9ef8-e6ec3e73bef7  bridge    br-59d010130b86 
FRITZ!Box 7411        1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2  wifi      wlp4s0
[...]

Step 2: Request to view the PSK for this network based on the UUID

$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
802-11-wireless-security.psk:           0815471123420511111
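If your nmcli is recent enough, both steps can be collapsed into a single command with --get-values (a sketch; the short options may differ on older NetworkManager versions):

$ nmcli -s -g 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
0815471123420511111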

Planet DebianJonathan Dowland: Generic Haskell

When I did the work described earlier in template haskell, I also explored generic programming in Haskell to solve a particular problem. StrIoT is a program generator: it outputs source code, which may depend upon other modules, which need to be imported via declarations at the top of the source code files.

The data structure that StrIoT manipulates contains information about what modules are loaded to resolve the names that have been used in the input code, so we can walk that structure to automatically derive an import list. The generic programming tools I used for this are from Scrap Your Boilerplate (SYB), a module written to complement a paper of the same name. In this code snippet, everything and mkQ are from SYB:

extractNames :: Data a => a -> [Name]
extractNames = everything (++) (\a -> mkQ [] f a)
     where f = (:[])

The input must be any type which implements typeclass Data, as must all its members (and their members etc.): this holds for the Template Haskell Exp types. The output is a normal list of Names. The utility function f has a more specific type Name -> [Name]. This is all that's needed to walk over the heterogeneous data structures and do something specific (f) when we encounter a Name.

Post-processing the Names to get a list of modules is simple

 nub . catMaybes . map nameModule . concatMap extractNames

Unfortunately, there's a weird GHC behaviour relating to the module names for some Prelude functions that makes the above less useful in practice. For example, the Prelude function words :: String -> [String] can normally be used without an explicit import (since it's a Prelude function). However, once round-tripped through a Name, it becomes GHC.OldList.words. Attempting to import GHC.OldList fails in some contexts, because it's a hidden module or package. I've been meaning to investigate further and, if necessary, file a GHC bug about this.
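To see the SYB query and this canonicalisation quirk in a runnable form, here is a small self-contained sketch; the module layout and the example expression are mine, not StrIoT's actual code:

{-# LANGUAGE TemplateHaskell #-}
import Data.Generics (Data, everything, mkQ)
import Data.List (nub)
import Data.Maybe (mapMaybe)
import Language.Haskell.TH

-- the same query as above, with the Name-specific function annotated explicitly
extractNames :: Data a => a -> [Name]
extractNames = everything (++) (mkQ [] f)
  where
    f :: Name -> [Name]
    f n = [n]

main :: IO ()
main = do
  expr <- runQ [| words . unwords |]                     -- build an Exp via a quotation
  mapM_ (putStrLn . pprint) (extractNames expr)          -- canonicalised Names
  print (nub (mapMaybe nameModule (extractNames expr)))  -- e.g. ["GHC.OldList", ...]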

For this reason I've scrapped all the above and gone with a different plan. We go back to requiring the user to specify their required import list explicitly. We then walk over the Exp data type prior to code generation and decanonicalize all the Names. I also use generic programming/SYB to do this:

unQualifyNames :: Exp -> Exp
unQualifyNames = everywhere (\a -> mkT f a)
     where f :: Name -> Name
           f n = if n == '(.)
            then mkName "."
            else (mkName . last . splitOn "." . pprint) n

I've had to special-case composition (.) since that code-point is also used as the delimiter between package, module and function. Otherwise this looks very similar to the earlier function, except using everywhere and mkT (make transformation) instead of everything and mkQ (make query).

Worse Than FailureError'd: New Cat Nullness

"Honest! If I could give you something that had a 'cat' in it, I would!" wrote Gordon P.

 

"You'd think Outlook would hage told me sooner about these required updates," Carlos writes.

 

Colin writes, "Asking for a friend, does balsamic olive oil still have to be changed every 3,000 miles?"

 

"I was looking for Raspberry Pi 4 cases on my local Amazon.co.jp when I stumbled upon a pretty standard, boring WTF. Desparate to find an actual picture of the case I was after, I changed to Amazon.com and I guess I got what I wanted," George wrote. (Here are the short versions: https://www.amazon.co.jp/dp/B07TFDFGZFhttps://www.amazon.com/dp/B07TFDFGZF)

 

Kevin wrote, "Ah, I get it. Shiny and blinky ads are SO last decade. Real container advertisers nowadays get straight to the point!"

 

"I noticed this in the footer of an email from my apartment management company and well, I'm intrigued at the possibility of 'rewards'," wrote Peter C.

 


Planet DebianKeith Packard: picolibc-news

Picolibc Updates

I thought work on picolibc would slow down at some point, but I keep finding more things that need work. I spent a few weeks working in libm and then discovered some important memory allocation bugs in the last week that needed attention too.

Cleaning up the Picolibc Math Library

Picolibc uses the same math library sources as newlib, which includes code from a range of sources:

  • SunPro (Sun Microsystems). This forms the bulk of the common code for the math library, with copyright dates stretching back to 1993. This code is designed for processors with FPUs and uses 'float' for float functions and 'double' for double functions.

  • NetBSD. This is where the complex functions came from, with Copyright dates of 2007.

  • FreeBSD. fenv support for aarch64, arm and sparc.

  • IBM. SPU processor support along with a number of stubs for long-double support where long double is the same as double.

  • Various processor vendors have provided processor-specific code for exceptions and a few custom functions.

  • Szabolcs Nagy and Wilco Dijkstra (ARM). These two re-implemented some of the more important functions in 2017-2018 for both float and double using double precision arithmetic and fused multiply-add primitives to improve performance for systems with hardware double precision support.

The original SunPro math code had been split into two levels at some point:

  1. IEEE-754 functions. These offer pure IEEE-754 semantics, including return values and exceptions. They do not set the POSIX errno value. These are all prefixed with __ieee754_ and can be called directly by applications if desired.

  2. POSIX functions. These can offer POSIX semantics, including setting errno and returning expected values when errno is set.
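
To make the split concrete, here is a rough sketch of the wrapper pattern (names simplified; this is my own illustration, not the actual newlib or picolibc source):

#include <errno.h>
#include <math.h>

/* Hypothetical sketch: the __ieee754_ function provides pure IEEE-754
 * behaviour, and the POSIX-named wrapper decides, via the run-time
 * _LIB_VERSION switch, whether to set errno as well. */
extern double __ieee754_sqrt(double);

double sqrt(double x)
{
    double z = __ieee754_sqrt(x);       /* IEEE-754 result: NaN for x < 0 */
    if (_LIB_VERSION == _IEEE_ || !(x < 0.0))
        return z;                       /* IEEE mode: no errno side effects */
    errno = EDOM;                       /* POSIX mode: flag the domain error */
    return z;
}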

New Code Sponsored by ARM

Szabolcs Nagy and Wilco Dijkstra's work in the last few years has been to improve the performance of some of the core math functions, which is much appreciated. They've adopted a more modern coding style (C99) and written faster code at the expense of a larger memory footprint.

One interesting choice was to use double computations for the float implementations of various functions. This makes these functions shorter and more accurate than versions done using float throughout. However, for machines which don't have HW double, this pulls in soft double code which adds considerable size to the resulting binary and slows down the computations, especially if the platform does support HW float.

The new code also takes advantage of HW fused-multiply-add instructions. Those offer more precision than a sequence of primitive instructions, and so the new code can be much shorter as a result.
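
A minimal illustration of that precision difference, in plain C99 and not specific to picolibc: the separate multiply rounds its intermediate result before the add, while fma rounds only once at the end.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* a*b is exactly 1 - 2^-54, which rounds up to 1.0 as a double, so the
     * separate version loses the difference entirely. Compile with
     * contraction disabled (e.g. gcc -ffp-contract=off) so the compiler
     * does not quietly fuse the "separate" expression itself. */
    double a = 1.0 + 0x1p-27;
    double b = 1.0 - 0x1p-27;
    double c = -1.0;

    printf("separate: %a\n", a * b + c);     /* 0x0p+0 */
    printf("fused:    %a\n", fma(a, b, c));  /* -0x1p-54 */
    return 0;
}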

The method used to detect whether the target machine supported fma operations was slightly broken on 32-bit ARM platforms, where those with 'float' fma acceleration but without 'double' fma acceleration would use the shorter code sequence, but with an emulated fma operation that used the less-precise sequence of operations, leading to significant reductions in the quality of the resulting math functions.

I fixed the double fma detection and then also added float fma detection along with implementations of float and double fma for ARM and RISC-V. Now both of those platforms get fma-enhanced math functions where available.

Errno Adventures

I'd submitted patches to newlib a while ago that aliased the regular math library names to the __ieee754_ functions when the library was configured to not set errno, which is pretty common for embedded environments where a shared errno is a pain anyways.

Note the use of the word “can” in the remark about the old POSIX wrapper functions. That's because all of these functions are run-time switchable between “_IEEE_” and “_POSIX_” mode using the _LIB_VERSION global symbol. When left in the usual _IEEE_ mode, none of this extra code was ever executed, so these wrapper functions never did anything beyond what the underlying __ieee754_ functions did.

The new code by Nagy and Dijkstra changed how functions are structured to eliminate the underlying IEEE-754 api. These new functions use tail calls to various __math_ error reporting functions. Those can be configured at library build time to set errno or not, centralizing those decisions in a few functions.

The result of this combination of source material is that in the default configuration, some library functions (those written by Nagy and Dijkstra) would set errno and others (the old SunPro code) would not. To disable all errno references, the library would need to be compiled with a set of options, -D_IEEE_LIBM to disable errno in the SunPro code and -DWANT_ERRNO=0 to disable errno in the new code. To enable errno everywhere, you'd set -D_POSIX_MODE to make the default value for _LIB_VERSION be _POSIX_ instead of _IEEE_.

To clean all of this up, I removed the run-time _LIB_VERSION variable and made that compile-time. In combination with the earlier work to alias the __ieee754_ functions to the regular POSIX names when _IEEE_LIBM was defined this means that the old SunPro POSIX functions now only get used when _IEEE_LIBM is not defined, and in that case the _LIB_VERSION tests always force use of the errno setting code. In addition, I made the value of WANT_ERRNO depend on whether _IEEE_LIBM was defined, so now a single definition (-D_IEEE_LIBM) causes all of the errno handling from libm to be removed, independent of which code is in use.
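
The effect of that last change can be sketched roughly like this (an illustration of the idea, not the literal picolibc header):

/* A single -D_IEEE_LIBM now implies "no errno handling anywhere", covering
 * both the old SunPro code and the __math_ error-reporting helpers used by
 * the newer ARM-contributed functions. */
#ifndef WANT_ERRNO
# ifdef _IEEE_LIBM
#  define WANT_ERRNO 0
# else
#  define WANT_ERRNO 1
# endif
#endif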

As part of this work, I added a range of errno tests for the math functions to find places where the wrong errno value was being used.

Exceptions

As an alternative to errno, C also provides for IEEE-754 exceptions through the fenv functions. These have some significant advantages, including having independent bits for each exception type and having them accumulate instead of sharing errno with a huge range of other C library functions. Plus, they're generally implemented in hardware, so you get exceptions for both library functions and primitive operations.

Well, you should get exceptions everywhere, except that the GCC soft float libraries don't support them at all. So, errno can still be useful if you need to know what happened in your library functions when using soft floats.
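
For reference, checking an exception with the standard C fenv interface looks roughly like this (plain C99, not one of the new picolibc tests):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Strictly conforming code would also use #pragma STDC FENV_ACCESS ON. */
    feclearexcept(FE_ALL_EXCEPT);        /* start with no exceptions raised */

    volatile double x = -1.0;
    double y = sqrt(x);                  /* domain error raises FE_INVALID */

    if (fetestexcept(FE_INVALID))
        printf("FE_INVALID raised, result is %f\n", y);
    return 0;
}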

Newlib has recently seen a spate of fenv support being added for various architectures, so I decided that it would be a good idea to add some tests. I added tests for both primitive operations, and then tests for library functions to check both exceptions and errno values. Oddly, this uncovered a range of minor mistakes in various math functions. Lots of these were mistakes in the SunPro POSIX wrapper functions where they modified the return values from the __ieee754_ implementations. Simply removing those value modifications fixed many of those errors.

Fixing Memory Allocator bugs

Picolibc inherits malloc code from newlib which offers two separate implementations, one big and fast, the other small and slow(er). Selecting between them is done while building the library, and as Picolibc is expected to be used on smaller systems, the small and slow one is the default.

Contributed by someone from ARM back in 2012/2013, nano-mallocr reminds me of the old V7 memory allocator. A linked list, sorted in address order, holds discontiguous chunks of available memory.

Allocation is done by searching for a large enough chunk in the list. The first one large enough is selected; if it is larger than needed, the excess is split off and left on the free list while the remainder is handed to the application. When the list doesn't have any chunk large enough, sbrk is called to get more memory.

Free operations involve walking the list and inserting the chunk in the right location, merging the freed memory with any immediately adjacent chunks to reduce fragmentation.

The size of each chunk is stored just before the first byte of memory used by the application, where it remains while the memory is in use and while on the free list. The free list is formed by pointers stored in the active area of the chunk, so the only overhead for chunks in use is the size field.
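
A hypothetical sketch of that layout and of the first-fit search (simplified, with field names of my own choosing rather than the real nano-mallocr ones):

#include <stddef.h>

typedef struct chunk {
    size_t        size;   /* stored just before the application memory */
    struct chunk *next;   /* only meaningful while the chunk is on the free list */
} chunk_t;

static chunk_t *free_list;   /* kept sorted by address */

/* First fit: return the first free chunk big enough for the request.
 * Unlinking the chunk and splitting off any excess are omitted here. */
static chunk_t *find_chunk(size_t want)
{
    for (chunk_t *c = free_list; c != NULL; c = c->next)
        if (c->size >= want)
            return c;
    return NULL;             /* caller would then call sbrk() for more memory */
}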

Something Something Padding

To deal with the vagaries of alignment, the original nano-mallocr code would allow for there to be 'padding' between the size field and the active memory area. The amount of padding could vary, depending on the alignment required for a particular chunk (in the case of memalign, that padding can be quite large). If present, nano-mallocr would store the padding value in the location immediately before the active area and distinguish that from a regular size field by a negative sign.

The whole padding thing seems mysterious to me -- why would it ever be needed when the allocator could simply create chunks that were aligned to the required value and a multiple of that value in size? The only use I could think of was for memalign; adding this padding field would allow for less over-allocation to find a suitable chunk. I didn't feel like this one (infrequent) use case was worth the extra complexity; it certainly caused me difficulty in reading the code.

A Few Bugs

In reviewing the code, I found a couple of easy-to-fix bugs.

  • calloc was not checking for overflow in multiplication. This is something I've only heard about in the last five or six years -- multiplying the size of each element by the number of elements can wrap around to a small value, so the allocation may actually succeed and cause the program to misbehave.

  • realloc copied new_size bytes from the original location to the new location. If the new size was larger than the old, this would read off the end of the original allocation, potentially disclosing information from an adjacent allocation or walk off the end of physical memory and cause some hard fault.

Time For Testing

Once I had uncovered a few bugs in this code, I decided that it would be good to write a few tests to exercise the API. With the tests running on four architectures in nearly 60 variants, it seemed like I'd be able to uncover at least a few more failures:

  • Error tests. Allocate too much memory and make sure the correct errors were returned and that nothing obviously untoward happened.

  • Touch tests. Just call the allocator and validate the return values.

  • Stress test. Allocate lots of blocks, resize them and free them. Make sure, using 'mallinfo', that the malloc arena looked reasonable.

These new tests did find bugs. But not where I expected them. Which is why I'm so fond of testing.

GCC Optimizations

One of my tests was to call calloc and make sure it returned a chunk of memory that appeared to work or failed with a reasonable value. To my surprise, on aarch64, that test never finished. It worked elsewhere, but on that architecture it hung in the middle of calloc itself. Which looked like this:

void * nano_calloc(malloc_size_t n, malloc_size_t elem)
{
    ptrdiff_t bytes;
    void * mem;

    if (__builtin_mul_overflow (n, elem, &bytes))
    {
        RERRNO = ENOMEM;
        return NULL;
    }
    mem = nano_malloc(bytes);
    if (mem != NULL) memset(mem, 0, bytes);
    return mem;
}

Note the naming here -- nano_mallocr uses nano_ prefixes in the code, but then uses #defines to change their names to those expected in the ABI. (No, I don't understand why either). However, GCC sees the real names and has some idea of what these functions are supposed to do. In particular, the pattern:

foo = malloc(n);
if (foo) memset(foo, '\0', n);

is converted into a shorter and semantically equivalent:

foo = calloc(n, 1);

Alas, GCC doesn't take into account that this optimization is occurring inside of the implementation of calloc.

Another sequence of code looked like this:

chunk->size = foo
nano_free((char *) chunk + CHUNK_OFFSET);

Well, GCC knows that the content of memory passed to free cannot affect the operation of the application, and so it converted this into:

nano_free((char *) chunk + CHUNK_OFFSET);

Remember that nano_mallocr stores the size of the chunk just before the active memory. In this case, nano_mallocr was splitting a large chunk into two pieces, setting the size of the left-over part and placing that on the free list. Failing to set that size value left whatever was there before for the size and usually resulted in the free list becoming quite corrupted.

Both of these problems can be corrected by compiling the code with a couple of GCC command-line switches (-fno-builtin-malloc and -fno-builtin-free).

Reworking Malloc

Having spent this much time reading through the nano_mallocr code, I decided to just go through it and make it easier for me to read today, hoping that other people (which includes 'future me') will also find it a bit easier to follow. I picked a couple of things to focus on:

  1. All newly allocated memory should be cleared. This reduces information disclosure between whatever code freed the memory and whatever code is about to use the memory. Plus, it reduces the effect of un-initialized allocations as they now consistently get zeroed memory. Yes, this masks bugs. Yes, this goes slower. This change is dedicated to Kees Cook, but please blame me for it not him.

  2. Get rid of the 'Padding' notion. Every time I read this code it made my brain hurt. I doubt I'll get any smarter in the future.

  3. Realloc could use some love, improving its efficiency in common cases to reduce memory usage.

  4. Reworking linked list walking. nano_mallocr uses a singly-linked free list and open-codes all list walking. Normally, I'd switch to a library implementation to avoid introducing my own bugs, but in this fairly simple case, I think it's a reasonable compromise to open-code the list operations using some patterns I learned while working at MIT from Bob Scheifler.

  5. Discover necessary values, like padding and the limits of the memory space, from the environment rather than having them hard-coded.

Padding

To get rid of 'Padding' in malloc, I needed to make sure that every chunk was aligned and sized correctly. Remember that there is a header on every allocated chunk which is stored before the active memory which contains the size of the chunk. On 32-bit machines, that size is 4 bytes. If the machine requires allocations to be aligned on 8-byte boundaries (as might be the case for 'double' values), we're now going to force the alignment of the header to 8-bytes, wasting four bytes between the size field and the active memory.

Well, the existing nano_mallocr code also wastes those four bytes to store the 'padding' value. Using a consistent alignment for chunk starting addresses and chunk sizes has made the code a lot simpler and easier to reason about while not using extra memory for normal allocation. Except for memalign, which I'll cover in the next section.

realloc

The original nano_realloc function was as simple as possible:

mem = nano_malloc(new_size);
if (mem) {
    memcpy(mem, old, MIN(old_size, new_size));
    nano_free(old);
}
return mem;

However, this really performs badly when the application is growing a buffer while accumulating data. A couple of simple optimizations occurred to me:

  1. If there's a free chunk just after the original location, it could be merged to the existing block and avoid copying the data.

  2. If the original chunk is at the end of the heap, call sbrk() to increase the size of the chunk.

The second one seems like the more important case; in a small system, the buffer will probably land at the end of the heap at some point, at which point growing it to the size of available memory becomes quite efficient.

When shrinking the buffer, instead of allocating new space and copying, if there's enough space being freed for a new chunk, create one and add it to the free list.
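
For the growing case, here is a hypothetical sketch of the first optimization, reusing the chunk layout from the sketch earlier (the helper names are invented for illustration, not the real allocator API):

/* If the chunk directly after 'old' is free and the combined space is big
 * enough, absorb it instead of allocating a new block and copying. */
static void *grow_in_place(chunk_t *old, size_t new_size)
{
    chunk_t *after = (chunk_t *)((char *)old + old->size);

    if (is_on_free_list(after) && old->size + after->size >= new_size) {
        unlink_from_free_list(after);       /* invented helper */
        old->size += after->size;           /* splitting any excess is omitted */
        return (char *)old + CHUNK_OFFSET;  /* CHUNK_OFFSET: start of user data */
    }
    return NULL;    /* caller falls back to malloc + memcpy + free */
}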

List Walking

Walking a singly-linked list seems like one of the first things we see when learning pointer manipulation in C:

for (element = head; element; element = element->next)
    do stuff ...

However, this becomes pretty complicated when 'do stuff' includes removing something from the list:

prev = NULL;
for (element = head; element; element = element->next)
    ...
    if (found)
        break;
    ...
    prev = element

if (prev != NULL)
    prev->next = element->next;
else
    head = element->next;

An extra variable, and a test to figure out how to re-link the list. Bob showed me a simpler way, which I'm sure many people are familiar with:

for (ptr = &head; (element = *ptr); ptr = &(element->next))
    ...
    if (found)
        break;

*ptr = element->next;

Insertion is similar, as you would expect:

for (ptr = &head; (element = *ptr); ptr = &(element->next))
    if (found)
        break;

new_element->next = element;
*ptr = new_element;

In terms of memory operations, it's the same -- each 'next' pointer is fetched exactly once and the list is re-linked by performing a single store. In terms of reading the code, once you've seen this pattern, getting rid of the extra variable and the conditionals around the list update makes it shorter and less prone to errors.

In the nano_mallocr code, instead of using 'prev = NULL', it actually used 'prev = free_list', and the test for updating the head was 'prev == element', which really caught me unawares.

System Parameters

Any malloc implementation needs to know a couple of things about the system it's running on:

  1. Address space. The maximum range of possible addresses sets the limit on how large a block of memory might be allocated, and hence the size of the 'size' field. Fortunately, we've got the 'size_t' type for this, so we can just use that.

  2. Alignment requirements. These derive from the alignment requirements of the basic machine types (pointers, integers and floating point numbers), which in turn come from a combination of hardware constraints (some systems will fault if attempting to use memory with the wrong alignment) and a compromise between memory usage and memory system performance.

I decided to let the system tell me the alignment necessary using a special type declaration and the 'offsetof' operation:

typedef struct {
    char c;
    union {
    void *p;
    double d;
    long long ll;
    size_t s;
    } u;
} align_t;

#define MALLOC_ALIGN        (offsetof(align_t, u))

Because C requires struct fields to be stored in order of declaration, the 'u' field would have to be after the 'c' field, and would have to be assigned an offset equal to the largest alignment necessary for any of its members. Testing on a range of machines yields the following alignment requirements:

Architecture  Alignment
x86_64        8
RISC-V        8
aarch64       8
arm           8
x86           4
So, I guess I could have just used a constant value of '8' and not worried about it, but using the compiler-provided value means that running picolibc on older architectures might save a bit of memory at no real cost in the code.

Now, the header containing the 'size' field can be aligned to this value, and all allocated blocks can be allocated in units of this value.
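
A small sketch of what that means in practice (my own illustration, not the library source): every request gets rounded up to a multiple of MALLOC_ALIGN before a chunk is carved out, which works because MALLOC_ALIGN is a power of two on all of the architectures listed above.

/* Round a request up to the discovered alignment. */
#define ALIGN_UP(n)  (((n) + MALLOC_ALIGN - 1) & ~(size_t)(MALLOC_ALIGN - 1))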

memalign

memalign, valloc and pvalloc all allocate memory with restrictions on the alignment of the base address and length. You'd think these would be simple -- allocate a large chunk, align within that chunk and return the address. However, they also all require that the address can be successfully passed to free. Which means that the allocator needs to do some tricks to make it all work. Essentially, you allocate 'lots' of memory and then arrange that any bytes at the head and tail of the allocation can be returned to the free list.

The tail part is easy; if it's large enough to form a free chunk (which must contain the size and a 'next' pointer for the free list), it can be split off. Otherwise, it just sits at the end of the allocation being wasted space.

The head part is a bit tricky when it's not large enough to form a free chunk. That's where the 'padding' business came in handy; that can be as small as a 'size_t' value, which (on 32-bit systems) is only four bytes.

Now that we're giving up trying to reason about 'padding', any extra block at the start must be big enough to hold a free block, which includes the size and a next pointer. On 32-bit systems, that's just 8 bytes which (for most of our targets) is the same as the alignment value we're using. On 32-bit systems that can use 4-byte alignment, and on 64-bit systems, it's possible that the alignment required by the application for memalign and the alignment of a chunk returned by malloc might be off by too small an amount to create a free chunk.

So, we just allocate a lot of extra space; enough so that we can create a block of size 'toosmall + align' at the start and create a free chunk of memory out of that.

This works, and at least returns all of the unused memory back for other allocations.
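
As a rough sketch of the arithmetic only (handing the head and tail back to the free list needs allocator internals, so the pointer returned here could not actually be passed to free):

#include <stdint.h>
#include <stdlib.h>

/* Over-allocate so that an aligned address can always be found while leaving
 * at least enough head room for a minimal free chunk (size field plus next
 * pointer); MIN_CHUNK is an assumption made for this illustration. */
#define MIN_CHUNK  (2 * sizeof(size_t))

static void *memalign_sketch(size_t align, size_t size)
{
    char *raw = malloc(size + align + MIN_CHUNK);
    if (raw == NULL)
        return NULL;

    uintptr_t p = (uintptr_t)raw + MIN_CHUNK;   /* leave room for a head chunk */
    char *aligned = (char *)((p + align - 1) & ~(uintptr_t)(align - 1));

    /* [raw, aligned) is now at least MIN_CHUNK bytes and could become a free
     * chunk of its own; anything past aligned + size forms the tail. */
    return aligned;
}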

Sending Patches Back to Newlib

I've sent the floating point fixes upstream to newlib where they've already landed on master. I've sent most of the malloc fixes, but I'm not sure they really care about seeing nano_mallocr refactored. If they do, I'll spend the time necessary to get the changes ported back to the newlib internal APIs and merged upstream.

Planet DebianJohn Goerzen: In Which COVID-19 Misinformation Leads To A Bunch of Graphs Made With Rust

A funny — and by funny, I mean sad — thing has happened. Recently the Kansas Department of Health and Environment (KDHE) has been analyzing data from the patchwork implementation of mask requirements in Kansas. They came to a conclusion that shouldn’t be surprising to anyone: masks help. They published a chart showing this. A right-wing propaganda publication got ahold of this, and claimed the numbers were “doctored” because there were two different Y-axes.

I set about to analyze the data myself from public sources, and produced graphs of various kinds using a single Y-axis and supporting the idea that the graphs were not, in fact, doctored. Here’s one graph that’s showing that:

In order to do that, I had imported COVID-19 data from various public sources. Many states in the US are large enough to have significant variation in COVID-19 conditions, and many of the sources people look at don’t show county-level data over time. I wanted to do that.

Eventually, I wrote covid19db, which ingests data from a number of public sources and generates a SQLite database file. Using Github Actions, this file is automatically updated every morning and available for download. Or, you can download the code and generate a database yourself locally.

Then, I wrote covid19ks, which generates various pretty graphs covering the data. These graphs, incidentally, turn out to highlight just how poorly the United States is doing compared to the rest of the industrialized world.

I hope that these resources, and especially covid19db, might be useful to others that would like to analyze the data. The code isn’t the prettiest since it was done in a hurry, but I think that functionally this is useful.

Planet DebianReproducible Builds (diffoscope): diffoscope 156 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 156. This version includes the following changes:

[ Chris Lamb ]
* Update PPU tests for compatibility with Free Pascal versions 3.2.0 or
  greater. (Closes: #968124)
* Emit a debug-level logging message when our ppudump(1) version does not
  match file header.
* Add and use an assert_diff helper that loads and compares a fixture output
  to avoid a bunch of test boilerplate.

[ Frazer Clews ]
* Apply some pylint suggestions to the codebase.

You can find out more by visiting the project homepage.

,

CryptogramThe NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It's hard to be clandestine in our always connected world.

News articles.

CryptogramUAE Hack and Leak Operations

Interesting paper on recent hack-and-leak operations attributed to the UAE:

Abstract: Four hack-and-leak operations in U.S. politics between 2016 and 2019, publicly attributed to the United Arab Emirates (UAE), Qatar, and Saudi Arabia, should be seen as the "simulation of scandal" ­-- deliberate attempts to direct moral judgement against their target. Although "hacking" tools enable easy access to secret information, they are a double-edged sword, as their discovery means the scandal becomes about the hack itself, not about the hacked information. There are wider consequences for cyber competition in situations of constraint where both sides are strategic partners, as in the case of the United States and its allies in the Persian Gulf.

Planet DebianSven Hoexter: An Average IT Org

Supply chain attacks are a known issue, and lately there was also a discussion around the relevance of reproducible builds. Looking in comparison at an average IT org doing something with the internet, I believe the pressing problem is neither supply chain attacks nor a lack of reproducible builds. The real problem is the amount of prefabricated binaries, supplied by someone else and created in an unknown build environment with unknown tools, that the average IT org requires to do anything.

The Mess the World Runs on

By chance I had an opportunity to look at what some other people I know use, and here is the list I could compile by scratching just at the surface:

  • 80% of what HashiCorp releases. Vagrant, packer, nomad, terraform, just all of it. In the case of terraform of course with a bunch of providers and for Vagrant with machine images from the official registry.
  • Lots of ansible use cases, usually retrieved by pip.
  • Jenkins + a myriad of plugins from the Jenkins plugin registry.
  • All the tools/SDKs of a cloud provider du jour to interface with the Cloud. Mostly via 3rd party Debian repository.
  • docker (the repo for dockerd) and DockerHub
  • Mobile SDKs.
  • Kafka fetched somewhere from apache.org.
  • Binary downloads from github. Many. Go and Rust make it possible.
  • Elastic, more or less the whole stack they offer via their Debian repo.
  • Postgres + the tools around it from the apt.postgresql.org Debian repo.
  • archive.debian.org because it's hard to keep up at times.
  • Maven Central.

Of course there are also all the script language repos - Python, Ruby, Node/Typescript - around as well.

Looking at myself, who's working in a different IT org but with a similar focus, I have the following lingering around on my for-work laptop, all retrieved as binaries from a 3rd party:

  • dockerd from the docker repo
  • vscode from the microsoft repo
  • vivaldi from the vivaldi repo
  • Google Cloud SDK from the google repo
  • terraform + all the providers from hashicorp
  • govc from github
  • containerdiff from github (yes, by now included in Debian main)
  • github gh cli tool from github
  • wtfutil from github

Yes, some of that is even non-free and might contain spyw^Wtelemetry.

Takeaway I

Guessing based on the Pareto principle, probably 80% of the software mentioned above is also open source software. But, and here we leave Pareto behind, close to none of it is built by the average IT org from source.

Why should the average IT org care about advanced issues like supply chain attacks on source code and mitigations, when it already gets into very hot water the day DockerHub closes down, HashiCorp moves from open core to full proprietary or Elastic decides to no longer offer free binary builds?

The reality out there seems to be that the infrastructure of "modern" IT orgs is managed similarly to the Windows 95 installation of my childhood. You just grab running binaries from somewhere and run them. The main difference seems to be that you no longer have the inconvenience of downloading a .xls from geocities that you have to rename to .rar, and that it's legal.

Takeaway II

In the end the binary supply is like a drug for the user, and somehow the Debian project is also just another dealer / middle man in this setup. There are probably a lot of open questions to think about in that context.

Are we the better dealer because we care about signed sources we retrieve from upstream and because we engage in reproducible build projects?

Are our own means of distributing binaries any better than a binary download from github via https with a manual checksum verification, or the Debian repo at download.docker.com?

Is the approach of the BSD/Gentoo ports, where you have to compile at least some software from source, the better one?

Do I really want to know how some of the software is actually build?

Or some more candid ones like is gnutls a good choice for the https support in apt and how solid is the gnupg code base? Update: Regarding apt there seems to be some movement.

Planet DebianNorbert Preining: Switching from KDE/Plasma to Gnome3 for one week

This guy is doing great work providing patches to improve kwin, and then tried Gnome3 for a week, 7 days. His verdict:

overall, after one week of using this, I can say…

it’s f**king HORRIBLE!!!

…well, it’s not as bad as I thought… but yeah, it was overall a pretty unpleasant experience…
I definitely have seen improvements, but the desktop is still not in shape.

I mean, it’s still not as usable as KDE is….

Honestly, I can’t agree more. I have tried Gnome3 for over a year, again and again, and it feels like a block of concrete put onto the feet of dissidents by Italian mafia bosses. It drowns and kills you.

Here are the links: Start, Day 1, Day 2, Day 3, Day 4, Day 5, Day 6, Day 7.

Thanks a lot for these blog posts, incredibly informative and convincing!

Kevin RuddBBC World: US-China Tensions

E&OE TRANSCRIPT
TV INTERVIEW
BBC WORLD
13 AUGUST 2020

Topics: Foreign Affairs article ‘Beware the Guns of August – in Asia’

Mike Embley
Beijing’s crackdown on Hong Kong’s democracy movement has attracted strong criticism, with both Washington and Beijing hitting key figures with sanctions and closing consulates in recent weeks. But that’s not the only issue where the two countries don’t see eye to eye; tensions have been escalating on a range of fronts, including the Chinese handling of the pandemic, the American decision to ban Huawei and Washington’s allegations of human rights abuses against Uighur Muslims in Xinjiang. So where is all this heading? Let’s try and find out as we speak to Kevin Rudd, former Australian Prime Minister, of course, now the president of the Asia Society Policy Institute. Welcome, very good to talk to you. You’ve been very vocal about China’s attitudes to democracy in Hong Kong, and also the tit-for-tat sanctions between the US and China; where do you think all this is heading?

Kevin Rudd
Well, if our prism for analysis is where does the US-China relationship go, the bottom line is we haven’t seen this relationship in such fundamental disrepair in about half a century. And as a result, whether it’s Hong Kong, or whether it’s Taiwan or events unfolding in the South China Sea, this is pushing the relationship into greater and greater levels of crisis. What concerns those of us who study this professionally, and who know both systems of government reasonably well, both in Beijing and Washington, is that the probability of a crisis unfolding either in the Taiwan Straits or in the South China Sea is now growing. And the probability of escalation into a serious shooting match is now real. And the lesson of history is it’s very difficult to de-escalate under those circumstances.

Mike Embley
Yes, I think you’ve spoken in terms of the risk of a hot war, actual war between the US and China. Are you serious?

Kevin Rudd
I am serious and I’ve not said this before. I’ve been a student of US-China relations for the last 35 years. And I take a genuinely sceptical approach to people who have sounded the alarms in previous periods of the relationship. But those of us who have observed this through the prism of history, I think, have got a responsibility to say to decision makers both in Washington and in Beijing right now: be careful what you wish for, because this is catapulting in a particular direction. When you look at the South China Sea in particular, there you have a huge amount of metal on metal, that is, a large number of American ships and a large number of People’s Liberation Army Navy ships, and a similar number of aircraft. The rules of engagement and the standard operating procedures of these vessels are unbeknownst to the rest of us, and we’ve had near misses before. What I’m pointing to is that if we actually have a collision, or a sinking or a crash, what then ensues in terms of crisis management on both sides? When we last had this in 2001-2002 in the Bush administration, the state of the US-China relationship was pretty good. Right now, 20 years later, it is fundamentally appalling. That’s why many of us are deeply concerned, and are sounding this concern both to Beijing and Washington.

Mike Embley
And yet you know, of course, China is such a power economically and is making its presence felt in so many places in the world. There is a sense that really China can pretty much do what it wants. How do you avoid the kind of situation you’re describing?

Kevin Rudd
Well, the government in Beijing needs to understand the importance of restraint as well, in terms of its own calculus of its own long-term national interests. And that is: China’s current course of action across a range of fronts is in fact causing a massive international reaction against China now, unprecedented against, again, the measures of the last 40 or 50 years. You now have fundamental dislocations in the relationship not just with Washington, but with Canada, with Australia, with the United Kingdom, with Japan, with the Republic of Korea, and a whole bunch of others as well, including those in various parts of continental Europe. And so therefore, looking at this from the prism of Beijing’s own interests, there are those in Beijing who will be raising the argument: are we pushing too far, too hard, too fast? And the responsibility of the rest of us is to say to that cautionary advice within Beijing, all power to your arm in restraining China from this course of action, but also, in equal measure, to say to our friends in Washington, particularly in a presidential election season where Republicans and Democrats are seeking to outflank each other to the right on China strategy, that this is no time to engage in, shall we say, symbolic acts for a domestic political purpose in the United States presidential election context, which can have real national security consequences in Southeast Asia and then globally.

Mike Embley
Mr. Rudd, you say very clearly what you hope will happen, what you hope China will realize; what do you think actually will happen? Are you optimistic, in a nutshell, or pessimistic?

Kevin Rudd
The reason for me writing the piece I’ve just done in Foreign Affairs Magazine, which is entitled “Beware The Guns of August”, for those of us obviously familiar with what happened in August of 1914, is that on balance I am pessimistic that the political cultures in both capitals right now are fully seized of the risks that they are playing with on the high seas and over Taiwan as well. Hong Kong, the matters you were referring to before, frankly, add further to the deterioration of the surrounding political relationship between the two countries. But in terms of incendiary actions of a national security nature, it’s events in the Taiwan straits and it’s events on the high seas in the South China Sea, which are most likely to trigger this. And to answer your question directly: right now, until we see the other side of the US presidential election, I remain on balance concerned and pessimistic.

Mike Embley
Right. Kevin Rudd, thank you very much for talking to us.

Kevin Rudd
Good to be with you.

The post BBC World: US-China Tensions appeared first on Kevin Rudd.

Kevin RuddAustralian Jewish News: Michael Gawenda and ‘The Powerbroker’

With the late Shimon Peres in 2012.

This article was first published by The Australian Jewish News on 13 August 2020.

The factional manoeuvrings of Labor’s faceless men a decade ago are convoluted enough without demonstrable misrepresentations by authors like Michael Gawenda in his biography of Mark Leibler, The Powerbroker.

Gawenda claims my memoir, The PM Years, blames the leadership coup on Leibler’s hardline faction of Australia’s Israel lobby, “plotting” in secret with Julia Gillard – a vision of “extreme, verging on conspiratorial darkness”. This is utter fabrication on his part. My simple challenge to Gawenda is to specify where I make such claims. He can’t. If he’d bothered to call me before publishing, I would have told him so.

Let me be clear: I have never claimed, nor do I believe, that Leibler or AIJAC were involved in the coup. It was conceived and executed almost entirely by factional warlords who blamed me for stymieing their individual ambitions.

It’s true my relationship with Leibler was strained in 2010 after Mossad agents stole the identities of four Australians living in Israel. Using false passports, they slipped into Dubai to assassinate a Hamas operative. They broke our laws and breached our trust.

The Mossad also jeopardised the safety of every Australian who travels on our passports in the Middle East. Unless this stopped, any Australian would be under suspicion, exposing them to arbitrary detention or worse.

More shocking, this wasn’t their first offence. The Mossad explicitly promised to stop abusing Australian passports after an incident in 2003, in a memorandum kept secret to spare Israel embarrassment. It didn’t work. They reoffended because they thought Australia was weak and wouldn’t complain.

We needed a proportional response to jolt Israeli politicians to act, without fundamentally damaging our valued relationship. Australia’s diplomatic, national security and intelligence establishments were unanimous: we should expel the Mossad’s representative in Canberra. This would achieve our goal but make little practical difference to Australia-Israel cooperation. Every minister in the national security committee agreed, including Gillard.

But obdurate elements of Australia’s Israel lobby accused us of overreacting. How could we treat our friend Israel like this? How did we know it was them? Wasn’t this just the usual murky business of espionage? According to Leibler, Diaspora leaders should “not criticise any Israeli government when it comes to questions of Israeli security”. Any violation of law, domestic or international, is acceptable. Never mind every citizen’s duty to uphold our laws and protect Australian lives.

I invited Leibler and others to dinner at the Lodge to reassure them the affair, although significant, wouldn’t derail the relationship. I sat politely as Leibler berated me. Boasting of his connections, he wanted to personally arrange meetings with the Mossad to smooth things over. We had, of course, already done this.

Apropos of nothing, Leibler then leaned over and, in what seemed to me a slightly menacing manner, suggested Julia was “looking very good in the polls” and “a great friend of Israel”. This surprised me, not least because I believed, however foolishly, that my deputy was loyal.

Leibler’s denials are absorbed wholly by Gawenda, solely on the basis of his notes. Give us a break, Michael – why would Leibler record such behaviour? It’s also meaningless that others didn’t hear him since, as often happens at dinners, multiple conversations occur around the table. The truth is it did happen, hence why I recorded it in my book. I have no reason to invent such an anecdote.

In fairness to Gillard, her eagerness to befriend Leibler reflected the steepness of her climb on Israel. She emerged from organisations that historically antagonised Israel – the Socialist Left and Australian Union of Students – and often overcompensated by swinging further towards AIJAC than longstanding Labor policy allowed.

By contrast, my reputation was well established, untainted by the anti-Israel sentiment sometimes found on the political left. A lifelong supporter of Israel and security for its people, I defied Labor critics by proudly leading Parliament in praise of the Jewish State’s achievements. I have consistently denounced the BDS campaign targeting Israeli businesses, both in office and since. My government blocked numerous shipments of potential nuclear components to Iran, and commissioned legal advice on charging president Mahmoud Ahmadinejad with incitement to genocide against the Jewish people. I’m as proud of this record as I am of my longstanding support for a two-state solution.

I have never considered that unequivocal support for Israel means unequivocal support for the policies of the Netanyahu government. For example, the annexation plan in the West Bank would be disastrous for Israel’s future security and fundamentally breach international law – a view shared by UK Conservative PM Boris Johnson. Israel, like the Australian Jewish community, is not monolithic; my concerns are shared by ordinary Israelis as well as many members of the Knesset.

Michael Gawenda is free to criticise me for things I’ve said and done (ironically, as editor of The Age, he didn’t consider me left-wing enough!), but his assertions in this account are flatly untrue.

The post Australian Jewish News: Michael Gawenda and ‘The Powerbroker’ appeared first on Kevin Rudd.

Planet DebianErich Schubert: Publisher MDPI lies to prospective authors

The publisher MDPI is a spammer and lies.

If you upload a paper draft to arXiv, MDPI will send spam to the authors to solicit submission. Within minutes of an upload I received the following email (sent by MDPI staff, not some overly eager new editor):

We read your recent manuscript "[...]" on
arXiv, and sincerely invite you to submit it to our journal Future
Internet, if it has not been published or submitted elsewhere.

Future Internet (ISSN 1999-5903, indexed by Scopus, Ei compendex,
*ESCI*-Web of Science) is a journal on Internet technologies and the
information society. It maintains a rigorous and fast peer review system
with a median publication time of 35 days from submission to online
publication, and 3 days from acceptance to publication. The journal
scope is shown here:
https://www.mdpi.com/journal/futureinternet/about.
Editorial Board: https://www.mdpi.com/journal/futureinternet/editors.

Since Future Internet is an open access journal there is a publication
fee. Your paper will be published, with a 20% discount (amounting to 200
CHF), and provided that it is accepted after our standard peer-review
procedure. 

First of all, the email begins with a lie, because this paper clearly states that it is submitted elsewhere. Also, it fits other journals much better, and if they had read even just the abstract, they would have known.

This is predatory behavior by MDPI. Clearly, it is just about getting as many submissions as possible. The journal charges 1000 CHF (next year, 1400 CHF) to publish the papers. It's about the money.

Also, there have been reports that MDPI ignores the reviews, and always publishes even when reviewers recommended rejection…

The reviewer requests I have received from MDPI came with unreasonable deadlines, which will not allow for a thorough peer review. Hence I asked not to be emailed by them ever again. I must assume that many other qualified reviewers do the same. MDPI boasts in their 2019 annual report a median time to first decision of 19 days – in my discipline the typical time window to ask for reviews is at least a month (for shorter conference papers, not full journal articles), because professors tend to have lots of other duties, hence they need more flexibility. The above paper was submitted in March, and has now been under review for 4 months already. This is an annoyingly long time window, and I would appreciate it if this were less, but it shows how extremely short the MDPI time frame is. They also claim 269.1k submissions and 106.2k published papers, so the acceptance rate is around 40% on average, and assuming that there are some journals with higher standards in there, then some must have acceptance rates much higher than this. I’d assume that many reputable journals have a 40% desk-rejection rate for papers that are not even on-topic …

The average cost to authors is given as 1144 CHF (after discounts, 25% waived fees, etc.), so we are talking about 120 million CHF of revenue from authors. Is that what you want academic publishing to be?

I am not happy with some of the established publishers such as Elsevier that also overcharge universities heavily. I do think we need to change academic publishing, and arXiv is a big improvement here. But I do not respect publishers such as MDPI that lie and send spam.

Worse Than FailureCodeSOD: Don't Stop the Magic

Don’t you believe in magic strings and numbers being bad? From the perspective of readability and future maintenance, constants are better. We all know this is true, and we all know that it can sometimes go too far.

Douwe Kasemier has a co-worker that has taken that a little too far.

For example, they have a Java method with a signature like this:

Document addDocument(Action act, boolean createNotification);

The Action type contains information about what action to actually perform, but it will result in a Document. Sometimes this creates a notification, and sometimes it doesn’t.

Douwe’s co-worker was worried about the readability of addDocument(myAct, true) and addDocument(myAct, false), so they went ahead and added some constants:

    private static final boolean NO_NOTIFICATION = false;
    private static final boolean CREATE_NOTIFICATION = true;

Okay, now, I don’t love this, but it’s not the worst thing…

public Document doActionWithNotification(Action act) {
  return addDocument(act, CREATE_NOTIFICATION);
}

public Document doActionWithoutNotification(Action act) {
  return addDocument(act, NO_NOTIFICATION);
}

Okay, now we’re just getting silly. This is at least diminishing returns of readability, if not actively harmful to making the code clear.

    private static final int SIX = 6;
    private static final int FIVE = 5;
    public String findId(String path) {
      String[] folders = path.split("/");
      if (folders.length >= SIX && (folders[FIVE].startsWith(PREFIX_SR) || folders[FIVE].startsWith(PREFIX_BR))) {
          return folders[FIVE].substring(PREFIX_SR.length());
      }
      return null;
    }

Ah, there we go. The logical conclusion: constants for 5 and 6. And yet they didn’t feel the need to make a constant for "/"?

At least this is maintainable, so that when the value of FIVE changes, the method doesn’t need to change.


,

Planet DebianNorbert Preining: KDE/Plasma Status Update 2020-08-13

Short status update on my KDE/Plasma packages for Debian sid and testing:

  • Frameworks update to 5.73
  • Apps are now available from the main archive in the same upstream version, superseding my packages
  • New architecture: aarch64

Hope that helps a few people. See this post for how to setup archives.

Enjoy.

CryptogramSmart Lock Vulnerability

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec's $139.99 UltraLoq is marketed as a "secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code."

Users can share temporary codes and 'Ekeys' to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device's MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they're doing.

EDITED TO ADD (8/12): More.

CryptogramCybercrime in the Age of COVID-19

The Cambridge Cybercrime Centre has a series of papers on cybercrime during the coronavirus pandemic.

EDITED TO ADD (8/12): Interpol report.

CryptogramTwitter Hacker Arrested

A 17-year-old Florida boy was arrested and charged with last week's Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

EDITED TO ADD (8/1): Two others have been charged as well.

EDITED TO ADD (8/11): The online bail hearing was hacked.

Krebs on SecurityWhy & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.

YOUR CREDIT FILES

First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.

YOUR FINANCIAL INSTITUTIONS

I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers because if you don’t someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.

YOUR GOVERNMENT

Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were massively registering people at the IRS’s website to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state driver’s licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to let them know when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.

YOUR HOME

Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or nctue.com. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800-number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.

CryptogramCryptanalysis of an Old Zip Encryption Algorithm

Mike Stay broke an old zipfile encryption algorithm to recover $300,000 in bitcoin.

DefCon talk here.

Planet DebianMichael Stapelberg: Hermetic packages (in distri)

In distri, packages (e.g. emacs) are hermetic. By hermetic, I mean that the dependencies a package uses (e.g. libusb) don’t change, even when newer versions are installed.

For example, if package libusb-amd64-1.0.22-7 is available at build time, the package will always use that same version, even after the newer libusb-amd64-1.0.23-8 will be installed into the package store.

Another way of saying the same thing is: packages in distri are always co-installable.

This makes the package store more robust: additions to it will not break the system. On a technical level, the package store is implemented as a directory containing distri SquashFS images and metadata files, into which packages are installed in an atomic way.

Out of scope: plugins are not hermetic by design

One exception where hermeticity is not desired are plugin mechanisms: optionally loading out-of-tree code at runtime obviously is not hermetic.

As an example, consider glibc’s Name Service Switch (NSS) mechanism. Section 29.4.1 (Adding another Service to NSS) of the glibc manual describes how glibc searches $prefix/lib for shared libraries at runtime.

Debian ships about a dozen NSS libraries for a variety of purposes, and enterprise setups might add their own into the mix.

systemd (as of v245) accounts for 4 NSS libraries, e.g. nss-systemd for user/group name resolution for users allocated through systemd’s DynamicUser= option.

Having packages be as hermetic as possible remains a worthwhile goal despite any exceptions: I will gladly use a 99% hermetic system over a 0% hermetic system any day.

Side note: Xorg’s driver model (which can be characterized as a plugin mechanism) does not fall under this category because of its tight API/ABI coupling! For this case, where drivers are only guaranteed to work with precisely the Xorg version for which they were compiled, distri uses per-package exchange directories.

Implementation of hermetic packages in distri

On a technical level, the requirement is: all paths used by the program must always result in the same contents. This is implemented in distri via the read-only package store mounted at /ro, e.g. files underneath /ro/emacs-amd64-26.3-15 never change.

To change all paths used by a program, in practice, three strategies cover most paths:

ELF interpreter and dynamic libraries

Programs on Linux use the ELF file format, which contains two kinds of references:

First, the ELF interpreter (PT_INTERP segment), which is used to start the program. For dynamically linked programs on 64-bit systems, this is typically ld.so(8).

Many distributions use system-global paths such as /lib64/ld-linux-x86-64.so.2, but distri compiles programs with -Wl,--dynamic-linker=/ro/glibc-amd64-2.31-4/out/lib/ld-linux-x86-64.so.2 so that the full path ends up in the binary.

The ELF interpreter is shown by file(1), but you can also use readelf -a $BINARY | grep 'program interpreter' to display it.

And secondly, the rpath, a run-time search path for dynamic libraries. Instead of storing full references to all dynamic libraries, we set the rpath so that ld.so(8) will find the correct dynamic libraries.

Originally, we used to just set a long rpath, containing one entry for each dynamic library dependency. However, we have since switched to using a single lib subdirectory per package as its rpath, and placing symlinks with full path references into that lib directory, e.g. using -Wl,-rpath=/ro/grep-amd64-3.4-4/lib. This is better for performance, as ld.so uses a per-directory cache.

Note that program load times are significantly influenced by how quickly you can locate the dynamic libraries. distri uses a FUSE file system to load programs from, so getting proper -ENOENT caching into place drastically sped up program load times.

Instead of compiling software with the -Wl,--dynamic-linker and -Wl,-rpath flags, one can also modify these fields after the fact using patchelf(1). For closed-source programs, this is the only possibility.

The rpath can be inspected by using e.g. readelf -a $BINARY | grep RPATH.

Environment variable setup wrapper programs

Many programs are influenced by environment variables: to start another program, said program is often found by checking each directory in the PATH environment variable.

Such search paths are prevalent in scripting languages, too, to find modules. Python has PYTHONPATH, Perl has PERL5LIB, and so on.

To set up these search path environment variables at run time, distri employs an indirection. Instead of e.g. teensy-loader-cli, you run a small wrapper program that calls precisely one execve system call with the desired environment variables.

Initially, I used shell scripts as wrapper programs because they are easily inspectable. This turned out to be too slow, so I switched to compiled programs. I’m linking them statically for fast startup, and I’m linking them against musl libc for significantly smaller file sizes than glibc (per-executable overhead adds up quickly in a distribution!).
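
For illustration, here is a minimal sketch of such a wrapper in Go. The real distri wrappers are generated per package, statically linked, and set whichever variables the package needs (PATH, PERL5LIB, PYTHONPATH, …); the teensy-loader-cli paths below are taken from the demo in the appendix, and only PATH is handled here:

package main

import (
	"os"
	"syscall"
)

func main() {
	// everything below this out directory never changes (hermetic packaging)
	const out = "/ro/teensy-loader-cli-amd64-2.1+g20180927-7/out"
	// prepend to PATH rather than replacing it, so users can still extend it
	os.Setenv("PATH", out+"/bin:"+os.Getenv("PATH"))
	// precisely one execve(2): replace this process with the real program
	argv := append([]string{out + "/bin/teensy_loader_cli"}, os.Args[1:]...)
	if err := syscall.Exec(argv[0], argv, os.Environ()); err != nil {
		panic(err)
	}
}

Because the wrapper ends in a single execve(2), it adds only one extra process start to each program invocation.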

Note that the wrapper programs prepend to the PATH environment variable; they don’t replace it in its entirety. This is important so that users have a way to extend the PATH (and other variables) if they so choose. This doesn’t hurt hermeticity because it is only relevant for programs that were not present at build time, i.e. plugin mechanisms which, by design, cannot be hermetic.

Shebang interpreter patching

The Shebang of scripts contains a path, too, and hence needs to be changed.

We don’t do this in distri yet (the number of packaged scripts is small), but we should.
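
To illustrate what such patching could look like, here is a minimal sketch in Go (not actual distri code). The target interpreter would be a fully qualified /ro path, e.g. a hypothetical /ro/perl-…/out/bin/perl; for brevity, any arguments on the shebang line are dropped:

package main

import (
	"bytes"
	"os"
)

// patchShebang rewrites the interpreter in a script's first line, e.g.
// "#!/usr/bin/perl" -> "#!/ro/perl-…/out/bin/perl" (hypothetical path).
func patchShebang(path, interpreter string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if !bytes.HasPrefix(b, []byte("#!")) {
		return nil // not a script, nothing to do
	}
	rest := []byte{}
	if idx := bytes.IndexByte(b, '\n'); idx != -1 {
		rest = b[idx:]
	}
	return os.WriteFile(path, append([]byte("#!"+interpreter), rest...), 0755)
}

func main() {
	if err := patchShebang(os.Args[1], os.Args[2]); err != nil {
		panic(err)
	}
}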

Performance requirements

The performance improvements in the previous sections are not just good to have, but practically required when many processes are involved: without them, you’ll encounter second-long delays in magit which spawns many git processes under the covers, or in dracut, which spawns one cp(1) process per file.

Downside: rebuild of packages required to pick up changes

Linux distributions such as Debian consider it an advantage to roll out security fixes to the entire system by updating a single shared library package (e.g. openssl).

The flip side of that coin is that changes to a single critical package can break the entire system.

With hermetic packages, all reverse dependencies must be rebuilt when a library’s changes should be picked up by the whole system. E.g., when openssl changes, curl must be rebuilt to pick up the new version of openssl.

This approach trades off using more bandwidth and more disk space (temporarily) against reducing the blast radius of any individual package update.

Downside: long env variables are cumbersome to deal with

This can be partially mitigated by removing empty directories at build time, which will result in shorter variables.

In general, there is no getting around this. One little trick is to use tr : '\n', e.g.:

distri0# echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/ro/openssh-amd64-8.2p1-11/out/bin

distri0# echo $PATH | tr : '\n'
/usr/bin
/bin
/usr/sbin
/sbin
/ro/openssh-amd64-8.2p1-11/out/bin

Edge cases

The implementation outlined above works well in hundreds of packages, and only a small handful exhibited problems of any kind. Here are some issues I encountered:

Issue: accidental ABI breakage in plugin mechanisms

NSS libraries built against glibc 2.28 and newer cannot be loaded by glibc 2.27. In all likelihood, such changes do not happen too often, but it does illustrate that glibc’s published interface spec is not sufficient for forwards and backwards compatibility.

In distri, we could likely use a per-package exchange directory for glibc’s NSS mechanism to prevent the above problem from happening in the future.

Issue: wrapper bypass when a program re-executes itself

Some programs try to arrange for themselves to be re-executed outside of their current process tree. For example, consider building a program with the meson build system:

  1. When meson first configures the build, it generates ninja files (think Makefiles) which contain command lines that run the meson --internal helper.

  2. Once meson returns, ninja is called as a separate process, so it will not have the environment which the meson wrapper sets up. ninja then runs the previously persisted meson command line. Since the command line uses the full path to meson (not to its wrapper), it bypasses the wrapper.

Luckily, not many programs try to arrange for other process trees to run them. Here is a table summarizing how affected programs might try to arrange for re-execution, whether the technique results in a wrapper bypass, and what we do about it in distri:

technique to execute itself          | uses wrapper? | mitigation
run-time: find own basename in PATH  | yes           | wrapper program
compile-time: embed expected path    | no; bypass!   | configure or patch
run-time: argv[0] or /proc/self/exe  | no; bypass!   | patch

One might think that setting argv[0] to the wrapper location seems like a way to side-step this problem. We tried doing this in distri, but had to revert and go the other way.


Appendix: Could other distributions adopt hermetic packages?

At a very high level, adopting hermetic packages will require two steps:

  1. Using fully qualified paths whose contents don’t change (e.g. /ro/emacs-amd64-26.3-15) generally requires rebuilding programs, e.g. with --prefix set.

  2. Once you use fully qualified paths you need to make the packages able to exchange data. distri solves this with exchange directories, implemented in the /ro file system which is backed by a FUSE daemon.

The first step is pretty simple, whereas the second step is where I expect controversy around any suggested mechanism.

Appendix: demo (in distri)

This appendix contains commands and their outputs, run on upcoming distri version supersilverhaze, but verified to work on older versions, too.


The /bin directory contains symlinks for the union of all package’s bin subdirectories:

distri0# readlink -f /bin/teensy_loader_cli
/ro/teensy-loader-cli-amd64-2.1+g20180927-7/bin/teensy_loader_cli

The wrapper program in the bin subdirectory is small:

distri0# ls -lh $(readlink -f /bin/teensy_loader_cli)
-rwxr-xr-x 1 root root 46K Apr 21 21:56 /ro/teensy-loader-cli-amd64-2.1+g20180927-7/bin/teensy_loader_cli

Wrapper programs execute quickly:

distri0# strace -fvy /bin/teensy_loader_cli |& head | cat -n
     1  execve("/bin/teensy_loader_cli", ["/bin/teensy_loader_cli"], ["USER=root", "LOGNAME=root", "HOME=/root", "PATH=/ro/bash-amd64-5.0-4/bin:/r"..., "SHELL=/bin/zsh", "TERM=screen.xterm-256color", "XDG_SESSION_ID=c1", "XDG_RUNTIME_DIR=/run/user/0", "DBUS_SESSION_BUS_ADDRESS=unix:pa"..., "XDG_SESSION_TYPE=tty", "XDG_SESSION_CLASS=user", "SSH_CLIENT=10.0.2.2 42556 22", "SSH_CONNECTION=10.0.2.2 42556 10"..., "SSHTTY=/dev/pts/0", "SHLVL=1", "PWD=/root", "OLDPWD=/root", "=/usr/bin/strace", "LD_LIBRARY_PATH=/ro/bash-amd64-5"..., "PERL5LIB=/ro/bash-amd64-5.0-4/ou"..., "PYTHONPATH=/ro/bash-amd64-5.0-4/"...]) = 0
     2  arch_prctl(ARCH_SET_FS, 0x40c878)       = 0
     3  set_tid_address(0x40ca9c)               = 715
     4  brk(NULL)                               = 0x15b9000
     5  brk(0x15ba000)                          = 0x15ba000
     6  brk(0x15bb000)                          = 0x15bb000
     7  brk(0x15bd000)                          = 0x15bd000
     8  brk(0x15bf000)                          = 0x15bf000
     9  brk(0x15c1000)                          = 0x15c1000
    10  execve("/ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli", ["/ro/teensy-loader-cli-amd64-2.1+"...], ["USER=root", "LOGNAME=root", "HOME=/root", "PATH=/ro/bash-amd64-5.0-4/bin:/r"..., "SHELL=/bin/zsh", "TERM=screen.xterm-256color", "XDG_SESSION_ID=c1", "XDG_RUNTIME_DIR=/run/user/0", "DBUS_SESSION_BUS_ADDRESS=unix:pa"..., "XDG_SESSION_TYPE=tty", "XDG_SESSION_CLASS=user", "SSH_CLIENT=10.0.2.2 42556 22", "SSH_CONNECTION=10.0.2.2 42556 10"..., "SSHTTY=/dev/pts/0", "SHLVL=1", "PWD=/root", "OLDPWD=/root", "=/usr/bin/strace", "LD_LIBRARY_PATH=/ro/bash-amd64-5"..., "PERL5LIB=/ro/bash-amd64-5.0-4/ou"..., "PYTHONPATH=/ro/bash-amd64-5.0-4/"...]) = 0

Confirm which ELF interpreter is set for a binary using readelf(1):

distri0# readelf -a /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli | grep 'program interpreter'
[Requesting program interpreter: /ro/glibc-amd64-2.31-4/out/lib/ld-linux-x86-64.so.2]

Confirm the rpath is set to the package’s lib subdirectory using readelf(1):

distri0# readelf -a /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli | grep RPATH
 0x000000000000000f (RPATH)              Library rpath: [/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib]

…and verify the lib subdirectory has the expected symlinks and target versions:

distri0# find /ro/teensy-loader-cli-amd64-*/lib -type f -printf '%P -> %l\n'
libc.so.6 -> /ro/glibc-amd64-2.31-4/out/lib/libc-2.31.so
libpthread.so.0 -> /ro/glibc-amd64-2.31-4/out/lib/libpthread-2.31.so
librt.so.1 -> /ro/glibc-amd64-2.31-4/out/lib/librt-2.31.so
libudev.so.1 -> /ro/libudev-amd64-245-11/out/lib/libudev.so.1.6.17
libusb-0.1.so.4 -> /ro/libusb-compat-amd64-0.1.5-7/out/lib/libusb-0.1.so.4.4.4
libusb-1.0.so.0 -> /ro/libusb-amd64-1.0.23-8/out/lib/libusb-1.0.so.0.2.0

To verify the correct libraries are actually loaded, you can set the LD_DEBUG environment variable for ld.so(8):

distri0# LD_DEBUG=libs teensy_loader_cli
[…]
       678:     find library=libc.so.6 [0]; searching
       678:      search path=/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib            (RPATH from file /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli)
       678:       trying file=/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib/libc.so.6
       678:
[…]

NSS libraries that distri ships:

find /lib/ -name "libnss_*.so.2" -type f -printf '%P -> %l\n'
libnss_myhostname.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_myhostname.so.2
libnss_mymachines.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_mymachines.so.2
libnss_resolve.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_resolve.so.2
libnss_systemd.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_systemd.so.2
libnss_compat.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_compat.so.2
libnss_db.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_db.so.2
libnss_dns.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_dns.so.2
libnss_files.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_files.so.2
libnss_hesiod.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_hesiod.so.2

Planet DebianMichael Stapelberg: distri: 20x faster initramfs (initrd) from scratch

In case you are not yet familiar with why an initramfs (or initrd, or initial ramdisk) is typically used when starting Linux, let me quote the wikipedia definition:

“[…] initrd is a scheme for loading a temporary root file system into memory, which may be used as part of the Linux startup process […] to make preparations before the real root file system can be mounted.”

Many Linux distributions do not compile all file system drivers into the kernel, but instead load them on-demand from an initramfs, which saves memory.

Another common scenario, in which an initramfs is required, is full-disk encryption: the disk must be unlocked from userspace, but since userspace is encrypted, an initramfs is used.

Motivation

Thus far, building a distri disk image was quite slow:

This is on an AMD Ryzen 3900X 12-core processor (2019):

distri % time make cryptimage serial=1
80.29s user 13.56s system 186% cpu 50.419 total # 19s image, 31s initrd

Of these 50 seconds, dracut’s initramfs generation accounts for 31 seconds (62%)!

Initramfs generation time drops to 8.7 seconds once dracut no longer uses the single-threaded gzip(1), but its multi-threaded replacement pigz(1):

This brings the total time to build a distri disk image down to:

distri % time make cryptimage serial=1
76.85s user 13.23s system 327% cpu 27.509 total # 19s image, 8.7s initrd

Clearly, when you use dracut on any modern computer, you should make pigz available. dracut should fail to compile unless one explicitly opts into the known-slower gzip. For more thoughts on optional dependencies, see “Optional dependencies don’t work”.

But why does it take 8.7 seconds still? Can we go faster?

The answer is Yes! I recently built a distri-specific initramfs I’m calling minitrd. I wrote both big parts from scratch:

  1. the initramfs generator program (distri initrd)
  2. a custom Go userland (cmd/minitrd), running as /init in the initramfs.

minitrd generates the initramfs image in ≈400ms, bringing the total time down to:

distri % time make cryptimage serial=1
50.09s user 8.80s system 314% cpu 18.739 total # 18s image, 400ms initrd

(The remaining time is spent in preparing the file system, then installing and configuring the distri system, i.e. preparing a disk image you can run on real hardware.)

How can minitrd be 20 times faster than dracut?

dracut is mainly written in shell, with a C helper program. It drives the generation process by spawning lots of external dependencies (e.g. ldd or the dracut-install helper program). I assume that the combination of using an interpreted language (shell) that spawns lots of processes and precludes a concurrent architecture is to blame for the poor performance.

minitrd is written in Go, with speed as a goal. It leverages concurrency and uses no external dependencies; everything happens within a single process (but with enough threads to saturate modern hardware).

Measuring early boot time using qemu, I measured the dracut-generated initramfs taking 588ms to display the full disk encryption passphrase prompt, whereas minitrd took only 195ms.

The rest of this article dives deeper into how minitrd works.

What does an initramfs do?

Ultimately, the job of an initramfs is to make the root file system available and continue booting the system from there. Depending on the system setup, this involves the following 5 steps:

1. Load kernel modules to access the block devices with the root file system

Depending on the system, the block devices with the root file system might already be present when the initramfs runs, or some kernel modules might need to be loaded first. On my Dell XPS 9360 laptop, the NVMe system disk is already present when the initramfs starts, whereas in qemu, we need to load the virtio_pci module, followed by the virtio_scsi module.

How will our userland program know which kernel modules to load? Linux kernel modules declare patterns for their supported hardware as an alias, e.g.:

initrd# grep virtio_pci lib/modules/5.4.6/modules.alias
alias pci:v00001AF4d*sv*sd*bc*sc*i* virtio_pci

Devices in sysfs have a modalias file whose content can be matched against these declarations to identify the module to load:

initrd# cat /sys/devices/pci0000:00/*/modalias
pci:v00001AF4d00001005sv00001AF4sd00000004bc00scFFi00
pci:v00001AF4d00001004sv00001AF4sd00000008bc01sc00i00
[…]

Hence, for the initial round of module loading, it is sufficient to locate all modalias files within sysfs and load the responsible modules.
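
A minimal sketch of that initial round in Go follows (not minitrd’s actual code). For brevity it only globs the PCI bus path shown above; a real implementation would walk all of /sys/devices and could also consult the binary variant mentioned in the side note below:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path"
	"path/filepath"
	"strings"
)

type alias struct{ pattern, module string }

func readAliases(file string) ([]alias, error) {
	f, err := os.Open(file)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var aliases []alias
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// lines look like: alias pci:v00001AF4d*sv*sd*bc*sc*i* virtio_pci
		fields := strings.Fields(scanner.Text())
		if len(fields) == 3 && fields[0] == "alias" {
			aliases = append(aliases, alias{fields[1], fields[2]})
		}
	}
	return aliases, scanner.Err()
}

func main() {
	aliases, err := readAliases("/lib/modules/5.4.6/modules.alias")
	if err != nil {
		panic(err)
	}
	matches, _ := filepath.Glob("/sys/devices/pci0000:00/*/modalias")
	for _, m := range matches {
		b, err := os.ReadFile(m)
		if err != nil {
			continue
		}
		modalias := strings.TrimSpace(string(b))
		for _, a := range aliases {
			// the alias patterns are shell-style globs; path.Match is close enough here
			if ok, _ := path.Match(a.pattern, modalias); ok {
				fmt.Printf("%s -> load %s\n", modalias, a.module)
			}
		}
	}
}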

Loading a kernel module can result in new devices appearing. When that happens, the kernel sends a uevent, which the uevent consumer in userspace receives via a netlink socket. Typically, this consumer is udev(7), but in our case, it’s minitrd.

For each uevent message that comes with a MODALIAS variable, minitrd will load the relevant kernel module(s).

When loading a kernel module, its dependencies need to be loaded first. Dependency information is stored in the modules.dep file in a Makefile-like syntax:

initrd# grep virtio_pci lib/modules/5.4.6/modules.dep
kernel/drivers/virtio/virtio_pci.ko: kernel/drivers/virtio/virtio_ring.ko kernel/drivers/virtio/virtio.ko

To load a module, we can open its file and then call the Linux-specific finit_module(2) system call. Some modules are expected to return an error code, e.g. ENODEV or ENOENT when some hardware device is not actually present.
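
Putting modules.dep and finit_module(2) together, a hedged sketch of dependency-ordered loading might look like this (assuming the FinitModule wrapper from golang.org/x/sys/unix; the module path in main is the virtio example from above, and modules are assumed to be uncompressed):

package main

import (
	"bufio"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/sys/unix"
)

const modDir = "/lib/modules/5.4.6"

// readDeps maps a module path (relative to modDir) to its dependencies.
func readDeps() (map[string][]string, error) {
	f, err := os.Open(filepath.Join(modDir, "modules.dep"))
	if err != nil {
		return nil, err
	}
	defer f.Close()
	deps := make(map[string][]string)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// e.g. kernel/drivers/virtio/virtio_pci.ko: kernel/…/virtio_ring.ko kernel/…/virtio.ko
		parts := strings.SplitN(scanner.Text(), ":", 2)
		if len(parts) != 2 {
			continue
		}
		deps[parts[0]] = strings.Fields(parts[1])
	}
	return deps, scanner.Err()
}

// load loads the module's dependencies first, then the module itself.
func load(deps map[string][]string, rel string, loaded map[string]bool) error {
	if loaded[rel] {
		return nil
	}
	for _, dep := range deps[rel] {
		if err := load(deps, dep, loaded); err != nil {
			return err
		}
	}
	f, err := os.Open(filepath.Join(modDir, rel))
	if err != nil {
		return err
	}
	defer f.Close()
	if err := unix.FinitModule(int(f.Fd()), "", 0); err != nil && err != unix.EEXIST {
		return err
	}
	loaded[rel] = true
	return nil
}

func main() {
	deps, err := readDeps()
	if err != nil {
		panic(err)
	}
	if err := load(deps, "kernel/drivers/virtio/virtio_pci.ko", make(map[string]bool)); err != nil {
		panic(err)
	}
}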

Side note: next to the textual versions, there are also binary versions of the modules.alias and modules.dep files. Presumably, those can be queried more quickly, but for simplicity, I have not (yet?) implemented support in minitrd.

2. Console settings: font, keyboard layout

Setting a legible font is necessary for hi-dpi displays. On my Dell XPS 9360 (3200 x 1800 QHD+ display), the following works well:

initrd# setfont latarcyrheb-sun32

Setting the user’s keyboard layout is necessary for entering the LUKS full-disk encryption passphrase in their preferred keyboard layout. I use the NEO layout:

initrd# loadkeys neo

3. Block device identification

In the Linux kernel, block device enumeration order is not necessarily the same on each boot. Even if it was deterministic, device order could still be changed when users modify their computer’s device topology (e.g. connect a new disk to a formerly unused port).

Hence, it is good style to refer to disks and their partitions with stable identifiers. This also applies to boot loader configuration, and so most distributions will set a kernel parameter such as root=UUID=1fa04de7-30a9-4183-93e9-1b0061567121.

Identifying the block device or partition with the specified UUID is the initramfs’s job.

Depending on what the device contains, the UUID comes from a different place. For example, ext4 file systems have a UUID field in their file system superblock, whereas LUKS volumes have a UUID in their LUKS header.

Canonically, probing a device to extract the UUID is done by libblkid from the util-linux package, but the logic can easily be re-implemented in other languages and changes rarely. minitrd comes with its own implementation to avoid cgo or running the blkid(8) program.
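
As an illustration of how little code such probing needs, here is a sketch that extracts the UUID from an ext2/3/4 superblock (offsets per the ext4 on-disk layout; this is not minitrd’s actual code, and LUKS volumes would need a similar reader for the LUKS header instead):

package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

func ext4UUID(device string) (string, error) {
	f, err := os.Open(device)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sb := make([]byte, 1024)
	// the primary superblock starts 1024 bytes into the device
	if _, err := f.ReadAt(sb, 1024); err != nil {
		return "", err
	}
	// s_magic at offset 0x38 must be 0xEF53
	if binary.LittleEndian.Uint16(sb[0x38:]) != 0xEF53 {
		return "", fmt.Errorf("%s: no ext2/3/4 superblock found", device)
	}
	u := sb[0x68 : 0x68+16] // s_uuid
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16]), nil
}

func main() {
	uuid, err := ext4UUID(os.Args[1])
	if err != nil {
		panic(err)
	}
	fmt.Println(uuid)
}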

4. LUKS full-disk encryption unlocking (only on encrypted systems)

Unlocking a LUKS-encrypted volume is done in userspace. The kernel handles the crypto, but reading the metadata, obtaining the passphrase (or e.g. key material from a file) and setting up the device mapper table entries are done in user space.

initrd# modprobe algif_skcipher
initrd# cryptsetup luksOpen /dev/sda4 cryptroot1

After the user entered their passphrase, the root file system can be mounted:

initrd# mount /dev/dm-0 /mnt

5. Continuing the boot process (switch_root)

Now that everything is set up, we need to pass execution to the init program on the root file system with a careful sequence of chdir(2), mount(2), chroot(2), chdir(2) and execve(2) system calls that is explained in this busybox switch_root comment.

initrd# mount -t devtmpfs dev /mnt/dev
initrd# exec switch_root -c /dev/console /mnt /init
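
The same dance, sketched in Go using the x/sys/unix wrappers (error handling kept minimal; deleting the initramfs files to free RAM is omitted):

package main

import (
	"os"
	"syscall"

	"golang.org/x/sys/unix"
)

func switchRoot() error {
	if err := unix.Chdir("/mnt"); err != nil {
		return err
	}
	// move the mount at /mnt over to /, making it the new root
	if err := unix.Mount(".", "/", "", unix.MS_MOVE, ""); err != nil {
		return err
	}
	if err := unix.Chroot("."); err != nil {
		return err
	}
	if err := unix.Chdir("/"); err != nil {
		return err
	}
	// hand over to the real init; this execve(2) never returns on success
	return syscall.Exec("/init", []string{"/init"}, os.Environ())
}

func main() {
	if err := switchRoot(); err != nil {
		panic(err)
	}
}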

To conserve RAM, the files in the temporary file system to which the initramfs archive is extracted are typically deleted.

How is an initramfs generated?

An initramfs “image” (more accurately: archive) is a compressed cpio archive. Typically, gzip compression is used, but the kernel supports a bunch of different algorithms and distributions such as Ubuntu are switching to lz4.

Generators typically prepare a temporary directory and feed it to the cpio(1) program. In minitrd, we read the files into memory and generate the cpio archive using the go-cpio package. We use the pgzip package for parallel gzip compression.
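
A sketch of generating a compressed newc archive in-process with those two packages follows. The import paths and the Header fields reflect my reading of their tar-like APIs and should be treated as assumptions; minitrd of course adds many files, not just /init:

package main

import (
	"os"

	cpio "github.com/cavaliercoder/go-cpio"
	"github.com/klauspost/pgzip"
)

func main() {
	out, err := os.Create("/tmp/initrd")
	if err != nil {
		panic(err)
	}
	gz := pgzip.NewWriter(out) // parallel gzip compression
	w := cpio.NewWriter(gz)

	// the minitrd binary becomes /init inside the initramfs
	body, err := os.ReadFile("minitrd")
	if err != nil {
		panic(err)
	}
	if err := w.WriteHeader(&cpio.Header{Name: "init", Mode: 0755, Size: int64(len(body))}); err != nil {
		panic(err)
	}
	if _, err := w.Write(body); err != nil {
		panic(err)
	}

	// close in order: cpio trailer, gzip footer, file
	if err := w.Close(); err != nil {
		panic(err)
	}
	if err := gz.Close(); err != nil {
		panic(err)
	}
	if err := out.Close(); err != nil {
		panic(err)
	}
}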

The following files need to go into the cpio archive:

minitrd Go userland

The minitrd binary is copied into the cpio archive as /init and will be run by the kernel after extracting the archive.

Like the rest of distri, minitrd is built statically without cgo, which means it can be copied as-is into the cpio archive.

Linux kernel modules

Aside from the modules.alias and modules.dep metadata files, the kernel modules themselves reside in e.g. /lib/modules/5.4.6/kernel and need to be copied into the cpio archive.

Copying all modules results in a ≈80 MiB archive, so it is common to only copy modules that are relevant to the initramfs’s features. This reduces archive size to ≈24 MiB.

The filtering relies on hard-coded patterns and module names. For example, disk encryption related modules are all kernel modules underneath kernel/crypto, plus kernel/drivers/md/dm-crypt.ko.

When generating a host-only initramfs (works on precisely the computer that generated it), some initramfs generators look at the currently loaded modules and just copy those.

Console Fonts and Keymaps

The kbd package’s setfont(8) and loadkeys(1) programs load console fonts and keymaps from /usr/share/consolefonts and /usr/share/keymaps, respectively.

Hence, these directories need to be copied into the cpio archive. Depending on whether the initramfs should be generic (work on many computers) or host-only (works on precisely the computer/settings that generated it), the entire directories are copied, or only the required font/keymap.

cryptsetup, setfont, loadkeys

These programs are (currently) required because minitrd does not implement their functionality.

As they are dynamically linked, not only the programs themselves need to be copied, but also the ELF dynamic linking loader (path stored in the .interp ELF section) and any ELF library dependencies.

For example, cryptsetup in distri declares the ELF interpreter /ro/glibc-amd64-2.27-3/out/lib/ld-linux-x86-64.so.2 and declares dependencies on shared libraries libcryptsetup.so.12, libblkid.so.1 and others. Luckily, in distri, packages contain a lib subdirectory containing symbolic links to the resolved shared library paths (hermetic packaging), so it is sufficient to mirror the lib directory into the cpio archive, recursing into shared library dependencies of shared libraries.
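
Discovering those references is straightforward with Go’s standard library. Here is a sketch that prints a binary’s ELF interpreter and DT_NEEDED entries; resolving the names via the package’s lib directory and recursing into the libraries’ own dependencies is left out:

package main

import (
	"bytes"
	"debug/elf"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := elf.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// the ELF interpreter lives in the PT_INTERP segment (the .interp section)
	for _, prog := range f.Progs {
		if prog.Type == elf.PT_INTERP {
			interp, err := io.ReadAll(prog.Open())
			if err != nil {
				panic(err)
			}
			fmt.Printf("interpreter: %s\n", bytes.TrimRight(interp, "\x00"))
		}
	}

	// DT_NEEDED entries, e.g. libcryptsetup.so.12 or libblkid.so.1
	libs, err := f.ImportedLibraries()
	if err != nil {
		panic(err)
	}
	for _, lib := range libs {
		fmt.Println("needs:", lib)
	}
}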

cryptsetup also requires the GCC runtime library libgcc_s.so.1 to be present at runtime, and will abort with an error message about not being able to call pthread_cancel(3) if it is unavailable.

time zone data

To print log messages in the correct time zone, we copy /etc/localtime from the host into the cpio archive.

minitrd outside of distri?

I currently have no desire to make minitrd available outside of distri. While the technical challenges (such as extending the generator to not rely on distri’s hermetic packages) are surmountable, I don’t want to support people’s initramfs remotely.

Also, I think that people’s efforts should in general be spent on rallying behind dracut and making it work faster, thereby benefiting all Linux distributions that use dracut (increasingly more). With minitrd, I have demonstrated that significant speed-ups are achievable.

Conclusion

It was interesting to dive into how an initramfs really works. I had been working with the concept for many years, from small tasks such as “debug why the encrypted root file system is not unlocked” to more complicated tasks such as “set up a root file system on DRBD for a high-availability setup”. But even with that sort of experience, I didn’t know all the details, until I was forced to implement every little thing.

As I suspected going into this exercise, dracut is much slower than it needs to be. Re-implementing its generation stage in a modern language instead of shell helps a lot.

Of course, my minitrd does a bit less than dracut, but not drastically so. The overall architecture is the same.

I hope my effort helps with two things:

  1. As a teaching implementation: instead of wading through the various components that make up a modern initramfs (udev, systemd, various shell scripts, …), people can learn about how an initramfs works in a single place.

  2. I hope the significant time difference motivates people to improve dracut.

Appendix: qemu development environment

Before writing any Go code, I did some manual prototyping. Learning how other people prototype is often immensely useful to me, so I’m sharing my notes here.

First, I copied all kernel modules and a statically built busybox binary:

% mkdir -p lib/modules/5.4.6
% cp -Lr /ro/lib/modules/5.4.6/* lib/modules/5.4.6/
% cp ~/busybox-1.22.0-amd64/busybox sh

To generate an initramfs from the current directory, I used:

% find . | cpio -o -H newc | pigz > /tmp/initrd

In distri’s Makefile, I append these flags to the QEMU invocation:

-kernel /tmp/kernel \
-initrd /tmp/initrd \
-append "root=/dev/mapper/cryptroot1 rdinit=/sh ro console=ttyS0,115200 rd.luks=1 rd.luks.uuid=63051f8a-54b9-4996-b94f-3cf105af2900 rd.luks.name=63051f8a-54b9-4996-b94f-3cf105af2900=cryptroot1 rd.vconsole.keymap=neo rd.vconsole.font=latarcyrheb-sun32 init=/init systemd.setenv=PATH=/bin rw vga=836"

The vga= mode parameter is required for loading font latarcyrheb-sun32.

Once in the busybox shell, I manually prepared the required mount points and kernel modules:

ln -s sh mount
ln -s sh lsmod
mkdir /proc /sys /run /mnt
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs dev /dev
modprobe virtio_pci
modprobe virtio_scsi

As a next step, I copied cryptsetup and dependencies into the initramfs directory:

% for f in /ro/cryptsetup-amd64-2.0.4-6/lib/*; do full=$(readlink -f $f); rel=$(echo $full | sed 's,^/,,g'); mkdir -p $(dirname $rel); install $full $rel; done
% ln -s ld-2.27.so ro/glibc-amd64-2.27-3/out/lib/ld-linux-x86-64.so.2
% cp /ro/glibc-amd64-2.27-3/out/lib/ld-2.27.so ro/glibc-amd64-2.27-3/out/lib/ld-2.27.so
% cp -r /ro/cryptsetup-amd64-2.0.4-6/lib ro/cryptsetup-amd64-2.0.4-6/
% mkdir -p ro/gcc-libs-amd64-8.2.0-3/out/lib64/
% cp /ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1 ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1
% ln -s /ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1 ro/cryptsetup-amd64-2.0.4-6/lib
% cp -r /ro/lvm2-amd64-2.03.00-6/lib ro/lvm2-amd64-2.03.00-6/

In busybox, I used the following commands to unlock the root file system:

modprobe algif_skcipher
./cryptsetup luksOpen /dev/sda4 cryptroot1
mount /dev/dm-0 /mnt

Planet DebianMichael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research linux distribution development, i.e. try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

Frequently, I was noticing a large gap between the actual speed of an operation (e.g. doing an update) and the possible speed based on back of the envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, through my work on distri, I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share sub directory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even at high speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh on their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet DebianMichael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers commonly can make very little assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple package’s hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
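
As a sketch of the discovery side, consider collecting the system users declared in systemd service files so that the package manager can create all of them in one pass. The glob below is illustrative; a package manager would consult each package’s file list instead, and the actual /etc/passwd update is out of scope here:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	users := make(map[string]bool)
	units, _ := filepath.Glob("/usr/lib/systemd/system/*.service")
	for _, unit := range units {
		f, err := os.Open(unit)
		if err != nil {
			continue
		}
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "User=") {
				users[strings.TrimPrefix(line, "User=")] = true
			}
		}
		f.Close()
	}
	// a real package manager would now add all missing users in a single update
	for user := range users {
		fmt.Println("required system user:", user)
	}
}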

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Planet DebianMichael Stapelberg: Optional dependencies don’t work

In the i3 projects, we have always tried hard to avoid optional dependencies. There are a number of reasons behind it, and as I have recently encountered some of the downsides of optional dependencies firsthand, I summarized my thoughts in this article.

What is a (compile-time) optional dependency?

When building software from source, most programming languages and build systems support conditional compilation: different parts of the source code are compiled based on certain conditions.

An optional dependency is conditional compilation hooked up directly to a knob (e.g. command line flag, configuration file, …), with the effect that the software can now be built without an otherwise required dependency.

Let’s walk through a few issues with optional dependencies.

Inconsistent experience in different environments

Software is usually not built by end users, but by packagers, at least when we are talking about Open Source.

Hence, end users don’t see the knob for the optional dependency, they are just presented with the fait accompli: their version of the software behaves differently than other versions of the same software.

Depending on the kind of software, this situation can be made obvious to the user: for example, if the optional dependency is needed to print documents, the program can produce an appropriate error message when the user tries to print a document.

Sometimes, this isn’t possible: when i3 introduced an optional dependency on cairo and pangocairo, the behavior itself (rendering window titles) worked in all configurations, but non-ASCII characters might break depending on whether i3 was compiled with cairo.

For users, it is frustrating to only discover in conversation that a program has a feature that the user is interested in, but it’s not available on their computer. For support, this situation can be hard to detect, and even harder to resolve to the user’s satisfaction.

Packaging is more complicated

Unfortunately, many build systems don’t stop the build when optional dependencies are not present. Instead, you sometimes end up with a broken build, or, even worse: with a successful build that does not work correctly at runtime.

This means that packagers need to closely examine the build output to know which dependencies to make available. In the best case, there is a summary of available and enabled options, clearly outlining what this build will contain. In the worst case, you need to infer the features from the checks that are done, or work your way through the --help output.

The better alternative is to configure your build system such that it stops when any dependency was not found, and thereby have packagers acknowledge each optional dependency by explicitly disabling the option.

Untested code paths bit rot

Code paths which are not used will inevitably bit rot. If you have optional dependencies, you need to test both the code path without the dependency and the code path with the dependency. It doesn’t matter whether the tests are automated or manual, the test matrix must cover both paths.

Interestingly enough, this principle seems to apply to all kinds of software projects (but it slows down as change slows down): one might think that important Open Source building blocks should have enough users to cover all sorts of configurations.

However, consider this example: building cairo without libxrender results in all GTK application windows, menus, etc. being displayed as empty grey surfaces. Cairo does not fail to build without libxrender, but the code path clearly is broken without libxrender.

Can we do without them?

I’m not saying optional dependencies should never be used. In fact, for bootstrapping, disabling dependencies can save a lot of work and can sometimes allow breaking circular dependencies. For example, in an early bootstrapping stage, binutils can be compiled with --disable-nls to disable internationalization.

However, optional dependencies are broken so often that I conclude they are overused. Read on and see for yourself whether you would rather commit to best practices or not introduce an optional dependency.

Best practices

If you do decide to make dependencies optional, please:

  1. Set up automated testing for all code path combinations.
  2. Fail the build until packagers explicitly pass a --disable flag.
  3. Tell users their version is missing a dependency at runtime, e.g. in --version (see the sketch after this list).
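
A minimal C sketch of point 3, reusing the HAVE_CAIRO knob from the earlier example; the program name and feature list are illustrative:

#include <stdio.h>

int main(void) {
    const char *cairo_feature =
#ifdef HAVE_CAIRO
        "+cairo";
#else
        "-cairo";
#endif
    /* Report the compiled-in optional features so users and supporters can
     * see at a glance which code paths this particular build contains. */
    printf("exampleprog 1.0 (features: %s)\n", cairo_feature);
    return 0;
}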

Worse Than FailureTeleconference Horror


In the spring of 2020, with very little warning, every school in the United States shut down due to the ongoing global pandemic. Classrooms had to move to virtual meeting software like Zoom, which was never intended to be used as the primary means of educating grade schoolers. The teachers did wonderfully with such little notice, and most kids finished out the year with at least a little more knowledge than they started. This story takes place years before then, when online schooling was seen as an optional add-on and not a necessary backup plan in case of plague.

TelEdu provided their take on such a thing in the form of a free third-party add-on for Moodle, a popular e-learning platform. Moodle provides space for teachers to upload recordings and handouts; TelEdu takes it one step further by adding a "virtual classroom" complete with a virtual whiteboard. The catch? You have to pay a subscription fee to use the free module, otherwise it's nonfunctional.

Initech decided they were on a tight schedule to implement a virtual classroom feature for their corporate training, so they went ahead and bought the service without testing it. They then scheduled a demonstration to the client, still without testing it. The client's 10-man team all joined to test out the functionality, and it wasn't long before the phone started ringing off the hook with complaints: slowness, 504 errors, blank pages, the whole nine yards.

That's where Paul comes in to our story. Paul was tasked with finding what had gone wrong and completing the integration. The most common complaint was that Moodle was being slow, but upon testing it himself, Paul found that only the TelEdu module pages were slow, not the rest of the install. So far so good. The code was open-source, so he went digging through to find out what in view.php was taking so long:

$getplan = telEdu_get_plan();
$paymentinfo = telEdu_get_payment_info();
$getclassdetail = telEdu_get_class($telEduclass->class_id);
$pricelist = telEdu_get_price_list($telEduclass->class_id);

Four calls to get info about the class, three of them to do with payment. Not a great start, but not necessarily terrible, either. So, how was the info fetched?

function telEdu_get_plan() {
    $data['task'] = TELEDU_TASK_GET_PLAN;
    $result = telEdu_get_curl_info($data);
    return $result;
}

"They couldn't possibly ... could they?" Paul wondered aloud.

function telEdu_get_payment_info() {
    $data['task'] = TELEDU_TASK_GET_PAYMENT_INFO;
    $result = telEdu_get_curl_info($data);
    return $result;
}

Just to make sure, Paul next checked what telEdu_get_curl_info actually did:


function telEdu_get_curl_info($data) {
    global $CFG;
    require_once($CFG->libdir . '/filelib.php');

    $key = $CFG->mod_telEdu_apikey;
    $baseurl = $CFG->mod_telEdu_baseurl;

    $urlfirstpart = $baseurl . "/" . $data['task'] . "?apikey=" . $key;

    if (($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) || ($data['task'] == TELEDU_TASK_GET_PLAN)) {
        $location = $baseurl;
    } else {
        $location = telEdu_post_url($urlfirstpart, $data);
    }

    $postdata = '';
    if ($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) {
        $postdata = 'task=getPaymentInfo&apikey=' . $key;
    } else if ($data['task'] == TELEDU_TASK_GET_PLAN) {
        $postdata = 'task=getplan&apikey=' . $key;
    }

    $options = array(
        'CURLOPT_RETURNTRANSFER' => true, 'CURLOPT_SSL_VERIFYHOST' => false, 'CURLOPT_SSL_VERIFYPEER' => false,
    );

    $curl = new curl();
    $result = $curl->post($location, $postdata, $options);

    $finalresult = json_decode($result, true);
    return $finalresult;
}

A remote call to another API via cURL, one blocking HTTP request per lookup. Then it waited for the result, which was clocking in at anywhere between 1 and 30 seconds ... each call. The result wasn't used anywhere, either. It seemed to be just a precaution in case somewhere down the line they wanted these things.

After another half a day of digging through the rest of the codebase, Paul gave up. Sales told the client that "Due to the high number of users, we need more time to make a small server calibration."

The calibration? Replacing TelEdu with BigBlueButton. Problem solved.


,

Planet DebianDirk Eddelbuettel: RcppSimdJson 0.1.1: More Features

A first update following the exciting RcppSimdJson 0.1.0 release last month is now on CRAN. Version 0.1.1 brings further enhancements such as direct parsing of raw chars, working with compressed files as well as much expanded querying ability, all thanks to Brendan, some improvements to our demos thanks to Daniel, as well as a small fix via a one-liner borrowed from upstream for a reported UBSAN issue.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it results in parsing gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).

The detailed list of changes follows.

Changes in version 0.1.1 (2020-08-10)

  • Corrected incorrect file deletion when mixing local and remote files (Brendan in #34) closing #33.

  • Added support for raw vectors, compressed files, and compressed downloads (Dirk and Brendan in #36, #39, and #45 closing #35 and addressing issues raised in #40 and #44).

  • Examples in two demos are now more self-sufficient (Daniel Lemire and Dirk in #42).

  • Expanded query functionality to include single, flat, and nested queries (Brendan in #45 closing #43).

  • Split error handling parameters from error_ok/on_error into parse_error_ok/on_parse_error and query_error_ok/on_query_error (Brendan in #45).

  • One-line upstream change to address sanitizer error on cast.

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityMicrosoft Patch Tuesday, August 2020 Edition

Microsoft today released updates to plug at least 120 security holes in its Windows operating systems and supported software, including two newly discovered vulnerabilities that are actively being exploited. Yes, good people of the Windows world, it’s time once again to backup and patch up!

At least 17 of the bugs squashed in August’s patch batch address vulnerabilities Microsoft rates as “critical,” meaning they can be exploited by miscreants or malware to gain complete, remote control over an affected system with little or no help from users. This is the sixth month in a row Microsoft has shipped fixes for more than 100 flaws in its products.

The most concerning of these appears to be CVE-2020-1380, which is a weakness in Internet Explorer that could result in system compromise just by browsing with IE to a hacked or malicious website. Microsoft’s advisory says this flaw is currently being exploited in active attacks.

The other flaw enjoying active exploitation is CVE-2020-1464, which is a “spoofing” bug in virtually all supported versions of Windows that allows an attacker to bypass Windows security features and load improperly signed files. For more on this flaw, see Microsoft Put Off Fixing Zero Day for 2 Years.

Trend Micro’s Zero Day Initiative points to another fix — CVE-2020-1472 — which involves a critical issue in Windows Server versions that could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

“It’s rare to see a Critical-rated elevation of privilege bug, but this one deserves it,” said ZDI’s Dustin Childs. “What’s worse is that there is not a full fix available.”

Perhaps the most “elite” vulnerability addressed this month earned the distinction of being named CVE-2020-1337, and refers to a security hole in the Windows Print Spooler service that could allow an attacker or malware to escalate their privileges on a system if they were already logged on as a regular (non-administrator) user.

Satnam Narang at Tenable notes that CVE-2020-1337 is a patch bypass for CVE-2020-1048, another Windows Print Spooler vulnerability that was patched in May 2020. Narang said researchers found that the patch for CVE-2020-1048 was incomplete and presented their findings for CVE-2020-1337 at the Black Hat security conference earlier this month. More information on CVE-2020-1337, including a video demonstration of a proof-of-concept exploit, is available here.

Adobe has graciously given us another month’s respite from patching Flash Player flaws, but it did release critical security updates for its Acrobat and PDF Reader products. More information on those updates is available here.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And as ever, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Planet DebianJonathan Carter: GameMode in Debian

What is GameMode, what does it do?

About two years ago, I ran into some bugs running a game on Debian, so installed Windows 10 on a spare computer and ran it on there. I learned that when you launch a game in Windows 10, it automatically disables notifications, screensaver, reduces power saving measures and gives the game maximum priority. I thought “Oh, that’s actually quite nice, but we probably won’t see that kind of integration on Linux any time soon”. The very next week, I read the initial announcement of GameMode, a tool from Feral Interactive that does a bunch of tricks to maximise performance for games running on Linux.

When GameMode is invoked it:

  • Sets the kernel performance governor from ‘powersave’ to ‘performance’
  • Provides I/O priority to the game process
  • Optionally sets nice value to the game process
  • Inhibits the screensaver
  • Tweaks the kernel scheduler to enable soft real-time capabilities (handled by the MuQSS kernel scheduler, if available in your kernel)
  • Sets GPU performance mode (NVIDIA and AMD)
  • Attempts GPU overclocking (on supported NVIDIA cards)
  • Runs custom pre/post run scripts. You might want to run a script to disable your ethereum mining or suspend VMs when you start a game and resume it all once you quit.

How GameMode is invoked

Some newer games (proprietary games like “Rise of the Tomb Raider”, “Total War Saga: Thrones of Britannia”, “Total War: WARHAMMER II”, “DiRT 4” and “Total War: Three Kingdoms”) will automatically invoke GameMode if it’s installed. For games that don’t, you can manually invoke it using the gamemoderun command.

Lutris is a tool that makes it easy to install and run games on Linux, and it also integrates with GameMode. (Lutris is currently being packaged for Debian, hopefully it will make it in on time for Bullseye).

Screenshot of Lutris, a tool that makes it easy to install your non-Linux games, which also integrates with GameMode.

GameMode in Debian

The latest GameMode is packaged in Debian (Stephan Lachnit and I maintain it in the Debian Games Team) and it’s also available for Debian 10 (Buster) via buster-backports. All you need to do to get up and running with GameMode is to install the ‘gamemode’ package.

GameMode in Debian supports 64 bit and 32 bit mode, so running it with older games (and many proprietary games) still works. Some distributions (like Arch Linux) have dropped 32 bit support, so 32 bit games on such systems lose any kind of integration with GameMode even if you can get those games running via other wrappers on such systems.

We also include a binary called ‘gamemode-simulate-game’ (installed under /usr/games/). This is a minimalistic program that will invoke gamemode automatically for 10 seconds and then exit without an error if it was successful. Its source code might be useful if you’d like to add GameMode support to your game, or patch a game in Debian to automatically invoke it.

In Debian we install Gamemode’s example config file to /etc/gamemode.ini where a user can customise their system-wide preferences, or alternatively they can place a copy of that in ~/.gamemode.ini with their personal preferences. In this config file, you can also choose to explicitly allow or deny games.
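
For illustration, a per-user override could look roughly like this. The key names are recalled from GameMode's example file and should be treated as assumptions; the copy shipped in /etc/gamemode.ini is the authoritative reference:

; ~/.gamemode.ini (illustrative; verify key names against the shipped example)
[general]
renice=10
inhibit_screensaver=1

[filter]
; if a whitelist is set, only the listed clients may activate GameMode
whitelist=supertuxkart

[custom]
start=notify-send "GameMode started"
end=notify-send "GameMode ended"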

GameMode might also be useful for many pieces of software that aren’t games. I haven’t done any benchmarks on such software yet, but it might be great for users who use CAD programs or use a combination of their CPU/GPU to crunch a large amount of data.

I’ve also packaged an extension for GNOME called gamemode-extension. The Debian package is called ‘gnome-shell-extension-gamemode’. You’ll need to enable it using gnome-tweaks after installation; it will then display a green controller in your notification area whenever GameMode is active. It’s only in testing/bullseye since it relies on a newer gnome-shell than what’s available in buster.

Running gamemode-simulate-game, with the shell extension showing that it’s activated in the top left corner.

Planet DebianMike Gabriel: No Debian LTS Work in July 2020

In July 2020, I was originally assigned 8h of work on Debian LTS as a paid contributor, but holiday season overwhelmed me and I did not do any LTS work, at all.

The assigned hours from July I have taken with me into August 2020.

light+love,
Mike

Cory DoctorowTerra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

CryptogramCollecting and Selling Mobile Phone Location Data

The Wall Street Journal has an article about a company called Anomaly Six LLC that has an SDK that's used by "more than 500 mobile applications." Through that SDK, the company collects location data from users, which it then sells.

Anomaly Six is a federal contractor that provides global-location-data products to branches of the U.S. government and private-sector clients. The company told The Wall Street Journal it restricts the sale of U.S. mobile phone movement data only to nongovernmental, private-sector clients.

[...]

Anomaly Six was founded by defense-contracting veterans who worked closely with government agencies for most of their careers and built a company to cater in part to national-security agencies, according to court records and interviews.

Just one of the many Internet companies spying on our every move for profit. And I'm sure they sell to the US government; it's legal and why would they forgo those sales?

Kevin RuddCNN: South China Sea and the US-China Tech War

E&OE TRANSCRIPT
TELEVISION INTERVIEW
CNN, FIRST MOVE
11 AUGUST 2020

Topics: Foreign Affairs article; US-China tech war

Zain Asher
In a sobering assessment in Foreign Affairs magazine, the former Australian Prime Minister Kevin Rudd warns that diplomatic relations are crumbling and raise the possibility of armed conflict. Mr Rudd, who is president of the Asia Society Policy Institute, joins us live now. So Mr Rudd, just walk us through this. You believe that armed conflict is possible and, is this relationship at this point, in your opinion, quite frankly, beyond repair?

Kevin Rudd
It’s not beyond repair, but we’ve got to be blunt about the fact that the level of deterioration has been virtually unprecedented at least in the last half-century. And things are moving at a great pace in terms of the scenarios, the two scenarios which trouble us most are the Taiwan straits and the South China Sea. In the Taiwan straits, we see consistent escalation of tensions between Washington and Beijing. And certainly, in the South China Sea, the pace and intensity of naval and air activity in and around that region increases the possibility, the real possibility, of collisions at sea and collisions in the air. And the question then becomes: do Beijing and Washington really have an intention to de-escalate or then to escalate, if such a crisis was to unfold?

Zain Asher
How do they de-escalate? Is the only way at this point, or how do they reverse the sort of tensions between them? Is the main way at this point that, you know, a new administration comes in in November and it can be reset? If Trump gets re-elected, can there be de-escalation? If so, how?

Kevin Rudd
Well the purpose of my writing the article in Foreign Affairs, which you referred to before, was to, in fact, talk about the real dangers we face in the next three months. That is, before the US presidential election. We all know that in the US right now, that tensions or, shall I say, political pressure on President Trump are acute. But what people are less familiar of within the West is the fact that in Chinese politics there is also pressure on Xi Jinping for a range of domestic and external reasons as well. So what I have simply said is: in this next three months, where we face genuine political pressure operating on both political leaders, if we do have an incident, that is an unplanned incident or collision in the air or at sea, we now have a tinderbox environment. Therefore, the plans which need to be put in place between the grown-ups in the US and Chinese militaries is to have a mechanism to rapidly de-escalate should a collision occur. I’m not sure that those plans currently exist.

Zain Asher
Let’s talk about tech because President Donald Trump, as you know, is forcing ByteDance, the company that owns TikTok, to sell its assets and no longer operate in the US. The premise is that there are national security fears and also this idea that TikTok is handing over user data from American citizens to the Chinese government. How real and concrete are those fears, or is this purely politically motivated? Are the fears justified, in other words?

Kevin Rudd
As far as TikTok is concerned, this is way beyond my paygrade in terms of analysing the technological capacities of a) the company and b) the ability of the Chinese security authorities to backdoor them. What I can say is this a deliberate decision on the part of the US administration to radically escalate the technology war. In the past, it was a war about Huawei and 5G. It then became an unfolding conflict over the question of the future access to semiconductors, computer chips. And now we have, as it were, the unfolding ban imposed by the administration on Chinese-sourced computer apps, including this one, for TikTok. So this is a throwing-down of the gauntlet by the US administration. What I believe we will see, however, is Chinese retaliation. I think they will find a corporate mechanism to retaliate, given the actions taken not just against ByteDance and TikTok, but of course against WeChat. And so the pattern of escalation that we were talking about earlier in technology, the economy, trade, investment, finance, and the hard stuff in national security continues to unfold, which is why we need sober heads to prevail in the months ahead.

The post CNN: South China Sea and the US-China Tech War appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: The Concatenator

In English, there's much debate over the "Oxford Comma": in a list of items, do you put a comma between the penultimate item and the "and" before the final one? For example: "The conference featured bad programmers, Remy and TheDailyWTF readers" versus "The conference featured bad programmers, Remy, and the TheDailyWTF readers."

I'd like to introduce a subtly different one: "the concatenator's comma", or if we want to be generic "the concatenator's separator character", but that doesn't have the same ring to it. If you're planning to list items as a string, you might do something like this pseudocode:

for each item in items result.append(item + ", ")

This naive approach does pose a problem: we'll have an extra comma. So maybe you have to add logic to decide if you're on the first or last item, and insert (or fail to insert) commas as appropriate. Or maybe it isn't a problem: if we're generating JSON, for example, we can just leave the trailing commas. This isn't universally true, of course, but many formats will ignore extra separators. Edit: I was apparently hallucinating when I wrote this; one of the most annoying things about JSON is that you can't do this.

Like, for example, URL query strings, which don't require a "sub-delim" like "&" to have anything following it.

But fortunately for us, no matter what language we're using, there's almost certainly an API that makes it so that we don't have to do string concatenation anyway, so why even bring it up?

Well, because Mike has a co-worker that has read the docs well enough to know that PHP has a substr method, but not well enough to know it has an http_build_query method. Or even an implode method, which handles string concats for you. Instead, they wrote this:

$query = '';
foreach ($postdata as $var => $val) {
    $query .= $var .'='. $val .'&';
}
$query = substr($query, 0, -1);

This code exploits a little-observed feature of substr: a negative length reads back from the end. So this lops off that trailing "&", which is both unnecessary and one of the most annoying ways to do this.

Maybe it's not enough to RTFM, as Mike puts it, maybe you need to "RTEFM": read the entire manual.


,

Planet DebianDirk Eddelbuettel: nanotime 0.3.1: Misc Build Fixes for Yuge New Features!

The nanotime 0.3.0 release four days ago was so exciting that we decided to do it again! Kidding aside, and fairly extensive tests notwithstanding, we were bitten by a few build errors: who knew clang on macOS needed extra curlies to be happy, another manifestation of Solaris having no idea what a timezone setting “America/New_York” is, plus some extra pickiness from the SAN tests and whatnot. So Leonardo and I gave it some extra care over the weekend, uploaded it late yesterday and here we are with 0.3.1. Thanks again to CRAN for prompt processing even though they are clearly deluged shortly before their (brief) summer break.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by Leonardo Silvestri who rejigged internals in S4—and now added new types for periods, intervals and durations.

The NEWS snippet adds full details.

Changes in version 0.3.1 (2020-08-09)

  • Several small cleanups to ensure a more robust compilation (Leonardo and Dirk in #75 fixing #74).

  • Show Solaris some extra love by skipping tests and examples with a timezone (Dirk in #76).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureI'm Blue

Designers are used to getting vague direction from clients. "It should have more pop!" or "Can you make the blue more blue?" But Kevin was a contractor who worked on embedded software, so he didn't really expect to have to deal with that, even if he did have to deal with colors a fair bit.

Kevin was taking over a contract from another developer to build software for a colorimeter, a device to measure color. When companies, like paint companies, care about color, they tend to really care about color, and need to be able to accurately observe a real-world color. Once you start diving deep into color theory, you start having to think about things like observers, and illuminants and tristimulus models and "perceptual color spaces".

The operating principle of the device was fairly simple. It had a bright light, of a well known color temperature. It had a brightness sensor. It had a set of colored filter gels that would pass in front of the sensor. Place the colorimeter against an object, and the bright light would reflect off the surface and pass through each of the filters in turn while the sensor recorded the brightness. With a little computation, you can determine, with a high degree of precision, what color something is.

Now, this is a scientific instrument, and that means that the code which runs it, even though it's proprietary, needs to be vetted by scientists. The device needs to be tested against known samples. Deviations need to be corrected for, and then carefully justified. There should be no "magic numbers" in the code that aren't well documented and explained. If, for example, the company gets its filter gels from a new vendor and they filter slightly different frequencies, the commit needs to link to the datasheets for those gels to explain the change. Similarly, if a sensor has a frequency response that means that the samples may be biased, you commit that with a link to the datasheet showing that to be the case.

Which is why Kevin was a little surprised by the commit by his predecessor. The message read: "Nathan wants the blue 'more blue'? Fine. the blue is more blue." Nathan was the product owner.

The corresponding change was a line which read:

blue += 20;

Well, Nathan got what he wanted. It's a good thing he didn't ask for it to "pop" though.


Planet DebianRuss Allbery: rra-c-util 8.3

In this release of my utility library for my other packages, I finally decided to drop support for platforms without a working snprintf.

This dates back to the early 2000s and a very early iteration of this package. At the time, there were still some older versions of UNIX without snprintf at all. More commonly, it was buggy. The most common problem was that it would return -1 if the buffer wasn't large enough rather than returning the necessary size of the buffer. Or, in some cases, it wouldn't support a buffer size of 0 and a NULL buffer to get the necessary size.
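
Both behaviours break the standard C99 sizing idiom, where a first call with a NULL buffer reports how much space the formatted string needs. A generic sketch of that idiom, not code from rra-c-util:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *name = "world";

    /* C99: with a NULL buffer and size 0, snprintf returns the number of
     * bytes the formatted output needs, excluding the trailing NUL.  The
     * broken implementations described above returned -1 here or rejected
     * the NULL buffer outright. */
    int needed = snprintf(NULL, 0, "hello, %s", name);
    if (needed < 0)
        return 1;

    char *buf = malloc((size_t) needed + 1);
    if (buf == NULL)
        return 1;
    snprintf(buf, (size_t) needed + 1, "hello, %s", name);

    puts(buf);
    free(buf);
    return 0;
}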

At the time I added this support for INN and some other packages, Solaris had several of these issues. But C99 standardized the correct snprintf behavior, and slowly every maintained operating system was fixed. (I forget whether it was fixed in Solaris 8 or Solaris 9, but regardless, Solaris has had a working snprintf for many years.) Meanwhile, the replacement function (Patrick Powell's version, also used by mutt and other packages) was a huge wad of code and a corresponding test suite. Over time, I've increased the aggressiveness of linters to try to catch more dangerous C pitfalls, and that's required carrying more and more small modifications plus a preamble to disable various warnings that I didn't want to try to fix.

The straw that broke the camel's back was Clang's new case fallthrough warning. Clang stopped supporting the traditional /* fallthrough */ comment. It now prefers [[clang::fallthrough]] syntax, but of course older compilers choke on that. It does support the GCC __attribute__((__fallthrough__)) syntax, but older compilers don't like that construction because they think it's an empty statement. It was a mess, and I decided the time had come to drop this support effort.
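
To make the annotation problem concrete, here is the kind of construct involved; this is a generic sketch rather than the actual rra-c-util code:

#include <stdio.h>

static int classify(int c) {
    int score = 0;
    switch (c) {
    case 0:
        score += 1;
        /* fallthrough */                  /* traditional comment; newer Clang no longer honors it */
    case 1:
        score += 1;
        __attribute__((__fallthrough__));  /* GCC syntax; compilers without the attribute
                                            * object to what they see as an empty statement */
    case 2:
        score += 1;
        break;
    default:
        break;
    }
    return score;
}

int main(void) {
    printf("%d %d %d\n", classify(0), classify(1), classify(2));
    return 0;
}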

At this point, if you're still running an operating system without C99 snprintf, I think it's essentially a retrocomputing or at least extremely stable legacy production situation, and you're unlikely to want the latest and greatest releases of new software. Hopefully that assumption is correct, or at least correct enough.

(I realize the right solution to this problem is probably for me to use Gnulib for portability. But converting to it is a whole other project with a lot of other implications and machinery, and I'm not sure that's what I want to spend time on.)

Also in this release is a fix for network tests on hosts with no IPv4 addresses (more on this when I release the next version of remctl), fixes for style issues found by Perl::Critic::Freenode, and some other test suite improvements.

You can get the latest version from the rra-c-util distribution page.

,

LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin of a layer surface life exists in. Just as with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal‘s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.

,

CryptogramFriday Squid Blogging: New SQUID

There's a new SQUID:

A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.

"In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors," said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. "We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin RuddThe Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap

Published in the Guardian on 7 August 2020

Throughout our country’s modern history, the treatment of our Aboriginal and Torres Strait Islander brothers and sisters has been appalling. It has also been inconsistent with the original instructions from the British Admiralty to treat the Indigenous peoples of this land with proper care and respect. From first encounter to the frontier wars, the stolen generations and ongoing institutionalised racism, First Nations people have been handed a raw deal. The gaps between Indigenous and non-Indigenous Australians’ outcomes in areas of education, employment, health, housing and justice are a product of historical, intergenerational maltreatment.

In 2008, I apologised to the stolen generations and Indigenous Australians for the racist laws and policies of successive Australian governments. The apology may have been 200 years late, but it was an important part of the reconciliation process.

But the apology meant nothing if it wasn’t backed by action. For this reason, my government acted on Aboriginal and Torres Strait Islander social justice commissioner Tom Calma’s call to Close the Gap. We worked hard to push this framework through the Council of Australian governments so that all states and territories were on board with the strategy. We also funded it, with $4.6bn committed to achieve each of the six targets we set. While the targets and funding were critical to any improvements in the lives of Indigenous Australians, we suspected the Coalition would scrap our programs once they returned to government. After all, only a few years earlier, John Howard’s Indigenous affairs minister was denying the very existence of the stolen generations. Howard himself had refused to deliver an apology for a decade. And then both he and Peter Dutton decided to boycott the official apology in 2008.

To ensure that the Closing the Gap strategy would not be abandoned, we made it mandatory for the prime minister to stand before the House of Representatives each year and account for the success and failures in reaching the targets that were set.

Had we not adopted the Closing the Gap framework, would we now be on target to have 95% of Indigenous four year-olds enrolled in early childhood education? I think not. Would we have halved the gap for young Indigenous adults to have completed year 12 by 2020? I think not. And would we see progress on closing the gap in child mortality, and literacy and numeracy skills? No, I think not.

Despite these achievements, the most recent Closing the Gap report nonetheless showed Australia was not on track to meet four of the deadlines we’d originally set. A major reason for this is that federal funding for the closing the gap strategy collapsed under Tony Abbott, the great wrecking-ball of Australian politics, whose government cut $534.4m from programs dedicated to improving the lives of Indigenous Australians. And it’s never been restored by Abbott’s successors. It’s all there in the budget papers.

Whatever targets are put in place, governments must commit to physical resourcing of Closing the Gap. They are not going to be delivered by magic.

On Thursday last week, the new national agreement on Closing the Gap was announced. I applaud Pat Turner and other Indigenous leaders who will now sit with the leaders of the commonwealth, states, territories and local government to devise plans to achieve the new targets they have negotiated.

Scott Morrison, however, sought to discredit our government’s targets, rather than coming clean about the half-billion-dollar funding cuts that had made it impossible to achieve these targets under any circumstances. His argument that the original targets were conjured out of thin air by my government is demonstrably untrue. The truth is, Jenny Macklin, the responsible minister, spoke widely with Indigenous leaders to prioritise the areas that needed to be urgently addressed in the original Closing the Gap targets. Furthermore, if Morrison is now truly awakened to the intrinsic value of listening to Indigenous Australians, I look forward to him enshrining an Indigenous voice to parliament in the Constitution, given this is the universal position of all Indigenous groups.

Yet amid the welter of news coverage of the new closing the gap agreement, the central question remains: who will be paying the bill? While shared responsibility to close the gap between all levels of government and Indigenous organisations might sound like good news, this will quickly unravel into a political blame game if the commonwealth continues to shirk its financial duty.

The announcement this week that the commonwealth would allocate $45m over four years is just a very bad joke. This is barely 10% of what the Liberals cut from our national Closing the Gap strategy. And barely 1% of our total $4.5bn national program to meet our targets agreed to with the states and territories in 2009.

The Liberals want you to believe they care about racial injustice. But they don’t believe there are any votes in it. This is well understood by Scotty From Marketing, a former state director of the Liberal party, who lives and breathes polling and focus groups. That’s why they are not even pretending to fund the realisation of the new more “realistic” targets they have so loudly proclaimed.

The post The Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap appeared first on Kevin Rudd.

Worse Than FailureError'd: All Natural Errors

"I'm glad the asdf is vegan. I'm really thinking of going for the asasdfsadf, though. With a name like that, you know it's got to be 2 1/2 times as good for you," writes VJ.

 

Phil G. wrote, "Get games twice as fast with Epic's new multidimensional downloads!"

 

"But...it DOES!" Zed writes.

 

John M. wrote, "I appreciate the helpful suggestion, but I think I'll take a pass."

 

"java.lang.IllegalStateException...must be one of those edgy indie games! I just hope it's not actually illegal," writes Matthijs .

 

"For added flavor, I received this reminder two hours after I'd completed my checkout and purchased that very same item_name," Aaron K. writes.

 


,

Krebs on SecurityHacked Data Broker Accounts Fueled Phony COVID Loans, Unemployment Claims

A group of thieves thought to be responsible for collecting millions in fraudulent small business loans and unemployment insurance benefits from COVID-19 economic relief efforts gathered personal data on people and businesses they were impersonating by leveraging several compromised accounts at a little-known U.S. consumer data broker, KrebsOnSecurity has learned.

In June, KrebsOnSecurity was contacted by a cybersecurity researcher who discovered that a group of scammers was sharing highly detailed personal and financial records on Americans via a free web-based email service that allows anyone who knows an account’s username to view all email sent to that account — without the need of a password.

The source, who asked not to be identified in this story, said he’s been monitoring the group’s communications for several weeks and sharing the information with state and federal authorities in a bid to disrupt their fraudulent activity.

The source said the group appears to consist of several hundred individuals who collectively have stolen tens of millions of dollars from U.S. state and federal treasuries via phony loan applications with the U.S. Small Business Administration (SBA) and through fraudulent unemployment insurance claims made against several states.

KrebsOnSecurity reviewed dozens of emails the fraud group exchanged, and noticed that a great many consumer records they shared carried a notation indicating they were cut and pasted from the output of queries made at Interactive Data LLC, a Florida-based data analytics company.

Interactive Data, also known as IDIdata.com, markets access to a “massive data repository” on U.S. consumers to a range of clients, including law enforcement officials, debt recovery professionals, and anti-fraud and compliance personnel at a variety of organizations.

The consumer dossiers obtained from IDI and shared by the fraudsters include a staggering amount of sensitive data, including:

-full Social Security number and date of birth;
-current and all known previous physical addresses;
-all known current and past mobile and home phone numbers;
-the names of any relatives and known associates;
-all known associated email addresses;
-IP addresses and dates tied to the consumer’s online activities;
-vehicle registration and property ownership information;
-available lines of credit and amounts, and dates they were opened;
-bankruptcies, liens, judgments, foreclosures and business affiliations

Reached via phone, IDI Holdings CEO Derek Dubner acknowledged that a review of the consumer records sampled from the fraud group’s shared communications indicates “a handful” of authorized IDI customer accounts had been compromised.

“We identified a handful of legitimate businesses who are customers that may have experienced a breach,” Dubner said.

Dubner said all customers are required to use multi-factor authentication, and that everyone applying for access to its services undergoes a rigorous vetting process.

“We absolutely credential businesses and have several ways do that and exceed the gold standard, which is following some of the credit bureau guidelines,” he said. “We validate the identity of those applying [for access], check with the applicant’s state licensor and individual licenses.”

Citing an ongoing law enforcement investigation into the matter, Dubner declined to say if the company knew for how long the handful of customer accounts were compromised, or how many consumer records were looked up via those stolen accounts.

“We are communicating with law enforcement about it,” he said. “There isn’t much more I can share because we don’t want to impede the investigation.”

The source told KrebsOnSecurity he’s identified more than 2,000 people whose SSNs, DoBs and other data were used by the fraud gang to file for unemployment insurance benefits and SBA loans, and that a single payday can land the thieves $20,000 or more. In addition, he said, it seems clear that the fraudsters are recycling stolen identities to file phony unemployment insurance claims in multiple states.

ANALYSIS

Hacked or ill-gotten accounts at consumer data brokers have fueled ID theft and identity theft services of various sorts for years. In 2013, KrebsOnSecurity broke the news that the U.S. Secret Service had arrested a 24-year-old man named Hieu Minh Ngo for running an identity theft service out of his home in Vietnam.

Ngo’s service, variously named superget[.]info and findget[.]me, gave customers access to personal and financial data on more than 200 million Americans. He gained that access by posing as a private investigator to a data broker subsidiary acquired by Experian, one of the three major credit bureaus in the United States.

Ngo’s ID theft service superget.info

Experian was hauled before Congress to account for the lapse, and assured lawmakers there was no evidence that consumers had been harmed by Ngo’s access. But as follow-up reporting showed, Ngo’s service was frequented by ID thieves who specialized in filing fraudulent tax refund requests with the Internal Revenue Service, and was relied upon heavily by an identity theft ring operating in the New York-New Jersey region.

Also in 2013, KrebsOnSecurity broke the news that ssndob[.]ms, then a major identity theft service in the cybercrime underground, had infiltrated computers at some of America’s large consumer and business data aggregators, including LexisNexis Inc., Dun & Bradstreet, and Kroll Background America Inc.

The now defunct SSNDOB identity theft service.

In 2006, The Washington Post reported that a group of five men used stolen or illegally created accounts at LexisNexis subsidiaries to look up SSNs and other personal information on more than 310,000 individuals. And in 2004, it emerged that identity thieves masquerading as customers of data broker Choicepoint had stolen the personal and financial records of more than 145,000 Americans.

Those compromises were noteworthy because the consumer information warehoused by these data brokers can be used to find the answers to so-called knowledge-based authentication (KBA) questions used by companies seeking to validate the financial history of people applying for new lines of credit.

In that sense, thieves involved in ID theft may be better off targeting data brokers like IDI and their customers than the major credit bureaus, said Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley.

“This means you have access not only to the consumer’s SSN and other static information, but everything you need for knowledge-based authentication because these are the types of companies that are providing KBA data.”

The fraud group communications reviewed by this author suggest they are cashing out primarily through financial instruments like prepaid cards and a small number of online-only banks that allow consumers to establish accounts and move money just by providing a name and associated date of birth and SSN.

While most of these instruments place daily or monthly limits on the amount of money users can deposit into and withdraw from the accounts, some of the more popular instruments for ID thieves appear to be those that allow spending, sending or withdrawal of between $5,000 to $7,000 per transaction, with high limits on the overall number or dollar value of transactions allowed in a given time period.

KrebsOnSecurity is investigating the extent to which a small number of these financial instruments may be massively over-represented in the incidence of unemployment insurance benefit fraud at the state level, and in SBA loan fraud at the federal level. Anyone in the financial sector or state agencies with information about these apparent trends may confidentially contact this author at krebsonsecurity @ gmail dot com, or via the encrypted message service Wickr at “krebswickr“.

The looting of state unemployment insurance programs by identity thieves has been well documented of late, but far less public attention has centered on fraud targeting Economic Injury Disaster Loan (EIDL) and advance grant programs run by the U.S. Small Business Administration in response to the COVID-19 crisis.

Late last month, the SBA Office of Inspector General (OIG) released a scathing report (PDF) saying it has been inundated with complaints from financial institutions reporting suspected fraudulent EIDL transactions, and that it has so far identified $250 million in loans given to “potentially ineligible recipients.” The OIG said many of the complaints were about credit inquiries for individuals who had never applied for an economic injury loan or grant.

The figures released by the SBA OIG suggest the financial impact of the fraud may be severely under-reported at the moment. For example, the OIG said nearly 3,800 of the 5,000 complaints it received came from just six financial institutions (out of several thousand across the United States). One credit union reportedly told the U.S. Justice Department that 59 out of 60 SBA deposits it received appeared to be fraudulent.

LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-raising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.

Worse Than FailureCodeSOD: A Slow Moving Stream

We’ve talked about Java’s streams in the past. It’s hardly a “new” feature at this point, but its blend of “being really useful” and “based on functional programming techniques” and “different than other APIs” means that we still have developers struggling to figure out how to use it.

Jeff H has a co-worker, Clarence, who is very “anti-stream”. “It creates too many copies of our objects, so it’s terrible for memory, and it’s so much slower. Don’t use streams unless you absolutely have to!” So in many a code review, Jeff submits some very simple, easy to read, and fast-performing bit of stream code, and Clarence objects. “It’s slow. It wastes memory.”

Sometimes, another team member goes to bat for Jeff’s code. Sometimes they don’t. But then, in a recent review, Clarence submitted his own bit of stream code.

schedules.stream().forEach(schedule -> visitors.stream().forEach(scheduleVisitor -> {
    scheduleVisitor.visitSchedule(schedule);

    if (schedule.getDays() != null && !schedule.getDays().isEmpty()) {
        schedule.getDays().stream().forEach(day -> visitors.stream().forEach(dayVisitor -> {
            dayVisitor.visitDay(schedule, day);

            if (day.getSlots() != null && !day.getSlots().isEmpty()) {
                day.getSlots().stream().forEach(slot -> visitors.stream().forEach(slotVisitor -> {
                    slotVisitor.visitSlot(schedule, day, slot);
                }));
            }
        }));
    }
}));

That is six nested “for each” operations, and they’re structured so that we iterate across the same list multiple times. For each schedule, we look at each visitor on that schedule, then we look at each day for that schedule, and then we look at every visitor again, then we look at each day’s slots, and then we look at each visitor again.

Well, if nothing else, we understand why Clarence thinks the Java Streams API is slow. This code did not pass code review.


,

Chaotic IdealismTo a Newly Diagnosed Autistic Teenager

I was recently asked by a 14-year-old who had just been diagnosed autistic what advice I had to give. This is what I said.

The thing that helped me most was understanding myself and talking to other autistic people, so you’re already well on that road.

The more you learn about yourself, the more you learn about how you *learn*… meaning that you can become better at teaching yourself to communicate with neurotypicals.

Remember though: The goal is to communicate. Blending in is secondary, or even irrelevant, depending on your priorities. If you can get your ideas from your brain to theirs, and understand what they’re saying, and live in the world peacefully without hurting anyone and without putting yourself in danger, then it does not matter how different you are or how differently you do things.

Autistic is not better and not worse than neurotypical; it’s simply different. Having a disability is a normal part of human life; it’s nothing to be proud of and nothing to be ashamed of. Disability doesn’t stop you from being talented or from becoming unusually skilled, especially with practice. Being different means that you see things from a different perspective, which means that as you grow and gain experience you will be able to provide solutions to problems that other people simply don’t see, to contribute skills that most people don’t have.

Learn to advocate for yourself. If you have an IEP, go to the meetings and ask questions about what help is available and what problems you have. When you are mistreated, go to someone you trust and ask for help; and if you can’t get help, protect yourself as best you can. Learn to stand up for yourself, to keep other people from taking advantage of you. Also learn to help other people stay safe.

Your best social connections now will be anyone who treats you with kindness. You can tell whether someone is kind by observing how they treat those they have power over when nobody, or nobody with much influence, is watching. You want people who are honest, or who only lie when they are trying to protect others’ feelings. Talk to these people; explain that you are not very good with social things and that you sometimes embarrass yourself or accidentally insult people, and that you would like them to tell you when you are doing something clumsy, offensive, confusing, or cringeworthy. Explain to these people that you would prefer to know about mistakes you are making, because if you are not told you will never be able to correct those mistakes.

Learn to apologize, and learn that an apology simply means, “I recognize I have made a mistake and shall work to correct it in the future.” An apology is not a sign of failure or an admission of inferiority. Sometimes an apology can even mean, “I have made a mistake that I could not control; if I had been able to control it, I would not have made the mistake.” Therefore, it is okay to apologize if you have simply made an honest mistake. The best apology includes an explanation of how you will fix your mistake or what you will change to keep it from happening in the future.

Learn not to apologize when you have done nothing wrong. Do not apologize for being different, for standing up for yourself or for other people, or for having an opinion others disagree with. You do not need to justify your existence. You should never give in to the pressure to say, “I am autistic, but that’s okay because I have this skill and that talent.” The correct statement is, “I am autistic, and that is okay.” You don’t need to do anything to be valuable. You just need to be human.

If someone uses you to fulfill their own desires but doesn’t give things back in return; if someone doesn’t care about your needs when you tell them; if someone can tell you are hurt and doesn’t care; then that is a person you cannot trust.

In general, you can expect your teen years to be harder than your young-adult years. As you grow and gain experience, you’ll gain skills and you’ll gather a library of techniques to help you navigate the social and sensory world, to help you deal with your emotions and with your relationships. You will never be perfect–but then, nobody is. What you’re aiming for is useful, functional skills, in whatever form they take, whether they are the typical way of doing things or not. As the saying goes: If it looks stupid but it works, it isn’t stupid.

Keep trying. Take good care of yourself. When you are tired, rest. Learn to push yourself to your limits, but not beyond; and learn where those limits are. When you are tired from something that would not tire a neurotypical, be unashamed about your need for down time. Learn to say “no” when you don’t want something, and learn to say “yes” when you want something but you are a little bit intimidated by it because it is new or complicated or unpredictable. Learn to accept failure and learn from it. Help others. Make your world better. Make your own way. Grow. Live.

You’ll be okay.

Krebs on SecurityPorn Clip Disrupts Virtual Court Hearing for Alleged Twitter Hacker

Perhaps fittingly, a Web-streamed court hearing for the 17-year-old alleged mastermind of the July 15 mass hack against Twitter was cut short this morning after mischief makers injected a pornographic video clip into the proceeding.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

The incident occurred at a bond hearing held via the videoconferencing service Zoom by the Hillsborough County, Fla. criminal court in the case of Graham Clark. The 17-year-old from Tampa was arrested earlier this month on suspicion of social engineering his way into Twitter’s internal computer systems and tweeting out a bitcoin scam through the accounts of high-profile Twitter users.

Notice of the hearing was available via public records filed with the Florida state attorney’s office. The notice specified the Zoom meeting time and ID number, essentially allowing anyone to participate in the proceeding.

Even before the hearing officially began it was clear that the event would likely be “zoom bombed.” That’s because while participants were muted by default, they were free to unmute their microphones and transmit their own video streams to the channel.

Sure enough, less than a minute had passed before one attendee not party to the case interrupted a discussion between Clark’s attorney and the judge by streaming a live video of himself adjusting his face mask. Just a few minutes later, someone began interjecting loud music.

It became clear that presiding Judge Christopher C. Nash was personally in charge of administering the video hearing when, after roughly 15 seconds worth of random chatter interrupted the prosecution’s response, Nash told participants he was removing the troublemakers as quickly as he could.

Judge Nash, visibly annoyed immediately after one of the many disruptions to today’s hearing.

What transpired a minute later was almost inevitable given the permissive settings of this particular Zoom conference call: Someone streamed a graphic video clip from Pornhub for approximately 15 seconds before Judge Nash abruptly terminated the broadcast.

With the ongoing pestilence that is the COVID-19 pandemic, the nation’s state and federal courts have largely been forced to conduct proceedings remotely via videoconferencing services. While Zoom and others do offer settings that can prevent participants from injecting their own audio and video into the stream unless invited to do so, those settings evidently were not enabled in today’s meeting.

At issue before the court today was a defense motion to modify the amount of the defendant’s bond, which has been set at $750,000. The prosecution had argued that Clark should be required to show that any funds used toward securing that bond were gained lawfully, and were not merely the proceeds from his alleged participation in the Twitter bitcoin scam or some other form of cybercrime.

Florida State Attorney Andrew Warren’s reaction as a Pornhub clip began streaming to everyone in today’s Zoom proceeding.

Mr. Clark’s attorneys disagreed, and spent most of the uninterrupted time in today’s hearing explaining why their client could safely be released under a much smaller bond and close supervision restrictions.

On Sunday, The New York Times published an in-depth look into Clark’s wayward path from a small-time cheater and hustler in online games like Minecraft to big-boy schemes involving SIM swapping, a form of fraud that involves social engineering employees at mobile phone companies to gain control over a target’s phone number and any financial, email and social media accounts associated with that number.

According to The Times, Clark was suspected of being involved in a 2019 SIM swapping incident which led to the theft of 164 bitcoins from Gregg Bennett, a tech investor in the Seattle area. That theft would have been worth around $856,000 at the time; these days 164 bitcoins are worth approximately $1.8 million.

The Times said that soon after the theft, Bennett received an extortion note signed by Scrim, one of the hacker handles alleged to have been used by Clark. From that story:

“We just want the remainder of the funds in the Bittrex,” Scrim wrote, referring to the Bitcoin exchange from which the coins had been taken. “We are always one step ahead and this is your easiest option.”

In April, the Secret Service seized 100 Bitcoins from Mr. Clark, according to government forfeiture documents. A few weeks later, Mr. Bennett received a letter from the Secret Service saying they had recovered 100 of his Bitcoins, citing the same code that was assigned to the coins seized from Mr. Clark.

Florida prosecutor Darrell Dirks was in the middle of explaining to the judge that investigators are still in the process of discovering the extent of Clark’s alleged illegal hacking activities since the Secret Service returned the 100 bitcoin when the porn clip was injected into the Zoom conference.

Ultimately, Judge Nash decided to keep the bond amount as is, but to remove the condition that Clark prove the source of the funds.

Clark has been charged with 30 felony counts and is being tried as an adult. Federal prosecutors also have charged two other young men suspected of playing roles in the Twitter hack, including a 22-year-old from Orlando, Fla. and a 19-year-old from the United Kingdom.

Kevin RuddABC RN: South China Sea

E&OE TRANSCRIPT
RADIO INTERVIEW
RN BREAKFAST
ABC RADIO NATIONAL
5 AUGUST 2020

Fran Kelly
Prime Minister Scott Morrison today will warn of the unprecedented militarization of the Indo-Pacific which he says has become the epicentre of strategic competition between the US and China. In his virtual address to the Aspen Security Forum in the United States, Scott Morrison will also condemn the rising frequency of cyber attacks and the new threats democratic nations are facing from foreign interference. This speech coincides with a grim warning from former prime minister Kevin Rudd that the threat of armed conflict in the region is especially high in the run-up to the US presidential election in November. Kevin Rudd, welcome back to breakfast.

Kevin Rudd
Thanks for having me on the program, Fran.

Fran Kelly
Kevin Rudd, you’ve written in the Foreign Affairs journal that the US-China tensions could lead to, quote, a hot war not just a cold one. That conflict, you say, is no longer unthinkable. It’s a fairly alarming assessment. Just how likely do you rate the confrontation in the Indo-Pacific over the coming three or four months?

Kevin Rudd
Well, Fran, I think it’s important to narrow our geographical scope here. Prime Minister Morrison is talking about a much wider theatre. My comments in Foreign Affairs are about crisis scenarios emerging over what will happen or could happen in Hong Kong over the next three months leading up to the presidential election. And I think things in Hong Kong are more likely to get worse than better. What’s happening in relation to the Taiwan Straits where things have become much sharper than before in terms of actions on both sides, that’s the Chinese and the United States. But the thrust of my article is that the real problem area in terms of crisis management, crisis escalation, etc, lies in the South China Sea. And what I simply try to pull together is the fact that we now have a much greater concentration of military hardware, ships at sea, aircraft flying reconnaissance missions, together with changes in deployments by the Chinese fighters and bombers now into the actual Paracel Islands themselves in the north part of the South China Sea. Together with the changes in the declaratory postures of both sides. So what I do in this article is pull these threads together and say to both sides: be careful what you wish for; you’re playing with fire.

Fran Kelly
And when you talk about a heightened risk of armed conflict, are you talking about it being confined to a flare-up in one very specific location like the South China Sea?

Kevin Rudd
What I try to do is to go to where could a crisis actually emerge?

Fran Kelly
Yeah.

Kevin Rudd
If you go across the whole spectrum of conflicts, at the moment between China and the United States on a whole range of policies, all roads tend to lead back to the South China Sea because it’s effectively a ruleless environment at the moment. We have contested views of both territorial and maritime sovereignty. And that’s where my concern, Fran, is that we have a crisis, which comes about through a collision at sea, a collision in the air, and given the nationalist politics now in Washington because of the presidential election, but also the nationalist politics in China, as its own leadership go to their annual August retreat, Beidaihe, that it’s a very bad admixture which could result in a crisis for allies like Australia, which have treaty obligations with the United States through the ANZUS treaty. This is a deeply concerning set of developments because if the crisis erupts, what then does the Australian government do?

Fran Kelly
Well, what does it do, in your view, as a former Prime Minister? You know Australia tries to walk a very fine line between Washington and Beijing. That’s proved very difficult lately, but we are in the ANZUS alliance. Would we need to get involved militarily?

Kevin Rudd
Let me put it in these terms: Australia, like other countries dealing with China’s greater regional and international assertiveness, has had to adjust its strategy. We can have a separate debate, Fran, about what that strategy should be across the board in terms of the economy, technology, Huawei and the rest. But what I’ve sought to do in this article is go specifically to the possibility of a national security crisis. Now, if I was Prime Minister Morrison, what I’d be doing in the current circumstances is taking out the fire hose to those in Washington and to the extent that you can to those in Beijing, and simply make it as plain as possible through private diplomacy and public statements, the time has come for de-escalation because the obligations under the treaty, Fran, to go to your direct question, are relatively clear. What it says in one of the operational clauses of the ANZUS treaty of 1951 is that if the armed forces of either of the contracting parties, namely Australia or the United States, come under attack in the Pacific area, then the allies shall meet and consult to meet the common danger. That, therefore, puts us as an Australian ally directly into this frame. Hence my call for people to be very careful about the months which lie ahead.

Fran Kelly
In terms of ‘the time has come for de-escalation’, that message, do we see signs that that was the message very clearly being given by the Foreign Minister and the Defence Minister when they were in Washington last week? Marise Payne didn’t buy into Secretary of State Mike Pompeo’s very elevated rhetoric aimed at China, kept a distance there. And is it your view that this danger period will be over come the first Tuesday in November, the presidential election?

Kevin Rudd
I think when we’re looking at the danger of genuine armed conflict between China and the United States, that is now with us for a long period of time, whoever wins in November, including under the Democrats. What I’m more concerned about, however, is given that President Trump is in desperate domestic political circumstances at present in Washington, and that there will be a temptation to continue to elevate. And also domestic politics are playing their role in China itself where Xi Jinping is under real pressure because of the state of the Chinese economy because of COVID and a range of other factors. On Australia, you asked directly about what Marise Payne was doing in Washington. I think finally the penny dropped with Prime Minister Morrison and Foreign Minister Payne that the US presidential election campaign strategy was beginning to directly influence the content of rational national security policy. I think wisely they decided to step back slightly from that.

Fran Kelly
Former Prime Minister Kevin Rudd is our guest. Kevin Rudd, this morning Scott Morrison, the Prime Minister, is addressing the US Aspen Security Forum. He’s also talking about rising tensions in the Indo-Pacific. He’s pledged that Australia won’t be a bystander, quote, who will leave it to others in the region. He wants other like-minded democracies of the region to step up and act in some kind of alliance. Is that the best way to counter Beijing’s rising aggression and assertiveness?

Kevin Rudd
Well, Prime Minister Morrison seems to like making speeches but I’ve yet to see evidence of a coherent Australian national China strategy in terms of what the government is operationally doing as opposed to what it continues to talk about. So my concern on his specific proposal is: what are you doing, Mr Morrison? The talk of an alliance I think is misplaced. The talk of, shall we say, a common policy approach to the challenges which China now represents, that is an entirely appropriate course of action and something which we sought to do during our own period in government, but it’s a piece of advice which Morrison didn’t bother listening to himself when he unilaterally went out and called for an independent global investigation into the origins of COVID-19. Far wiser, if Morrison had taken his own counsel and brought together a coalition of the policy willing first and said: do we have a group of 10 robust states standing behind this proposal? And the reason for that, Fran, is that makes it much harder then for Beijing to unilaterally pick off individual countries.

Fran Kelly
Kevin Rudd, thank you very much for joining us on Breakfast.

Kevin Rudd
Good to be with you, Fran.

Fran Kelly
Former Prime Minister Kevin Rudd. He’s president of the Asia Society Policy Institute in New York, and the article that he’s just penned is in the Foreign Affairs journal.

The post ABC RN: South China Sea appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Private Code Review

Jessica has worked with some cunning developers in the past. To help cope with some of that “cunning”, they’ve recently gone out searching for new developers.

Now, there were some problems with their job description and salary offer: specifically, they were asking for developers who do too much and get paid too little. Which is how Jessica started working with Blair. Jessica was hoping to staff up her team with some mid-level or junior developers with a background in web development. Instead, she got Blair, a 13+ year veteran who had just started doing web development in the past six months.

Now, veteran or not, there is a code review process, so everything Blair does goes through code review. And that catches some… annoying habits, but every once in a while, something might sneak through. For example, he thinks static is a code smell, and thus removes the keyword any time he sees it. He’ll rewrite most of the code to work around it, except in one case the method was called from a cshtml template file, so no one discovered that it didn’t work until someone reported the error.

Blair also laments that with all the JavaScript and loosely typed languages, kids these days don’t understand the importance of separation of concerns and putting a barrier between interface and implementation. To prove his point, he submitted his MessageBL class. BL, of course, is to remind you that this class is “business logic”, which is easy to forget because it’s in an assembly called theappname.businesslogic.

Within that class, he implemented a bunch of data access methods, and this pair of methods lays out the pattern he followed.

public async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingModelAndUpdate(int id, Msg msg)
{
    return await GetLinkAndContentTrackingAndUpdate(id, msg);
}

/// <summary>
/// LinkTrackingUpdateLinks
/// returns: HasAnalyticsConfig, LinkTracks, ContentTracks
/// </summary>
/// <param name="id"></param>
/// <param name="msg"></param>
private async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingAndUpdate(int id, Msg msg)
{
  //snip
}

Here, we have one public method, and one private method. Their names, as you can see, are very similar. The public method does nothing but invoke the private method. This public method is, in fact, the only place the private method is invoked. The public method, in turn, is called only twice, from one controller.

This method also doesn’t ever need to be called, because the same block of code which constructs this object also fetches the relevant model objects. So instead of going back to the database with this thing, we could just use the already fetched objects.

But the real magic here is that Blair was veteran enough to know that he should put some “thorough” documentation using Visual Studio’s XML comment features. But he put the comments on the private method.

Jessica was not the one who reviewed this code, but adds:

I won’t blame the code reviewer for letting this through. There’s only so many times you can reject a peer review before you start questioning yourself. And sometimes, because Blair has been here so long, he checks code in without peer review as it’s a purely manual process.


,

LongNowTraditional Ecological Knowledge

Archaeologist Stefani Crabtree writes about her work to reconstruct Indigenous food and use networks for the National Park Service:

Traditional Ecological Knowledge gets embedded in the choices that people make when they consume, and how TEK can provide stability of an ecosystem. Among Martu, the use of fire for hunting and the knowledge of the habits of animals are enshrined in the Dreamtime stories passed inter-generationally; these Dreamtime stories have material effects on the food web, which were detected in our simulations. The ecosystem thrived with Martu; it was only through their removal that extinctions began to cascade through the system.

Kevin RuddForeign Affairs: Beware the Guns of August — in Asia

U.S. Navy photo by Mass Communication Specialist 2nd Class Taylor DiMartino

Published in Foreign Affairs on August 3, 2020.

In just a few short months, the U.S.-Chinese relationship seems to have returned to an earlier, more primal age. In China, Mao Zedong is once again celebrated for having boldly gone to war against the Americans in Korea, fighting them to a truce. In the United States, Richard Nixon is denounced for creating a global Frankenstein by introducing Communist China to the wider world. It is as if the previous half century of U.S.-Chinese relations never happened.

The saber rattling from both Beijing and Washington has become strident, uncompromising, and seemingly unending. The relationship lurches from crisis to crisis—from the closures of consulates to the most recent feats of Chinese “wolf warrior” diplomacy to calls by U.S. officials for the overthrow of the Chinese Communist Party (CCP). The speed and intensity of it all has desensitized even seasoned observers to the scale and significance of change in the high politics of the U.S.-Chinese relationship. Unmoored from the strategic assumptions of the previous 50 years but without the anchor of any mutually agreed framework to replace them, the world now finds itself at the most dangerous moment in the relationship since the Taiwan Strait crises of the 1950s.

The question now being asked, quietly but nervously, in capitals around the world is, where will this end? The once unthinkable outcome—actual armed conflict between the United States and China—now appears possible for the first time since the end of the Korean War. In other words, we are confronting the prospect of not just a new Cold War, but a hot one as well.

Click here to read the rest of the article at Foreign Affairs.

The post Foreign Affairs: Beware the Guns of August — in Asia appeared first on Kevin Rudd.

Worse Than FailureA Massive Leak

"Memory leaks are impossible in a garbage collected language!" is one of my favorite lies. It feels true, but it isn't. Sure, it's much harder to make them, and they're usually much easier to track down, but you can still create a memory leak. Most times, it's when you create objects, dump them into a data structure, and never empty that data structure. Usually, it's just a matter of finding out what object references are still being held. Usually.

A few months ago, I discovered a new variation on that theme. I was working on a C# application that was leaking memory faster than bad waterway engineering in the Imperial Valley.

A large, glowing, computer-controlled chandelier

I don't exactly work in the "enterprise" space anymore, though I still interact with corporate IT departments and get to see some serious internal WTFs. This is a chandelier we built for the Allegheny Health Network's Cancer Institute which recently opened in Pittsburgh. It's 15 meters tall, weighs about 450kg, and is broken up into 30 segments, each with hundreds of addressable LEDs in a grid. The software we were writing was built to make them blink pretty.

Each of those 30 segments is home to a single-board computer with their GPIO pins wired up to addressable LEDs. Each computer runs a UDP listener, and we blast them with packets containing RGB data, which they dump to the LEDs using a heavily tweaked version of LEDScape.

This is our standard approach to most of our lighting installations. We drop a Beaglebone onto a custom circuit board and let it drive the LEDs, then we have a render-box someplace which generates frame data and chops it up into UDP packets. Depending on the environment, we can drive anything from 30-120 frames per second this way (and probably faster, but that's rarely useful).

Apologies to the networking folks, but this works very well. Yes, we're blasting many megabytes of raw bitmap data across the network, but we're usually on our own dedicated network segment. We use UDP because, well, we don't care about the data that much. A dropped packet or an out of order packet isn't going to make too large a difference in most cases. We don't care if our destination Beaglebone is up or down, we just blast the packets out onto the network, and they get there reliably enough that the system works.

Now, normally, we do this from Python programs on Linux. For this particular installation, though, we have an interactive kiosk which provides details about cancer treatments and patient success stories, and lets the users interact with the chandelier in real time. We wanted to show them a 3D model of the chandelier on the screen, and show them an animation on the UI that was mirrored in the physical object. After considering our options, we decided this was a good case for Unity and C#. After a quick test of doing multitouch interactions, we also decided that we shouldn't deploy to Linux (Unity didn't really have good Linux multitouch support), so we would deploy a Windows kiosk. This meant we were doing most of our development on MacOS, but our final build would be for Windows.
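
For the curious, the shape of that per-frame send loop in C# looks roughly like the sketch below. This is an illustration only: the 30-segment count comes from the story, but the packet size, port, and addressing scheme here are made up, and the real code ran inside Unity with a LEDScape-specific packet layout.

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class FrameSender
{
    const int SegmentCount = 30;          // one single-board computer per chandelier segment
    const int BytesPerSegment = 3 * 512;  // hypothetical: 512 RGB LEDs per segment
    const int Port = 5005;                // hypothetical port

    static async Task Main()
    {
        using var client = new UdpClient();
        // In the real system, a render loop refills this buffer 30+ times per second.
        byte[] frame = new byte[SegmentCount * BytesPerSegment];

        for (int i = 0; i < SegmentCount; i++)
        {
            // Slice out this segment's RGB data and blast it at that segment's board.
            var packet = new byte[BytesPerSegment];
            Array.Copy(frame, i * BytesPerSegment, packet, 0, BytesPerSegment);
            string host = $"10.0.0.{10 + i}"; // hypothetical addressing scheme
            await client.SendAsync(packet, packet.Length, host, Port);
        }
    }
}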

Months go by. We worked on the software while building the physical pieces, which meant the actual testbed hardware wasn't available for most of the development cycle. Custom electronics were being refined and physical designs were changing as we iterated to the best possible outcome. This is normal for us, but it meant that we didn't start getting real end-to-end testing until very late in the process.

Once we started test-hanging chandelier pieces, we started basic developer testing. You know how it is: you push the run button, you test a feature, you push the stop button. Tweak the code, rinse, repeat. Eventually, though, we had about 2/3rds of the chandelier pieces plugged in, and started deploying to the kiosk computer, running Windows.

We left it running, and the next time someone walked by and decided to give the screen a tap… nothing happened. It was hung. Well, that could be anything. We rebooted and checked again, and everything seems fine, until a few minutes later, when it's hung… again. We checked the task manager- which hey, everything is really slow, and sure enough, RAM is full and the computer is so slow because it's constantly thrashing to disk.

We're only a few weeks before we actually have to ship this thing, and we've discovered a massive memory leak, and it's such a sudden discovery that it feels like the draining of Lake Agassiz. No problem, though, we go back to our dev machines, fire it up in the profiler, and start looking for the memory leak.

Which wasn't there. The memory leak only appeared in the Windows build, and never happened in the Mac or Linux builds. Clearly, there must be some different behavior, and it must be around object lifecycles. When you see a memory leak in a GCed language, you assume you're creating objects that the GC ends up thinking are in use. In the case of Unity, your assumption is that you're handing objects off to the game engine, and not telling it you're done with them. So that's what we checked, but we just couldn't find anything that fit the bill.

Well, we needed to create some relatively large arrays to use as framebuffers. Maybe that's where the problem lay? We keep digging through the traces, we added a bunch of profiling code, we spent days trying to dig into this memory leak…

… and then it just went away. Our memory leak just became a Heisenbug, our shipping deadline was even closer, and we officially knew less about what was going wrong than when we started. For bonus points, once this kiosk ships, it's not going to be connected to the Internet, so if we need to patch the software, someone is going to have to go onsite. And we aren't going to have a suitable test environment, because we're not exactly going to build two gigantic chandeliers.

The folks doing assembly had the whole chandelier built up, hanging in three sections (we don't have any 14m tall ceiling spaces), and all connected to the network for a smoke test. There wasn't any smoke, but they needed to do more work. Someone unplugged a third of the chandelier pieces from the network.

And the memory leak came back.

We use UDP because we don't care if our packet sends succeed or not. Frame-by-frame, we just want to dump the data on the network and hope for the best. On MacOS and Linux, our software usually uses a sender thread that just, at the end of the day, wraps around calls to the send system call. It's simple, it's dumb, and it works. We ignore errors.

In C#, though, we didn't do things exactly the same way. Instead, we used the .NET UdpClient object and its SendAsync method. We assumed that it would do roughly the same thing.

We were wrong.

await client.SendAsync(packet, packet.Length, hostip, port);

Async operations in C# use Tasks, which are like promises or futures in other environments. It lets .NET manage background threads without the developer worrying about the details. The await keyword is syntactic sugar which lets .NET know that it can hand off control to another thread while we wait. While we await here, we don't actually await the results of the await, because again: we don't care about the results of the operation. Just send the packet, hope for the best.

We don't care- but Windows does. After a load of investigation, what we discovered is that Windows would first try and resolve the IP address. Which, if a host was down, obviously it couldn't. But Windows was friendly, Windows was smart, and Windows wasn't going to let us down: it kept the Task open and kept trying to resolve the address. It held the task open for 3 seconds before finally deciding that it couldn't reach the host and errored out.

An error which, as I stated before, we were ignoring, because we didn't care.

Still, if you can count and have a vague sense of the linear passage of time, you can see where this is going. We had 30 hosts. We sent each of those hosts 30 packets every second. When one or more of those hosts were down, Windows would keep each of those packets "alive" for 3 seconds. By the time that one expired, 90 more had queued up behind it.

That was the source of our memory leak, and our Heisenbug. If every Beaglebone was up, we didn't have a memory leak. If only one of them was down, the leak was pretty slow. If ten or twenty were out, the leak was a waterfall.
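
One way to make that pile-up visible is to keep the Task handles around just long enough to count them. This is a hypothetical test harness, not our production code; whether the tasks actually linger depends on the Windows behavior described above, and the address and packet size are invented.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Sockets;
using System.Threading.Tasks;

class PileUpProbe
{
    static void Main()
    {
        var client = new UdpClient();
        var packet = new byte[1400];
        const string downHost = "198.51.100.23"; // stand-in for an unplugged Beaglebone
        const int port = 5005;

        // Fire three seconds' worth of frames without awaiting them, the way the
        // kiosk did, keeping the Task handles only so we can count them afterwards.
        var pending = new List<Task>();
        for (int i = 0; i < 90; i++)
            pending.Add(client.SendAsync(packet, packet.Length, downHost, port));

        // On the leaky Windows build, most of these stay open for about 3 seconds each.
        Console.WriteLine($"unfinished sends: {pending.Count(t => !t.IsCompleted)}");
    }
}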

I spent a lot of time reading up on Windows networking after this. Despite digging through the socket APIs, I honestly couldn't figure out how to defeat this behavior. I tried various timeout settings. I tried tracking each task myself and explicitly timing them out if they took longer than a few frames to send. I was never able to tell Windows, "just toss the packet and hope for the best".

Well, my co-worker was building health monitoring on the Beaglebones anyway. While the kiosk wasn't going to be on the Internet via a "real" Internet connection, we did have a cellular modem attached, which we could use to send health info, so getting pings that say "hey, one of the Beaglebones failed" is useful. So my co-worker hooked that into our network sending layer: don't send frames to Beaglebones which are down. Recheck the down Beaglebones every five minutes or so. Continue to hope for the best.
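
In spirit, the fix looked something like the sketch below. The names and the five-minute constant are mine; in the real system the up/down flag came from the separate health monitoring rather than from a failed send.

using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading.Tasks;

class HealthGatedSender
{
    readonly UdpClient client = new UdpClient();
    readonly Dictionary<string, bool> alive = new Dictionary<string, bool>();
    readonly Dictionary<string, DateTime> lastAttempt = new Dictionary<string, DateTime>();
    static readonly TimeSpan RecheckInterval = TimeSpan.FromMinutes(5);

    public async Task SendFrameAsync(string host, int port, byte[] packet)
    {
        // Skip hosts we believe are down, except for an occasional re-probe,
        // so Windows never gets the chance to queue up Tasks for them.
        if (alive.TryGetValue(host, out var isUp) && !isUp &&
            DateTime.UtcNow - lastAttempt[host] < RecheckInterval)
        {
            return;
        }

        lastAttempt[host] = DateTime.UtcNow;
        try
        {
            await client.SendAsync(packet, packet.Length, host, port);
            alive[host] = true;
        }
        catch (SocketException)
        {
            // Here a failed send stands in for the health monitor's "down" signal.
            alive[host] = false;
        }
    }
}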

This solution worked. We shipped. The device looks stunning, and as patients and guests come to use it, I hope they find some useful information, a little joy, and maybe some hope while playing with it. And while there may or may not be some ugly little hacks still lurking in that code, this was the one thing which made me say: WTF.


,

Krebs on SecurityRobocall Legal Advocate Leaks Customer Data

A California company that helps telemarketing firms avoid getting sued for violating a federal law that seeks to curb robocalls has leaked the phone numbers, email addresses and passwords of all its customers, as well as the mobile phone numbers and other data on people who have hired lawyers to go after telemarketers.

The Blacklist Alliance provides technologies and services to marketing firms concerned about lawsuits under the Telephone Consumer Protection Act (TCPA), a 1991 law that restricts the making of telemarketing calls through the use of automatic telephone dialing systems and artificial or prerecorded voice messages. The TCPA prohibits contact with consumers — even via text messages — unless the company has “prior express consent” to contact the consumer.

With statutory damages of $500 to $1,500 per call, the TCPA has prompted a flood of lawsuits over the years. From the telemarketer’s perspective, the TCPA can present something of a legal minefield in certain situations, such as when a phone number belonging to someone who’d previously given consent gets reassigned to another subscriber.

Enter The Blacklist Alliance, which promises to help marketers avoid TCPA legal snares set by “professional plaintiffs and class action attorneys seeking to cash in on the TCPA.” According to the Blacklist, one of the “dirty tricks” used by TCPA “frequent filers” includes “phone flipping,” or registering multiple prepaid cell phone numbers to receive calls intended for the person to whom a number was previously registered.

Lawyers representing TCPA claimants typically redact their clients’ personal information from legal filings to protect them from retaliation and to keep their contact information private. The Blacklist Alliance researches TCPA cases to uncover the phone numbers of plaintiffs and sells this data in the form of list-scrubbing services to telemarketers.

“TCPA predators operate like malware,” The Blacklist explains on its website. “Our Litigation Firewall isolates the infection and protects you from harm. Scrub against active plaintiffs, pre litigation complainers, active attorneys, attorney associates, and more. Use our robust API to seamlessly scrub these high-risk numbers from your outbound campaigns and inbound calls, or adjust your suppression settings to fit your individual requirements and appetite for risk.”

Unfortunately for The Blacklist’s paying customers and for people represented by attorneys filing TCPA lawsuits, the Blacklist’s own Web site until late last week leaked reams of data to anyone with a Web browser. Thousands of documents, emails, spreadsheets, images and the names tied to countless mobile phone numbers all could be viewed or downloaded without authentication from the domain theblacklist.click.

The directory also included all 388 Blacklist customer API keys, as well as each customer’s phone number, employer, username and password (scrambled with the relatively weak MD5 password hashing algorithm).

The leaked Blacklist customer database points to various companies you might expect to see using automated calling systems to generate business, including real estate and life insurance providers, credit repair companies and a long list of online advertising firms and individual digital marketing specialists.

The very first account in the leaked Blacklist user database corresponds to its CEO Seth Heyman, an attorney in southern California. Mr. Heyman did not respond to multiple requests for comment, although The Blacklist stopped leaking its database not long after that contact request.

Two other accounts marked as administrators were among the third and sixth registered users in the database; those correspond to two individuals at Riip Digital, a California-based email marketing concern that serves a diverse range of clients in the lead generation business, from debt relief and timeshare companies, to real estate firms and CBD vendors.

Riip Digital did not respond to requests for comment. But according to Spamhaus, an anti-spam group relied upon by many Internet service providers (ISPs) to block unsolicited junk email, the company has a storied history of so-called “snowshoe spamming,” which involves junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

The irony of this data leak is that marketers who constantly scrape the Web for consumer contact data may not realize the source of the information, and end up feeding it into automated systems that peddle dubious wares and services via automated phone calls and text messages. To the extent this data is used to generate sales leads that are then sold to others, such a leak could end up causing more legal problems for The Blacklist’s customers.

The Blacklist and their clients talk a lot about technologies that they say separate automated telephonic communications from dime-a-dozen robocalls, such as software that delivers recorded statements that are manually selected by a live agent. But for your average person, this is likely a distinction without a difference.

Robocalls are permitted for political candidates, but beyond that if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute for an incredibly low cost.

In fiscal year 2019, the FTC received 3.78 million complaints about robocalls. Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter all automated calls — particularly scam calls that spoof their real number. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider. In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Obviously, not all telemarketing is spammy or scammy. I have friends and relatives who’ve worked at non-profits that rely a great deal on fundraising over the phone. Nevertheless, readers who are fed up with telemarketing calls may find some catharsis in the Jolly Roger Telephone Company, which offers subscribers a choice of automated bots that keep telemarketers engaged for several minutes. The service lets subscribers choose which callers should get the bot treatment, and then records the result.

For my part, the volume of automated calls hitting my mobile number got so bad that I recently enabled a setting on my smart phone to simply send to voicemail all calls from numbers that aren’t already in my contacts list. This may not be a solution for everyone, but since then I haven’t received a single spammy jingle.

CryptogramBlackBerry Phone Cracked

Australia is reporting that a BlackBerry device has been cracked after five years:

An encrypted BlackBerry device that was cracked five years after it was first seized by police is poised to be the key piece of evidence in one of the state's longest-running drug importation investigations.

In April, new technology "capabilities" allowed authorities to probe the encrypted device....

No details about those capabilities.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 12)

Here’s part twelve of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureCodeSOD: A Unique Choice

There are many ways to mess up doing unique identifiers. It's a hard problem, and that's why we've sorta agreed on a few distinct ways to do it. First, we can just autonumber. Easy, but it doesn't always scale that well, especially in distributed systems. Second, we can use something like UUIDs: mix a few bits of real data in with a big pile of random data, and you can create a unique ID. Finally, there are some hashing-related options, where the data itself generates its ID.
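
For concreteness, here's what those three options look like in C#. This is just an illustration: the content-hash variant uses SHA-256 over the record's text, and in a real database the autonumber would be an IDENTITY column rather than a counter in code.

using System;
using System.Security.Cryptography;
using System.Text;

class IdStrategies
{
    static int counter;                         // 1. autonumber (normally the database's job)
    static int NextId() => ++counter;

    static Guid NewUuid() => Guid.NewGuid();    // 2. random UUID

    static string ContentHash(string content)   // 3. ID derived from the data itself
    {
        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(content));
        return BitConverter.ToString(hash).Replace("-", "");
    }

    static void Main()
    {
        Console.WriteLine(NextId());               // 1
        Console.WriteLine(NewUuid());              // e.g. 06fd18eb-8214-4431-...
        Console.WriteLine(ContentHash("Defects")); // same input, same ID, every time
    }
}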

Tiffanie was digging into some weird crashes in a database application, and discovered that their MODULES table couldn’t decide which approach was correct, so it opted for two: MODULE_ID, an autonumbered field, and MODULE_UUID, which, one would assume, held a UUID. There were also the requisite MODULE_NAME and similar fields. A quick scan of the table looked like:

MODULE_ID MODULE_NAME MODULE_UUID MODULE_DESC
0 Defects 8461aa9b-ba38-4201-a717-cee257b73af0 Defects
1 Test Plan 06fd18eb-8214-4431-aa66-e11ae2a6c9b3 Test Plan

Now, using both UUIDs and autonumbers is a bit suspicious, but there might be a good reason for that (the UUIDs might be used for tracking versions of installed modules, while the ID is the local database-reference for that, so the ID shouldn’t change ever, but the UUID might). Still, given that MODULE_NAME and MODULE_DESC both contain exactly the same information in every case, I suspect that this table was designed by the Department of Redundancy Department.

Still, that's hardly the worst sin you could commit. What would be really bad would be using the wrong datatype for a column. This is a SQL Server database, and so we can safely expect that the MODULE_ID is numeric, the MODULE_NAME and MODULE_DESC must be text, and clearly the MODULE_UUID field should be the UNIQUEIDENTIFIER type, right?

Well, let's look at one more row from this table:

MODULE_ID MODULE_NAME MODULE_UUID MODULE_DESC
11 Releases Releases does not have a UUID Releases

Oh, well. I think I have a hunch what was causing the problems. Sure enough, the program was expecting the UUID field to contain UUIDs, and was failing when a field contained something that couldn't be converted into a UUID.
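
If you're stuck with a text column like this one, the defensive move is to stop assuming and start parsing. A minimal sketch (the helper name is mine, and it treats the column's contents as untrusted strings):

using System;

class ModuleUuidGuard
{
    // Returns null instead of throwing when the "UUID" column holds something else.
    static Guid? TryGetUuid(string raw)
    {
        return Guid.TryParse(raw, out Guid uuid) ? uuid : (Guid?)null;
    }

    static void Main()
    {
        Console.WriteLine(TryGetUuid("8461aa9b-ba38-4201-a717-cee257b73af0")); // parses fine
        Console.WriteLine(TryGetUuid("Releases does not have a UUID") == null); // True
    }
}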


,

Krebs on SecurityThree Charged in July 15 Twitter Compromise

Three individuals have been charged for their alleged roles in the July 15 hack on Twitter, an incident that resulted in Twitter profiles for some of the world’s most recognizable celebrities, executives and public figures sending out tweets advertising a bitcoin scam.

Amazon CEO Jeff Bezos’s Twitter account on the afternoon of July 15.

Nima “Rolex” Fazeli, a 22-year-old from Orlando, Fla., was charged in a criminal complaint in Northern California with aiding and abetting intentional access to a protected computer.

Mason “Chaewon” Sheppard, a 19-year-old from Bognor Regis, U.K., also was charged in California with conspiracy to commit wire fraud, money laundering and unauthorized access to a computer.

A U.S. Justice Department statement on the matter does not name the third defendant charged in the case, saying juvenile proceedings in federal court are sealed to protect the identity of the youth. But an NBC News affiliate in Tampa reported today that authorities had arrested 17-year-old Graham Clark as the alleged mastermind of the hack.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

Wfla.com said Clark was hit with 30 felony charges, including organized fraud, communications fraud, one count of fraudulent use of personal information with over $100,000 or 30 or more victims, 10 counts of fraudulent use of personal information and one count of access to a computer or electronic device without authority. Clark’s arrest report is available here (PDF). A statement from prosecutors in Florida says Clark will be charged as an adult.

On Thursday, Twitter released more details about how the hack went down, saying the intruders “targeted a small number of employees through a phone spear phishing attack,” that “relies on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems.”

By targeting specific Twitter employees, the perpetrators were able to gain access to internal Twitter tools. From there, Twitter said, the attackers targeted 130 Twitter accounts, tweeting from 45 of them, accessing the direct messages of 36 accounts, and downloading the Twitter data of seven.

Among the accounts compromised were Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

The hacked Twitter accounts were made to send tweets suggesting they were giving away bitcoin, and that anyone who sent bitcoin to a specified account would be sent back double the amount they gave. All told, the bitcoin accounts associated with the scam received more than 400 transfers totaling more than $100,000.

Sheppard’s alleged alias Chaewon was mentioned twice in stories here since the July 15 incident. On July 16, KrebsOnSecurity wrote that just before the Twitter hack took place, a member of the social media account hacking forum OGUsers named Chaewon advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any twitter account.

On July 17, The New York Times ran a story that featured interviews with several people involved in the attack. The young men told The Times they weren’t responsible for the Twitter bitcoin scam and had only brokered the purchase of accounts from the Twitter hacker — who they referred to only as “Kirk.”

One of those interviewed by The Times used the alias “Ever So Anxious,” and said he was a 19-year-old from the U.K. In my follow-up story on July 22, it emerged that Ever So Anxious was in fact Chaewon.

The person who shared that information was the principal subject of my July 16 post, which followed clues from tweets sent by one of the accounts claimed during the Twitter compromise back to a 21-year-old from the U.K. who uses the nickname PlugWalkJoe.

That individual shared a series of screenshots showing he had been in communications with Chaewon/Ever So Anxious just prior to the Twitter hack, and had asked him to secure several desirable Twitter usernames from the Twitter hacker. He added that Chaewon/Ever So Anxious also was known as “Mason.”

The negotiations over highly-prized Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams. PlugWalkJoe is pictured here chatting with Ever So Anxious/Chaewon/Mason using his Discord username “Beyond Insane.”

On July 22, KrebsOnSecurity interviewed Mason/Chaewon/Ever So Anxious, who confirmed that PlugWalkJoe had indeed asked him to ask Kirk to change the profile picture and display name for a specific Twitter account on July 15. Mason/Chaewon/Ever So Anxious acknowledged that while he did act as a “middleman” between Kirk and others seeking to claim desirable Twitter usernames, he had nothing to do with the hijacking of the VIP Twitter accounts for the bitcoin scam that same day.

“Encountering Kirk was the worst mistake I’ve ever made due to the fact it has put me in issues I had nothing to do with,” he said. “If I knew Kirk was going to do what he did, or if even from the start if I knew he was a hacker posing as a rep I would not have wanted to be a middleman.”

Another individual who told The Times he worked with Ever So Anxious/Chaewon/Mason in communicating with Kirk said he went by the nickname “lol.” On July 22, KrebsOnSecurity identified lol as a young man who went to high school in Danville, Calif.

Federal investigators did not mention lol by his nickname or his real name, but the charging document against Sheppard says that on July 21 federal agents executed a search warrant at a residence in Northern California to question a juvenile who assisted Kirk and Chaewon in selling access to Twitter accounts. According to that document, the juvenile and Chaewon had discussed turning themselves in to authorities after the Twitter hack became publicly known.

CryptogramFriday Squid Blogging: Squid Proteins for a Better Face Mask

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramData and Goliath Book Placement

Notice the copy of Data and Goliath just behind the head of Maine Senator Angus King.

Screenshot of MSNBC interview with Angus King

This demonstrates the importance of a vibrant color and a large font.

Kevin RuddABC: Closing The Gap, AUSMIN & Public Health

E&OE TRANSCRIPT
TELEVISION INTERVIEW

ABC NEWS CHANNEL
AFTERNOON BRIEFING
31 JULY 2020

Patricia Karvelas
My next guest this afternoon is the former prime minister Kevin Rudd. He’s the man that delivered the historic Apology to the stolen generations and launched the original Close the Gap targets. Of course, yesterday, there was a big revamp of the Close the Gap so we thought it was a good idea to talk to the man originally responsible. Kevin Rudd, welcome.

Kevin Rudd
Good to be with you, Patricia.

Patricia Karvelas
Prime Minister Scott Morrison said there had been a failure to partner with Indigenous people to develop and deliver the 2008 targets. Is that something you regret?

Kevin Rudd
Oh, Prime Minister Morrison is always out to differentiate himself from what previous Labor governments have done. We worked closely with Indigenous leaders at the time through minister Jenny Macklin in framing those Closing the Gap targets. The bottom line is: we delivered the National Apology; we established a Closing the Gap framework, which we thought should be measurable; and on top of that, Patricia, what we also did was, we negotiated the first-ever commonwealth-state agreement in 2008-9 over the following 10-year period, which had Closing the Gap targets as the basis for the funding commitments by the commonwealth and the states. Those things have been sustained into the future. If the Indigenous leadership of Australia have decided that it’s time to refresh the targets then I support Pat Turner’s leadership and I support what Indigenous leaders have done.

Patricia Karvelas
She’s got a seat at the table though. I remember, you know, I covered it extensively at the time. But she has got a point and they have a point that they now have a seat at the table in a different partnership model than was delivered originally.

Kevin Rudd
Well, as you know, the realities back in 2007 were radically different. Back then there was a huge partisan fight over whether we should have a National Apology. We had people like Peter Dutton and Tony Abbott threatening not to participate in the Apology. So it was a highly partisan environment back then. So these things evolve over time. The Apology remains in place. The national statement each year on the anniversary of the Apology remains in place on progress in achieving Closing the Gap, our successes and our failures. But yes, I welcome any advance that’s been made. But here’s the rub, Patricia: why have there been challenges in delivering on previous Closing the Gap targets? In large part it’s because in the 2014 budget, the first year after the current Coalition government took office, as you know, someone who’s covered the area extensively, they pulled out half a billion dollars worth of funding. Now you’re not going to achieve targets, if simultaneously you gut the funding capacity to act in these areas. That’s what essentially happened over the last five-to-six years.

Patricia Karvelas
That’s absolutely part of the story. But is it all of the story? I mean, if you look at failure to deliver on these targets, it’s been very disappointing for Aboriginal Australians. But I think for Australians who wanted to see the gap closed because it’s the right thing to do; it’s the kind of country they want to live in. There are other reasons aren’t there, that the gap hasn’t been closed? Isn’t one of the reasons that it’s lacked Aboriginal authority and ownership, that it’s been a top-down approach?

Kevin Rudd
Well, I welcome the statement by Pat Turner in bringing Indigenous leadership to the table with these new targets for the future. I’m fully supportive of that. You’re looking at someone who has stood for a lifetime in empowerment of Indigenous organisations. As I said, realities change over time, and I welcome what will happen in the future. But the bottom line is, Patricia, with or without Indigenous leadership from the ground up, nothing will happen in the absence of physical resources as well. And that is a critical part of the equation as I think you’ve just agreed with me. And we can have as many notional targets as we like, but if on day two you, as it were, disembowel the funding arrangements, which is what happened under the current government, guess what: nothing happens. And I note that when these new targets were announced yesterday that Ken Wyatt and the Prime Minister were silent on the question of future funding commitments by the commonwealth. So our Closing the Gap targets, yes, they weren’t all realised. We were on track to achieve two of the six targets that we set. We made some progress on another two. And we were kind of flatlining when it came to the remaining two. But I make no apology for measurement, Patricia, because unless you measure things, guess what? They never happen. And so I’m all for actually an annual report card on success and failure. That’s why I did it in the first place, and without apology.

Patricia Karvelas
I want to move on just to another story that was big this week. What did you make of this week’s AUSMIN talks and the Foreign Minister’s emphasis on Australia taking what is an independent position here, particularly with our relationship with China, was that significant?

Kevin Rudd
Well, whacko! The Australian Foreign Minister says we should have an independent foreign policy! Hold the front page! I mean, for God’s sake.

Patricia Karvelas
Well, it was in the AUSMIN framework. I mean, it wasn’t just a statement to the media, do you think?

Kevin Rudd
Yeah, yeah, but you know, the function of the national government of Australia is to run the foreign policy of Australia, an independent foreign policy. And if the conservatives have recently discovered this principle is a good one, well, I welcome them to the table. That’s been our view for about the last hundred years that the Australian Labor Party has been engaged in the foreign policy debates of this country. But why did she say that? That’s the more important question, I think, Patricia. I think the Australian Government, both Morrison and the Foreign Minister looked at Secretary of State Pompeo’s speech at the Nixon Library a week or so ago when effectively he called for a Second Cold War against China and, within that, called for the overthrow of the Chinese Communist Party. Even for the current Australian conservative government, that looked like a bridge too far, and I think they basically took fright at what they were walking into. And my judgment is: it’s very important to separate out our national interests from those of the United States; secondly, understand what a combined allied strategy could and should be on China, as opposed to finding yourself wrapped up either in President Trump’s re-election strategy or Secretary of State Pompeo’s interest in securing the Republican nomination in 2024. These are quite separate political matters as opposed to national strategy.

Patricia Karvelas
Just on COVID, before I let you go, the Queensland Government has declared all of Greater Sydney as a COVID-19 hotspot and the state’s border will be closed to people travelling from that region from 1am on Saturday. Is that the right decision?

Kevin Rudd
Well, absolutely right. I mean, Premier Palaszczuk has faced like every premier, Daniel Andrews and Gladys Berejiklian, very hard public policy decisions. But what Premier Palaszczuk has done — and I’ve been here in Queensland for the last three and a half months now, observing this on a daily basis — is that she has taken the Chief Medical Officer’s advice day-in, day-out and acted accordingly. She’s come under enormous attack within Queensland, led initially by the Murdoch media, followed up by Frecklington, the leader of the LNP, saying ‘open the borders’. In fact, I think Frecklington called for the borders to be opened some 60 or 70 separate times, but to give Palaszczuk her due, she’s just stood her ground and said ‘my job is to give effect to the Chief Medical Officer’s advice, despite all the political clamour to the contrary’. So as she did then and as she does now, I think that’s right in terms of the public health and wellbeing of your average Queenslanders, including me.

Patricia Karvelas
Including you. And now you are very much a long-standing Queenslander being there for that long. Kevin Rudd, thank you so much for joining us this afternoon.

Kevin Rudd
Still from Queensland. Here to help. Bye.

Patricia Karvelas
Always. That’s the former prime minister Kevin Rudd, joining me to talk about yesterday’s Closing the Gap announcement, defending his government’s legacy there but also, of course, talking about the failure as well to deliver on those targets. But particularly pointed comments around the withdrawal of funding in relation to Indigenous affairs, which happened under the Abbott Government and which he says was responsible for the failure to deliver at the rate that was expected, and it’s been obviously a disappointing journey not quite as planned. Now, a whole bunch of new targets.

The post ABC: Closing The Gap, AUSMIN & Public Health appeared first on Kevin Rudd.

LongNowPredicting the Animals of the Future

Jim Cooke / Gizmodo

Gizmodo asks half a dozen natural historians to speculate on who is going to be doing what jobs on Earth after the people disappear. One of the streams that runs wide and deep through this series of fun thought experiments is how so many niches stay the same through catastrophic changes in the roster of Earth’s animals. Dinosaurs die out but giant predatory birds evolve to take their place; butterflies took over from (unrelated) dot-winged, nectar-sipping giant lacewing pollinator forebears; before orcas there were flippered ocean-going crocodiles, and there will probably be more one day.

In Annie Dillard’s Pulitzer Prize-winning Pilgrim at Tinker Creek, she writes about a vision in which she witnesses glaciers rolling back and forth “like blinds” over the Appalachian Mountains. In this Gizmodo piece, Alexis Mychajliw of the La Brea Tar Pits & Museum talks about how fluctuating sea levels connected island chains or made them, fusing and splitting populations in great oscillating cycles, shrinking some creatures and giantizing others. There’s something soothing in the view from orbit that paleontologists, like other deep-time mystics, possess, embody, and transmit: a sense for the clockwork of the cosmos and its orderliness, an appreciation for the powerful resilience of life even in the face of the ephemerality of life-forms.

While everybody interviewed here has obvious pet futures owing to their areas of interest, hold all of them superimposed together and you’ll get a clearer image of the secret teachings of biology…

(This article must have been inspired deeply by Dougal Dixon’s book After Man, but doesn’t mention him – perhaps a fair turn, given Dixon was accused of plagiarizing Wayne Barlowe for his follow-up, Man After Man.)

Worse Than FailureError'd: Please Reboot Faster, I Can't Wait Any Longer

"Saw this at a German gas station along the highway. The reboot screen at the pedestal just kept animating the hourglass," writes Robin G.

 

"Somewhere, I imagine there's a large number of children asking why their new bean bag is making them feel hot and numb," Will N. wrote.

 

Joel B. writes, "I came across these 'deals' on the Microsoft Canada store. Normally I'd question it, but based on my experiences with Windows, I bet, to them, the math checks out."

 

Kyle H. wrote, "Truly, nothing but the best quality strip_zeroes will be accepted."

 

"My Nan is going to be thrilled at the special discount on these masks!" Paul R. wrote.

 

Paul G. writes, "I know it seemed like the hours were passing more slowly, and thanks to Apple, I now know why."

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

MELinks July 2020

iMore has an insightful article about Apple’s transition to the ARM instruction set for new Mac desktops and laptops [1]. I’d still like to see them do something for the server side.

Umair Haque wrote an insightful article about How the American Idiot Made America Unlivable [2]. We are witnessing the destruction of a once great nation.

Chris Lamb wrote an interesting blog post about comedy shows with the laugh tracks edited out [3]. He then compares that to social media with the like count hidden, which is an interesting perspective. I’m not going to watch TV shows edited in that way (I’ve enjoyed BBT in spite of all the bad things about it) and I’m not going to try and hide like counts on social media. But it’s interesting to consider these things.

Cory Doctorow wrote an interesting Locus article suggesting that we could have full employment by a transition to renewable energy and methods for cleaning up the climate problems we are too late to prevent [4]. That seems plausible, but I think we should still get a Universal Basic Income.

The Thinking Shop has posters and decks of cards with logical fallacies and cognitive biases [5]. Every company should put some of these in meeting rooms. Also they have free PDFs to download and print your own posters.

gayhomophobe.com [6] is a site that lists powerful homophobic people who hurt GLBT people but then turned out to be gay. It’s presented in an amusing manner; people who hurt others deserve to be mocked.

Wired has an insightful article about the shutdown of Backpage [7]. The owners of Backpage weren’t nice people and they did some stupid things which seem bad (like editing posts to remove terms like “lolita”). But they also worked well with police to find criminals. The opposition to what Backpage were doing conflates sex trafficking, child prostitution, and legal consenting adult sex work. Taking down Backpage seems to be a bad thing for the victims of sex trafficking, for consenting adult sex workers, and for society in general.

Cloudflare has an interesting blog post about short-lived certificates for ssh access [8]. Instead of having users’ ssh keys stored on servers, each user has to connect to an SSO server to obtain a temporary key before connecting, so revoking an account is easy.
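
The reason revocation gets easy is that the credential itself carries a short expiry, so servers never need per-user key files or revocation lists: once the SSO stops issuing, access lapses on its own within minutes. The C# sketch below models only that expiry idea; the names are invented and it is not the OpenSSH certificate mechanism (real certificates are also signed by a CA that the servers trust):

using System;

// Toy model of a short-lived credential; NOT the OpenSSH certificate format.
var cred = Sso.Issue("jane");
Console.WriteLine(TargetServer.Accept(cred)); // True while the credential is still fresh

record TemporaryCredential(string User, DateTimeOffset NotAfter);

static class Sso
{
    // The SSO hands out a credential that is only valid for a few minutes after login.
    public static TemporaryCredential Issue(string user) =>
        new(user, DateTimeOffset.UtcNow.AddMinutes(5));
}

static class TargetServer
{
    // The server only checks expiry; "revoking" an account just means the SSO stops issuing.
    public static bool Accept(TemporaryCredential cred) =>
        DateTimeOffset.UtcNow <= cred.NotAfter;
}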

CryptogramFake Stories in Real News Sites

FireEye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they've posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.


Kevin RuddAIIA: The NT’s Global Opportunities and Challenges

REMARKS AT THE LAUNCH OF THE
NORTHERN TERRITORY BRANCH OF THE
AUSTRALIAN INSTITUTE OF INTERNATIONAL AFFAIRS

 

 

Image: POIS Tom Gibson/ADF

 

The post AIIA: The NT’s Global Opportunities and Challenges appeared first on Kevin Rudd.

Krebs on SecurityIs Your Chip Card Secure? Much Depends on Where You Bank

Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But a recent series of malware attacks on U.S.-based merchants suggest thieves are exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.

A chip-based credit card. Image: Wikipedia.

Traditional payment cards encode cardholder account data in plain text on a magnetic stripe, which can be read and recorded by skimming devices or malicious software surreptitiously installed in payment terminals. That data can then be encoded onto anything else with a magnetic stripe and used to place fraudulent transactions.

Newer, chip-based cards employ a technology known as EMV that encrypts the account data stored in the chip. The technology causes a unique encryption key — referred to as a token or “cryptogram” — to be generated each time the chip card interacts with a chip-capable payment terminal.

Virtually all chip-based cards still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This is largely for reasons of backward compatibility since many merchants — particularly those in the United States — still have not fully implemented chip card readers. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s EMV-enabled terminal has malfunctioned.

But there are important differences between the cardholder data stored on EMV chips versus magnetic stripes. One of those is a component in the chip known as an integrated circuit card verification value or “iCVV” for short — also known as a “dynamic CVV.”

The iCVV differs from the card verification value (CVV) stored on the physical magnetic stripe, and protects against the copying of magnetic-stripe data from the chip and the use of that data to create counterfeit magnetic stripe cards. Both the iCVV and CVV values are unrelated to the three-digit security code that is visibly printed on the back of a card, which is used mainly for e-commerce transactions or for card verification over the phone.

The appeal of the EMV approach is that even if a skimmer or malware manages to intercept the transaction information when a chip card is dipped, the data is only valid for that one transaction and should not allow thieves to conduct fraudulent payments with it going forward.

However, for EMV’s security protections to work, the back-end systems deployed by card-issuing financial institutions are supposed to check that when a chip card is dipped into a chip reader, only the iCVV is presented; and conversely, that only the CVV is presented when the card is swiped. If somehow these do not align for a given transaction type, the financial institution is supposed to decline the transaction.
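
In other words, the issuer-side rule is conceptually a one-line comparison: decline any transaction where the verification value presented doesn’t fit the entry mode. The sketch below is a minimal illustration of that idea only, not any bank’s actual authorization logic; the record type, field names, and values are all invented for the example:

// Hypothetical, heavily simplified issuer-side check; real authorization messages carry far more fields.
using System;

// A swiped transaction presenting the chip's iCVV (the cloning scenario) should be declined.
var cloned = new AuthRequest(EntryMode.MagStripeSwipe, PresentedVerificationValue: "123");
Console.WriteLine(IssuerChecks.ShouldDecline(cloned, expectedICvv: "123", expectedCvv: "456")); // True

enum EntryMode { ChipDip, MagStripeSwipe }

record AuthRequest(EntryMode EntryMode, string PresentedVerificationValue);

static class IssuerChecks
{
    // Decline when the verification value does not match the entry mode:
    // a dipped transaction must carry the iCVV, a swiped one the stripe CVV.
    public static bool ShouldDecline(AuthRequest req, string expectedICvv, string expectedCvv) =>
        req.EntryMode switch
        {
            EntryMode.ChipDip        => req.PresentedVerificationValue != expectedICvv,
            EntryMode.MagStripeSwipe => req.PresentedVerificationValue != expectedCvv,
            _                        => true, // unknown entry mode: fail closed
        };
}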

The trouble is that not all financial institutions have properly set up their systems this way. Unsurprisingly, thieves have known about this weakness for years. In 2017, I wrote about the increasing prevalence of “shimmers,” high-tech card skimming devices made to intercept data from chip card transactions.

A close-up of a shimmer found on a Canadian ATM. Source: RCMP.

More recently, researchers at Cyber R&D Labs published a paper detailing how they tested 11 chip card implementations from 10 different banks in Europe and the U.S. The researchers found they could harvest data from four of them and create cloned magnetic stripe cards that were successfully used to place transactions.

There are now strong indications the same method detailed by Cyber R&D Labs is being used by point-of-sale (POS) malware to capture EMV transaction data that can then be resold and used to fabricate magnetic stripe copies of chip-based cards.

Earlier this month, the world’s largest payment card network Visa released a security alert regarding a recent merchant compromise in which known POS malware families were apparently modified to target EMV chip-enabled POS terminals.

“The implementation of secure acceptance technology, such as EMV® Chip, significantly reduced the usability of the payment account data by threat actors as the available data only included personal account number (PAN), integrated circuit card verification value (iCVV) and expiration date,” Visa wrote. “Thus, provided iCVV is validated properly, the risk of counterfeit fraud was minimal. Additionally, many of the merchant locations employed point-to-point encryption (P2PE) which encrypted the PAN data and further reduced the risk to the payment accounts processed as EMV® Chip.”

Visa did not name the merchant in question, but something similar seems to have happened at Key Food Stores Co-Operative Inc., a supermarket chain in the northeastern United States. Key Food initially disclosed a card breach in March 2020, but two weeks ago updated its advisory to clarify that EMV transaction data also was intercepted.

“The POS devices at the store locations involved were EMV enabled,” Key Food explained. “For EMV transactions at these locations, we believe only the card number and expiration date would have been found by the malware (but not the cardholder name or internal verification code).”

While Key Food’s statement may be technically accurate, it glosses over the reality that the stolen EMV data could still be used by fraudsters to create magnetic stripe versions of EMV cards presented at the compromised store registers in cases where the card-issuing bank hadn’t implemented EMV correctly.

Earlier today, fraud intelligence firm Gemini Advisory released a blog post with more information on recent merchant compromises — including Key Food — in which EMV transaction data was stolen and ended up for sale in underground shops that cater to card thieves.

“The payment cards stolen during this breach were offered for sale in the dark web,” Gemini explained. “Shortly after discovering this breach, several financial institutions confirmed that the cards compromised in this breach were all processed as EMV and did not rely on the magstripe as a fallback.”

Gemini says it has verified that another recent breach — at a liquor store in Georgia — also resulted in compromised EMV transaction data showing up for sale at dark web stores that sell stolen card data. As both Gemini and Visa have noted, in both cases proper iCVV verification from banks should render this intercepted EMV data useless to crooks.

Gemini determined that due to the sheer number of stores affected, it’s extremely unlikely the thieves involved in these breaches intercepted the EMV data using physically installed EMV card shimmers.

“Given the extreme impracticality of this tactic, they likely used a different technique to remotely breach POS systems to collect enough EMV data to perform EMV-Bypass Cloning,” the company wrote.

Stas Alforov, Gemini’s director of research and development, said financial institutions that aren’t performing these checks risk losing the ability to notice when those cards are used for fraud.

That’s because many banks that have issued chip-based cards may assume that as long as those cards are used for chip transactions, there is virtually no risk that the cards will be cloned and sold in the underground. Hence, when these institutions are looking for patterns in fraudulent transactions to determine which merchants might be compromised by POS malware, they may completely discount any chip-based payments and focus only on those merchants at which a customer has swiped their card.

“The card networks are catching on to the fact that there’s a lot more EMV-based breaches happening right now,” Alforov said. “The larger card issuers like Chase or Bank of America are indeed checking [for a mismatch between the iCVV and CVV], and will kick back transactions that don’t match. But that is clearly not the case with some smaller institutions.”

For better or worse, we don’t know which financial institutions have failed to properly implement the EMV standard. That’s why it always pays to keep a close eye on your monthly statements, and report any unauthorized transactions immediately. If your institution lets you receive transaction alerts via text message, this can be a near real-time way to keep an eye out for such activity.

CryptogramImages in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye, and using that information to establish a location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

LongNowThe Digital Librarian as Essential Worker

Michelle Swanson, an Oregon-based educator and educational consultant, has written a blog post on the Internet Archive on the increased importance of digital librarians during the pandemic:

With public library buildings closed due to the global pandemic, teachers, students, and lovers of books everywhere have increasingly turned to online resources for access to information. But as anyone who has ever turned up 2.3 million (mostly unrelated) results from a Google search knows, skillfully navigating the Internet is not as easy as it seems. This is especially true when conducting serious research that requires finding and reviewing older books, journals and other sources that may be out of print or otherwise inaccessible.

Enter the digital librarian.

Michelle Swanson, “Digital Librarians – Now More Essential Than Ever” from the Internet Archive.

Kevin Kelly writes (in New Rules for the New Economy and in The Inevitable) about how an information economy flips the relative valuation of questions and answers — how search makes useless answers nearly free and useful questions even more precious than before, and knowing how to reliably produce useful questions even more precious still.

But much of our knowledge and outboard memory is still resistant to or incompatible with web search algorithms — databases spread across both analog and digital, with unindexed objects or idiosyncratic cataloging systems. Just as having map directions on your phone does not outdo a local guide, it helps to have people intimate with a library who can navigate the weird specifics. And just as scientific illustrators still exist to mostly leave out the irrelevant and make a paper clear as day (which cameras cannot do, as of 02020), a librarian is a sharp instrument that cuts straight through the extraneous info to what’s important.

Knowing what to enter in a search is one thing; knowing when it won’t come up in search and where to look amidst an analog collection is another skill entirely. Both are necessary at a time when libraries cannot receive (as many) scholars in the flesh, and what Penn State Prof Rich Doyle calls the “infoquake” online — the too-much-all-at-once-ness of it all — demands an ever-sharper reason just to stay afloat.

Learn More

  • Watch Internet Archive founder Brewster Kahle’s 02011 Long Now talk, “Universal Access to All Knowledge.”

Worse Than FailureCodeSOD: A Variation on Nulls

Submitter “NotAThingThatHappens” stumbled across a “unique” way to check for nulls in C#.

Now, there are already a few perfectly good ways to check for nulls. variable is null, for example, or use nullable types specifically. But “NotAThingThatHappens” found this approach:

if(((object)someObjThatMayBeNull) is var _)
{
    //object is null, react somehow
} 
else
{ 
  UseTheObjectInAMethod(someObjThatMayBeNull);
}

What I hate most about this is how cleverly it exploits the C# syntax to work.

Normally, the _ is a discard. It’s meant to be used for things like tuple unpacking, or in cases where you have an out parameter but don’t actually care about the output- foo(out _) just discards the output data.

But _ is also a perfectly valid identifier. So var _ creates a variable _, and the type of that variable is inferred from context- in this case, whatever type it’s being compared against in someObjThatMayBeNull. This variable is scoped to the if block, so we don’t have to worry about it leaking into our namespace, but since it’s never initialized, it’s going to choose the appropriate default value for its type- and for reference types, that default is null. By casting explicitly to object, we guarantee that our type is a reference type, so this makes sure that we don’t get weird behavior on value types, like integers.

So really, this is just an awkward way of saying someObjThatMayBeNull is null.
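
For comparison, the conventional check the article alludes to is just this (a minimal sketch reusing the article’s own identifiers; UseTheObjectInAMethod stands in for whatever the real code did):

if (someObjThatMayBeNull is null)
{
    // object is null, react somehow
}
else
{
    UseTheObjectInAMethod(someObjThatMayBeNull);
}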

NotAThingThatHappens adds:

The code never made it to production… but I was surprised that the compiler allowed this.
It’s stupid, but it WORKS!

It’s definitely stupid, it definitely works, I’m definitely glad it’s not in your codebase.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Sam VargheseHistory lessons at a late stage of life

In 1987, I got a job in Dubai, to work for a newspaper named Khaleej (Gulf) Times. I was chosen because the interviewer was a jolly Briton who came down to Bombay to do the interview on 12 June.

Malcolm Payne, the first editor of the newspaper that had been started in 1978 by Iranian brothers named Galadari, told me that he had always wanted to come and pick some people to work at the paper. By then he had been pushed out of the editorship by the politics of both Pakistani and Indian journalists who worked there.

For some strange reason, he took a liking to me. At the end of about 45 minutes of what was a much more robust conversation than I had ever experienced in earlier job interviews, which were normally tense affairs, Payne told me, “You’re a good bugger, Samuel. I’ll see you in Dubai.”

I took it with a pinch of salt. Anyway, I reckoned that I would know in a matter of months whether he was pulling my leg or not. I was more focused on my upcoming wedding, which was to be scheduled shortly.

But, Payne turned out to be a man of his word. In September, I got a telegram from Dubai asking me to send copies of my passport in order that a visa could be obtained for me to work in Dubai. I had mixed emotions: on the one hand, I was happy that a chance to get out of the grinding poverty I lived in had presented itself. At the same time, I was worried about leaving my sickly mother in India; by then, she had been a widow for a few months and I was her only son.

When my mother-in-law to be heard about the job opportunity, she insisted that the wedding should be held before I left for Dubai. Probably she thought that once I went to the Persian Gulf, I would begin to look for another woman.

The wedding was duly fixed for 19 October and I was to leave for Dubai on 3 November.

After I landed in Dubai, I learnt about the tension that exists between most Indians and Pakistanis as a result of the partition of the subcontinent in 1947. Pakistanis are bitter because they feel that they were forced to leave for a country that had turned out to be a basket case, subsisting only because of aid from the US, and Indians felt that the Pakistanis had been the ones to force Britain, then the colonial ruler, to split the country.

Never did this enmity come to the fore more than when India and Pakistan sent their cricket teams to the UAE — Dubai is part of this country — to play in a tournament organised there by some businessman from Sharjah.

Of course, the whole raison d’etre for the tournament was the Indo-Pakistan enmity; pitting teams that had a history of this sort against each other was like staging a proxy war. What’s more, there were both expatriate Indians and Pakistanis in large numbers waiting eagerly to buy tickets and pour into what was literally a coliseum.
The other teams who were invited — sometimes there was a three-way contest, at others a four-way fight — were just there to make up the numbers.

And the organisers always prayed for an India-Pakistan final.

A year before I arrived in Dubai, a Pakistani batsman known as Javed Miandad had taken his team to victory by hitting a six off the last ball; the contests were limited to 50 overs a side. He was showered with gifts by rich Pakistanis and one even gifted him some land. Such was the euphoria a victory in the former desert generated.

Having been born and raised in Sri Lanka, I knew nothing of the history of India. My parents did not clue me in either. I learnt all about the grisly history of the subcontinent after I landed in Dubai.

That enmity resulted in several other incidents worth telling, which I shall relate soon.

,

Krebs on SecurityHere’s Why Credit Card Fraud is Still a Thing

Most of the civilized world years ago shifted to requiring computer chips in payment cards that make it far more expensive and difficult for thieves to clone and use them for fraud. One notable exception is the United States, which is still lurching toward this goal. Here’s a look at the havoc that lag has wrought, as seen through the purchasing patterns at one of the underground’s biggest stolen card shops that was hacked last year.

In October 2019, someone hacked BriansClub, a popular stolen card bazaar that uses this author’s likeness and name in its marketing. Whoever compromised the shop siphoned data on millions of card accounts that were acquired over four years through various illicit means from legitimate, hacked businesses around the globe — but mostly from U.S. merchants. That database was leaked to KrebsOnSecurity, which in turn shared it with multiple sources that help fight payment card fraud.

An ad for BriansClub has been using my name and likeness for years to peddle millions of stolen credit cards.

Among the recipients was Damon McCoy, an associate professor at New York University’s Tandon School of Engineering [full disclosure: NYU has been a longtime advertiser on this blog]. McCoy’s work in probing the credit card systems used by some of the world’s biggest purveyors of junk email greatly enriched the data that informed my 2014 book Spam Nation, and I wanted to make sure he and his colleagues had a crack at the BriansClub data as well.

McCoy and fellow NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

“What surprised me most was there are still a lot of people swiping their cards for transactions here,” McCoy said.

In 2015, the major credit card associations instituted new rules that made it riskier and potentially more expensive for U.S. merchants to continue allowing customers to swipe the stripe instead of dip the chip. Complicating this transition was the fact that many card-issuing U.S. banks took years to replace their customer card stocks with chip-enabled cards, and countless retailers dragged their feet in updating their payment terminals to accept chip-based cards.

Indeed, three years later the U.S. Federal Reserve estimated (PDF) that 43.3 percent of in-person card payments were still being processed by reading the magnetic stripe instead of the chip. This might not have been such a big deal if payment terminals at many of those merchants weren’t also compromised with malicious software that copied the data when customers swiped their cards.

Following the 2015 liability shift, more than 84 percent of the non-chip cards advertised by BriansClub were sold, versus just 35 percent of chip-based cards during the same time period.

“All cards without a chip were in much higher demand,” McCoy said.

Perhaps surprisingly, McCoy and his fellow NYU researchers found BriansClub customers purchased only 40% of its overall inventory. But what they did buy supports the notion that crooks generally gravitate toward cards issued by financial institutions that are perceived as having fewer or more lax protections against fraud.

Source: NYU.

While the top 10 largest card issuers in the United States accounted for nearly half of the accounts put up for sale at BriansClub, only 32 percent of those accounts were sold — and at roughly half the median price of those issued by small- and medium-sized institutions.

In contrast, more than half of the stolen cards issued by small and medium-sized institutions were purchased from the fraud shop. This was true even though by the end of 2018, 91 percent of cards for sale from medium-sized institutions were chip-based, and 89 percent from smaller banks and credit unions. Nearly all cards issued by the top ten largest U.S. card issuers (98 percent) were chip-enabled by that time.

REGION LOCK

The researchers found BriansClub customers strongly preferred cards issued by financial institutions in specific regions of the United States, specifically Colorado, Nevada, and South Carolina.

“For whatever reason, those regions were perceived as having lower anti-fraud systems or those that were not as effective,” McCoy said.

Cards compromised from merchants in South Carolina were in especially high demand, with fraudsters willing to spend twice as much on those cards per capita than any other state — roughly $1 per resident.

That sales trend also was reflected in the support tickets filed by BriansClub customers, who frequently were informed that cards tied to the southeastern United States were less likely to be restricted for use outside of the region.

Image: NYU.

McCoy said the lack of region locking also made stolen cards issued by banks in China something of a hot commodity, even though these cards demanded much higher prices (often more than $100 per account): The NYU researchers found virtually all available Chinese cards were sold soon after they were put up for sale. Ditto for the relatively few corporate and business cards for sale.

A lack of region locks may also have caused card thieves to gravitate toward buying up as many cards as they could from USAA, a savings bank that caters to active and former military service members and their immediate families. More than 83 percent of the available USAA cards were sold between 2015 and 2019, the researchers found.

Although Visa cards made up more than half of accounts put up for sale (12.1 million), just 36 percent were sold. MasterCards were the second most-plentiful (3.72 million), and yet more than 54 percent of them sold.

American Express and Discover, which unlike Visa and MasterCard are so-called “closed loop” networks that do not rely on third-party financial institutions to issue cards and manage fraud on them, saw 28.8 percent and 33 percent of their stolen cards purchased, respectively.

PREPAIDS

Some people concerned about the scourge of debit and credit card fraud opt to purchase prepaid cards, which generally enjoy the same cardholder protections against fraudulent transactions. But the NYU team found compromised prepaid accounts were purchased at a far higher rate than regular debit and credit cards.

Several factors may be at play here. For starters, relatively few prepaid cards for sale were chip-based. McCoy said there was some data to suggest many of these prepaids were issued to people collecting government benefits such as unemployment and food assistance. Specifically, the “service code” information associated with these prepaid cards indicated that many were restricted for use at places like liquor stores and casinos.

“This was a pretty sad finding, because if you don’t have a bank this is probably how you get your wages,” McCoy said. “These cards were disproportionately targeted. The unfortunate and striking thing was the sheer demand and lack of [chip] support for prepaid cards. Also, these cards were likely more attractive to fraudsters because [the issuer’s] anti-fraud countermeasures weren’t up to par, possibly because they know less about their customers and their typical purchase history.”

PROFITS

The NYU researchers estimate BriansClub pulled in approximately $24 million in profit over four years. They calculated this number by taking the more than $100 million in total sales and subtracting commissions paid to card thieves who supplied the shop with fresh goods, as well as the price of cards that were refunded to buyers. BriansClub, like many other stolen card shops, offers refunds on certain purchases if the buyer can demonstrate the cards were no longer active at the time of purchase.

On average, BriansClub paid suppliers commissions ranging from 50-60 percent of the total value of the cards sold. Card-not-present (CNP) accounts — or those stolen from online retailers and purchased by fraudsters principally for use in defrauding other online merchants — fetched a much steeper supplier commission of 80 percent, but mainly because these cards were in such high demand and low supply.

The NYU team found card-not-present sales accounted for just 7 percent of all revenue, even though card thieves clearly now have much higher incentives to target online merchants.

A story here last year observed that this exact supply and demand tug-of-war had helped to significantly increase prices for card-not-present accounts across multiple stolen credit card shops in the underground. Not long ago, the price of CNP accounts was less than half that of card-present accounts. These days, those prices are roughly equivalent.

One likely reason for that shift is that the United States is the last of the G20 nations to fully transition to more secure chip-based payment cards. Every other country that made the chip card transition long ago saw the same dynamic: as they made it harder for thieves to counterfeit physical cards, the fraud didn’t go away but instead shifted to online merchants.

The same progression is happening now in the United States, only the demand for stolen CNP data still far outstrips supply. Which might explain why we’ve seen such a huge uptick over the past few years in e-commerce sites getting hacked.

“Everyone points to this displacement effect from card-present to card-not-present fraud,” McCoy said. “But if the supply isn’t there, there’s only so much room for that displacement to occur.”

No doubt the epidemic of card fraud has benefited mightily from hacked retail chains — particularly restaurants — that still allow customers to swipe chip-based cards. But as we’ll see in a post to be published tomorrow, new research suggests thieves are starting to deploy ingenious methods for converting card data from certain compromised chip-based transactions into physical counterfeit cards.

A copy of the NYU research paper is available here (PDF).

LongNowThe Unexpected Influence of Cosmic Rays on DNA

Samuel Velasco/Quanta Magazine

Living in a world with multiple spatiotemporal scales, the very small and fast can often drive the future of the very large and slow: Microscopic genetic mutations change macroscopic anatomy. Undetectably small variations in local climate change global weather patterns (the infamous “butterfly effect”).

And now, one more example comes from a new theory about why DNA on modern Earth only twists in one of two possible directions:

Our spirals might all trace back to an unexpected influence from cosmic rays. Cosmic ray showers, like DNA strands, have handedness. Physical events typically break right as often as they break left, but some of the particles in cosmic ray showers tap into one of nature’s rare exceptions. When the high energy protons in cosmic rays slam into the atmosphere, they produce particles called pions, and the rapid decay of pions is governed by the weak force — the only fundamental force with a known mirror asymmetry.

Millions if not billions of cosmic ray strikes could be required to yield one additional free electron in a [right-handed] strand, depending on the event’s energy. But if those electrons changed letters in the organisms’ genetic codes, those tweaks may have added up. Over perhaps a million years…cosmic rays might have accelerated the evolution of our earliest ancestors, letting them out-compete their [left-handed] rivals.

In other words, properties of the subatomic world seem to have conferred a benefit to the potential for innovation among right-handed nucleic acids, and a “talent” for generating useful copying errors led to the entrenched monopoly we observe today.

But that isn’t the whole story. Read more at Quanta.

Worse Than FailureCodeSOD: True if Documented

“Comments are important,” is one of those good rules that often gets misapplied. No one wants to see a method called addOneToSet and a comment that tells us Adds one item to the set.

Still, a lot of our IDEs and other tooling encourage these kinds of comments. You drop a /// or /* before a method or member, and you get an autostubbed out comment that gives you a passable, if useless, comment.

Scott Curtis thinks that is where this particular comment originated, but over time it decayed into incoherent nonsense:

///<summary> True to use quote value </summary>
///
///<value> True if false, false if not </value>
private readonly bool _mUseQuoteValue;

True if false, false if not. Or, worded a little differently, documentation makes code less clear, clearer if not.
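
For what it’s worth, a version of the comment that actually documents the field would read something like the following (a guess at intent, since the article never says what the flag controls):

///<summary> True to wrap values in quotes when building output. </summary>
///
///<value> True if values should be quoted; otherwise, false. </value>
private readonly bool _mUseQuoteValue;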

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Rondam RamblingsThe insidious problem of racism

Take a moment to seriously think about what is wrong with racism.  If you're like most people, your answer will probably be that racism is bad because it's a form of prejudice, and prejudice is bad.  This is not wrong, but it misses a much deeper, more insidious issue.  The real problem with racism is that it can be (and usually is) rationalized and those rationalizations can turn into

CryptogramSurvey of Supply Chain Attacks

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

  1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain including from Russia, China, North Korea, and Iran as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains with great effect as the majority of cases surveyed here did, or could have, resulted in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

  2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

  3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

  4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples. Attacks targeted some of the most widely used open source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

  5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple's App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm's Android attack, and XcodeGhost.

Recommendations included in the report. The entirely open and freely available dataset is here.

Worse Than FailureCodeSOD: Underscoring the Comma

Andrea writes to confess some sins, though I'm not sure who the real sinner is. To understand the sins, we have to talk a little bit about C/C++ macros.

Andrea was working on some software to control a dot-matrix display from an embedded device. Send an array of bytes to it, and the correct bits on the display light up. Now, if you're building something like this, you want an easy way to "remember" the proper sequences. So you might want to do something like:

uint8_t glyph0[] = {'0', 0x0E, 0x11, 0x0E, 0};
uint8_t glyph1[] = {'1', 0x09, 0x1F, 0x01, 0};

And so on. And heck, you might want to go so far as to have a lookup array, so you might have a const uint8_t *const glyphs[] = {glyph0, glyph1…}. Now, you could just hardcode those definitions, but wouldn't it be cool to use macros to automate that a bit, as your definitions might change?

Andrea went with a style known as X macros, which let you specify one pattern of data which can be re-used by redefining X. So, for example, I could do something like:

#define MY_ITEMS \
  X(a, 5) \
  X(b, 6) \
  X(c, 7)
  
#define X(name, value) int name = value;
MY_ITEMS
#undef X

This would generate:

int a = 5;
int b = 6;
int c = 7;

But I could re-use this, later:

#define X(name, data) name, 
int items[] = { MY_ITEMS nullptr};
#undef X

This would generate, in theory, something like: int items[] = {a,b,c,nullptr};

We are recycling the MY_ITEMS macro, and we're changing its behavior by altering the X macro that it invokes. This can, in practice, result in much more readable and maintainable code, especially code where you need to have parallel lists of items. It's also one of those things that the first time you see it, it's… surprising.

Now, this is all great, and it means that Andrea could potentially have a nice little macro system for defining arrays of bytes and a lookup array pointing to those arrays. There's just one problem.

Specifically, if you tried to write a macro like this:

#define GLYPH_DEFS \
  X(glyph0, {'0', 0x0E, 0x11, 0x0E, 0})

It wouldn't work. It doesn't matter what you actually define X to do, the preprocessor isn't aware of the C/C++ syntax. So it doesn't say "oh, that second comma is inside of an array initalizer, I'll ignore it", it says, "Oh, they're trying to pass more than two parameters to the macro X."

So, you need some way to define an array initializer that doesn't use commas. If macros got you into this situation, macros can get you right back out. Here is Andrea's solution:

#define _ ,  // Sorry.
#define GLYPH_DEFS \
	X(glyph0, { '0' _ 0x0E _ 0x11 _ 0x0E _ 0 } ) \
	X(glyph1, { '1' _ 0x09 _ 0x1F _ 0x01 _ 0 }) \
	X(glyph2, { '2' _ 0x13 _ 0x15 _ 0x09 _ 0 }) \
	X(glyph3, { '3' _ 0x15 _ 0x15 _ 0x0A _ 0 }) \
	X(glyph4, { '4' _ 0x18 _ 0x04 _ 0x1F _ 0 }) \
	X(glyph5, { '5' _ 0x1D _ 0x15 _ 0x12 _ 0 }) \
	X(glyph6, { '6' _ 0x0E _ 0x15 _ 0x03 _ 0 }) \
	X(glyph7, { '7' _ 0x10 _ 0x13 _ 0x0C _ 0 }) \
	X(glyph8, { '8' _ 0x0A _ 0x15 _ 0x0A _ 0 }) \
	X(glyph9, { '9' _ 0x08 _ 0x14 _ 0x0F _ 0 }) \
	X(glyphA, { 'A' _ 0x0F _ 0x14 _ 0x0F _ 0 }) \
	X(glyphB, { 'B' _ 0x1F _ 0x15 _ 0x0A _ 0 }) \
	X(glyphC, { 'C' _ 0x0E _ 0x11 _ 0x11 _ 0 }) \
	X(glyphD, { 'D' _ 0x1F _ 0x11 _ 0x0E _ 0 }) \
	X(glyphE, { 'E' _ 0x1F _ 0x15 _ 0x15 _ 0 }) \
	X(glyphF, { 'F' _ 0x1F _ 0x14 _ 0x14 _ 0 }) \

#define X(name, data) const uint8_t name [] = data ;
GLYPH_DEFS
#undef X

#define X(name, data) name _
const uint8_t *const glyphs[] = { GLYPH_DEFS nullptr };
#undef X
#undef _

So, when processing the X macro, we pass it a pile of _s, which aren’t commas, so it doesn’t complain. Then we expand the _ macro and voila: we have syntactically valid array initializers. If Andrea ever changes the list of glyphs, adding or removing any, the macro will automatically sync the declaration of the individual arrays and their pointers over in the glyphs array.

Andrea adds:

The scope of this definition is limited to this data structure, in which the X macros are used, and it is #undef'd just after that. However, with all the stories of #define abuse on this site, I feel I still need to atone.
The testing sketch works perfectly.

Honestly, all sins are forgiven. There isn't a true WTF here, beyond "the C preprocessor is TRWTF". It's a weird, clever hack, and it's interesting to see this technique in use.

That said, as you might note: this was a testing sketch, just to prove a concept. Instead of getting clever with macros, your disposable testing code should probably just get to proving your concept as quickly as possible. You can worry about code maintainability later. So, if there are any sins by Andrea, it's the sin of overengineering a disposable test program.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityBusiness ID Theft Soars Amid COVID Closures

Identity thieves who specialize in running up unauthorized lines of credit in the names of small businesses are having a field day with all of the closures and economic uncertainty wrought by the COVID-19 pandemic, KrebsOnSecurity has learned. This story is about the victims of a particularly aggressive business ID theft ring that’s spent years targeting small businesses across the country and is now pivoting toward using that access for pandemic assistance loans and unemployment benefits.

Most consumers are likely aware of the threat from identity theft, which occurs when crooks apply for new lines of credit in your name. But the same crime can be far more costly and damaging when thieves target small businesses. Unfortunately, far too many entrepreneurs are simply unaware of the threat or don’t know how to be watchful for it.

What’s more, with so many small enterprises going out of business or sitting dormant during the COVID-19 pandemic, organized fraud rings have an unusually rich pool of targets to choose from.

Short Hills, N.J.-based Dun & Bradstreet [NYSE:DNB] is a data analytics company that acts as a kind of de facto credit bureau for companies: When a business owner wants to open a new line of credit, creditors typically check with Dun & Bradstreet to gauge the business’s history and trustworthiness.

In 2019, Dun & Bradstreet saw more than a 100 percent increase in business identity theft. For 2020, the company estimates an overall 258 percent spike in the crime. Dun & Bradstreet said that so far this year it has received over 4,700 tips and leads where business identity theft or malfeasance are suspected.

“The ferocity of cyber criminals to take advantage of COVID-19 uncertainties by preying on small businesses is disturbing,” said Andrew LaMarca, who leads the global high-risk and fraud team at Dun & Bradstreet.

For the past several months, Milwaukee, Wisc.-based cyber intelligence firm Hold Security has been monitoring the communications between and among a business ID theft gang apparently operating in Georgia and Florida but targeting businesses throughout the United States. That surveillance has helped to paint a detailed picture of how business ID thieves operate, as well as the tricks they use to gain credit in a company’s name.

Hold Security founder Alex Holden said the group appears to target both active and dormant or inactive small businesses. The gang typically will start by looking up the business ownership records at the Secretary of State website that corresponds to the company’s state of incorporation. From there, they identify the officers and owners of the company, and acquire their Social Security and Tax ID numbers from the dark web and other sources online.

To prove ownership over the hijacked firms, they hire low-wage image editors online to help fabricate and/or modify a number of official documents tied to the business — including tax records and utility bills.

The scammers frequently then file phony documents with the Secretary of State’s office in the name(s) of the business owners, but include a mailing address that they control. They also create email addresses and domain names that mimic the names of the owners and the company to make future credit applications appear more legitimate, and submit the listings to business search websites, such as yellowpages.com.

For both dormant and existing businesses, the fraudsters attempt to create or modify the target company’s accounts at Dun & Bradstreet. In some cases, the scammers create dashboard accounts in the business’s names at Dun & Bradstreet’s credit builder portal; in others, the bad guys have actually hacked existing business accounts at DNB, requesting a new DUNS number for the business (a DUNS number is a unique, nine-digit identifier for businesses).

Finally, after the bogus profiles are approved by Dun & Bradstreet, the gang waits a few weeks or months and then starts applying for new lines of credit in the target business’s name at stores like Home Depot, Office Depot and Staples. Then they go on a buying spree with the cards issued by those stores.

Usually, the first indication a victim has that they’ve been targeted is when the debt collection companies start calling.

“They are using mostly small companies that are still active businesses but currently not operating because of COVID-19,” Holden said. “With this gang, we see four or five people working together. The team leader manages the work between people. One person seems to be in charge of getting stolen cards from the dark web to pay for the reactivation of businesses through the secretary of state sites. Another team member works on revising the business documents and registering them on various sites. The others are busy looking for specific businesses they want to revive.”

Holden said the gang appears to find success in getting new lines of credit with about 20 percent of the businesses they target.

“One’s personal credit is nothing compared to the ability of corporations to borrow money,” he said. “That’s bad because while the credit system may be flawed for individuals, it’s an even worse situation on average when we’re talking about businesses.”

Holden said over the past few months his firm has seen communications between the gang’s members indicating they have temporarily shifted more of their energy and resources to defrauding states and the federal government by filing unemployment insurance claims and applying for pandemic assistance loans with the Small Business Administration.

“It makes sense, because they’ve already got control over all these dormant businesses,” he said. “So they’re now busy trying to get unemployment payments and SBA loans in the names of these companies and their employees.”

PHANTOM OFFICES

Hold Security shared data intercepted from the gang that listed the personal and financial details of dozens of companies targeted for ID theft, including Dun & Bradstreet logins the crooks had created for the hijacked businesses. Dun & Bradstreet declined to comment on the matter, other than to say it was working with federal and state authorities to alert affected businesses and state regulators.

Among those targeted was Environmental Safety Consultants Inc. (ESC), a 37-year-old environmental engineering firm based in Bradenton, Fla. ESC owner Scott Russell estimates his company was initially targeted nearly two years ago, and that he first became aware something wasn’t right when he recently began getting calls from Home Depot’s corporate offices inquiring about the company’s delinquent account.

But Russell said he didn’t quite grasp the enormity of the situation until last year, when he was contacted by the manager of a virtual office space across town who told him about a suspiciously large number of deliveries at an office space that was rented out in his name.

Russell had never rented that particular office. Rather, the thieves had done it for him, using his name and the name of his business. The office manager said the deliveries came virtually non-stop, even though there was apparently no business operating within the rented premises. And in each case, shortly after the shipments arrived someone would show up and cart them away.

“She said we don’t think it’s you,” he recalled. “Turns out, they had paid for a lease in my name with someone else’s credit card. She shared with me a copy of the lease, which included a fraudulent ID and even a vehicle insurance card for a Land Cruiser we got rid of like 15 years ago. The application listed our home address with me and some woman who was not my wife’s name.”

The crates and boxes being delivered to his erstwhile office space were mostly computers and other high-priced items ordered from 10 different Office Depot credit cards that also were not in his name.

“The total value of the electronic equipment that was bought and delivered there was something like $75,000,” Russell said, noting that it took countless hours and phone calls with Office Depot to make it clear they would no longer accept shipments addressed to him or his company. “It was quite spine-tingling to see someone penned a lease in the name of my business and personal identity.”

Even though the virtual office manager had the presence of mind to take photocopies of the driver’s licenses presented by the people arriving to pick up the fraudulent shipments, the local police seemed largely uninterested in pursuing the case, Russell said.

“I went to the local county sheriff’s office and showed them all the documentation I had and the guy just yawned and said he’d get right on it,” he recalled. “The place where the office space was rented was in another county, and the detective I spoke to there about it was interested, but he could never get anyone from my county to follow up.”

RECYCLING VICTIMS

Russell said he believes the fraudsters initially took out new lines of credit in his company’s name and then used those to defraud others in a similar way. One of those victims, also on the gang’s target list obtained by Hold Security, is Mary McMahan, owner of Fan Experiences, an event management company in Winter Park, Fla.

McMahan also had stolen goods from Office Depot and other stores purchased in her company’s name and delivered to the same office space rented in Russell’s name. McMahan said she and her businesses have suffered hundreds of thousands of dollars in fraud, and spent nearly as much in legal fees fending off collections firms and restoring her company’s credit.

McMahan said she first began noticing trouble almost four years ago, when someone started taking out new credit cards in her company’s name. At the same time, her business was used to open a new lease on a virtual office space in Florida that also began receiving packages tied to other companies victimized by business ID theft.

“About four years back, they hit my credit hard for a year, getting all these new lines of credit at Home Depot, Office Depot, Office Max, you name it,” she said. “Then they came back again two years ago and hit it hard for another year. They even went to the [Florida Department of Motor Vehicles] to get a driver’s license in my name.”

McMahan said the thieves somehow hacked her DNB account, and then began adding new officers and locations for her business listing.

“They changed the email and mailing address, and even went on Yelp and Google and did the same,” she said.

McMahan said she’s since locked down her personal and business credit to the point where even she would have a tough time getting a new line of credit or mortgage if she tried.

“There’s no way they can even utilize me anymore because there’s so many marks on my credit stating that it’s been stolen,” she said. “These guys are relentless, and they recycle victims to defraud others until they figure out they can’t recycle them anymore.”

SAY…THAT’S A NICE CREDIT PROFILE YOU GOT THERE…

McMahan says she, too, has filed multiple reports about the crimes with local police, but has so far seen little evidence that anyone is interested in following up on the matter. For now, she is paying Dun & Bradstreet more than $100 a month to monitor her business credit profile.

Dun & Bradstreet does offer a free version of credit monitoring called Credit Signal that lets business owners check their business credit scores and any inquiries made in the previous 14 days up to four times a year. However, those looking for more frequent checks or additional information about specific credit inquiries beyond 14 days are steered toward DNB’s subscription-based services.

Eva Velasquez, president of the Identity Theft Resource Center, a California-based nonprofit that assists ID theft victims, said she finds that troubling.

“When we look at these institutions that are necessary for us to operate and function in society and they start to charge us a fee for a service to fix a problem they helped create through their infrastructure, that’s just unconscionable,” Velasquez said. “We need to take a hard look at the infrastructures that businesses are beholden to and make sure the risk minimization protections they’re entitled to are not fee-based — particularly if it’s a problem created by the very infrastructure of the system.”

Velasquez said it’s unfortunate that small business owners don’t have the same protections afforded to consumers. For example, only recently did the three major consumer reporting bureaus allow all U.S. residents to place a freeze on their credit files for free.

“We’ve done a good job in educating the public that anyone can be victim of identity theft, and in compelling our infrastructure to provide robust consumer protection and risk minimization processes that are more uniform,” she said. “It’s still not good by any means, but it’s definitely better for consumers than it is for businesses. We currently put all the responsibility on the small business owner, and very little on the infrastructure and processes that should be designed to protect them but aren’t doing a great job, frankly.”

Rather, the onus continues to be on the business owner to periodically check with DNB and state agencies to monitor for any signs of unauthorized changes. Worse still, too many private and public organizations still don’t do a good enough job protecting employee identification and tax ID numbers that are so often abused in business identity theft, Velasquez said.

“You can put alerts and other protections in place but the problem is you have to go on a department by department and case by case basis,” she said. “The place to begin is your secretary of state’s office or wherever you file your documents to operate your business.”

For its part, Dun & Bradstreet recently published a blog post outlining recommendations for businesses to ward off identity thieves. DNB says anyone who suspects fraudulent activity on their account should contact its support team.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 11)

Here’s part eleven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowDiscovery in Mexican Cave May Drastically Change the Known Timeline of Humans’ Arrival to the Americas

Human history in the Americas may be twice as long as previously believed — at least 26,500 years — according to authors of a new study at Mexico’s Chiquihuite cave and other sites throughout Central Mexico.

According to the study’s lead author Ciprian Ardelean:

“This site alone can’t be considered a definitive conclusion. But with other sites in North America like Gault (Texas), Bluefish Caves (Yukon), maybe Cactus Hill (Virginia)—it’s strong enough to favor a valid hypothesis that there were humans here probably before and almost surely during the Last Glacial Maximum.”

Worse Than FailureUltrabase

After a few transfers across departments at IniTech, Lydia found herself as a senior developer on an internal web team. They built intranet applications which covered everything from home-grown HR tools to home-grown supply chain tools, to home-grown CMSes, to home-grown "we really should have purchased something but the approval process is so onerous and the budgeting is so constrained that it looks cheaper to carry an IT team despite actually being much more expensive".

A new feature request came in, and it seemed extremely easy. There was a stored procedure that was normally invoked by a scheduled job. The admin users in one of the applications wanted to be able to invoke it on demand. Now, Lydia might be "senior", but she was new to the team, so she popped over to Desmond's cube to see what he thought.

"Oh, sure, we can do that, but it'll take about a week."

"A week?" Lydia asked. "A week? To add a button that invokes a stored procedure. It doesn't even take any parameters or return any results you'd need to display."

"Well, roughly 40 hours of effort, yeah. I can't promise it'd be a calendar week."

"I guess, with testing, and approvals, I could see it taking that long," Lydia said.

"Oh, no, that's just development time," Desmond said. "You're new to the team, so it's time you learned about Ultrabase."

Wyatt was the team lead. Lydia had met him briefly during her onboarding with the team, but had mostly been interacting with the other developers on the team. Wyatt, as it turned out, was a Certified Super Genius™, and was so smart that he recognized that most of their applications were, functionally, quite the same. CRUD apps, mostly. So Wyatt had "automated" the process, with his Ultrabase solution.

First, there was a configuration database. Every table, every stored procedure, every view or query, needed to be entered into the configuration database. Now, Wyatt, Certified Super Genius™, knew that he couldn't define a simple schema which would cover all the possible cases, so he didn't. He defined a fiendishly complicated schema with opaque and inconsistent validity rules. Once you had entered the data for all of your database objects, hopefully correctly, you could then execute the Data program.

The Data program would read through the configuration database, and through the glories of string concatenation generate a C# solution containing the definitions of your data model objects. The Data program itself was very fault tolerant, so fault tolerant that if anything went wrong, it still just output C# code, just not syntactically correct C# code. If the C# code couldn't compile, you needed to go back to the configuration database and figure out what was wrong.
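
The story gives none of Wyatt's actual code, but the failure mode Desmond describes is easy to picture. Here is a minimal Python sketch (hypothetical names, standing in for the real generator) of a concatenation-based code generator that never validates its input, so a bad configuration row flows straight through into C# that cannot compile:

# Hypothetical sketch of a string-concatenation generator like the one
# described above; it is "fault tolerant" in the same useless way.
def generate_model_class(table_name, columns):
    """columns: list of (property_name, csharp_type) rows pulled from the
    configuration database. Nothing here checks that the rows make sense."""
    lines = [f"public class {table_name}", "{"]
    for prop, cs_type in columns:
        # Whatever the config database contains is concatenated verbatim.
        lines.append(f"    public {cs_type} {prop} {{ get; set; }}")
    lines.append("}")
    return "\n".join(lines)

# A correct row and a botched one (missing type) both "succeed":
print(generate_model_class("Invoice", [("Total", "decimal"), ("DueDate", None)]))
# The second property comes out as "public None DueDate { get; set; }" --
# syntactically invalid C#, discovered only when the build fails later.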

Eventually, once you had a theoretically working data model library, you pushed the solution to the build server. That would build and sign the library with a corporate key, and publish it to their official internal software repository. This could take days or weeks to snake its way through all the various approval steps.

Once you had the official release of the datamodel, you could fire up the Data Access Layer tool, which would then pull down the signed version in the repository, and using reflection and the config database, the Data Access Layer program would generate a DAL. Assuming everything worked, you would push that to the build server, and then wait for that to wind its way through the plumbing of approvals.

Then the Business Logic Layer. Then the "Core" layer. The "UI Adapter Layer". The "Front End" layer.

Each layer required the previous layer to be in the corporate repository before you could generate it. Each layer also needed to check the config database. It was trivial to make an error that wouldn't be discovered until you tried to generate the front end layer, and if that happened, you needed to go all the way back to the beginning.

"Wyatt is working on a 'config validation tool' which he says will avoid some of these errors," Desmond said. "So we've got that to look forward to. Anyway, that's our process. Glad to have you on the team!"

Lydia was significantly less glad to be on the team, now that Desmond had given her a clearer picture of how it actually worked.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Rondam RamblingsAbortion restrictions result in more abortions

Not that this was ever in any serious doubt, but now there is actual data published in The Lancet showing that abortion restrictions increase the number of abortions: In 2015–19, there were 121.0 million unintended pregnancies annually (80% uncertainty interval [UI] 112.8–131.5), corresponding to a global rate of 64 unintended pregnancies (UI 60–70) per 1000 women aged 15–49 years. 61% (58–63)

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

,

Krebs on SecurityThinking of a Cybersecurity Career? Read This

Thousands of people graduate from colleges and universities each year with cybersecurity or computer science degrees only to find employers are less than thrilled about their hands-on, foundational skills. Here’s a look at a recent survey that identified some of the bigger skills gaps, and some thoughts about how those seeking a career in these fields can better stand out from the crowd.

Virtually every week KrebsOnSecurity receives at least one email from someone seeking advice on how to break into cybersecurity as a career. In most cases, the aspirants ask which certifications they should seek, or what specialization in computer security might hold the brightest future.

Rarely am I asked which practical skills they should seek to make themselves more appealing candidates for a future job. And while I always preface any response with the caveat that I don’t hold any computer-related certifications or degrees myself, I do speak with C-level executives in cybersecurity and recruiters on a regular basis and frequently ask them for their impressions of today’s cybersecurity job candidates.

A common theme in these C-level executive responses is that a great many candidates simply lack hands-on experience with the more practical concerns of operating, maintaining and defending the information systems which drive their businesses.

Granted, most people who have just graduated with a degree lack practical experience. But happily, a somewhat unique aspect of cybersecurity is that one can gain a fair degree of mastery of hands-on skills and foundational knowledge through self-directed study and old fashioned trial-and-error.

One key piece of advice I nearly always include in my response to readers involves learning the core components of how computers and other devices communicate with one another. I say this because a mastery of networking is a fundamental skill that so many other areas of learning build upon. Trying to get a job in security without a deep understanding of how data packets work is a bit like trying to become a chemical engineer without first mastering the periodic table of elements.

But please don’t take my word for it. The SANS Institute, a Bethesda, Md.-based security research and training firm, recently conducted a survey of more than 500 cybersecurity practitioners at 284 different companies in an effort to suss out which skills they find most useful in job candidates, and which are most frequently lacking.

The survey asked respondents to rank various skills from “critical” to “not needed.” Fully 85 percent ranked networking as a critical or “very important” skill, followed by a mastery of the Linux operating system (77 percent), Windows (73 percent), common exploitation techniques (73 percent), computer architectures and virtualization (67 percent) and data and cryptography (58 percent). Perhaps surprisingly, only 39 percent ranked programming as a critical or very important skill (I’ll come back to this in a moment).

How did the cybersecurity practitioners surveyed grade their pool of potential job candidates on these critical and very important skills? The results may be eye-opening:

“Employers report that student cybersecurity preparation is largely inadequate and are frustrated that they have to spend months searching before they find qualified entry-level employees if any can be found,” said Alan Paller, director of research at the SANS Institute. “We hypothesized that the beginning of a pathway toward resolving those challenges and helping close the cybersecurity skills gap would be to isolate the capabilities that employers expected but did not find in cybersecurity graduates.”

The truth is, some of the smartest, most insightful and talented computer security professionals I know today don’t have any computer-related certifications under their belts. In fact, many of them never even went to college or completed a university-level degree program.

Rather, they got into security because they were passionately and intensely curious about the subject, and that curiosity led them to learn as much as they could — mainly by reading, doing, and making mistakes (lots of them).

I mention this not to dissuade readers from pursuing degrees or certifications in the field (which may be a basic requirement for many corporate HR departments) but to emphasize that these should not be viewed as some kind of golden ticket to a rewarding, stable and relatively high-paying career.

More to the point, without a mastery of one or more of the above-mentioned skills, you simply will not be a terribly appealing or outstanding job candidate when the time comes.

BUT..HOW?

So what should you focus on, and what’s the best way to get started? First, understand that while there are a near-infinite number of ways to acquire knowledge and virtually no limit to the depths you can explore, getting your hands dirty is the fastest way to learn.

No, I’m not talking about breaking into someone’s network, or hacking some poor website. Please don’t do that without permission. If you must target third-party services and sites, stick to those that offer recognition and/or incentives for doing so through bug bounty programs, and then make sure you respect the boundaries of those programs.

Besides, almost anything you want to learn by doing can be replicated locally. Hoping to master common vulnerability and exploitation techniques? There are innumerable free resources available: purpose-built exploitation toolkits like Metasploit, deliberately vulnerable practice apps like WebGoat, and custom Linux distributions like Kali Linux that are well supported by tutorials and videos online. Then there are a number of free reconnaissance and vulnerability discovery tools like Nmap, Nessus, OpenVAS and Nikto. This is by no means a complete list.
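
None of those tools requires you to write code, but the ideas underneath them are small enough to rebuild yourself. As a rough illustration (my own sketch, not taken from any of the tools above), the core of a TCP connect scan against a machine you own is just a loop over ports:

# Minimal TCP connect probe -- the basic building block that scanners like
# Nmap automate, refine and make polite. Only point it at hosts you own or
# have explicit permission to test.
import socket

def connect_scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake completes.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(connect_scan("127.0.0.1", range(1, 1025)))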

Set up your own hacking labs. You can do this with a spare computer or server, or with older hardware that is plentiful and cheap on places like eBay or Craigslist. Free virtualization tools like VirtualBox can make it simple to get friendly with different operating systems without the need for additional hardware.

Or look into paying someone else to set up a virtual server that you can poke at. Amazon’s EC2 services are a good low-cost option here. If it’s web application testing you wish to learn, you can install any number of web services on computers within your own local network, such as older versions of WordPress, Joomla or shopping cart systems like Magento.

Want to learn networking? Start by getting a decent book on TCP/IP and really learning the network stack and how each layer interacts with the other.

And while you’re absorbing this information, learn to use some tools that can help put your newfound knowledge into practical application. For example, familiarize yourself with Wireshark and Tcpdump, handy tools relied upon by network administrators to troubleshoot network and security problems and to understand how network applications work (or don’t). Begin by inspecting your own network traffic, web browsing and everyday computer usage. Try to understand what applications on your computer are doing by looking at what data they are sending and receiving, how, and where.
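
If you want to see those layers yourself before reaching for the tools, a few dozen lines of Python will do it. This is a sketch of my own (Linux only, run as root), not anything from Wireshark or tcpdump; it simply unpacks the Ethernet header and, for IPv4 frames, the addresses one layer up:

# Bare-bones frame sniffer using only the standard library (Linux, root
# required). The point is to watch the layers stack: Ethernet first, then IP.
import socket
import struct

ETH_P_ALL = 0x0003  # ask the kernel for every protocol

def mac(raw):
    return ":".join(f"{b:02x}" for b in raw)

with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL)) as s:
    for _ in range(10):  # look at ten frames, then stop
        frame, _ = s.recvfrom(65535)
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        line = f"{mac(src)} -> {mac(dst)} type=0x{ethertype:04x}"
        if ethertype == 0x0800:  # IPv4 payload starts right after the 14-byte Ethernet header
            proto = frame[14 + 9]  # protocol field (6 = TCP, 17 = UDP)
            src_ip = socket.inet_ntoa(frame[14 + 12:14 + 16])
            dst_ip = socket.inet_ntoa(frame[14 + 16:14 + 20])
            line += f"  {src_ip} -> {dst_ip} proto={proto}"
        print(line)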

ON PROGRAMMING

While being able to program in languages like Go, Java, Perl, Python, C or Ruby may or may not be at the top of the list of skills demanded by employers, having one or more languages in your skillset is not only going to make you a more attractive hire, it will also make it easier to grow your knowledge and venture into deeper levels of mastery.

It is also likely that depending on which specialization of security you end up pursuing, at some point you will find your ability to expand that knowledge is somewhat limited without understanding how to code.

For those intimidated by the idea of learning a programming language, start by getting familiar with basic command line tools on Linux. Just learning to write basic scripts that automate specific manual tasks can be a wonderful stepping stone. What’s more, a mastery of creating shell scripts will pay handsome dividends for the duration of your career in almost any technical role involving computers (regardless of whether you learn a specific coding language).
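
The article's suggestion is shell scripting, but the habit transfers to any language. Purely as an illustration of the kind of small chore worth automating (the log path and the task itself are my own assumptions, not a prescribed exercise), here is the same idea in Python: summarize failed SSH logins per source address from a local auth log.

# A small automation of a manual chore: count failed SSH logins by source IP.
# The log path is an assumption -- it varies by distribution.
import re
from collections import Counter
from pathlib import Path

LOG = Path("/var/log/auth.log")
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

counts = Counter(
    m.group(1)
    for line in LOG.read_text(errors="replace").splitlines()
    if (m := FAILED.search(line))
)

for ip, n in counts.most_common(10):
    print(f"{n:6d}  {ip}")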

GET HELP

Make no mistake: Much like learning a musical instrument or a new language, gaining cybersecurity skills takes most people a good deal of time and effort. But don’t get discouraged if a given topic of study seems overwhelming at first; just take your time and keep going.

That’s why it helps to have support groups. Seriously. In the cybersecurity industry, the human side of networking takes the form of conferences and local meetups. I cannot stress enough how important it is for both your sanity and career to get involved with like-minded people on a semi-regular basis.

Many of these gatherings are free, including Security BSides events, DEFCON groups, and OWASP chapters. And because the tech industry continues to be disproportionately populated by men, there are also a number of cybersecurity meetups and membership groups geared toward women, such as the Women’s Society of Cyberjutsu and others listed here.

Unless you live in the middle of nowhere, chances are there’s a number of security conferences and security meetups in your general area. But even if you do reside in the boonies, the good news is many of these meetups are going virtual to avoid the ongoing pestilence that is the COVID-19 epidemic.

In summary, don’t count on a degree or certification to prepare you for the kinds of skills employers are going to understandably expect you to possess. That may not be fair or as it should be, but it’s likely on you to develop and nurture the skills that will serve your future employer(s) and employability in this field.

I’m certain that readers here have their own ideas about how newbies, students and those contemplating a career shift into cybersecurity can best focus their time and efforts. Please feel free to sound off in the comments. I may even update this post to include some of the better recommendations.

CryptogramFriday Squid Blogging: Introducing the Seattle Kraken

The Kraken is the name of Seattle's new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it's hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

LongNowThe Comet Neowise as seen from the ISS

For everyone who cannot see the Comet Neowise with their own eyes this week — or just wants to see it from a higher perch — this video by artist Seán Doran combines 550 NASA images from the International Space Station into a real-time view of the comet from 250 miles above Earth’s surface, traveling at 17,500 mph.

LongNowEnormous Dormice Once Roamed Mediterranean Islands

Pleistocene dormouse Leithia melitensis was the size of a house cat. New computer-aided reconstructions show a skull as long as an entire modern dormouse.

It’s a textbook example of “island gigantism,” in which, biologists hypothesize, fewer terrestrial predators and more pressure from predatory birds selects for a much larger body size in some island organisms.

CryptogramUpdate on NIST's Post-Quantum Cryptography Program

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This "selection round" will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[...]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

"We're calling these seven the finalists," Moody said. "For the most part, they're general-purpose algorithms that we think could find wide application and be ready to go after the third round."

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

"The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures," he said. "But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we'll find a way to look at newer approaches too."

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.

Kevin RuddCNN: Cold War 1.5

INTERVIEW VIDEO
TV INTERVIEW
CONNECT THE WORLD, CNN
24 JULY 2020

Topics: US-China relations, Australia’s coronavirus second wave

BECKY ANDERSON: Kevin Rudd is the president of the Asia Society Policy Institute and he’s joining us now from the Sunshine Coast in Australia. It’s great to have you. This type of rhetoric you say is not new. But it does feel like we are approaching a precipitous point.

KEVIN RUDD: Well Becky, I think there’s been a lot of debate in recent months as to whether we’re on the edge of a new Cold War between China and the United States. Rather than being Cold War 2.0, I basically see it as Cold War 1.5. That is, it’s sliding in that direction, and sliding rapidly in that direction. But we’re by no means there yet. And one of the reasons we’re not there yet is because of the continued depth and breadth of the economic relationship between China and the United States, which was never the case, in terms of the historical relationship, between the United States and the Soviet Union during the first Cold War. That may change, but that I think is where we are right now.

ANDERSON: We haven’t seen an awful lot of retaliation nor very much of a narrative really from Beijing in response to some of this US anti-China narrative. What do you expect next from Beijing?

RUDD: Well, in terms of the consulate general, I think as night follows day, you’re likely to see either a radical reduction in overall American diplomatic staff numbers in China and consular staff numbers, or the direct reciprocal action, which would close for example, the US Consulate General in perhaps Chengdu or Wuhan or in Shenyang, somewhere like that. But this, as you said before in your introduction, Becky, forms just one part of a much broader deterioration in the relationship. I’ve been observing the US-China relationship for the better part of 35 years. Really, since Nixon and Kissinger first went to Beijing in 1971/1972. This is the low point, the lowest point of the US-China relationship in now half a century. And it’s only heading in one direction. Is there an exit ramp? Open question. But the dynamics both in Beijing and in Washington are pulling this relationship right apart, and that leaves third countries in an increasingly difficult position.

ANDERSON: Yes, and I wanted to talk to you about that because Australia is continually torn between the sort of economic relationship with China that it has, and its strategic partnership with the US. We have seen the US to all intents and purposes, leaning on the UK over Huawei. How should other countries engage with China going forward?

RUDD: Well, one thing I think is to understand that Xi Jinping’s China is quite different from the China of Hu Jintao, Jiang Zemin or even Deng Xiaoping. And since Xi Jinping took over in 2012/2013, it’s a much more assertive China, right across the board. And even in this COVID reality of 2020, we see not just the Hong Kong national security legislation, we see new actions by China in the South China Sea, against Taiwan, against Japan, in the East China Sea, on the Sino-Indian border, and the frictions with Canada, Australia, the United Kingdom – you’ve just mentioned – and elsewhere as well. So, this is a new, assertive China – quite different from the one we’ve seen in the past. So, your question is entirely valid – how do, as it were, the democracies of Asia and the democracies of Europe and elsewhere respond to this new phenomenon on the global stage? I think it’s along these lines. Number one, be confident in the position which democracies have, that we believe in universal values, and human rights and democracy. And we’re not about to change. Number two, many of us, whether we’re in Asia or Europe, are longstanding allies of the United States, and that’s not about to change. But number three, to make it plain to our Chinese friends that on a reciprocal basis, we wish to have a mutually productive trade, investment, and capital markets relationship. And four, the big challenges of global governance – whether it’s pandemics, or climate change, or stability of global financial markets, and the current crisis we have around the world – where it is incumbent on all of us to work together. I think those four principles form a basis for us dealing with Xi Jinping’s China.

ANDERSON: Kevin, do you see this as a Cold War?

RUDD: As I said before, we’re trending that way. As I said, the big difference between the Soviet Union and the United States is that China and the United States are deeply economically enmeshed and have become that way over the last 20 years or so. And that never was the case in the old Cold War. Secondly, in the old Cold War, we basically had a strategic relationship of mutually assured destruction, which came to the flashpoint of the Cuban Missile Crisis in the early 1960s. That’s not the case either. But I’ve got to say in all honesty, it’s trending in a fundamentally negative direction, and when we start to see actions like shutting down each other’s consulate generals, that does remind me of where we got to in the last Cold War as well. There should be an exit ramp, but it’s going to require a new strategic framework for the US-China relationship, based on what I describe as managed strategic competition between these two powers, where each side’s red lines are well recognized, understood and observed – and competition occurs, as it were, in all other domains. At present, we don’t seem to have parameters or red lines at all.

ANDERSON: And we might have had this discussion four or five months ago. The new layer of course, is the coronavirus pandemic and the way that the US has responded which you say has provided an opportunity for the Chinese to steal a march on the US with regard to its position and its power around the world. Is Beijing, do you think – if you believe that there is a power vacuum at present after this coronavirus response – is Beijing taking advantage of that vacuum?

RUDD: Well, when the coronavirus broke out, China was, by definition, in a defensive position, because the virus came from Wuhan, and therefore, as the virus then spread across the world, China found itself in a deeply problematic position – not just the damage to its economy at home – but frankly its reputation abroad as well. However, President Trump’s America has demonstrated to the world that a) his administration can’t handle the virus within the United States itself, and b) there has been a phenomenal lack of American global leadership in dealing with the public health and global economic dimensions of – let’s call it the COVID-19 crisis – across the world. So, the argument that I’m attracted to is that both these great powers have been fundamentally damaged by the coronavirus crisis that has afflicted the world. So the challenge for the future is whether in fact we a) see a change in administration in Washington with Biden, and secondly, whether a Democratic administration will choose to reassert American global leadership through the institutions of global governance, where frankly, the current administration has left so many vacuums across the UN system and beyond it. And that remains the open question – which I think the international community is focusing on – as we move towards that event in November, when the good people of the United States cast their ballot.

ANDERSON: Yeah, no fascinating. I’ll just stick to the coronavirus for a final question for you and thank you for this sort of wide-ranging discussion. Australia, of course, applauded for its ability to act fast and flatten its coronavirus curve back in April. That has all been derailed. We’ve seen a second wave. It’s worse than the first. Earlier this week, the country reporting its worst day since the pandemic began despite new tough restrictions. What do you believe it will take to flatten the curve again? And are you concerned that the situation in Australia is slipping out of control?

RUDD: What the situation in the state of Victoria and the city of Melbourne in particular demonstrates is what we see in so many countries around the world, which is the ease with which a second wave effect can be made manifest. It’s not just of course in Australia. We see evidence of this in Hong Kong. We see it in other countries, where in fact, the initial management of the crisis was pretty effective. What the lesson of Melbourne, and the lesson of Victoria is for all of us, is that when it comes to maintaining the disciplines of social distancing, of proper quarantine arrangements, as well as contact tracing and the rest, that there is no, as it were, release of our discipline applied to these challenges. And in the case of Victoria, it was in Melbourne – it was simply a poor application of quarantine arrangements in a single hotel, for Australians returning from elsewhere in the world, that led to this community-level transmission. And that can happen in the northern part of the United Kingdom. It can happen in regional France; it can happen anywhere in Germany. What’s the message? Vigilance across the board, until we can eliminate this thing. We’ve still got a lot to learn from Jacinda Ardern’s success in New Zealand in virtually eliminating this virus altogether.

ANDERSON: With that, we’re going to leave it there. Kevin Rudd, former Prime Minister of Australia, it’s always a pleasure. Thank you very much indeed for joining us.

RUDD: Good to be with you.

ANDERSON: Extremely important subject, US-China relations at present.

The post CNN: Cold War 1.5 appeared first on Kevin Rudd.

Worse Than FailureError'd: Free Coff...Wait!

"Hey! I like free coffee! Let me just go ahead and...um...hold on a second..." writes Adam R.

 

"I know I have a lot of online meetings these days but I don't remember signing up for this one," Ged M. wrote.

 

Peter G. writes, "The $60 off this $1M nylon bag?! What a deal! I should buy three of them!"

 

"So, because it's free, it's null, so I guess that's how Starbucks' app logic works?" James wrote.

 

Graham K. wrote, "How very 'zen' of National Savings to give me this particular error when I went to change my address."

 

"I'm not sure I trust "scenem3.com" with their marketing services, if they send out unsolicited template messages. (Muster is German for template, Max Muster is our equivalent of John Doe.)" Lukas G. wrote.

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityNY Charges First American Financial for Massive Data Leak

In May 2019, KrebsOnSecurity broke the news that the website of mortgage title insurance giant First American Financial Corp. had exposed approximately 885 million records related to mortgage deals going back to 2003. On Wednesday, regulators in New York announced that First American was the target of their first ever cybersecurity enforcement action in connection with the incident, charges that could bring steep financial penalties.

First American Financial Corp.

Santa Ana, Calif.-based First American [NYSE:FAF] is a leading provider of title insurance and settlement services to the real estate and mortgage industries. It employs some 18,000 people and brought in $6.2 billion in 2019.

As first reported here last year, First American’s website exposed 16 years’ worth of digitized mortgage title insurance records — including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and driver’s license images.

The documents were available without authentication to anyone with a Web browser.

According to a filing (PDF) by the New York State Department of Financial Services (DFS), the weakness that exposed the documents was first introduced during an application software update in May 2014 and went undetected for years.

Worse still, the DFS found, the vulnerability was discovered in a penetration test First American conducted on its own in December 2018.

“Remarkably, Respondent instead allowed unfettered access to the personal and financial data of millions of its customers for six more months until the breach and its serious ramifications were widely publicized by a nationally recognized cybersecurity industry journalist,” the DFS explained in a statement on the charges.

A redacted screenshot of one of many millions of sensitive records exposed by First American’s Web site.

Reuters reports that the penalties could be significant for First American: The DFS considers each instance of exposed personal information a separate violation, and the company faces penalties of up to $1,000 per violation.

In a written statement, First American said it strongly disagrees with the DFS’s findings, and that its own investigation determined only a “very limited number” of consumers — and none from New York — had personal data accessed without permission.

In August 2019, the company said a third-party investigation into the exposure identified just 32 consumers whose non-public personal information likely was accessed without authorization.

When KrebsOnSecurity asked last year how long it maintained access logs or how far back in time that review went, First American declined to be more specific, saying only that its logs covered a period that was typical for a company of its size and nature.

But in Wednesday’s filing, the DFS said First American was unable to determine whether records were accessed prior to June 2018.

“Respondent’s forensic investigation relied on a review of web logs retained from June 2018 onward,” the DFS found. “Respondent’s own analysis demonstrated that during this 11-month period, more than 350,000 documents were accessed without authorization by automated ‘bots’ or ‘scraper’ programs designed to collect information on the Internet.”

The records exposed by First American would have been a virtual gold mine for phishers and scammers involved in so-called Business Email Compromise (BEC) scams, which often impersonate real estate agents, closing agencies, title and escrow firms in a bid to trick property buyers into wiring funds to fraudsters. According to the FBI, BEC scams are the most costly form of cybercrime today.

First American’s stock price fell more than 6 percent the day after news of their data leak was published here. In the days that followed, the DFS and U.S. Securities and Exchange Commission each announced they were investigating the company.

First American released its first quarter 2020 earnings today. A hearing on the charges alleged by the DFS is slated for Oct. 26.

Kevin RuddBloomberg: US-China Relations Worsen

E&OE TRANSCRIPT
BLOOMBERG
23 JULY 2020

TOM MACKENZIE: Let’s start with your reaction to this latest sequence of events.

KEVIN RUDD: Well, structurally, the US-China relationship is in the worst state it’s been in about 50 years. It’s 50 years next year since Henry Kissinger undertook his secret diplomacy in Beijing. So, this relationship is in trouble strategically, militarily, diplomatically, politically, economically, in trade, investment, technology, and of course, in the wonderful world of espionage as well. And so, whereas this is a surprising move against a Chinese consulate general in the United States, it certainly fits within the fabric of a structural deterioration in the relationship underway now for quite a number of years.

MACKENZIE: So far, China, Beijing has taken what many would argue would be a proportionate response to actions by the US, at least in the last few months. Is there an argument now that this kind of action, calling for the closure of this consulate in Houston, will strengthen the hands of the hardliners here in Beijing, and will force them to take a stronger response? What do you think ultimately will be the material reaction then from Beijing?

RUDD: Well, on this particular consulate general closure, I think, as night follows day, you’ll see a Chinese decision to close an American consulate general in China. There are a number already within China. I think you would look to see what would happen with the future of the US Consulate General in say Shenyang up in the northeast, or in Chengdu in the west, because this tit-for-tat is alive very much in the way in which China views the necessity politically, to respond in like form to what the Americans have done. But overall, the Chinese leadership are a very hard-bitten, deeply experienced Marxist-Leninist leadership, who look at the broad view of the US-China relationship. They see it as structurally deteriorating. They see it in part as an inevitable reaction to China’s rise. And if you look carefully at some of the internal statements by Xi Jinping in recent months, the Chinese system is gearing up for what it describes internally as 20 to 30 years of growing friction in the US-China relationship, and that will make life difficult for all countries who have deep relationships with both countries.

MACKENZIE: Mike Pompeo, the US Secretary of State was in London talking to his counterparts there, and he called for a coalition with allies. Presumably, that will include at some point Australia, though we have yet to hear from their leaders about the sense of a coalition against China. Do you think this is significant? Do you think this is a shift in US policy? How much traction do you think Mike Pompeo and the Trump administration will get in forming a coalition to push back against China?

RUDD: Well, the truth is, most friends and allies of the United States are waiting to see what happens in the US presidential election. There is a general expectation that President Trump will not be re-elected. Therefore, the attitude of friends and allies of the United States is: well, what will be the policy cost here of an incoming Biden administration, in relation to China, and in critical areas like the economy, trade, investment, technology and the rest? Bear in mind, however, that what has happened under Xi Jinping’s leadership, since he became leader of the Chinese Communist Party at the end of 2012, is that China has progressively become more assertive in the promotion of its international interests, whether it’s in the South China Sea, the East China Sea, whether it’s in Europe, whether it’s the United States, whether it’s countries like Australia. And therefore, what is happening is that countries who are now experiencing this for the first time – the real impact of an assertive Chinese foreign policy – are themselves beginning to push back. And so whether it’s with American leadership or not, the bottom line is that what I now observe is that countries in Europe, democracies in Europe, democracies in Asia, are increasingly in discussion with one another about how to deal with the emerging China challenge to the international rules-based system. That I think is happening as a matter of course, whether Mike Pompeo seeks to lead it or not.

DAVID INGLES: Mr Rudd I’d like to pick it up there. David here, by the way, in Hong Kong. In terms of what do you think is the proper way to engage an emerging China? You’ve dealt with them at many levels. You understand how sensitive their past is to their leadership, and how that shapes where they think their country should be, their ambitions. How should the world – let alone the US, let’s set that aside – how should the rest of the world engage an emerging China?

RUDD: Well you’re right. In one capacity or another, I’ve been dealing with China for the last 35 years, since I first went to work there as an Australian embassy official way back in the 1980s. It’s almost the Mesolithic period now. And I’ve seen the evolution of China’s international posture over that period of time. And certainly, there is a clear dividing line with the emergence of Xi Jinping’s leadership, where China has ceased to hide its strength, bide its time, never to take the lead – that was Deng Xiaoping’s axiom for the past. And instead, we see a China under this new leadership, which is infinitely more assertive. And so my advice to governments when they ask me about this is that governments need to have a coordinated China strategy themselves – just as China has a strategy for dealing with the rest of the world, including the major countries and economies within it. But the principles of those strategies should be pretty basic. Number one, those of us who are democracies, we simply make it plain to the Chinese leadership that that’s our nature, our identity, and we’re not about to change as far as our belief in universal human rights and values is concerned. Number two, most of us are allies with the United States for historical reasons, and current reasons as well. And that’s not going to change either. Number three, we would like, however, to prosecute a mutually beneficial trade and investment and capital markets relationship with you in China, that works for both of us on the basis of reciprocity in each other’s markets. And four, there are so many global challenges out there at the moment – from the pandemic, through to global climate change action, and onto financial markets stability – which require us and China to work together in the major forums of the world like the G20. I think those principles should govern everyone’s approach to how you deal with this emerging and different China.

The post Bloomberg: US-China Relations Worsen appeared first on Kevin Rudd.