Planet Russell


Cryptogram: Insider Attack on the Carnegie Library

Greg Priore, the person in charge of the rare book room at the Carnegie Library, stole from it for almost two decades before getting caught.

It's a perennial problem: trusted insiders have to be trusted.

Worse Than Failure: Bidirectional

Trung worked for a Microsoft/.NET Framework shop that used AutoMapper to simplify object mapping between tiers. The application's mapping configuration was performed at startup, as in the following C# snippet:

public void Configure(ConfigurationContext context)

where the AfterMap() method's Map delegate handled any discrepancies that AutoMapper couldn't map on its own.

One day, a senior dev named Van approached Trung for help. He was repeatedly getting AutoMapper's "Missing type map configuration or unsupported mapping. Mapping types Y -> X ..." error.

Trung frowned a little, wondering what was mysterious about this problem. "You're ... probably missing mapping configuration for Y to X," he said.

"No, I'm not!" Van pointed to his monitor, at the same code snippet above.

Trung shook his head. "That mapping is one-way, from X to Y only. You can create the reverse mapping by using the Bidirectional() extension method. Here ..." He leaned over to type in the addition:


This resolved Van's error. Both men returned to their usual business.
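In spirit, all the missing piece amounts to is registering the reverse direction. Here is a toy Python sketch of the idea (a hypothetical Mapper class for illustration, not AutoMapper's actual API): a registry where the X-to-Y mapping exists but Y-to-X fails with a "missing configuration" error until a bidirectional() call registers the reverse.

```python
class Mapper:
    """Toy mapping registry: a mapping is one-way unless the reverse
    direction is registered explicitly."""

    def __init__(self):
        self._maps = {}

    def create_map(self, src, dst, fn):
        self._maps[(src, dst)] = fn
        return self

    def bidirectional(self, src, dst, back_fn):
        # Registering the reverse direction is all "Bidirectional()" amounts to.
        self._maps[(dst, src)] = back_fn
        return self

    def map(self, obj, src, dst):
        if (src, dst) not in self._maps:
            raise LookupError(f"Missing type map configuration: {src} -> {dst}")
        return self._maps[(src, dst)](obj)


mapper = Mapper()
mapper.create_map("X", "Y", lambda x: {"y": x["x"]})

# Without the next line, mapping Y -> X raises the "Missing type map
# configuration" error Van kept hitting.
mapper.bidirectional("X", "Y", lambda y: {"x": y["y"]})

print(mapper.map({"y": 5}, "Y", "X"))  # {'x': 5}
```

The design point is the same one Trung made: a forward mapping says nothing about the reverse, so the reverse must be configured explicitly.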

A few weeks later, Van approached Trung again, this time needing help with refactoring due to a base library change. While they huddled over Van's computer and dug through compilation errors, Trung kept seeing strange code within multiple AfterMap() delegates:

void Map(X src, Y desc)
{
    desc.QueueId = src.Queue.Id;
    src.Queue = Queue.GetById(desc.QueueId);
}

"Wait a minute!" Trung reached for the mouse to highlight two such lines and asked, "Why is this here?"

"The mapping is supposed to be bidirectional! Remember?" Van replied. "I’m copying from X to Y, then from Y to X."

Trung resisted the urge to clap a hand to his forehead or mutter something about CS101 and variable-swapping—not that this "swap" was necessary. "You realize you'd have nothing but X after doing that?"

The quizzical look on the senior developer's face assured Trung that Van hadn't realized any such thing.

Trung could only sigh and help Van trudge through the delegates he'd "fixed," working out a better mapping procedure for each.
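The mistake Trung spotted is easy to reproduce in any language. A minimal Python illustration (hypothetical field names, just to show the order-of-assignment problem) of why copying X to Y and then Y back to X is not a swap:

```python
class Obj:
    def __init__(self, queue_id):
        self.queue_id = queue_id


src = Obj(queue_id=1)   # X
desc = Obj(queue_id=2)  # Y

# Van's "bidirectional" mapping, line for line:
desc.queue_id = src.queue_id  # X -> Y: desc now holds X's value
src.queue_id = desc.queue_id  # Y -> X: but Y's original value is already gone

print(src.queue_id, desc.queue_id)  # 1 1 -- nothing but X survives
```

Y's original value (2) is overwritten before it is ever read back, which is exactly why a real swap needs a temporary variable.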

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Debian: Junichi Uekawa: Updated page to record how-to video.

Updated page to record how-to video. I don't need a big camera image, I just need a tiny image, probably with my face on it. As a start, I tried extracting a rectangle.

Planet Debian: Norbert Preining: Multiple GPUs for graphics and deep learning

For a long time I have been using a good old nvidia GeForce GTX 1050 for my display and deep learning needs. I have reported a few times on how to get Tensorflow running on Debian/Sid, see here and here. Later on I switched to an AMD GPU in the hope that an open source approach to both the GPU driver and deep learning (ROCm) would improve the general experience. Unfortunately it turned out that AMD GPUs are generally not ready for deep learning usage.

The problems with AMD and ROCm are far and wide. First of all, it seems that for anything more complicated than simple stuff, AMD’s flagship RX 5700(XT) and all GFX10 (Navi) based cards are not(!!!) supported in ROCm. Yes, you read that correctly … AMD does not support 5700(XT) cards in the ROCm stack. Some simple stuff works, but nothing for real computations.

Then, even IF they were supported, ROCm as distributed is currently a huge pain in the butt. The source code is a huge mess, and building usable packages from it is probably possible, but quite painful (I am a member of the ROCm packaging team in Debian, and have tried for many hours). And the packages provided by AMD are not installable on Debian/sid due to library incompatibilities.

So that left me with a bit of a problem: for work I need to train quite a few neural networks, do model selection, etc. Doing this on a CPU is a bit of a burden. So in the end I decided to put the nVidia card back into the computer (well, after moving it to a bigger case – but that is a different story to tell). Here are the steps I took to get both cards working for their respective targets: the AMD GPU for driving the console and X (and games!), and the nVidia card doing the deep learning stuff (tensorflow using the GPU).

Starting point

The starting point was a working AMD GPU installation. The AMD GPU is also the first GPU card (top slot) and thus the one used by the BIOS and the Linux console. If you want the video output on the second card you need to resort to tricks, and probably won't have console output, etc. So not a solution for me.

Installing libcuda1 and the nvidia kernel drivers

Next step was installing the libcuda1 package:

apt install libcuda1

This installs a lot of stuff, including the nvidia drivers, GLX libraries, alternatives setup, and the update-glx tool and package.

The kernel module should be built and installed automatically for your kernel.

Installing CUDA

Follow more or less the instructions here and do

wget -O- | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-1

Warning! At the moment Tensorflow packages require CUDA 10.1, so don’t install the 10.0 version. This might change in the future!

This will install lots of libs into /usr/local/cuda-10.1 and add the respective directory to the path by creating a file /etc/

Install CUDA CuDNN

One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 10.1. To download these files one needs an NVIDIA developer account, which is quick and painless to create. After that go to the CuDNN page, where one needs to select Archived releases and then Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.1, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

At the moment (as of today) this will download a file libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb.

Updating the GLX setting

Here now comes the very interesting part – one needs to set up the GLX libraries. Reading the output of update-glx --help and then the output of update-glx --list glx:

$ update-glx --help
update-glx is a wrapper around update-alternatives supporting only configuration
of the 'glx' and 'nvidia' alternatives. After updating the alternatives, it
takes care to trigger any follow-up actions that may be required to complete
the switch.
It can be used to switch between the main NVIDIA driver version and the legacy
drivers (eg: the 304 series, the 340 series, etc).
For users with Optimus-type laptops it can be used to enable running the discrete
GPU via bumblebee.
Usage: update-glx <command>
  --auto <name>            switch the master link <name> to automatic mode.
  --display <name>         display information about the <name> group.
  --query <name>           machine parseable version of --display <name>.
  --list <name>            display all targets of the <name> group.
  --config <name>          show alternatives for the <name> group and ask the
                           user to select which one to use.
  --set <name> <path>      set <path> as alternative for <name>.
<name> is the master name for this link group.
  Only 'nvidia' and 'glx' are supported.
<path> is the location of one of the alternative target files.
  (e.g. /usr/lib/nvidia)
$ update-glx --list glx

I was tempted into using

update-glx --config glx /usr/lib/mesa-diverted

because in the end the Mesa GLX libraries should be the ones driving the display via the AMD GPU.

Unfortunately, with this the nvidia kernel module was not loaded, nvidia-persistenced couldn’t run because the library libnvidia-cfg1 wasn’t found (not sure it was needed at all…), and with that there was also no way to run tensorflow on the GPU.

So what I did instead was try

update-glx --auto glx

(which is the same as update-glx --config glx /usr/lib/nvidia), and rebooted, and decided to check afterwards what is broken.

To my big surprise, the AMD GPU still worked out of the box, including direct rendering, and the games I tried (Overload, Supraland via Wine) all worked without a hitch.

I don't really understand why the GLX libraries that are seemingly now in use come from nvidia yet work the same (if anyone has an explanation, that would be great!), but since I haven’t had any problems so far, I am content.

Checking GPU usage in tensorflow

Make sure that you remove tensorflow-rocm and reinstall tensorflow with GPU support:

pip3 uninstall tensorflow-rocm
pip3 install --upgrade tensorflow-gpu

After that a simple

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2020-09-02 11:57:04.673096: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3581 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
tf.Tensor(1093.4915, shape=(), dtype=float32)

should indicate that the GPU is used by tensorflow!

The R Keras package should also work out of the box and pick up the system-wide tensorflow which in turn picks the GPU, see this post for example code to run for tests.


All in all it was easier than expected, despite the dances one has to do for nvidia to get the correct libraries. What still puzzles me is the selection option in update-glx, which might need better support for secondary nvidia GPU cards.


Cory Doctorow: Get Radicalized for a mere $2.99

The ebook of my 2019 book RADICALIZED — finalist for the Canada Reads award, LA Library book of the year, etc — is on sale today for $2.99 on all major platforms!


There are a lot of ways to get radicalized in 2020, but this is arguably the cheapest.

Planet Debian: Russell Coker: BBB vs Jitsi

I previously wrote about how I installed the Jitsi video-conferencing system on Debian [1]. We used that for a few unofficial meetings of LUV to test it out. Then we installed Big Blue Button (BBB) [2]. The main benefit of Jitsi over BBB is that it supports live streaming to YouTube. The benefits of BBB are a better text chat system and a “whiteboard” that allows conference participants to draw shared diagrams. So if you have the ability to run both systems then it’s best to use Jitsi when you have so many viewers that a YouTube live stream is needed and to use BBB in all other situations.

One problem is with the ability to run both systems. Jitsi isn’t too hard to install if you are installing it on a VM that is not used for anything else. BBB is a major pain no matter what you do. The latest version of BBB is 2.2 which was released in March 2020 and requires Ubuntu 16.04 (which was released in 2016 and has “standard support” until April next year) and doesn’t support Ubuntu 18.04 (released in 2018 and has “standard support” until 2023). The install script doesn’t check for correct apt repositories and breaks badly with no explanation if you don’t have Ubuntu Multiverse enabled.

I expect that they rushed a release because of the significant increase in demand for video conferencing this year. But that’s no reason to demand the 2016 version of Ubuntu; why couldn’t they have developed on version 18.04 for the last 2 years? Since that release they have had 6 months in which they could have released a 2.2.1 version supporting Ubuntu 18.04 or even 20.04.

The dependency list for BBB is significant, among other things it uses LibreOffice for the whiteboard. This adds to the pain of installing and maintaining it. It wouldn’t surprise me if some of the interactions between all the different components have security issues.


If you want something that’s not really painful to install and run then use Jitsi.

If you need YouTube live streaming use Jitsi.

If you need whiteboards and a good text chat system or if you generally need to run things like a classroom then BBB is a good option. But only if you can manage it, know someone who can manage it for you, or are happy to pay for a managed service provider to do it for you.

Cryptogram: North Korea ATM Hack

The US Cybersecurity and Infrastructure Security Agency (CISA) published a long and technical alert describing a North Korea hacking scheme against ATMs in a bunch of countries worldwide:

This joint advisory is the result of analytic efforts among the Cybersecurity and Infrastructure Security Agency (CISA), the Department of the Treasury (Treasury), the Federal Bureau of Investigation (FBI) and U.S. Cyber Command (USCYBERCOM). Working with U.S. government partners, CISA, Treasury, FBI, and USCYBERCOM identified malware and indicators of compromise (IOCs) used by the North Korean government in an automated teller machine (ATM) cash-out scheme­ -- referred to by the U.S. Government as "FASTCash 2.0: North Korea's BeagleBoyz Robbing Banks."

The level of detail is impressive, as seems to be common in CISA's alerts and analysis reports.

Planet Debian: Sylvain Beucler: Debian LTS and ELTS - August 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of my 30 max; all done) and 14.25h for ELTS (out of my 20 max; all done).

We had a Birds of a Feather videoconf session at DebConf20, sadly with varying quality for participants (from very good to unusable), where we shared the first results of the LTS survey.

There were also discussions about evaluating our security reactivity, which proved surprisingly hard to estimate (neither CVE release dates nor criticality metrics are accurate or easily available), and about when it is appropriate to use public naming in procedures.

Interestingly ELTS gained new supported packages, thanks to a new sponsor -- so far I'd seen the opposite, because we were close to the EOL.

As always, there were opportunities to de-dup work through mutual cooperation with the Debian Security team, and LTS/ELTS similar updates.

ELTS - Jessie

  • Fresh build VMs
  • rails/redmine: investigate issue, initially no-action as it can't be reproduced on Stretch and isn't supported in Jessie; follow-up when it's supported again
  • ghostscript: global triage: identify upstream fixed version, distinguish CVEs fixed within a single patch, bisect non-reproducible CVEs, reference missing commit (including at MITRE)
  • ghostscript: fix 25 CVEs, security upload ELA-262-1
  • ghostscript: cross-check against the later DSA-4748-1 (almost identical)
  • software-properties: jessie triage: mark back for update, at least for consistency with Debian Stretch and Ubuntu (all suites)
  • software-properties: security upload ELA-266-1
  • qemu: global triage: update status and patch/regression/reproducer links for 6 pending CVEs
  • qemu: jessie triage: fix 4 'unknown' lines for qemu following changes in package attribution for XSA-297, work continues in September

LTS - Stretch

  • sane-backends: global triage: sort and link patches for 7 CVEs
  • sane-backends: fix dep-8 test and notify the maintainer
  • sane-backends: security upload DLA-2332-1
  • ghostscript: security upload DLA 2335-1 (cf. common ELTS work)
  • ghostscript: rebuild ("give back") on armhf, blame armhf, get told it was a concurrency / build system issue -_-'
  • software-properties: security upload DLA 2339-1 (cf. common ELTS work)
  • wordpress: global triage: reference regression for CVE-2020-4050
  • wordpress: stretch triage: update past CVE status, work continues in September with probably an upstream upgrade 4.7.5 -> 4.7.18
  • nginx: cross-check my July update against the later DSA-4750-1 (same fix)
  • DebConf BoF + IRC follow-up


Kevin Rudd: 2GB: Morrison’s Retirement Rip-Off


Topics: Superannuation; Cheng Lei consular matter

Ben Fordham
Now, superannuation to increase or not? Right now 9.5% of your wage goes towards super, and it sits there until you retire. From July 1 next year, the compulsory rate is going up. It will climb by half a per cent every year until it hits 12% in 2025. So it’s slowly going from 9.5% to 12%. Now that was legislated long before Coronavirus. Now we are in recession, and the government is hinting strongly that it’s ready to dump or delay the policy to increase super contributions. Now I reckon this is a genuine barbecue stopper. It’s not a question of Labor versus Liberal or left versus right. Some want their money now to help them out of hardship. Others say no, we have super for a reason and that is to save for the future. Former Prime Minister Kevin Rudd has got a strong view on this and he joins us on the line. Kevin Rudd, good morning to you.

Kevin Rudd
Morning, Ben. Thanks for having me on the program.

Ben Fordham
No problem. You want Scott Morrison and Josh Frydenberg to leave super alone.

Kevin Rudd
That’s right. And Mr Morrison did promise to maintain this policy which we brought in when he went to the people at the last election. And remember, Ben, back in 2014, they already deferred this for five years. Otherwise, this thing would be done and dusted and it’d be all the way up to 12 by now. I’m just worried we’re going to find one excuse after another to kick this into the Never Never Land. And the result is that working families, those people listening to your program this morning, are not going to have a decent nest egg for their retirement.

Ben Fordham
All right, most of those hard-working Aussies are telling me that they would like the option of having the money right now.

Kevin Rudd
Well, the problem with super is that if you open the floodgates and allow people what Morrison calls ‘early access’, then what happens is the accounts hollow out: if you take out $10,000 now as a 35-year-old, by the time you retire you’re going to be $65,000 to $130,000 worse off. That’s how it builds up. So I’m really worried about that. And also, you know Ben, we’re living longer. Once upon a time, we used to retire at 65 and we’d all be dead by 70. Guess what, that’s not the case anymore. People are living to 80, 90, and the young people listening to your program, or a large number of them, are going to be around until they’re 100. So what we have for retirement income is really important, otherwise you’re back on the age pension which, despite changes I made in office, is not hugely generous.

Ben Fordham
I’m sure you respect the view of the Reserve Bank governor Philip Lowe. Philip Lowe says lifting the super guarantee would reduce wages, cut consumer spending and cost jobs. So he’s got a very different view to you.

Kevin Rudd
Well, I’ve actually had a look at what Governor Lowe had to say. I’ve been reading his submission in the last 24 hours or so. On the question of the impact on wages, yes, he says it would be a potential deferral of wages, but he doesn’t express a view one way or the other to whether that is good or bad. But on employment and the argument used by the government that this is somehow some negative effect on employment, it just doesn’t stack up. By the way, Ben, remember, if this logic held that somehow if we don’t have the superannuation guarantee levy going up, that wages would increase; well, after the government deferred this for five years, starting from 2014, guess what, working people got no increase in their super, but also their wages have flatlined as well. I’m just worried about how this all lands at the end for working people wanting to have a decent retirement.

Ben Fordham
Okay, but don’t we need to be aware of the times that we’re living in? You said earlier, you’re concerned that the government’s looking for excuses to put this thing off or kill this thing off. Well, we do have a global health pandemic at the moment. Isn’t that the ultimate reason why we should be adjusting our position?

Kevin Rudd
There’s always a crisis. I took the country through the global financial crisis, which threw every major economy in the world into recession. We managed to avoid it here in Australia through a combination of good policy and some other factors as well. It didn’t cross our mind to kill super, or superannuation increases, during that period of time. It was simply not in our view the right approach, because we were concerned about keeping the economy going in the here and now, but also making proper preparations for the future. But here’s the rub. If 9% is good enough for everybody, or 9.5% where it is at the moment, then why are the politicians and their staffers currently on 15.4%? Very generous for them. Not so generous for working families. That’s what worries me.

Ben Fordham
We know that you keep a keen eye on China. We wake up this morning to the news that Chinese authorities have detained an Australian journalist, Cheng Lei, without charge. Is the timing of this at all suspicious?

Kevin Rudd
You know, Ben, I don’t know enough of the individual circumstances surrounding this case. I don’t want to say anything which jeopardizes the individual concerned. All I’d say is, the Australian Government has a responsibility to look after any Australian — Chinese Australian, Anglo Saxon Australian, whoever Australian — if they get into strife abroad. And I’m sure, knowing the professionalism of the Australian Foreign Service, that they’re doing everything physically possible at present to try and look after this person.

Ben Fordham
Yeah, we know that Marise Payne’s doing that this morning. We appreciate you jumping on the phone and talking to us.

Kevin Rudd
Thanks, Ben. Appreciate it.

Ben Fordham
Former Prime Minister Kevin Rudd, I reckon this is one of these issues where you can’t just put a line down the middle of the page and say, ‘okay, Labor supporters are going to think this way and Liberal supporters are going to think that way’. I think there are two schools of thought and it depends on your age, it depends on your circumstance, it depends on your attitude. Some say ‘give me the money now, it’s my money, not yours’. Others say ‘no, we have super for a reason, it’s there for our retirement’. Where do you stand? It’s 7.52 am.

The post 2GB: Morrison’s Retirement Rip-Off appeared first on Kevin Rudd.

Kevin Rudd: Sunrise: Protecting Australian Retirees


Now two former Labor prime ministers have taken aim at the government, demanding it go ahead with next year’s planned increase to compulsory super. Paul Keating introduced the scheme back in 1992 and says workers should not miss out.

Paul Keating
[Recording] They want to gyp ordinary people by two and a half per cent of their income for the rest of their life. I mean, the gall of it. I mean, the heartlessness of it.

Kevin Rudd, who moved to increase super contributions as well, says the rise to 12% in the years ahead should not be stalled.

Kevin Rudd
[Recording] This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

The government is yet to make an official decision. Joining me now is the former prime minister Kevin Rudd. Kevin Rudd, good morning to you. Rather than being a cruel assault by the federal government, is it an acknowledgment that we’re going into the worst recession since the Depression?

Kevin Rudd
Well, you know, Kochie, there’s always been an excuse not to do super and not to continue with super. And what we’ve seen in the past is the Liberal Party at various stages just trying to kill this scheme, which Paul Keating got going for the benefit of working Australians all those years ago. They had no real excuse for deferring this move from nine to 12% when they did it back in 2014, and this would be a further deferral. Look, what’s really at stake here, Kochie, is just working families watching your program this morning having a decent retirement. That’s why Paul brought it in.


Kevin Rudd
That’s why we both decided to come and speak out.

David Koch
I absolutely agree with it, but it’s a matter of timing. What do you say to all the small business owners out there who are just trying to keep afloat? To say, hey gang, you’re gonna have to pay an extra half a per cent in super, that you’re going to have to pay on a quarterly basis, to add to your bills again, to try and survive this.

Kevin Rudd
Well, what Mr Morrison is saying to those small business folks is that the reason we don’t want to do this super increase is that it’s going to get in the road of a wage increase. And you can’t have it both ways, mate. Either you’ve got an employer adding 0.5 by way of a wage increase, or by way of super. That’s the bottom line here.


Kevin Rudd
You can’t simply argue that this is all going to disappear into some magic pudding. The bottom line is: the reason we did it this way, and Paul before me, was a small increment each year.


Kevin Rudd
But it builds up as you know, you’re a finance guy Kochie, into a ginormous nest egg for people.


Kevin Rudd
And for the country.

David Koch
I do not disagree with the overall theory of it. It’s just in the timing. So what you’re saying to Australian bosses around the country is to go to your staff and say, ‘no, you’re not going to get a pay increase, because I’m going to put more into your super, and you’ve got to like it or lump it’.

Kevin Rudd
Well, Kochie, if that was the case, why is it that we’ve had no increase in the super guarantee levy over the last five or six years, and wages growth has been absolutely doodly-squat over that period of time? In other words, the argument for the last five years was that we couldn’t do an SGL increase from nine to 12 because it would impact on wages. Guess what, we got no increase in super and no increase in real wages. It just doesn’t hold, mate.

David Koch
The Reserve Bank is saying don’t do it. Social services groups are saying don’t do it.

Kevin Rudd
Well, mate, if you look carefully at what the governor of the RBA says: on the impact on wages, yes, it is, in his language, a wage deferral, on which he does not express an opinion. And as for the employment, the jobs impact, he says he does not have a view. I think we need to be very careful in reading the detail of what governor Lowe has had to say. Our argument is just: what’s decent for working families? And why are the pollies and their staffers getting 15.4%, while working families, who Paul tried to look after with this massive reform 30 years ago, are stuck at nine? I don’t think that’s fair. It’s a double standard.

David Koch
Yep. I absolutely agree with you on that as well.

The post Sunrise: Protecting Australian Retirees appeared first on Kevin Rudd.

Planet Debian: Junichi Uekawa: September.

September. Recently I've been writing and reading more golang code and I feel more comfortable with it. However every day I feel frustrated by the lack of features.

Worse Than Failure: CodeSOD: Unknown Purpose

Networks are complex beasts, and as they grow, they get more complicated. Diagnosing and understanding problems on networks rapidly gets hard. “Fortunately” for the world, IniTech ships one of those tools.

Leonore works on IniTech’s protocol analyzer. As you might imagine, a protocol analyzer gathers a lot of data. In the case of IniTech’s product, the lowest level of data acquisition is frequently sampled voltage measurements over time. And it’s a lot of samples- depending on the protocol in question, it might need samples on the order of nanoseconds.

In Leonore’s case, those raw voltage samples are the “primary data”. Now, there are all sorts of cool things that you can do with that primary data, but those computations become expensive. If your goal is to be able to provide realtime updates to the UI, you can’t do most of those computations- you do those outside of the UI update loop.

But you can do some of them. Things like level crossings and timing information can be built quickly enough for the UI. These values are “secondary data”.

As data is collected, there are a number of other sections of the application which need to be notified: the UI and the various high-level analysis components. Architecturally, Leonore’s team took an event-driven approach to doing this. As data is collected, a DataUpdatedEvent fires. The DataUpdatedEvent fires twice: once for the “primary data” and once for the “secondary data”. These two events always happen in lockstep, and they happen so closely together that, for all other modules in the application, they can safely be considered simultaneous. No component in the application ever cares about only one- they always want to see both the primary and the secondary data.

So, to review: the data collection module outputs a pair of data updated events, one containing primary data, one containing secondary data, and can never do anything else, and these two events could basically be viewed as the same event by everything else in the application.

Which raises a question about this C++/COM enum, used to tag the different events:

  enum DataUpdatedEventType
  {
    [helpstring("Unknown data type.")] UnknownDataType = 0,
    [helpstring("Primary data.")] PrimaryData = 1,
    [helpstring("Secondary data.")] SecondaryData = 2,
  };

As stated, the distinction between primary/secondary events is unnecessary. In fact, sending two events makes all the consuming code more complicated, because in many cases, they can’t start working until they’ve received the secondary data, and thus have to cache the primary data until the next event arrives.
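Here is a sketch of the extra state every consumer ends up carrying (the names are assumptions for illustration, not the actual IniTech code): because primary and secondary data arrive as two separate events, each handler has to buffer the first payload until the second one shows up.

```python
PRIMARY_DATA, SECONDARY_DATA = 1, 2  # mirrors the enum's two usable values


class Consumer:
    """Handler that can only start work once BOTH events have arrived."""

    def __init__(self):
        self._pending_primary = None
        self.processed = []

    def on_data_updated(self, event_type, payload):
        if event_type == PRIMARY_DATA:
            # Nothing useful can happen yet; stash the samples and wait.
            self._pending_primary = payload
        elif event_type == SECONDARY_DATA:
            # The "two" events were really one update all along.
            self.processed.append((self._pending_primary, payload))
            self._pending_primary = None


c = Consumer()
c.on_data_updated(PRIMARY_DATA, "voltage samples")
c.on_data_updated(SECONDARY_DATA, "level crossings")
print(c.processed)  # [('voltage samples', 'level crossings')]
```

Every consumer duplicating this buffering logic is precisely the complication a single combined event would have avoided.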

But that’s minor. The UnknownDataType is never used. It can never be used. There is no case in which the data collection module will ever output that. There’s no reason why it would ever need to output that. None of the consumers are prepared to handle that- sending an UnknownDataType would almost certainly cause a crash in most configurations.

So why is it there? I’ll let Leonore explain:

The only answer I can give is this: When this was written, half of us didn’t know what we were doing most of the time, and most of us didn’t know what we were doing half of the time. So now there’s an enum in the code base that has never been used and, I would submit, CAN never be used. Or maybe I ought to say SHOULD never be used. I would just delete it, but I’ve never quite been able to bring myself to do so.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet Debian: Russ Allbery: Review: Riot Baby

Review: Riot Baby, by Tochi Onyebuchi

Copyright: January 2020
ISBN: 1-250-21476-9
Format: Kindle
Pages: 176

From Ella's childhood, she sees visions of the future. They come at first with nose bleeds and other physical symptoms, but their worst aspect is that they're sad and dark. Ella is black, as are those around her, and their futures are full of shootings and gangs, death and trouble. As she grows older, she develops her Thing: powers that let her bend, move, and destroy things with her mind, and later to become invisible, teleport, and reshape the world. Ella has superpowers.

Ella is not the viewpoint character of most of Riot Baby, however. That is Kev, her younger brother, the riot baby of the title, born in South Central on the day of the Rodney King riots. Kev grows up in Harlem where they move after the destruction from the riots: keeping Ella's secret, making friends, navigating gang politics, watching people be harassed by the cops. Growing up black in the United States. Then Ella sees something awful in the future and disappears, and some time afterwards Kev ends up in Rikers Island.

One of the problems with writing reviews of every book I read is that sometimes I read books that I am utterly unqualified to review. This is one of those books. This novella is about black exhaustion and rage, about the experience of oppression, about how it feels to be inside the prison system. It's also a story in dialogue with an argument that isn't mine, between the patience and suffering of endurance and not making things worse versus the rage of using all the power that one has to force a change. Some parts of it sat uncomfortably and the ending didn't work for me on the first reading, but it's not possible for me to separate my reactions to the novella from being a white man and having a far different experience of the world.

I'm writing a review anyway because that's what I do when I read books, but even more than normal, take this as my personal reaction expressed in my quiet corner of the Internet. I'm not the person whose opinion of this story should matter.

In many versions of this novella, Ella would be the main character, since she's the one with superpowers. She does get some viewpoint scenes, but most of the focus is on Kev even when the narrative is following Ella: Kev trying to navigate the world, trying to survive prison, seeing his friends murdered by the police, and living as the target of oppression that Ella can escape. This was an excellent choice. Ella wouldn't have been as interesting a character if the story were more focused on her developing powers instead of on the problems that she cannot solve.

The writing is visceral, immediate, and very evocative. Onyebuchi builds the narrative with a series of short and vividly-described moments showing the narrowing of Kev's life and Ella's exploration of her growing anger and search for a way to support and protect him.

This is not a story about nonviolent resistance or about the arc of the universe bending towards justice. Ella confronts this directly in a memorable scene in a church towards the end of the novella that for me was the emotional heart of the story. The previous generations, starting with Kev and Ella's mother, preach the gospel of endurance and survival and looking on the good side. The prison system eventually provides Kev a path to quiet and a form of peace. Riot Baby is a story about rejecting that approach to the continuing cycle of violence. Ella is fed up, tired, angry, and increasingly unconvinced that waiting for change is working.

I wasn't that positive on this story when I finished it, but it's stuck with me since I read it and my appreciation for it has grown while writing this review. It uses the power fantasy both to make a hard point about the problems power cannot solve and to recast the argument about pacifism and nonviolence in a challenging way. I'm still not certain what I think of it, but I'm still thinking about it, which says a lot. It deserves the positive attention that it's gotten.

Rating: 7 out of 10

Planet DebianPaul Wise: FLOSS Activities August 2020


This month I didn't have any particular focus. I just worked on issues in my info bubble.





  • Debian: restarted RAM eating service
  • Debian wiki: unblock IP addresses, approve accounts


The cython-blis/preshed/thinc/theano bugs and smart-open/python-importlib-metadata/python-pyfakefs/python-zipp/python-threadpoolctl backports were sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianChris Lamb: Free software activities in August 2020

Here is another monthly update covering what I have been doing in the free software world during August 2020 (previous month):

  • Filed a pull request against django-enumfield, a library that provides an enumeration-like model field for the Django web development framework. The classproperty helper has been moved to django.utils.functional in newer versions of Django. [...]

  • Transferred the maintainership of my Strava Enhancement Suite Chrome extension to improve the user experience on the Strava athletic tracker to Pavel Dolecek.

  • As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions, etc.

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily manipulate JSON objects from C in order to make the documentation build reproducibly. [...]

  • Reviewed and merged some changes to my django-auto-one-to-one library for Django from Dan Palmer (which automatically creates and destroys associated model instances) to not configure signals for models that aren't installed and to honour INSTALLED_APPS during model setup. [...]

  • Merged a pull request from Michael K. to clean up the codebase after dropping support for Python 2 and Django 1.x [...] in my django-slack library, which provides a convenient wrapper between projects using Django and the Slack chat platform.

  • Updated my django-staticfiles-dotd utility that concatenates Debian .d-style directories containing Javascript and CSS to drop unquote usage from the six compatibility library. [...]

I uploaded Lintian versions 2.86.0, 2.87.0, 2.88.0, 2.89.0, 2.90.0, 2.91.0 and 2.92.0, as well as made the following changes:

  • New features:

    • Check for StandardOutput= and StandardError= fields that use the deprecated syslog or syslog+console systemd facilities. (#966617)
    • Add support for clzip as an alternative for lzip. (#967083)
    • Check for User=nobody and Group=nogroup in systemd .service files. (#966623)
  • Bug fixes:

  • Reporting/interface:

  • Misc:

    • Add justification for the use of the lzip dependency in a previous debian/changelog entry. (#966817)
    • Update the generate-tag-summary release script to reflect the change of the tag definition filename extension from .desc to .tag. [...]
    • Revert a change to the spelling-error-in-rules-requires-root tag's severity; this is not a "spelling" check in the sense that it does not use our dictionary. [...]
    • Drop an unused $skip_tag argument in the extract_service_file_values routine. [...]
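
For illustration, a unit file that the two systemd-related checks above would flag might contain directives like the following (a hypothetical sketch, not taken from Lintian's test suite):

```ini
[Service]
ExecStart=/usr/bin/example-daemon

# Deprecated: newer systemd removed the syslog facilities in favour of
# journal-based values such as "journal".
StandardOutput=syslog
StandardError=syslog+console

# Flagged: a dedicated service user (or DynamicUser=yes) is preferred
# over the shared nobody/nogroup identity.
User=nobody
Group=nogroup
```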


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
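As a minimal illustration of the idea (a sketch only; real verification uses the distribution's actual build toolchain), one can run the same build twice and compare the artifacts bit-for-bit:

```shell
# Simulate two independent "builds" of the same source; the file names
# and the trivial build step here are placeholders for a real package build.
mkdir -p build1 build2
printf 'compiled output\n' > build1/artifact.bin
printf 'compiled output\n' > build2/artifact.bin

# A reproducible build yields identical checksums from both builds.
sha256sum build1/artifact.bin build2/artifact.bin

# When the checksums differ, a tool such as diffoscope can show
# exactly where the two artifacts diverge:
#   diffoscope build1/artifact.bin build2/artifact.bin
```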

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily construct JSON objects from C in order to make the documentation build reproducibly. [...]

  • In Debian, I:

  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

  • Filed a build-failure bug against the muroar package that was discovered while doing reproducibility testing. (#968189)

  • We operate a large and many-featured Jenkins-based testing framework that powers the project's continuous testing. This month, I updated the self-serve package rescheduler to use HTML <pre> tags when dumping any debugging data. [...]



I made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting the data portion of PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. [...]
  • Bug fixes:

    • Don't raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. [...]
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) returns "data". (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. [...]
    • Don't use Python's repr(object) output in "Calling external command" messages. [...]
    • Include the filename in the "... not identified by any comparator" message. [...]
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. [...]
    • Drop some unused imports [...], an unnecessary dictionary comprehension [...] and some unnecessary control flow [...].
    • Correct typo of "output" in a comment. [...]
  • Release process:

    • Move generation of debian/tests/control to an external script. [...]
    • Add some URLs for the site that will appear on [...]
    • Update "author" and "author email" in for and similar. [...]
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124) [...][...][...]
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. [...]
    • Add an assert_diff helper that loads and compares a fixture output. [...][...][...][...]
  • Misc:

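One of the bug fixes above concerns XML files whose Document Type Definition declares entities or references an external resource. The defensive pattern is to parse optimistically and fall back rather than crash; a small Python sketch of that idea (illustrative only, not diffoscope's actual code) might look like:

```python
import xml.etree.ElementTree as ET

# An internal DTD subset defining an entity parses fine: expat expands
# the entity itself.
ok = """<?xml version="1.0"?>
<!DOCTYPE note [<!ENTITY who "world">]>
<note>hello &who;</note>"""
text = ET.fromstring(ok).text  # "hello world"

# An entity defined only in an external DTD cannot be resolved (the
# external resource is never fetched), so guard against the resulting
# ParseError instead of letting the whole comparison crash.
bad = """<?xml version="1.0"?>
<!DOCTYPE note SYSTEM "missing.dtd">
<note>hello &who;</note>"""
try:
    tree = ET.fromstring(bad)
except ET.ParseError:
    tree = None  # fall back to treating the file as opaque data
```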


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged chrony [...], golang-1.8 [...], golang-go.crypto [...], golang-golang-x-net-dev [...], icingaweb2 [...], lua5.3 [...], mongodb [...], net-snmp, php7.0 [...], qt4-x11, qtbase-opensource-src, ruby-actionpack-page-caching [...], ruby-doorkeeper [...], ruby-json-jwt [...], ruby-kaminari [...], ruby-kaminari [...], ruby-rack-cors [...], shiro [...] & squirrelmail.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, attending the Debian LTS BoF at DebConf20 etc.

  • Issued DLA 2313-1 and ELA-257-1 to fix a privilege escalation vulnerability in Net-SNMP.

  • Issued ELA-263-1 for qtbase-opensource-src and ELA-261-1 for qt4-x11, two components of the Qt cross-platform C++ application framework. A specially-crafted XBM image file could have caused a buffer overread.

  • Issued ELA-268-1 to address unsafe serialisation vulnerabilities that were discovered in the PHP-based squirrelmail webmail client.

  • Issued DLA 2311-1 for zabbix, the PHP-based monitoring system, to fix a potential cross-site scripting vulnerability via <iframe> HTML elements.

  • Issued DLA 2334-1 to fix a denial of service vulnerability in ruby-websocket-extensions, a library for managing long-lived HTTP 'WebSocket' connections.

  • Issued DLA 2345-1 for PHP 7.0 as it was discovered that there was a use-after-free vulnerability when parsing PHAR files, a method of putting entire PHP applications into a single file.

  • I also updated the Extended LTS website to pluralise the "Related CVEs" text in announcement emails [...] and dropped some trailing whitespace [...].

You can find out more about the project via the following video:


Uploads to Debian

  • memcached:

    • 1.6.6-2 — Enable TLS capabilities by default. (#968603)
    • 1.6.6-3 — Add libio-socket-ssl-perl to test TLS support and perform a general package refresh.
  • python-django:

    • 2.2.15-1 (unstable) — New upstream bugfix release
    • 3.1-1 (experimental) — New upstream release.
    • 2.2.15-2 (unstable) & 3.1-2 (experimental) — Set the same PYTHONPATH when executing the runtime tests as we do in the package build. (#968577)
  • docbook-to-man:

    • 2.0.0-43 — Refresh packaging, and upload some changes from the Debian Janitor.
    • 2.0.0-44 — Fix compatibility with GCC 10, restoring the missing /usr/bin/instant binary. (#968900)
  • hiredis (1.0.0-1) — New upstream release to experimental.

LongNowMichael McElligott, A Staple of San Francisco Art and Culture, Dies at 50

It is with great sadness that we share the news that Michael McElligott, an event producer, thespian, writer, long-time Long Now staff member, and relentless promoter of the San Francisco avant-garde, has died. He was 50 years old.

Michael battled an aggressive form of brain cancer over the past year. He kept his legendary sense of humor throughout his challenging treatment. He died surrounded by family, friends, and his long-time partner, Danielle Engelman, Long Now’s Director of Programs.

Most of the Long Now community knew Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

An event producer since his college days, Michael was active in San Francisco’s art and theater scene as a performer and instigator of unusual events for more than 20 years. From 01999 to 02003 Michael hosted and co-produced The Tentacle Sessions, a monthly series spotlighting accomplished individuals in the San Francisco Bay Area scene—including writers, artists, and scientists. He has produced and performed in numerous alternative theater venues over the years including Popcorn Anti-Theater, The EXIT, 21 Grand, Stage Werx, and the late, great Dark Room Theater. He also produced events for Speechless LIVE and The Battery SF.

Michael was a long-time blogger (usually under his nom de kunst mikl-em) for publications including celebrated arts magazine Hi Fructose and award-winning internet culture site Laughing Squid. His writing can be found in print in the Hi Fructose Collected Edition books and Tales of The San Francisco Cacophony Society in which he recounted some of his adventures with that noted countercultural group.

Beginning in the late 01990s as an employee at Hot Wired, Michael worked in technology in various marketing, technical and product roles. He worked at both startups and tech giants; helped launch products for both consumers and enterprise; and worked with some of the best designers and programmers in the industry.

Originally from Richmond, Virginia, he co-founded a college radio station and an underground art space before moving to San Francisco. In the Bay Area, he was involved with myriad artistic projects and creative ventures, including helping start the online radio station Radio Valencia.

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds, run social media, and design and produce the Conversations at The Interval lecture series.

To honor Michael’s role in helping to fundraise for and build The Interval, we have finally completed the design on the donor wall which will be dedicated to him.  It should be completed in the next few weeks and will bear a plaque remembering Michael. You can watch a playlist of all of Michael’s Interval talks here. Below, you can find a compilation of moments from the more than a hundred talks that Michael hosted over the years.

“This moment is really both an end and a beginning. And like the name The Interval, which is a measure of time, and a place out of time, this is that interval.”

Michael McElligott

Kevin RuddPress Conference: Morrison’s Assault on Superannuation

31 AUGUST 2020

Kevin Rudd
The reason I’m speaking to you this afternoon, here in Brisbane, is that Paul Keating, former Prime Minister of Australia, and myself, have a deep passion for the future of superannuation, retirement income adequacy for working families for the future, the future of our national savings and the national economy. So former prime minister Paul Keating is speaking to the media now in Sydney, and I’m speaking to national media now in Brisbane. And I don’t think Paul and I have ever done a joint press conference before, albeit socially distanced between Brisbane and Sydney. But the reason we’re doing it today is because this is a major matter of public importance for the country.

Let me tell you why. Keating is the architect of our national superannuation policy. This was some 30 years ago. And as a result of his efforts, we now have the real possibility of decent retirement income policy for working families for the first time in this country's history. And on top of that, we've accumulated something like $3 trillion worth of national savings. If you ask the question today, why is it that Australia still has a triple-A credit rating around the world, it's because we have a bucketload of national savings. And so Paul Keating should be thanked for that, not just for the macroeconomy though, but also for delivering this enormous dividend to working families and giving them retirement dignity. Of course, what we did in government was announce that we would move the superannuation guarantee level from 9% to 12%. And we legislated to that effect. And prior to the last election, Mr Morrison said that that was also Liberal and National Party policy as well. What Mr Keating and I are deeply concerned about is whether, in fact, this core undertaking to Australian working families is now in the process of being junked.

There are two arguments which I think we need to bear in mind. The first is that we've already had the Morrison Government rip out $40 billion-plus from people's existing superannuation accounts. And the reason why they've done that is because they haven't had an economic policy alternative other than to say to working families, if you're doing it tough as a result of the COVID crisis, then you can go and raid your super. Well, that's all very fine and dandy, but when those working people then go to retire in the decades ahead, they will have gutted their retirement income. And that's because this government has allowed them to do that, and in fact forced them to do that, in the absence of an economic policy alternative. Therefore, we've had this slug taken to the existing national superannuation pile. But furthermore, the second big slug is this indication increasingly from both Mr Morrison and Mr Frydenberg that they're now going to betray the Australian people, betray working families, by repudiating their last pre-election commitment by abandoning the increase from 9.5% where it is now to 12%. This is a cruel assault by Morrison on the retirement income of working Australians, using the cover of COVID to try and get away with it.

The argument which the Australian Government seems to be advancing to justify this most recent assault on retirement income policy is that if we go ahead with increasing the superannuation guarantee level from 9.5% to 10, to 10.5, to 11, to 11.5, to 12 in the years to come, that will somehow depress natural wages growth in the Australian economy. Pigs might fly. That is the biggest bullshit argument I have ever heard against going ahead with decent provision for people's superannuation savings for the future. There is no statistical foundation for it. There is no logical foundation for it. There is no data-based argument to sustain it. This is an increment of half-a-percent a year for the next several years until we get to 12%. What is magic about 12%? It's fundamental in terms of the calculations that have been done to provide people with decent superannuation adequacy, retirement income adequacy, when they stop working. That's why we're doing it. But the argument that somehow by not proceeding with the increase from 9.5 to 12% we're going to deny people a proper increase in wages in the period ahead is an absolute nonsense. There is no basis to that argument whatsoever.

And what does it mean for an average working family? If you're currently on $70,000 a year and superannuation is frozen at 9.5%, and not increased to 12, by the time you retire, you're going to be at least $70,000 worse off than would otherwise be the case. Why have we, in successive Labor governments, been so passionate about superannuation policy? Because we believe that every Australian, every working family, should have the opportunity for some decency, dignity and independence in their retirement. And guess what: as we live longer, we're going to spend longer in retirement, and this is going to mean more and more for the generations to come. Of course, what's the alternative if we don't have superannuation adequacy, and if this raid on super continues under cover of COVID again? Well, it means that Mr Morrison and Mr Frydenberg in the future are going to be forcing more and more people onto the age pension, and my challenge to Australians is simply this: do you really trust your future and your retirement to Mr Morrison's generosity in years to come on the age pension? It's a bit like saying that you trust Mr Morrison in terms of his custodianship of the aged care system in this country. Successive conservative governments have never supported effective increases to the age pension, and they've never properly supported the aged care sector either.

But the bottom line is, if you deny people dignity and independence through the superannuation system, and the measures which the current conservative government is undertaking and foreshadowing take us further in that direction, then there's only one course left for people when they retire, and that's to go onto the age pension. One of the things I'm proudest of in our period in government was that we brought about the biggest single adjustment to the age pension in its history. It was huge, something like $65. And we made that as a one-off adjustment which was indexed into the future. But let me tell you, that would never happen under a conservative government. And therefore entrusting people's future retirement to the future generosity of whichever conservative government might be around at the time is frankly folly. The whole logic of us having a superannuation system is that every working Australian can have their own independent dignity in their own retirement. That's what it's about.

So my appeal to Mr Morrison and Mr Frydenberg today is: Scotty, Joshy, think about it again. This is a really bad idea. My appeal to them as human beings as they look to the retirement of people who are near and dear to them in the future is: don’t take a further meataxe to the retirement income of working families for the future. It’s just un-Australian. Thank you.

Well, what do you think of the argument that delaying the superannuation guarantee increase would actually give people more money in their take home pay? I know you’ve used fairly strong language.

Kevin Rudd
Well, it is a fraudulent argument. There’s nothing in the data to suggest that that would happen. Let me give you one small example. In the last seven or eight years, we’ve had significant productivity growth in the Australian economy, in part because of some of the reforms we brought about in the economy during our own period in government. These things flow through. But if you look at productivity growth on the one hand, and look at the negligible growth in real wage levels over that same period of time, there is no historical argument to suggest that somehow by sacrificing superannuation increases that you’re going to generate an increase in average wages and average income. There’s simply nothing in the argument whatsoever.

So therefore, I can only conclude that this is a made-up argument by Mr Morrison using COVID cover, when in fact, what is their motivation? The Liberal Party have never liked the compulsory superannuation scheme, ever. They’ve opposed it all the way through. And I can only think that the reason for that is because Mr Keating came up with the idea in the first place. And on top of it, that because we now have such large industry superannuation funds around Australia, and $3 trillion therefore worth of muscle in the superannuation industry, that somehow represents a threat to their side of politics. But the argument that this somehow is going to effect wage increases for future average Australians is simply without logical foundation.

Sure, but you’re comparing historical data with not exactly like-for-like, given we’re now in a recession and the immediate future will be deeply in recession. So, in terms of the argument that delaying [inaudible] will end up increasing take-home pay packets: do you admit that, by looking at historical data and looking at the current trajectory, it’s not like-for-like?

Kevin Rudd
The bottom line is we’ve had relatively flat growth in the economy in the last several years, and I have seen so many times in recent decades conservative parties [inaudible] that somehow, by increasing superannuation, we’re going to depress average income levels. Remember, the conservatives have already delayed the implementation of this increase of 2.5% since they came to power in 2013-14, whatever excuses they managed to marshal at the time in so doing. But the bottom line is, as this data indicates, that hasn’t resulted in some significant increase in wages. In fact, the data suggests the reverse.

So what I’m suggesting to you is: for them to argue that a 0.5% a year increase in the superannuation guarantee level is going to send a torpedo amidships into the prospects of wage increases for working Australians makes no sense. What does make sense is the accumulation of those savings over a lifetime. If Paul Keating hadn’t done what he did back then, there’d be no $3 trillion worth of Australian national savings. Paul had the vision to do it. Good on him. We tried to complete that vision by going from 9 to 12. And this mob have tried to stop it. But the real people who miss out are your parents, your parents, and, I’m sorry to tell you both, you’ll both get older, and you too in terms of the adequacy of your retirement income when the day comes.

So if it’s so important then, why did you only increase it by 0.5% during your six years in government, sharing that period of course with Julia Gillard?

Kevin Rudd
Well, the bottom line is: we decided to increase it gradually, so that we would not present any one-off assault to the ability of employers and employees to enjoy reasonable wage increases. It was a small increase every year and, guess what: it continues to be a very small increase every year until we get to 12. The other thing I’d say, which I haven’t raised so far in our discussion today, is that for most of the last five years, I’ve been in the United States. I run an American think tank. When I’ve traveled around the world and people know of my background in Australian politics, I am always asked this question: how did you guys come up with such a brilliant national savings policy? Very few, if any other countries in the world have this. But what we have done is a marvelous piece of long-term planning for generations of Australians. And with great macroeconomic benefit for the Australian economy in terms of this pool of national savings. We’re the envy of the world.

And yet what are we doing? Turning around and trashing it. So the reason we were gradual about it was to be responsible, not give people a sudden 3% hit, to tailor it over time, and we did so, just like Paul did with the original move from zero, first to 3, then 6 to 9. It happened gradually. But the cumulative effect of this over time for people retiring in 10, 20, 30, 40 years’ time is enormous. And that’s why these changes are so important to the future. As you know, I rarely call a press conference. Paul doesn’t call many press conferences either, but he and I are angry as hell that this mob have decided, it seems, to take a meataxe to this important part of our national economic future and our social wellbeing. That’s what it’s about.

So we know that [inaudible] super accounts have been wiped completely. What damage do you think that would do if it’s extended? So that people can continue to access their super?

Kevin Rudd
The damage it does for individual working Australians, as I said before, is it throws them back onto the age pension. And the age pension is simply the absolute basic backbone, the absolute basic provision, for people’s retirement for the future, if no other options exist. And as I said, in office we undertook a fundamental reform to take it from below poverty level to above poverty level. But if, for the future, you want folks who are retiring to look at that as their option, well, if you continue to destroy this nation’s superannuation nest egg, that’s exactly where you’re going to end up. I can’t understand the logic of this. I thought conservatives were supposed to favour thrift. I thought conservatives were supposed to favour saving. Their accusation against those of us who come from the Labor side of politics apparently is that we love to spend; actually, we like to save, and we do it through a national savings policy. Good for working families and good for the national economy.

And I think it’s just wrong that people have as their only option there for the future to be thrown back on the age pension and on that point, apart from the wellbeing of individual families, think about the impact in the future on the national budget. Most countries say to me that they envy our national savings policy because it takes pressure off the national budget in the future. Why do you think so many of the ratings agencies are marking economies down around the world? Because they haven’t made adequate future provision for retirement. They haven’t made adequate provision for the future superannuation entitlements of government employees as well. So what we have with the Future Fund, which I concede readily was an initiative of the conservative government, but supported by us on a bipartisan basis, is dealing with that liability in terms of the retirement income needs of federal public servants. But in terms of the rest of the nation, that’s what our national superannuation policy was about. Two arms to it. So I can’t understand why a conservative government would want to take the meataxe to [inaudible].

Following on from your comments in 2018 when you said national Labor should look at distancing themselves from the CFMEU, do you think that’s something Queensland Labor should do given the events of last week?

Kevin Rudd
Who are you from by the way?

The Courier-Mail.

Kevin Rudd
Well, when the Murdoch media ask me a question, I’m always skeptical in terms of why it’s been asked. So I don’t know the context of this particular question. I simply stand by my historical comments.

Do you think, in light of what happened last week, when Michael Ravbar came out quite strongly against Queensland Labor, saying that they have no economic plan and that the left faction was a bit out of touch with what everyone was thinking? So I just wanted to know whether that’s something you think should happen at the state level?

Kevin Rudd
What I know about the Murdoch media is that you have no interest in the future of the Labor government and no interest in the future of the Labor Party. What you’re interested in is a headline in tomorrow’s Courier-Mail which attacks the Palaszczuk government. I don’t intend to provide that for you. I kind of know what the agenda is here. I’ve been around for a long time and I know what instructions you’re going to get.

But let me say this about the Palaszczuk government: the Palaszczuk government has a strong economic record. The Palaszczuk government has handled the COVID crisis well. The Palaszczuk government is up against an LNP opposition led by Frecklington which has repeatedly called for Queensland’s borders to be opened. For those reasons, the state opposition has no credibility. And for those reasons, Annastacia Palaszczuk has bucketloads of credibility. So as for the internal debates, I will leave it to you and all the journalists who will follow them from the Curious Mail.

Do you think Labor will do well at the election, Mr Rudd?

Kevin Rudd
That’s a matter for the Queensland people but Annastacia Palaszczuk, given all the challenges that state premiers are facing right now, is doing a first-class job in very difficult circumstances. I used to work for state government. I was Wayne Goss’s chief of staff. I used to be director-general of the Cabinet Office. And I do know something about how state governments operate. And I think she should be commended given the difficult choices which are available to her at this time for running a steady ship. [inaudible] Thanks very much.

The post Press Conference: Morrison’s Assault on Superannuation appeared first on Kevin Rudd.

Cory DoctorowHow to Destroy Surveillance Capitalism

For this week’s podcast, I read an excerpt from “How to Destroy Surveillance Capitalism,” a free short book (or long pamphlet, or “nonfiction novella”) I published with Medium’s Onezero last week. HTDSC is a long critical response to Shoshana Zuboff’s book and paper on the subject, which re-centers the critique on monopolism and the abusive behavior it abets, while expressing skepticism that surveillance capitalists are really as good at manipulating our behavior as they claim to be. It is a gorgeous online package, and there’s a print/ebook edition following.


Planet DebianJonathan Carter: Free Software Activities for 2020-08

Debian packaging

2020-08-07: Sponsor package python-sabyenc (4.0.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package gpxpy (1.4.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package python-jellyfish (0.8.2-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package django-ipwire (3.0.0-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package python-mongoengine (0.20.0-1) for Debian unstable (Python team request).

2020-08-08: Review package pdfminer (20191020+dfsg-3) (Needs some more work) (Python team request).

2020-08-08: Upload package bundlewrap (4.1.0-1) to Debian unstable.

2020-08-09: Sponsor package pdfminer (20200726-1) for Debian unstable (Python team request).

2020-08-09: Sponsor package spyne (2.13.15-1) for Debian unstable (Python team request).

2020-08-09: Review package mod-wsgi (4.6.8-2) (Needs some more work) (Python team request).

2020-08-10: Sponsor package nfoview (1.28-1) for Debian unstable (Python team request).

2020-08-11: Sponsor package pymupdf (1.17.4+ds1-1) for Debian unstable (Python team request).

2020-08-11: Upload package calamares (3.2.28-1) to Debian unstable.

2020-08-11: Upload package xabacus (8.2.9-1) to Debian unstable.

2020-08-11: Upload package bashtop (0.9.25-1~bpo10+1) to Debian buster-backports.

2020-08-11: Upload package live-tasks (11.0.3) to Debian unstable (Closes: #942834, #965999, #956525, #961728).

2020-08-12: Upload package calamares-settings-debian (10.0.20-1+deb10u4) to Debian buster (Closes: #968267, #968296).

2020-08-13: Upload package btfs (2.22-1) to Debian unstable.

2020-08-14: Upload package calamares ( to Debian unstable.

2020-08-14: Upload package bundlewrap (4.1.1-1) to Debian unstable.

2020-08-19: Upload package gnome-shell-extension-dash-to-panel (38-2) to Debian unstable (Closes: #968613).

2020-08-19: Sponsor package mod-wsgi (4.7.1-1) for Debian unstable (Python team request).

2020-08-19: Review package tqdm (4.48.2-1) (Needs some more work) (Python team request).

2020-08-19: Sponsor package tqdm (4.48.2-1) to unstable (Python team request).

2020-08-19: Upload package calamares ( to unstable (Python team request).

CryptogramSeny Kamara on "Crypto for the People"

Seny Kamara gave an excellent keynote talk this year at the (online) CRYPTO Conference. He talked about solving real-world crypto problems for marginalized communities around the world, instead of crypto problems for governments and corporations. Well worth watching and listening to.

Worse Than FailureThoroughly Tested

Zak S worked for a retailer which, as so often happens, got swallowed up by Initech's retail division. Zak's employer had a big, ugly ERP system. Initech had a bigger, uglier ERP and once the acquisition happened, they all needed to play nicely together.

These kinds of marriages are always problematic, but this particular one was made more challenging: Zak's company ran their entire ERP system from a cluster of Solaris servers- running on SPARC CPUs. Since upgrading that ERP system to run in any other environment was too expensive to seriously consider, the existing services were kept on life-support (with hardware replacements scrounged from the Vintage Computing section of eBay), while Zak's team was tasked with rebuilding everything- point-of-sale, reporting, finance, inventory and supply chain- atop Initech's ERP system.

The project was launched with the code name "Cold Stone", with Glenn as new CTO. At the project launch, Glenn stressed that, "This is a high impact project, with high visibility throughout the organization, so it's on us to ensure that the deliverables are completed on time, on budget, to provide maximum value to the business and to that end, I'll be starting a series of meetings to plan the meetings and checkpoints we'll use to ensure that we have an action-plan that streamlines our…"

"Cold Stone" launched with a well defined project scope, but about 15 seconds after launch, that scope exploded. New "business critical" systems were discovered under every rock, and every department in the company had a moment of, "Why weren't we consulted on this plan? Our vital business process isn't included in your plan!" Or, "You shouldn't have included us in this plan, because our team isn't interested in a software upgrade, we're going to continue using the existing system until the end of time, thank you very much."

The expanding scope required expanding resources. Anyone with any programming experience more complex than "wrote a cool formula in Excel" was press-ganged into the project. You know how to script sending marketing emails? Get on board. You wrote a shell script to purge old user accounts? Great, you're going to write a plugin to track inventory at retail stores.

The project burned through half a dozen business analysts and three project managers, and that's before the COVID-19 outbreak forced the company to quickly downsize, and squish together several project management roles into one person.

"Fortunately" for Initech, that one person was Edyth, who was one of those employees who has given their entire life over to the company, and refuses to stop working until the work is done. She was the sort of manager who would schedule meetings at 12:30PM, because she knew no one else would be scheduling meetings during the lunch hour. Or, schedule a half hour meeting at 4:30PM, when the workday ends at 5PM, then let it run long, "Since we're all here anyway, let's keep going." She especially liked to abuse video conferencing for this.

As the big ball of mud grew, the project slowly, slowly eased its way towards completion. And as that deadline approached, Edyth started holding meetings which focused on testing. Which is where Edyth started to raise some concerns.

"Lucy," Edyth said, "I noticed that you've marked the test for integration between the e-commerce site and the IniRewards™ site as not-applicable?"

"Well, yeah," Lucy said. "It says to test IniRewards™ signups on the e-commerce site, but our site doesn't do that. Signups entirely happen on the IniRewards™ site. There isn't really any integration."

"Oh," Edyth said. "So that sounds like it's a Zak thing?"

Zak stared at his screen for a moment. He was responsible for the IniRewards™ site, a port of their pre-acquisition customer rewards system to work with Initech's rewards system. He hadn't written it, but somewhere along the way, he became the owner of it, for reasons which remain murky. "Uh… it's a static link."

Edyth nodded, as if she understood what that meant. "So how long will that take to test? A day? Do you need any special setup for this test?"

"It's… a link. I'll click it."

"Great, yes," Edyth said. "Why don't you write up the test plan document for this user story, and then we'll schedule the test for… next week? Can you do it any earlier?"

"I can do it right now," Zak said.

"No, no," Edyth said. "We need to schedule these tests in advance so you're not interacting with anyone else using the test environment. I'll set up a followup meeting to review your test plan."

Test plans, of course, had a template which needed to be filled out. It was a long document, loaded with boilerplate, for the test to be, "Click the 'Rewards Signup' link in the e-commerce site footer. Expected behavior: the browser navigates to the IniRewards™ home page."

Zak added the document to the project document site, labelled "IniRewards Hyper-Link Test", and waited for the next meeting with Edyth to discuss schedule. This time, Glenn, the CTO was in the meeting.

"This 'Hyper-Link' test sounds very important," Glenn said. He enunciated "hyper-link" like it was a word in a foreign language. "Can we move that up in the schedule? I'd like that done tomorrow."

"I… can do it right now," Zak said. "It won't interact with other tests-"

"No, we shouldn't rush things." Glenn's eyes shifted towards another window as he reviewed the testing schedule. "It looks like there's nothing scheduled for testing between 10AM and 2PM tomorrow. Do you think four hours is enough time? Yes? Great, I'll block that off for you."

Suffice to say, the test passed, and was verified quite thoroughly.


Planet DebianRuss Allbery: Review: Men at Arms

Review: Men at Arms, by Terry Pratchett

Series: Discworld #15
Publisher: Harper
Copyright: 1993
Printing: November 2013
ISBN: 0-06-223740-3
Format: Mass market
Pages: 420

Men at Arms is the fifteenth Discworld novel and a direct plot sequel to Guards! Guards!. You could start here without missing too much, but starting with Guards! Guards! would make more sense. And of course there are cameos (and one major appearance) by other characters who are established in previous books.

Carrot, the adopted dwarf who joined the watch in Guards! Guards!, has been promoted to corporal. He is now in charge of training new recruits, a role that is more important because of the Night Watch's new Patrician-ordered diversity initiative. The Watch must reflect the ethnic makeup of the city. That means admitting a troll, a dwarf... and a woman?

Trolls and dwarfs hate each other because dwarfs mine precious things out of rock and trolls are composed of precious things embedded in rocks, so relations between the new recruits are tense. Captain Vimes is leaving the Watch, and no one is sure who would or could replace him. (The reason for this is a minor spoiler for Guards! Guards!) A magical weapon is stolen from the Assassin's Guild. And a string of murders begins, murders that Vimes is forbidden by Lord Vetinari from investigating and therefore clearly is going to investigate.

This is an odd moment at which to read this book.

The Night Watch are not precisely a police force, although they are moving in that direction. Their role in Ankh-Morpork is made much stranger by the guild system, in which the Thieves' Guild is responsible for theft and for dealing with people who steal outside of the quota of the guild. But Men at Arms is in part a story about ethics, about what it means to be a police officer, and about what it looks like when someone is very good at that job.

Since I live in the United States, that makes it hard to avoid reading Men at Arms in the context of the current upheavals about police racism, use of force, and lack of accountability. Men at Arms can indeed be read that way; community relations, diversity in the police force, the merits of making two groups who hate each other work together, and the allure of violence are all themes Pratchett is working with in this novel. But they're from the perspective of a UK author writing in 1993 about a tiny city guard without any of the machinery of modern police, so I kept seeing a point of clear similarity and then being slightly wrong-footed by the details. It also felt odd to read a book where the cops are the heroes, much in the style of a detective show. This is in no way a problem with the book, and in a way it was helpful perspective, but it was a strange reading experience.

Cuddy had only been a guard for a few days but already he had absorbed one important and basic fact: it is almost impossible for anyone to be in a street without breaking the law.

Vimes and Carrot are both excellent police officers, but in entirely different ways. Vimes treats being a cop as a working-class job and is inclined towards glumness and depression, but is doggedly persistent and unable to leave a problem alone. His ethics are covered by a thick layer of world-weary cynicism. Carrot is his polar opposite in personality: bright, endlessly cheerful, effortlessly charismatic, and determined to get along with everyone. On first appearance, this contrast makes Vimes seem wise and Carrot seem a bit dim. That is exactly what Pratchett is playing with and undermining in Men at Arms.

Beneath Vimes's cynicism, he's nearly as idealistic as Carrot, even though he arrives at his ideals through grim contrariness. Carrot, meanwhile, is nowhere near as dim as he appears to be. He's certain about how he wants to interact with others and is willing to stick with that approach no matter how bad of an idea it may appear to be, but he's more self-aware than he appears. He and Vimes are identical in the strength of their internal self-definition. Vimes shows it through the persistent, grumpy stubbornness of a man devoted to doing an often-unpleasant job, whereas Carrot verbally steamrolls people by refusing to believe they won't do the right thing.

Colon thought Carrot was simple. Carrot often struck people as simple. And he was. Where people went wrong was thinking that simple meant the same thing as stupid.

There's a lot going on in this book apart from the profiles of two very different models of cop. Alongside the mystery (which doubles as pointed commentary on the corrupting influence of violence and personal weaponry), there's a lot about dwarf/troll relations, a deeper look at the Ankh-Morpork guilds (including a horribly creepy clown guild), another look at how good Lord Vetinari is at running the city by anticipating how other people will react, a sarcastic dog named Gaspode (originally seen in Moving Pictures), and Pratchett's usual collection of memorable lines. It is also the origin of the now-rightfully-famous Vimes boots theory:

The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money.

Take boots, for example. He earned thirty-eight dollars a month plus allowances. A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. Those were the kind of boots Vimes always bought, and wore until the soles were so thin that he could tell where he was in Ankh-Morpork on a foggy night by the feel of the cobbles.

But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that'd still be keeping his feet dry in ten years' time, while the poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet.

This was the Captain Samuel Vimes 'Boots' theory of socioeconomic unfairness.

Men at Arms regularly makes lists of the best Discworld novels, and I can see why. At this point in the series, Pratchett has hit his stride. The plots have gotten deeper and more complex without losing the funny moments, movie and book references, and glorious turns of phrase. There is also a lot of life philosophy and deep characterization when one pays close attention to the characters.

He was one of those people who would recoil from an assault on strength, but attack weakness without mercy.

My one complaint is that I found it a bit overstuffed with both characters and subplots, and as a result had a hard time following the details of the plot. I found myself wanting a timeline of the murders or a better recap from one of the characters. As always with Pratchett, the digressions are wonderful, but they do occasionally come at the cost of plot clarity.

I'm not sure I recommend the present moment in the United States as the best time to read this book, although perhaps there is no better time for Carrot and Vimes to remind us what good cops look like. But regardless of when one reads it, it's an excellent book, one of the best in the Discworld series to this point.

Followed, in publication order, by Soul Music. The next Watch book is Feet of Clay.

Rating: 8 out of 10

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.9: API Header Added

A new minor release 0.2.9 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version adds a header file for the recently-exported three functions.

Changes in version 0.2.9 (2020-08-30)

  • Provide a header RcppCCTZ_API.h for client packages.
  • Show a simple example of parsing a YYYYMMDD HHMMSS.FFFFFF date.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJacob Adams: Command Line 101

How to Work in a Text-Only Environment.

What is this thing?

When you first open a command-line (note that I use the terms command-line and shell interchangeably here; they’re basically the same, but command-line is the more general term, and shell is the name for the program that executes commands for you) you’ll see something like this:

jaadams@bg7:/tmp/thisfolder$

This line is called a “command prompt” and it tells you four pieces of information:

  1. jaadams: The username of the user that is currently running this shell.
  2. bg7: The name of the computer that this shell is running on, important for when you start accessing shells on remote machines.
  3. /tmp/thisfolder: The folder or directory that your shell is currently running in. Like a file explorer (like Windows Explorer or Mac’s Finder) a shell always has a “working directory,” from which all relative paths (see sidenote below) are resolved.
  4. $: The prompt character, marking the end of the prompt. For a normal user this is usually $, while the root user’s prompt usually ends in # instead.

When you first opened a shell, however, you might notice that it looks more like this:

jaadams@bg7:~$
This is a shorthand notation that the shell uses to make this output shorter when possible. ~ stands for your home directory, usually /home/<username>. Like C:\Users\<username>\ on Windows or /Users/<username> on Mac, this directory is where all your files should go by default.

Thus a command prompt like this:

jaadams@bg7:~/Downloads$

actually tells you that you are currently in the /home/jaadams/Downloads directory.
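
The ~ shorthand is not just prompt decoration; the shell expands it anywhere on a command line. A quick way to see this (a minimal sketch; the exact path printed depends on your username):

```shell
# The shell expands ~ to your home directory before the command runs,
# so these two commands print the same path.
echo ~
echo "$HOME"   # HOME is the environment variable holding that same path
```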

Sidenote: The Unix Filesystem and Relative Paths

“folders” on Linux and other Unix-derived systems like MacOS are usually called “directories.”

These directories are represented by paths, strings that indicate where the directory is on the filesystem.

The one unusual part is the so-called “root directory”. All files are stored in this folder or directories under it. Its path is just / and there are no directories above it.

For example, the directory called home typically contains all user directories. This is stored in the root directory, and each user’s specific data is stored in a directory named after that user under home. Thus, the home directory of the user jacob is typically /home/jacob, the directory jacob under the home directory stored in the root directory /.

If you’re interested in more details about what goes in what directory, man hier has the basics and the Filesystem Hierarchy Standard governs the layout of the filesystem on most Linux distributions.

You don’t always have to use the full path, however. If the path does not begin with a /, it is assumed that the path actually begins with the path of the current directory. So if you use a path like my/folders/here, and you’re in the /home/jacob directory, the path will be treated like /home/jacob/my/folders/here.

Each folder also contains two special symbolic links, .. and . (“dot-dot” and “dot”). Symbolic links are a very powerful kind of file that is actually a reference to another file. .. always represents the parent directory of the current directory, so /home/jacob/.. links to /home/. . always links to the current directory, so /home/jacob/. links to /home/jacob.
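
Here is a short session showing . and .. in action (a sketch using a throwaway directory under /tmp; the directory names are made up, and it uses a few commands covered below: mkdir creates a directory, cd changes directory, pwd prints the working directory):

```shell
mkdir -p /tmp/demo/inner   # -p creates any missing parent directories
cd /tmp/demo/inner
pwd          # prints the working directory: /tmp/demo/inner
cd ..        # .. is the parent directory,
pwd          # so this prints: /tmp/demo
cd ./inner   # . is the current directory, so ./inner means the same as inner
pwd          # prints: /tmp/demo/inner again
```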

Running commands

To run a command from the command prompt, you type its name and then usually some arguments to tell it what to do.

For example, the echo command displays the text passed as arguments.

jacob@lovelace:/home/jacob$ echo hello world
hello world

Arguments to commands are space-separated, so in the previous example hello is the first argument and world is the second. If you need an argument to contain spaces, you’ll want to put quotes around it, echo "like so".
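
The difference quoting makes is easy to see with echo, because the shell splits unquoted arguments on whitespace before echo ever runs:

```shell
echo one   two     # echo receives two arguments; prints: one two
echo "one   two"   # echo receives one argument;  prints: one   two
```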

Certain arguments are called “flags” or “options” (options if they take another argument, flags otherwise). They are usually prefixed with a hyphen, and they change the way a program operates.

For example, the ls command outputs the contents of a directory passed as an argument, but if you add -l before the directory, it will give you more details on the files in that directory.

jacob@lovelace:/tmp/test$ ls /tmp/test
1  2  3  4  5  6
jacob@lovelace:/tmp/test$ ls -l /tmp/test
total 0
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 1
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 2
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 3
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 4
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 5
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 6

Most commands take different flags to change their behavior in various ways.

File Management

  • cd <path>: Change the current directory of the running shell to <path>.
  • ls <path>: Output the contents of <path>. If no path is passed, it prints the contents of the current directory.
  • touch <filename>: create a new empty file called <filename>. Used on an existing file, it updates the file’s last accessed and modified times. Most text editors can also create a new file for you, which is probably more useful.
  • mkdir <directory>: Create a new folder/directory at path <directory>.
  • mv <src> <dest>: Move a file or directory at path <src> to <dest>.
  • cp <src> <dest>: Copy a file or directory at path <src> to <dest>.
  • rm <file>: Remove a file at path <file>.
  • zip -r <zipfile> <contents...>: Create a zip file <zipfile> with contents <contents>. <contents> can be multiple arguments, and you’ll usually want to use the -r argument when including directories in your zipfile, as otherwise only the directory will be included and not the files and directories within it.
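
Here’s a short session tying these commands together (a sketch; the /tmp/fm-demo directory and file names are made up):

```shell
mkdir -p /tmp/fm-demo      # a scratch directory to play in
cd /tmp/fm-demo
touch notes.txt            # create an empty file
mkdir archive              # create a directory
cp notes.txt copy.txt      # copy the file
mv copy.txt archive/       # move the copy into the directory
ls archive                 # prints: copy.txt
rm notes.txt               # remove the original file
```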

Searching

  • grep <thing> <file>: Look for the string <thing> in <file>. If no <file> is passed it searches standard input.
  • find <path> -name <name>: Find a file or directory called <name> somewhere under <path>. This command is actually very powerful, but also very complex. For example you can delete all files in a directory older than 30 days with:
    find <path> -mtime +30 -exec rm {} \;
  • locate <name>: A much easier to use command to find a file with a given name, but it is not usually installed by default.
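
For example (a sketch; the files and their contents are made up, and the > redirect used to create them is covered later in this piece):

```shell
mkdir -p /tmp/search-demo
cd /tmp/search-demo
printf 'hello\nworld\n' > a.txt   # printf writes two lines into a.txt
printf 'goodbye\n' > b.txt

grep hello a.txt       # prints the matching line: hello
grep -r hello .        # -r searches every file under a directory
find . -name 'a.txt'   # prints: ./a.txt
```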

Outputting Files

  • cat <files...>: Output (concatenate) all the files passed as arguments.
  • head <file>: Output the beginning of <file>
  • tail <file>: Output the end of <file>
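
For example, with a ten-line file (seq prints a sequence of numbers, one per line; the > redirect is covered later in this piece):

```shell
seq 1 10 > /tmp/numbers.txt   # a file holding the lines 1 through 10
head -n 3 /tmp/numbers.txt    # prints the first three lines: 1 2 3
tail -n 2 /tmp/numbers.txt    # prints the last two lines: 9 10
```

By default head and tail print ten lines; the -n option changes how many.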

How to Find the Right Command

All commands (at least on sane Linux distributions like Debian or Ubuntu) are documented with a manual page, in man section 1 (for more information on manual sections, run man intro). This can be accessed using man <command>. You can search for the right command using the -k flag, as in man -k <search>.

You can also view manual pages in your browser, on one of several websites that host HTML versions of them.

This is not always helpful, however, because some commands’ descriptions are not particularly useful, and there are also a lot of manual pages, which can make searching for a specific one difficult. For example, finding the right command to search inside text files is quite difficult via man (it’s grep). When you can’t find what you need with man I recommend falling back to searching the Internet. There are lots of bad Linux tutorials out there, but here are some reputable sources I recommend:

  • nixCraft has excellent tutorials on all things Linux
  • Hosting providers like Digital Ocean or Linode: Good intro documentation, but can sometimes be outdated
  • The Linux Documentation project is great, but it can also be a little outdated sometimes.
  • Q&A sites oftentimes have great answers, but quality varies wildly since anyone can answer.

These are certainly not the only options but they’re the sources I would recommend when available.

How to Read a Manual Page

Manual pages consist of a series of sections, each with a specific purpose. Instead of attempting to write my own description here, I’m going to borrow the excellent one from The Linux Documentation Project

The NAME section

…is the only required section. Man pages without a name section are as useful as refrigerators at the north pole. This section also has a standardized format consisting of a comma-separated list of program or function names, followed by a dash, followed by a short (usually one line) description of the functionality the program (or function, or file) is supposed to provide. By means of makewhatis(8), the name sections make it into the whatis database files. Makewhatis is the reason the name section must exist, and why it must adhere to the format I described. (Formatting explanation cut for brevity)

The SYNOPSIS section

…is intended to give a short overview on available program options. For functions this sections lists corresponding include files and the prototype so the programmer knows the type and number of arguments as well as the return type.

The DESCRIPTION section

…eloquently explains why your sequence of 0s and 1s is worth anything at all. Here’s where you write down all your knowledge. This is the Hall Of Fame. Win other programmers’ and users’ admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs.

The OPTIONS section

…gives a description of how each option affects program behaviour. You knew that, didn’t you?

The FILES section

…lists files the program or function uses. For example, it lists configuration files, startup files, and files the program directly operates on. (Cut details about installing files)

The ENVIRONMENT section

…lists all environment variables that affect your program or function and tells how, of course. Most commonly the variables will hold pathnames, filenames or default options.

The DIAGNOSTICS section

…should give an overview of the most common error messages from your program and how to cope with them. There’s no need to explain system error messages (from perror(3)) or fatal signals (from psignal(3)) as they can appear during execution of any program.

The BUGS section

…should ideally be non-existent. If you’re brave, you can describe here the limitations, known inconveniences and features that others may regard as misfeatures. If you’re not so brave, rename it the TO DO section ;-)

The AUTHOR section

…is nice to have in case there are gross errors in the documentation or program behaviour (Bzzt!) and you want to mail a bug report.

The SEE ALSO section

…is a list of related man pages in alphabetical order. Conventionally, it is the last section.

Remote Access

One of the more powerful uses of the shell is through ssh, the secure shell. This allows you to remotely connect to another computer and run a shell on that machine:

user@host:~$ ssh remoteuser@remotehost
remoteuser@remotehost:~$

The prompt changes to reflect the change in user and host, as you can see in the example above. This allows you to work in a shell on that machine as if it was right in front of you.

Moving Files Between Machines

There are several ways you can move files between machines over ssh. The first and easiest is scp, which works much like the cp command except that paths can also take a user@host argument to move files across computers. For example, if you wanted to move a file test.txt to your home directory on another machine, the command would look like:

scp test.txt user@host:

(The home directory is the default path)

Otherwise you can move files by reversing the order of the arguments and putting a path after the colon to move files from another directory on the remote host. For example, if you wanted to fetch a file from the /etc/ directory on the remote host:

scp user@host:/etc/<file> .

Another option is the sftp command, which gives you a very simple shell-like interface in which you can cd and ls, before either putting files onto the remote machine or getting files off of it.

The final and most powerful option is rsync which syncs the contents of one directory to another, and doesn’t copy files that haven’t changed. It’s powerful and complex, however, so I recommend reading the USAGE section of its man page.

Long-Running Commands

The one problem with ssh is that it will stop any command running in your shell when you disconnect. If you want to leave something running and come back later then this can be a problem.

This is where terminal multiplexers come in. tmux and screen both allow you to run a shell in a safe environment where it will continue even if you disconnect from it. You do this by running the command without any arguments, i.e. just tmux or just screen. In tmux you can disconnect from the current session by pressing Ctrl+b then d, and reattach with the tmux attach command. screen works similarly, but with Ctrl+a instead of b and screen -r to reattach.

Command Inputs and Outputs

Arguments are not the only way to pass input to a command. Commands can also take input from what’s called “standard input”, which the shell usually connects to your keyboard.

Output can go to two places, standard output and standard error, both of which are directed to the screen by default.

Redirecting I/O

Notice that I said above that standard input/output/error are only “usually” connected to the keyboard and the terminal? That’s because you can redirect them to other places with the shell operators <, > and the very powerful |.

File redirects

The operators > and < redirect a command’s output to, and its input from, a file. For example, if you wanted a file called list.txt that contained a list of all the files in a directory /this/one/here you could use:

ls /this/one/here > list.txt
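Here is a self-contained sketch of both operators at work, run in a throwaway directory so nothing real gets overwritten (the filenames are made up for the example):

```shell
# Throwaway directory with three files to play with.
dir=$(mktemp -d)
cd "$dir"
touch a.txt b.txt c.txt

# '>' sends ls's output into a file. list.txt is named explicitly here,
# because the shell creates it *before* ls runs, so a bare `ls` would
# list the output file too.
ls a.txt b.txt c.txt > list.txt

# '<' feeds a file to a command's standard input.
wc -l < list.txt
```

The wc -l at the end reports 3, one line per file listed.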


Pipes

The pipe character, |, directs the output of one command into the input of another, and this can be very powerful. For example, the following pipeline lists the contents of the current directory, searches the listing for the string “test”, then counts the number of results (wc -l counts the number of lines in its input):

ls | grep test | wc -l
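Since the result of that example depends on what happens to be in your directory, here is a variant with its input pinned down by printf (the filenames are invented):

```shell
# Four known lines stand in for ls output; grep keeps the two containing
# "test", and wc -l counts them.
printf 'notes.txt\ntest-results.log\nreadme.md\ntest-plan.doc\n' \
    | grep test | wc -l
```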

For a better, though even more contrived, example, say you have a file myfile with a bunch of potentially duplicated and unsorted lines of data.


You can sort it and output only the unique lines with sort and uniq:

$ sort < myfile | uniq
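One detail worth calling out: uniq only collapses adjacent duplicate lines, which is why sort has to run first. A small sketch with made-up data:

```shell
# Sample data in a temporary file; "pear" and "apple" each appear twice.
f=$(mktemp)
printf 'pear\napple\npear\nbanana\napple\n' > "$f"

# Sorting groups the duplicates together so uniq can drop them,
# leaving: apple, banana, pear.
sort < "$f" | uniq
```

sort -u does both steps in a single command.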

Save Yourself Some Typing: Globs and Tab-Completion

Sometimes you don’t want to type out the whole filename when writing out a command. The shell can help you here by autocompleting when you press the tab key.

If you have a whole bunch of files with the same suffix, you can refer to them when writing arguments as *.suffix. This also works with prefixes, prefix*, and in fact you can put a * anywhere, *middle*. The shell will “expand” that * into all the files in that directory that match your criteria (ending with a specific suffix, starting with a specific prefix, and so on) and pass each file as a separate argument to the command.

For example, if I have a series of files called 1.txt, 2.txt, and so on up to 9, each containing just the number for which it’s named, I could use cat to output all of them like so:

jacob@lovelace/tmp/numbers$ ls
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt
jacob@lovelace/tmp/numbers$ cat *.txt
1
2
3
4
5
6
7
8
9

The ~ shorthand mentioned above, which refers to your home directory, can also be used when passing a path as an argument to a command.

Ifs and For loops

The files in the above example were generated with the following shell commands:

for i in 1 2 3 4 5 6 7 8 9
do
    echo $i > $i.txt
done
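Run in a throwaway directory, the loop and the earlier glob round-trip nicely:

```shell
# Create 1.txt through 9.txt, each containing its own number, then read
# them all back with a single glob.
dir=$(mktemp -d)
cd "$dir"
for i in 1 2 3 4 5 6 7 8 9
do
    echo $i > $i.txt
done
cat *.txt
```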

But I’ll have to save variables, conditionals and loops for another day because this is already too long. Needless to say, the shell is a full programming language, although a very ugly and dangerous one.


Planet DebianMike Hommey: [Linux] Disabling CPU turbo, cores and threads without rebooting

[Disclaimer: this has been sitting as a draft for close to three months; I forgot to publish it, and this is now finally done.]

In my previous blog post, I built Firefox in a number of different configurations, where I’d disable the CPU turbo, some of its cores or some of its threads. That is something traditionally done via the BIOS, but rebooting between each attempt is not really a great experience.

Fortunately, the Linux kernel provides a large number of knobs that allow this at runtime.


Turbo

This is the most straightforward:

$ echo 0 > /sys/devices/system/cpu/cpufreq/boost

Re-enable with

$ echo 1 > /sys/devices/system/cpu/cpufreq/boost

CPU frequency throttling

Even though I haven’t mentioned it, I might as well add this briefly. There are many knobs to tweak frequency throttling, but assuming your goal is to disable throttling and set the CPU frequency to its fastest non-Turbo frequency, this is how you do it:

$ echo performance > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

where $n is the id of the core you want to do that for, so if you want to do that for all the cores, you need to do that for cpu0, cpu1, etc.

Re-enable with:

$ echo ondemand > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

(assuming this was the value before you changed it; ondemand is usually the default)
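Doing this for cpu0, cpu1, and so on by hand gets tedious. One way to script it, sketched here with a hypothetical helper (not part of any tool), is to generate the commands first so you can inspect them before piping the output to a root shell:

```shell
# Hypothetical helper: print (rather than run) the sysfs writes that
# would set the given governor on CPUs 0..n-1. Apply for real with:
#   set_governor_cmds performance 8 | sudo sh
set_governor_cmds() {
    gov=$1
    ncpus=$2
    n=0
    while [ "$n" -lt "$ncpus" ]; do
        echo "echo $gov > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor"
        n=$((n + 1))
    done
}
set_governor_cmds performance 4
```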

Cores and Threads

This one requires some attention, because you cannot assume anything about the CPU numbers. The first thing you want to do is to check those CPU numbers. You can do so by looking at the physical id and core id fields in /proc/cpuinfo, but the output from lscpu --extended is more convenient, and looks like the following:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3700.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3700.0000 2200.0000
2   0    0      2    2:2:2:0       yes    3700.0000 2200.0000
3   0    0      3    3:3:3:0       yes    3700.0000 2200.0000
4   0    0      4    4:4:4:1       yes    3700.0000 2200.0000
5   0    0      5    5:5:5:1       yes    3700.0000 2200.0000
6   0    0      6    6:6:6:1       yes    3700.0000 2200.0000
7   0    0      7    7:7:7:1       yes    3700.0000 2200.0000
32  0    0      0    0:0:0:0       yes    3700.0000 2200.0000
33  0    0      1    1:1:1:0       yes    3700.0000 2200.0000
34  0    0      2    2:2:2:0       yes    3700.0000 2200.0000
35  0    0      3    3:3:3:0       yes    3700.0000 2200.0000
36  0    0      4    4:4:4:1       yes    3700.0000 2200.0000
37  0    0      5    5:5:5:1       yes    3700.0000 2200.0000
38  0    0      6    6:6:6:1       yes    3700.0000 2200.0000
39  0    0      7    7:7:7:1       yes    3700.0000 2200.0000

Now, this output is actually the ideal case, where pairs of CPUs (virtual cores) on the same physical core are always n, n+32, but I’ve had them be pseudo-randomly spread in the past, so be careful.

To turn off a core, you want to turn off all the CPUs with the same CORE identifier. To turn off a thread (virtual core), you want to turn off one CPU. On machines with multiple sockets, you can also look at the SOCKET column.

Turning off one CPU is done with:

$ echo 0 > /sys/devices/system/cpu/cpu$n/online

Re-enable with:

$ echo 1 > /sys/devices/system/cpu/cpu$n/online
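Pairing CPUs by their SOCKET and CORE columns can also be scripted. The helper below is a hypothetical sketch that reads lscpu --extended output (a canned two-core sample here) and prints, rather than runs, the commands to offline every extra thread on each core:

```shell
# Hypothetical helper: keep the first CPU listed per (socket, core) pair
# and emit offline commands for the rest. Feed it real data and apply with:
#   lscpu --extended | offline_sibling_cmds | sudo sh
offline_sibling_cmds() {
    awk 'NR > 1 && seen[$3 FS $4]++ {
        print "echo 0 > /sys/devices/system/cpu/cpu" $1 "/online"
    }'
}

offline_sibling_cmds <<'EOF'
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3700.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3700.0000 2200.0000
32  0    0      0    0:0:0:0       yes    3700.0000 2200.0000
33  0    0      1    1:1:1:0       yes    3700.0000 2200.0000
EOF
```

With this sample it emits offline commands for CPUs 32 and 33 only, leaving one thread per core online.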

Extra: CPU sets

CPU sets are a feature of Linux’s cgroups. They allow you to restrict groups of processes to a set of cores. The first step is to create a group like so:

$ mkdir /sys/fs/cgroup/cpuset/mygroup

Please note you may already have existing groups, and you may want to create subgroups. You can do so by creating subdirectories.

Then you can configure on which CPUs/cores/threads you want processes in this group to run on:

$ echo 0-7,16-23 > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus

The value you write in this file is a comma-separated list of CPU/core/thread numbers or ranges. 0-3 is the range for CPU/core/thread 0 to 3 and is thus equivalent to 0,1,2,3. The numbers correspond to /proc/cpuinfo or the output from lscpu as mentioned above.
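To make the list format concrete, here is a tiny hypothetical helper (not part of any tool) that expands such a list into the individual numbers it covers:

```shell
# Hypothetical helper: expand a kernel-style CPU list into individual
# numbers, e.g. "0-3,8" -> "0 1 2 3 8". Handy for sanity-checking a list
# before writing it to cpuset.cpus.
expand_cpulist() {
    echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        seq "$lo" "${hi:-$lo}"
    done | xargs
}
expand_cpulist 0-3,8,10-11
```

This prints “0 1 2 3 8 10 11”.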

There are also memory aspects to CPU sets, which I won’t detail here (because I don’t have a machine with multiple memory nodes), but you can start with:

$ cat /sys/fs/cgroup/cpuset/cpuset.mems > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

Now you’re ready to assign processes to this group:

$ echo $pid >> /sys/fs/cgroup/cpuset/mygroup/tasks

There are a number of tweaks you can do to this setup; I invite you to check out the cpuset(7) manual page.

Disabling a group is a little involved. First you need to move the processes to a different group:

$ while read pid; do echo $pid > /sys/fs/cgroup/cpuset/tasks; done < /sys/fs/cgroup/cpuset/mygroup/tasks

Then disassociate the CPUs and memory nodes:

$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

And finally remove the group:

$ rmdir /sys/fs/cgroup/cpuset/mygroup

Planet DebianEnrico Zini: Miscellaneous news

A fascinating apparent paradox that kind of makes sense: Czech nudists reprimanded by police for not wearing face-masks.

Besides being careful about masks when naked at the lake, be careful about your laptop being confused for a pizza: German nudist chases wild boar that stole laptop.

Talking about pigs: Pig starts farm fire by excreting pedometer.

Now that traveling is complicated, you might enjoy A Brief History of Children Sent Through the Mail, or learning about Narco-submarines.

Meanwhile, in a time of intense biotechnological research, Scientists rename human genes to stop Microsoft Excel from misreading them as dates.

Finally, for a good, cheaper, and more readily available alternative to a trip to the pharmacy, learn about Hypoalgesic effect of swearing.

Planet DebianDirk Eddelbuettel: RcppSMC 0.2.2: Small updates

A new release 0.2.2 of the RcppSMC package arrived on CRAN earlier today (and once again as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release contains two fixes from a while back that had not been released, a CRAN-requested update, plus a few more minor polishes to make it pass R CMD check --as-cran as nicely as usual.

Changes in RcppSMC version 0.2.2 (2020-08-30)

  • Package helper files .editorconfig added (Adam in #43).

  • Change const correctness and add return (Leah in #44).

  • Updates to continuous integration and R versions used (Dirk).

  • Accommodate CRAN request, other updates to CRAN Policy (Dirk in #49 fixing #48).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Carter: The metamorphosis of Loopy Loop

Dealing with the void during MiniDebConf Online #1

Between 28 and 31 May this year, we set out to create our first ever online MiniDebConf for Debian. Many people have been meaning to do something similar for a long time, but it just didn’t work out yet. With many of us being in lock down due to COVID-19, and with the strong possibility looming that DebConf20 might have had to become an online event, we rushed towards organising the first ever Online MiniDebConf and put together some form of usable video stack for it.

I could go into all kinds of details on the above, but this post is about a bug that led to a pretty nifty feature for DebConf20. The tool that we use to capture Jitsi calls is called Jibri (Jitsi Broadcasting Infrastructure). It had a bug (well, bug for us, but it’s an upstream feature) where Jibri would hang up after 30s of complete silence, because it would assume that the call has ended and that the worker can be freed up again. This would result in the stream being ended at the end of every talk, so before the next talk, someone would have to remember to press play again in their media player or on the video player on the stream page. Hrmph.

Easy solution on the morning that the conference starts? I was testing a Debian Live image the night before in a KVM and thought that I might as well just start a Jitsi call from there and keep a steady stream of silence so that Jibri doesn’t hang up.

It worked! But the black screen and silence on the stream were a bit eerie. Because this event was so experimental in nature, and because we were on such an incredibly tight timeline, we opted not to seek sponsors for this event, so there was no sponsors loop that we’d usually stream during a DebConf event. Then I thought “Ah! I could just show the schedule!”.

The stream looked bright and colourful (and was even useful!) and Jitsi/Jibri didn’t die. I thought my work was done. As usual, little did I know how untrue that was.

The silence was slightly disturbing after the talks, and people asked for some music. Playing music on my VM and capturing the desktop audio in to Jitsi was just a few pulseaudio settings away, so I spent two minutes finding some freely licensed tracks that sounded ok enough to just start playing on the stream. I came across mini-albums by Captive Portal and Cinema Noir. During the course of the MiniDebConf Online I even started enjoying them. Someone also pointed out that it would be really nice to have a UTC clock on the stream. I couldn’t find a nice clock in a hurry so I just added a tmux clock in the meantime while we dealt with the real-time torrent of issues that usually happens when organising events like this.

Speaking of issues, during our very first talk of the last day, our speaker had a power cut during the talk and abruptly dropped off. Oops! So, since I had a screenshare open from the VM to the stream, I thought I’d just pop in a quick message in a text editor to let people know that we’re aware of it and trying to figure out what’s going on.

In the end, MiniDebConf Online worked out all right. Besides the power cut for our one speaker, and another who had a laptop that was way too under-powered to deal with video, everything worked out very well. Even the issues we had weren’t show-stoppers and we managed to work around them.

DebConf20 Moves Online

For DebConf, we usually show a sponsors loop in between sessions. It’s great that we give our sponsors visibility here, but in reality people see the sponsors loop and think “Talk over!” and then they look away. It’s also completely silent and doesn’t provide any additional useful information. I was wondering how I could take our lessons from MDCO#1 and integrate our new tricks with the sponsors loop. That is, add the schedule, time, some space to type announcements on the screen and also add some loopable music to it.

I used OBS before in making my videos, and like the flexibility it provides when working with scenes and sources. A scene is what you would think of as a screen or a document with its own collection of sources or elements. For example, a scene might contain sources such as a logo, clock, video, image, etc. A scene can also contain another scene. This is useful if you want to contain a banner or play some background music that is shared between scenes.

The above screenshots illustrate some basics of scenes and sources. First with just the DC20 banner, and then that used embedded in another scene.

For MDCO#1, I copied and pasted the schedule into a LibreOffice Impress slide that was displayed on the stream. Having to do this for all 7 days of DebConf, plus dealing with scheduling changes would be daunting. So, I started to look in to generating some schedule slides programmatically. Stefano then pointed me to the Happening Now page on the DebConf website, where the current schedule block is displayed. So all I would need to do in OBS was to display a web page. Nice!

Unfortunately the OBS in Debian doesn’t have the ability to display web pages out of the box (we need to figure out CEF in Debian), but fortunately someone provides a pre-compiled version of the plugin called Linux Browser that works just fine. This allowed me to easily add the schedule page in its own scene.

Being able to display a web page solved another problem. I wasn’t fond of having to type / manage the announcements in OBS. It would be a bit prone to user error, and if you want to edit the text while the loop is running, you’d have to disrupt the loop, go to the foreground scene, and edit the text before resuming the loop. That’s a bit icky. Then I thought that we could probably just get that from a web page instead. We could host some nice html snippet in a repository in salsa, and then anyone could easily commit an MR to update the announcement.

But then I went a step further, use an etherpad! Then anyone in the orga team can quickly update the announcement and it would be instantly changed on the stream. Nice! So that small section of announcement text on the screen is actually a whole web browser with an added OBS filter to crop away all the pieces we don’t want. Overkill? Sure, but it gave us a decent enough solution that worked in time for the start of DebConf. Also, being able to type directly on to the loop screen works out great especially in an emergency. Oh, and uhm… the clock is also a website rendered in its own web browser :-P

So, I had the ability to make scenes, add elements and add all the minimal elements I wanted in there. Great! But now I had to figure out how to switch scenes automatically. It’s probably worth mentioning that I only found some time to really dig into this right before DebConf started, so with all of this I was scrambling to find things that would work without too many bugs while also still being practical.

Now I needed the ability to switch between the scenes automatically / programmatically. I had never done this in OBS before. I know it has some API because there are Android apps that you can use to control OBS with from your phone. I discovered that it had an automatic scene switcher, but it’s very basic. It can only switch based on active window, which can be useful in some cases, but since we won’t have any windows open other than OBS, this tool was basically pointless.

After some quick searches, I found a plugin called Advanced Scene Switcher. This plugin can do a lot more, but has some weird UI choices and is really meant for gamers and other types of professional streamers to help them automate their work flow; it doesn’t seem at all meant to be used for a continuous loop. But it worked, and I could make it do something that would work for us during the DebConf.

I had a chicken and egg problem because I had to figure out a programming flow, but didn’t really have any content to work with, or an idea of all the content that we would eventually have. I’ve been toying with the idea in my mind and had some idea that we could add fun facts, postcards (an image with some text), time now in different timezones, Debian news (maybe procured by the press team), cards that contain the longer announcements that was sent to debconf-announce, perhaps a shout out or two and some photos from previous DebConfs like the group photos. I knew that I wouldn’t be able to build anything substantial by the time DebConf starts, but adding content to OBS in between talks is relatively easy, so we could keep on building on it during DebConf.

Nattie provided the first shout out, and I made 2 video loops with the DC18/19 pictures and also two “Did you know” cards. So the flow I ended up with was: Sponsors -> Happening Now -> Random video (which would be any of those clips) -> Back to sponsors. This ended up working pretty well for quite a while. With the first batch of videos the sponsor loop would come up on average about every 2 minutes, but as much shorter clips like shout-outs started to come in faster and faster, it made sense to play 2-3 shout-outs before going back to sponsors.

So here is a very brief guide on how I set up the sequencing in Advanced Scene Switcher.

If no condition was met, a video would play from the Random tab.

Then in the Random tab, I added the scenes that were part of the random mix. Annoyingly, you have to specify how long it should play for. If you don’t, the ‘no condition’ thingy is triggered and another video is selected. The time is also the length of the video minus one second, because…

You can’t just say that a random video should return back to a certain scene, you have to specify that in the sequence tab for each video. Why after 1 second? Because, at least in my early tests, and I didn’t circle back to this, it seems like 0s can randomly either mean instantly, or never. Yes, this ended up being a bit confusing and tedious, and considering the late hours I worked on this, I’m surprised that I didn’t manage to screw it up completely at any point.

I also suspected that threads would eventually happen. That is, when people create video replies to other videos. We had 3 threads in total. There was a backups thread, beverage thread and an impersonation thread. The arrow in the screenshot above points to the backups thread. I know it doesn’t look that complicated, but it was initially somewhat confusing to set up and make sense out of it.

For the next event, the Advanced Scene Switcher might just get some more taming, or even be replaced entirely. There are ways to drive OBS by API, and even the Advanced Scene Switcher tool can be driven externally to some degree, but I think we definitely want to replace it by the next full DebConf. We had the problem that when a talk ended, we would return to the loop in the middle of a clip, which felt very unnatural and sometimes even confusing. So Stefano helped me with a helper script that could read the socket from Vocto, which I used to write either “Loop” or “Standby” to a file, and then the scene switcher would watch that file and keep the sponsors loop ready for start while the talks play. Why not just switch to sponsors when the talk ends? Well, the little bit of delay in switching would mean that you would see a tiny bit of loop every time before switching to sponsors. This is also why we didn’t have any loop for the ad-hoc track (that would have probably needed another OBS instance, we’ll look more into solutions for this for the future).

Then for all the clips. There were over 50 of them. All of them edited by hand in kdenlive. I removed any hard clicks, tried to improve audibility, remove some sections at the beginning and the end that seemed extra and added some music that would reduce in volume when someone speaks. In the beginning, I had lots of fun with choosing music for the clips. Towards the end, I had to rush them through and just chose the same tune whether it made sense or not. For comparison of what a difference the music can make, compare the original and adapted version for Valhalla’s clip above, or this original and adapted video from urbec. This part was a lot more fun than dealing with the video sequencer, but I also want to automate it a bit. When I can fully drive OBS from Python I’ll likely instead want to show those cards and control music volume from Python (what could possibly go wrong…).

The loopy name happened when I requested an alias for this. I was initially just thinking about “loop”, but since I wanted to make it clear that the purpose of this loop is also to have some fun, I opted for “loopy” instead:

I was really surprised by how people took to loopy. I hoped it would be good and that it would have somewhat positive feedback, but the positive feedback was just immense. The idea was that people typically saw it in between talks. But a few people told me they kept it playing after the last talk of the day to watch it in the background. Some asked for the music because they want to keep listening to it while working (and even for jogging!?). Some people also asked for recordings of the loop because they want to keep it for after DebConf. The shoutouts idea proved to be very popular. Overall, I’m very glad that people enjoyed it and I think it’s safe to say that loopy will be back for the next event.

Also throughout this experiment Loopy Loop turned into yet another DebConf mascot. We gain one about every DebConf, some by accident and some on purpose. This one was not quite on purpose. I meant to make an image for it for salsa, and started with an infinite loop symbol. That’s a loop, but by just adding two more solid circles to it, it looks like googly eyes, now it’s a proper loopy loop!

I like the progress we’ve made on this, but there’s still a long way to go, and the ideas keep heaping up. The next event is quite soon (MDCO#2 at the end of November, and it seems that 3 other MiniDebConf events may also be planned), but over the next few events there will likely be significantly better graphics/artwork, better sequencing, better flow and more layout options. I hope to gain some additional members in the team to deal with incoming requests during DebConf. It was quite hectic this time! The new OBS also has a scripting host that supports Python, so I should be able to do some nice things even within OBS without having to drive it externally (like, display a clock without starting a web browser).

The Loopy Loop Music

The two mini albums that mostly played during the first few days were just a copy and paste from the MDCO#1 music, which was:

For shoutout tracks, that were later used in the loop too (because it became a bit monotonous), most of the tracks came from

I have much more things to say about DebConf20, but I’ll keep that for another post, and hopefully we can get all the other video stuff in a post from the video team, because I think there’s been some real good work done for this DebConf. Also thanks to Infomaniak who was not only a platinum sponsor for this DebConf, but they also provided us with plenty of computing power to run all the video stuff on. Thanks again!

Planet DebianBits from Debian: DebConf20 online closes

DebConf20 group photo

On Saturday 29 August 2020, the annual Debian Developers and Contributors Conference came to a close.

DebConf20 has been held online for the first time, due to the coronavirus (COVID-19) disease pandemic.

All of the sessions have been streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms.

With more than 850 attendees from 80 different countries and a total of over 100 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf20 was a large success.

When it became clear that DebConf20 was going to be an online-only event, the DebConf video team spent much time over the next months to adapt, improve, and in some cases write from scratch, technology that would be required to make an online DebConf possible. After lessons learned from the MiniDebConfOnline in late May, some adjustments were made, and then eventually we came up with a setup involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a newly written web-based frontend for voctomix as the various elements of the stack.

All components of the video infrastructure are free software, and the whole setup is configured through their public ansible repository.

The DebConf20 schedule included two tracks in other languages than English: the Spanish language MiniConf, with eight talks in two days, and the Malayalam language MiniConf, with nine talks in three days. Ad-hoc activities, introduced by attendees over the course of the entire conference, have been possible too, streamed and recorded. There have also been several team gatherings to sprint on certain Debian development areas.

Between talks, the video stream has been showing the usual sponsors on the loop, but also some additional clips including photos from previous DebConfs, fun facts about Debian and short shout-out videos sent by attendees to communicate with their Debian friends.

For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days.

The DebConf20 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf21 is planned to be held in Haifa, Israel, in August or September.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Community team) have been available to help participants get the best experience at the conference, and to find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf20 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf20, particularly our Platinum Sponsors: Lenovo, Infomaniak, Google and Amazon Web Services (AWS).

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Amazon Web Services (AWS)

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Contact Information

For further information, please visit the DebConf20 web page at or send mail to


Planet DebianAndrew Cater: Just coming to the end of Debconf 20 2020 - and a preview.

 One more talk from Jon "maddog" Hall and then the closing wrap up. This has been a blast: I've enjoyed it a lot and it's made me more enthusiastic than I have been for a long time.

 So once more with thanks to the video team
It's almost the end of this year's DebConf dream
As we all break up, there's a favour to ask
Stay safe - wear a smile - and where needed, a mask

We'll not take you to task ; it's a favour we ask
Stay safe - wear a smile - and where needed, a mask

Haifa, Pristina or Kochi - we'll see how that lands
There's not much left to tidy up - wash your own hands
So now if you'll join us in virtual beer
We'll bring this to a close - and we'll meet up next year

So now if you'll join us - let us all raise a cheer
To Debconf21 - and we'll see you next year

Planet DebianRuss Allbery: PGP::Sign 1.02

This is another test-only release of my module for manipulating PGP signatures in Perl. I'm trying to get the CPAN testing failures down to a dull roar. This iteration fixes some testing issues with systems that have only GnuPG v1 and tries to handle systems whose gpg is GnuPG v2 but is older than 2.1.12 and therefore doesn't have the --pinentry-mode flag that GnuPG uses to suppress password prompting.

I handled the latter by skipping the tests if the gpg on the user's PATH was too old. I'm not certain this is the best approach, although it makes the CPAN automated testing more useful for me, since the module will not work without special configuration on those systems. On the other hand, if someone is installing it to point to some other GnuPG binary on the system at runtime, failing the installation because their system gpg is too old seems wrong, and the test failure doesn't indicate a bug in the module.
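PGP::Sign itself is Perl, but the version gate described above is language-neutral: parse the banner line of `gpg --version` and skip if the binary predates 2.1.12. A minimal sketch in Python of that kind of check; the 2.1.12 cut-off and the `--pinentry-mode` flag come from the post, the helper names are illustrative:

```python
import re
import subprocess

def parse_gpg_version(banner_line):
    """Extract (major, minor, patch) from the first line of 'gpg --version'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", banner_line)
    if m is None:
        raise ValueError("unrecognized gpg version banner: %r" % banner_line)
    return tuple(int(part) for part in m.groups())

def has_pinentry_mode(version):
    """--pinentry-mode appeared in GnuPG 2.1.12; all 1.x and older 2.x lack it."""
    return version >= (2, 1, 12)

def gpg_version(gpg="gpg"):
    """Ask the gpg binary on PATH for its version (first line of --version)."""
    out = subprocess.run([gpg, "--version"], capture_output=True,
                         text=True, check=True)
    return parse_gpg_version(out.stdout.splitlines()[0])
```

A test harness would then call `gpg_version()` once at startup and skip the password-suppression tests when `has_pinentry_mode()` is false.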

Essentially, I'm missing richer test metadata in the Perl ecosystem. I want to be able to declare a dependency on a non-Perl system binary, but of course Perl has no mechanism to do that.

I thought about trying to deal with the Windows failures due to missing IPC::Run features (redirecting high-numbered file descriptors) on the Windows platform in a similar way, but decided in that case I do want the tests to fail because PGP::Sign will never work on that platform regardless of the runtime configuration. Here too I spent some time searching for some way to indicate with Module::Build that the module doesn't work on Windows, and came up empty. This seems to be a gap in Perl's module distribution ecosystem.

In any case, hopefully this release will clean up the remaining test failures on Linux and BSD systems, and I can move on to work on the Big Eight signing key, which was the motivating application for these releases.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

CryptogramFriday Squid Blogging: How Squid Survive Freezing, Oxygen-Deprived Waters

Lots of interesting genetic details.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJelmer Vernooij: Debian Janitor: The Slow Trickle from Git Repositories to the Debian Archive

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Last week’s blog post documented how there are now over 30,000 lintian issues that have been fixed in git packaging repositories by the Janitor.

It's important to note that any fixes from the Janitor that make it into a Git packaging repository will also need to be uploaded to the Debian archive. This currently requires that a Debian packager clones the repository and builds and uploads the package.

Until a change makes it into the archive, users of Debian will unfortunately not see the benefits of improvements made by the Janitor.

82% of the 30,000 changes from the Janitor that have made it into a Git repository have not yet been uploaded, although changes do slowly trickle in as maintainers make other changes to packages and upload them along with the lintian fixes from the Janitor. This is not just true for changes from the Janitor, but for all sorts of other smaller improvements as well.

However, the process of cloning and building git repositories and uploading the resulting packages to the Debian archive is fairly time-consuming – and it’s probably not worth the time of developers to follow up every change from the Janitor with a labour-intensive upload to the archive.

It would be great if it was easier to trigger uploads from git commits. Projects like tag2upload will hopefully help, and make it more likely that changes end up in the Debian archive.

The majority of packages do get at least one new source version upload per release, so most changes will eventually make it into the archive.

For more information about the Janitor's lintian-fixes efforts, see the landing page.


Sam VargheseManaging a relationship is hard work

For many years, Australia has been trading with China, apparently in the belief that one can do business with a country for yonks without expecting the development of some sense of obligation. The attitude has been that China needs Australian resources and the relationship needs to go no further than the transfer of sand dug out of Australia and sent to China.

Those in Beijing, obviously, haven’t seen the exchange this way. There has been an expectation that there would be some obligation for the relationship to go further than just the impersonal exchange of goods for money. Australia, in true colonial fashion, has expected China to know its place and keep its distance.

This is similar to the attitude the Americans took when they pushed for China’s admission to the World Trade Organisation: all they wanted was a means of getting rid of their manufacturing so their industries could grow richer and an understanding that China would agree to go along with the American diktat to change as needed to keep the US on top of the trading world.

But then you cannot invite a man into your house for a dinner party and insist that he eat only bread. Once inside, he is free to choose what he wants to consume. It appears that the Americans do not understand this simple rule.

Both Australia and the US have forgotten they are dealing with the oldest civilisation in the world. A culture that plays the long waiting game. The Americans read the situation completely wrong for the last 70 years, assuming initially that the Kuomintang would come out on top and that the Communists would be vanquished. In the interim, the Americans obtained most of the money used for the early development of their country by selling opium to the Chinese.

China has not forgotten that humiliation.

There was never a thought given to the very likely event that China would one day want to assert itself and ask to be treated as an equal. Which is what is happening now. Both Australia and the US are feigning surprise and acting as though they are completely innocent in this exercise.

Fast forward to 2020 when the Americans and the Australians are both on the warpath, asserting that China is acting aggressively and trying to intimidate Australia while refusing to bow to American demands that it behave as it is expected to. There are complaints about Chinese demands for technology transfers, completely ignoring the fact that a developing country can ask for such transfers under WTO rules.

There are allegations of IP theft by the Americans, completely forgetting that they stole IP from Britain in the early days of the colonies; the name Samuel Slater should ring a bell in this context. Many educated Americans have themselves written about Slater.

Racism is one trait that defines the Australian approach to China. The Asian nation has been expected to confine itself to trade and never ask for more. And Australia, in condescending fashion, has lauded its approach, never understanding that it is seen as an American lapdog and no more. China has been waiting for the day when it can level scores.

It is difficult to comprehend why Australia genuflects before the US. There has been an attitude of veneration going back to the time of Harold Holt who is well known for his “All the way with LBJ” line, referring to the fact that Australian soldiers would be sent to Vietnam to serve as cannon fodder for the Americans and would, in short, do anything as long as the US decided so. Exactly what fight Australia had with Vietnam is not clear.

At that stage, there was no seminal action by the US that had put the fear of God into Australia; this came later, in 1975, when the CIA manipulated Australian politics and influenced the sacking of prime minister Gough Whitlam by the governor-general, Sir John Kerr. There is still resistance from Australian officialdom and its toadies to this version of events, but the evidence is incontrovertible; Australian journalist Guy Rundle has written two wonderful accounts of how the toppling took place.

Whitlam’s sins? Well, he had cracked down on the Australian Security Intelligence Organisation, an agency that spied on Australians and conveyed information to the CIA, when he discovered that it was keeping tabs on politicians. His attorney-general, Lionel Murphy, even ordered the Australian Federal Police to raid the ASIO, a major affront to the Americans who did not like their client being treated this way.

Whitlam also hinted that he would not renew a treaty for the Americans to continue using a base at Pine Gap as a surveillance centre. This centre was offered to the US, with the rent being one peppercorn for 99 years.

Of course, this was pure insolence coming from a country which the Americans — as they have with many other nations — treated as a vassal state and one only existing to do their bidding. So Whitlam was thrown out.

On China, too, Australia has served the role of American lapdog. In recent days, the Australian Prime Minister Scott Morrison has made statements attacking China soon after he has been in touch with the American leadership. In other words, the Americans are using Australia to provoke China. It’s shameful to be used in this manner, but then once a bootlicker, always a bootlicker.

Australia’s subservience to the US is so great that it even co-opted an American official, former US Secretary of Homeland Security Kirstjen Nielsen, to play a role in developing a cyber security strategy. There are a large number of better qualified people in the country who could do a much better job than Nielsen, who is a politician and not a technically qualified individual. But the slave mentality has always been there and will remain.

Krebs on SecuritySendgrid Under Siege from Hacked Accounts

Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid’s parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.

Image: Wikipedia

Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid’s systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.

But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid’s systems to sail through their spam-filtering systems.

To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.
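Tracked links generally work by wrapping the real destination in an opaque token that only the provider's redirect service can decode, often with a signature so the redirector can't be abused as an open relay. Sendgrid's actual scheme isn't documented here; the following sketch (hypothetical domain and key) only illustrates why a recipient looking at the wrapped URL learns nothing about the destination:

```python
import base64
import hashlib
import hmac

SIGNING_KEY = b"example-key"  # hypothetical; a real service keeps this secret

def wrap_link(destination, base="https://links.example.com/c/"):
    """Encode the destination as an opaque token and sign it."""
    token = base64.urlsafe_b64encode(destination.encode()).decode().rstrip("=")
    sig = hmac.new(SIGNING_KEY, token.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{base}{token}.{sig}"

def unwrap_link(url, base="https://links.example.com/c/"):
    """Verify the signature, then decode the token back to the destination."""
    token, sig = url[len(base):].rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, token.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(token) % 4)  # restore the stripped base64 padding
    return base64.urlsafe_b64decode(token + pad).decode()
```

The redirect service logs the click (that is the deliverability metric) before sending the browser on to the decoded destination; the recipient only ever sees the `links.example.com` hostname.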

Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid’s servers.

Rob McEwen is the CEO of an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that’s been emanating from Sendgrid accounts lately.

“As far as the nasty criminal phishes and viruses, I think there’s not even a close second in terms of how bad it’s been with Sendgrid over the past few months,” he said.

Trying to filter out bad emails coming from a major email provider that so many legitimate companies rely upon to reach their customers can be a dicey business. If you filter the emails too aggressively you end up with an unacceptable number of “false positives,” i.e., benign or even desirable emails that get flagged as spam and sent to the junk folder or blocked altogether.

But McEwen said the incidence of malicious spam coming from Sendgrid has gotten so bad that he recently launched a new anti-spam block list specifically to filter out email from Sendgrid accounts that have been known to be blasting large volumes of junk or malicious email.

“Before I implemented this in my own filtering system a week ago, I was getting three to four phone calls or stern emails a week from angry customers wondering why these malicious emails were getting through to their inboxes,” McEwen said. “And I just am not seeing anything this egregious in terms of viruses and spams from the other email service providers.”

In an interview with KrebsOnSecurity, Sendgrid parent firm Twilio acknowledged the company had recently seen an increase in compromised customer accounts being abused for spam. While Sendgrid does allow customers to use multi-factor authentication (also known as two-factor authentication or 2FA), this protection is not mandatory.

But Twilio Chief Security Officer Steve Pugh said the company is working on changes that would require customers to use some form of 2FA in addition to usernames and passwords.

“Twilio believes that requiring 2FA for customer accounts is the right thing to do, and we’re working towards that end,” Pugh said. “2FA has proven to be a powerful tool in securing communications channels. This is part of the reason we acquired Authy and created a line of account security products and services. Twilio, like other platforms, is forming a plan on how to better secure our customers’ accounts through native technologies such as Authy and additional account level controls to mitigate known attack vectors.”

Requiring customers to use some form of 2FA would go a long way toward neutralizing the underground market for compromised Sendgrid accounts, which are sold by a variety of cybercriminals who specialize in gaining access to accounts by targeting users who re-use the same passwords across multiple websites.
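The most common "form of 2FA" here is a time-based one-time password (TOTP, RFC 6238), which is what authenticator apps like Authy generate: an HMAC over the current 30-second counter, truncated to a short code. A minimal standards-based sketch, not Twilio's or Authy's actual implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)      # 8-byte big-endian
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code is derived from a shared secret plus the current time, a password cracked from a re-used credential dump is useless on its own, which is exactly why mandatory 2FA would gut the market for compromised accounts.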

One such individual, who goes by the handle “Kromatix” on several forums, is currently selling access to more than 400 compromised Sendgrid user accounts. The pricing attached to each account is based on volume of email it can send in a given month. Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400.

“I have a large supply of cracked Sendgrid accounts that can be used to generate an API key which you can then plug into your mailer of choice and send massive amounts of emails with ensured delivery,” Kromatix wrote in an Aug. 23 sales thread. “Sendgrid servers maintain a very good reputation with [email service providers] so your content becomes much more likely to get into the inbox so long as your setup is correct.”

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid’s 2FA plans are long overdue, noting that the company bought Authy back in 2015.

“Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we’re seeing,” Schwartzman said.

“I understand that it’s a task to invoke 2FA, and given the volume of customers Sendgrid has that’s something to consider because there’s going to be a lot of customer overhead involved,” he continued. “But it’s not like your bank, social media account, email and plenty of other places online don’t already insist on it.”

Schwartzman said if Twilio doesn’t act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

“There is a tipping point after which receiving firms start to lose patience and start to more aggressively filter this stuff,” he said. “If seeing a Sendgrid email according to machine learning becomes a sign of abuse, trust me the machines will make the decisions even if the people don’t.”

Planet DebianDirk Eddelbuettel: anytime 0.3.9: More Minor Maintenance

A new minor release of the anytime package arrived on CRAN yesterday. This is the twentieth release, but sadly we seem to be spinning our wheels just accommodating CRAN (which the last two or three releases focused on). Code and functionality remain mature and stable, of course.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string as well as accommodating different formats in one input vector. See the anytime page, or the GitHub repo, for a few examples.

This release once again has to play catch-up with CRAN, as r-devel now changes how tzone propagates when we do as.POSIXct(as.POSIXlt(Sys.time())), which is now no longer “equal” to as.POSIXct(Sys.time()) even for a fixed, stored Sys.time() call result. Probably for the better, but an issue for now, so we … effectively just reduced test coverage. Call it “progress”.

The full list of changes follows.

Changes in anytime version 0.3.9 (2020-08-26)

  • Skip one test file that is impossible to run across different CRAN setups, and life is definitely too short for these games.

  • Change remaining http:// to https:// because, well, you know.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramUS Postal Service Files Blockchain Voting Patent

The US Postal Service has filed a patent on a blockchain voting method:

Abstract: A voting system can use the security of blockchain and the mail to provide a reliable voting system. A registered voter receives a computer readable code in the mail and confirms identity and confirms correct ballot information in an election. The system separates voter identification and votes to ensure vote anonymity, and stores votes on a distributed ledger in a blockchain.

I wasn't going to bother blogging this, but I've received enough emails about it that I should comment.

As is pretty much always the case, blockchain adds nothing. The security of this system has nothing to do with blockchain, and would be better off without it. For voting in particular, blockchain adds to the insecurity. Matt Blaze is most succinct on that point:

Why is blockchain voting a dumb idea?

Glad you asked.

For starters:

  • It doesn't solve any problems civil elections actually have.
  • It's basically incompatible with "software independence", considered an essential property.
  • It can make ballot secrecy difficult or impossible.

Both Ben Adida and Matthew Green have written longer pieces on blockchain and voting.

News articles.

Planet DebianBits from Debian: DebConf20 welcomes its sponsors!

DebConf20 logo

DebConf20 is taking place online, from 23 August to 29 August 2020. It is the 21st Debian conference, and organizers and participants are working hard together at creating interesting and fruitful events.

We would like to warmly welcome the 17 sponsors of DebConf20, and introduce them to you.

We have four Platinum sponsors.

Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

Our next Platinum sponsor is Infomaniak. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

Google is our third Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner.

Amazon Web Services (AWS) is our fourth Platinum sponsor. Amazon Web Services is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Our Gold sponsors are Deepin, the Matanel Foundation, Collabora, and HRT.

Deepin is a Chinese commercial company focusing on the development and service of Linux-based operating systems. They also lead research and development of the Deepin Debian derivative.

The Matanel Foundation operates in Israel, as its first concern is to preserve the cohesion of a society and a nation plagued by divisions. The Matanel Foundation also works in Europe, in Africa and in South America.

Collabora is a global consultancy delivering Open Source software solutions to the commercial world. In addition to offering solutions to clients, Collabora's engineers and developers actively contribute to many Open Source projects.

Hudson-Trading is a company led by mathematicians, computer scientists, statisticians, physicists and engineers. They research and develop automated trading algorithms using advanced mathematical techniques.

Our Silver sponsors are:

Linux Professional Institute, the global certification standard and career support organization for open source professionals, Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software, Ubuntu, the Operating System delivered by Canonical, and Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare.

Bronze sponsors: IBM, MySQL, Univention.

And finally, our Supporter level sponsors, ISG.EE and Pengwin.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf20.

Participating in DebConf20 online

The 21st Debian Conference is being held online, due to COVID-19, from August 23 to 29, 2020. Talks, discussions, panels and other activities run from 10:00 to 01:00 UTC. Visit the DebConf20 website at to learn about the complete schedule, watch the live streaming and join the different communication channels for participating in the conference.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2020

A Debian LTS logo Like each month, albeit a bit later due to vacation, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 249.25 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 6h from June), and gave back 2h to the pool.
  • Adrian Bunk did 16.0h (out of 25.25h assigned), thus carrying over 9.25h to August.
  • Ben Hutchings did 5h (out of 20h assigned), and gave back the remaining 15h.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 60h (out of 5.75h assigned and 54.25h from June).
  • Holger Levsen spent 10h (out of 10h assigned) for managing LTS and ELTS contributors.
  • Markus Koschany did 15h (out of 25.25h assigned), thus carrying over 10.25h to August.
  • Mike Gabriel did nothing (out of 8h assigned), thus carrying over 8h to August.
  • Ola Lundqvist did 3h (out of 12h assigned and 7h from June), thus carrying over 16h to August.
  • Roberto C. Sánchez did 26.5h (out of 25.25h assigned and 1.25h from June).
  • Sylvain Beucler did 25.25h (out of 25.25h assigned).
  • Thorsten Alteholz did 25.25h (out of 25.25h assigned).
  • Utkarsh Gupta did 25.25h (out of 25.25h assigned).

Evolution of the situation

July was our first month of Stretch LTS! Given this is our fourth LTS release we anticipated a smooth transition and it seems everything indeed went very well. Many thanks to the members of the Debian ftpmaster-, security, release- and publicity- teams who helped us make this happen!
Stretch LTS began on July 18th 2020, after the 13th and final Stretch point release, and is currently scheduled to end on June 30th 2022.

Last month, we asked you to participate in a survey and we got 1764 submissions, which is pretty awesome. Thank you very much for participating! Right now we are still busy crunching the results, but we already shared some early analysis during the DebConf LTS BoF this week.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file has 52 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Worse Than FailureError'd: Don't Leave This Page

"My Kindle showed me this for the entire time I read this book. Luckily, page 31 is really exciting!" writes Hans H.


Tim wrote, "Thanks JustPark, I'd love to verify my account! about that button?"


"I almost managed to uninstall Viber, or did I?" writes Simon T.


Marco wrote, "All I wanted to do was to post a one-time payment on a reputable cloud provider. Now I'm just confused."


Brinio H. wrote, "Somehow I expected my muscles to feel more sore after walking over 382 light-years on one day."


"Here we have PowerBI failing to dispel the perception that 'Business Intelligence' is an oxymoron," writes Craig.



Planet DebianReproducible Builds (diffoscope): diffoscope 158 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 158. This version includes the following changes:

* Improve PGP support:
  - Support extracting of files within PGP signed data.
    (Closes: reproducible-builds/diffoscope#214)
  - pgpdump(1) can successfully parse some unrelated, non-PGP binary files,
    so check that the parsed output contains something remotely sensible
    before identifying it as a PGP file.
* Don't use Python's repr(...)-style output in "Calling external command"
  logging output.
* Correct a typo of "output" in an internal comment.

You can find out more by visiting the project homepage.


Krebs on SecurityConfessions of an ID Theft Kingpin, Part II

Yesterday’s piece told the tale of Hieu Minh Ngo, a hacker the U.S. Secret Service described as someone who caused more material financial harm to more Americans than any other convicted cybercriminal. Ngo was recently deported back to his home country after serving more than seven years in prison for running multiple identity theft services. He now says he wants to use his experience to convince other cybercriminals to use their skills for good. Here’s a look at what happened after he got busted.

Hieu Minh Ngo, 29, in a recent photo.

Part I of this series ended with Ngo in handcuffs after disembarking a flight from his native Vietnam to Guam, where he believed he was going to meet another cybercriminal who’d promised to hook him up with the mother of all consumer data caches.

Ngo had been making more than $125,000 a month reselling ill-gotten access to some of the biggest data brokers on the planet. But the Secret Service discovered his various accounts at these data brokers and had them shut down one by one. Ngo became obsessed with restarting his business and maintaining his previous income. By this time, his ID theft services had earned roughly USD $3 million.

As this was going on, Secret Service agents used an intermediary to trick Ngo into thinking he’d trodden on the turf of another cybercriminal. From Part I:

The Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” the Secret Service’s Matt O’Neill recalled.

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

In an interview with KrebsOnSecurity, Ngo said he spent about two months in a Guam jail awaiting transfer to the United States. A month passed before he was allowed a 10-minute phone call to his family to explain what he’d gotten himself into.

“This was a very tough time,” Ngo said. “They were so sad and they were crying a lot.”

First stop on his prosecution tour was New Jersey, where he ultimately pleaded guilty to hacking into MicroBilt, the first of several data brokers whose consumer databases would power different iterations of his identity theft service over the years.

Next came New Hampshire, where another guilty plea forced him to testify in three different trials against identity thieves who had used his services for years. Among them was Lance Ealy, a serial ID thief from Dayton, Ohio who used Ngo’s service to purchase more than 350 “fullz” — a term used to describe a package of everything one would need to steal someone’s identity, including their Social Security number, mother’s maiden name, birth date, address, phone number, email address, bank account information and passwords.

Ealy used Ngo’s service primarily to conduct tax refund fraud with the U.S. Internal Revenue Service (IRS), claiming huge refunds in the names of ID theft victims who first learned of the fraud when they went to file their taxes and found someone else had beat them to it.

Ngo’s cooperation with the government ultimately led to 20 arrests, with a dozen of those defendants lured into the open by O’Neill and other Secret Service agents posing as Ngo.

The Secret Service had difficulty pinning down the exact amount of financial damage inflicted by Ngo’s various ID theft services over the years, primarily because those services only kept records of what customers searched for — not which records they purchased.

But based on the records they did have, the government estimated that Ngo’s service enabled approximately $1.1 billion in new account fraud at banks and retailers throughout the United States, and roughly $64 million in tax refund fraud with the states and the IRS.

“We interviewed a number of Ngo’s customers, who were pretty open about why they were using his services,” O’Neill said. “Many of them told us the same thing: Buying identities was so much better for them than stolen payment card data, because card data could be used once or twice before it was no good to them anymore. But identities could be used over and over again for years.”

O’Neill said he still marvels at the fact that Ngo’s name is practically unknown when compared to the world’s most infamous credit card thieves, some of whom were responsible for stealing hundreds of millions of cards from big box retail merchants.

“I don’t know of anyone who has come close to causing more material harm than Ngo did to the average American,” O’Neill said. “But most people have probably never heard of him.”

Ngo said he wasn’t surprised that his services were responsible for so much financial damage. But he was utterly unprepared to hear about the human toll. Throughout the court proceedings, Ngo sat through story after dreadful story of how his work had ruined the financial lives of people harmed by his services.

“When I was running the service, I didn’t really care because I didn’t know my customers and I didn’t know much about what they were doing with it,” Ngo said. “But during my case, the federal court received like 13,000 letters from victims who complained they lost their houses, jobs, or could no longer afford to buy a home or maintain their financial life because of me. That made me feel really bad, and I realized I’d been a terrible person.”

Even as he bounced from one federal detention facility to the next, Ngo always seemed to encounter ID theft victims wherever he went, including prison guards, healthcare workers and counselors.

“When I was in jail at Beaumont, Texas I talked to one of the correctional officers there who shared with me a story about her friend who lost her identity and then lost everything after that,” Ngo recalled. “Her whole life fell apart. I don’t know if that lady was one of my victims, but that story made me feel sick. I know now that what I was doing was just evil.”

Ngo’s former ID theft service usearching[.]info.

The Vietnamese hacker was released from prison a few months ago, and is now finishing up a mandatory three-week COVID-19 quarantine in a government-run facility near Ho Chi Minh city. In the final months of his detention, Ngo started reading everything he could get his hands on about computer and Internet security, and even authored a lengthy guide written for the average Internet user with advice about how to avoid getting hacked or becoming the victim of identity theft.

Ngo said while he would like to one day get a job working in some cybersecurity role, he’s in no hurry to do so. He’s already had at least one job offer in Vietnam, but he turned it down. He says he’s not ready to work yet, but is looking forward to spending time with his family — and specifically with his dad, who was recently diagnosed with Stage 4 cancer.

Longer term, Ngo says, he wants to mentor young people and help guide them on the right path, and away from cybercrime. He’s been brutally honest about his crimes and the destruction he’s caused. His LinkedIn profile states up front that he’s a convicted cybercriminal.

“I hope my work can help to change the minds of somebody, and if at least one person can change and turn to do good, I’m happy,” Ngo said. “It’s time for me to do something right, to give back to the world, because I know I can do something like this.”

Still, the recidivism rate among cybercriminals tends to be extremely high, and it would be easy for him to slip back into his old ways. After all, few people know as well as he does how best to exploit access to identity data.

O’Neill said he believes Ngo probably will keep his nose clean. But he added that Ngo’s service, if it existed today, probably would be even more successful and lucrative, given the sheer number of scammers involved in using stolen identity data to defraud states and the federal government out of pandemic assistance loans and unemployment insurance benefits.

“It doesn’t appear he’s looking to get back into that life of crime,” O’Neill said. “But I firmly believe the people doing fraudulent small business loans and unemployment claims cut their teeth on his website. He was definitely the new coin of the realm.”

Ngo maintains he has zero interest in doing anything that might send him back to prison.

“Prison is a difficult place, but it gave me time to think about my life and my choices,” he said. “I am committing myself to do good and be better every day. I now know that money is just a part of life. It’s not everything and it can’t bring you true happiness. I hope those cybercriminals out there can learn from my experience. I hope they stop what they are doing and instead use their skills to help make the world better.”

CryptogramCory Doctorow on The Age of Surveillance Capitalism

Cory Doctorow has written an extended rebuttal of The Age of Surveillance Capitalism by Shoshana Zuboff. He summarized the argument on Twitter.

Shorter summary: it's not the surveillance part, it's the fact that these companies are monopolies.

I think it's both. Surveillance capitalism has some unique properties that make it particularly unethical and incompatible with a free society, and Zuboff makes them clear in her book. But the current acceptance of monopolies in our society is also extremely damaging -- which Doctorow makes clear.

Worse Than FailureCodeSOD: Win By Being Last

I’m going to open with just one line, just one line from Megan D, before we dig into the story:

public static boolean comparePasswords(char[] password1, char[] password2)

A long time ago, someone wrote a Java 1.4 application. It’s all about getting data out of data files, like CSVs and Excel and XML, and getting it into a database, where it can then be turned into plots and reports. Currently, it has two customers, but boy, there’s a lot of technology invested in it, so the pointy-hairs decided that it needed to be updated so they could sell it to new customers.

The developers played a game of “Not It!” and Megan lost. It wasn’t hard to see why no one wanted to touch this code. The UI section was implemented in code generated by an Eclipse plugin that no longer exists. There was UI code which wasn’t implemented that way, but there were no code paths that actually showed it. The project didn’t have one “do everything” class of utilities- it had many of them.

The real magic was in the database layer. All the data got converted into strings before going into the database, and data got pulled back out as lists of strings - one string per row, prepended with the number of columns in that row. The string would get split up and converted back into the actual datatypes.

Getting back to our sample line above, Megan adds:

No restrictions on any data in the database, or even input cleaning - little Bobby Tables would have a field day. There are so many issues that the fact that passwords are plaintext barely even registers as a problem.

A common convention used in the database layer is “loop and compare”. Want to check if a username exists in the database? SELECT username FROM users WHERE username = 'someuser', loop across the results, and if the username in the result set matches 'someuser', set a flag to true (set it to false otherwise). Return the flag. And if you're wondering why they need to look at each row instead of just seeing a non-zero number of matches, so am I.
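For the record, the database can answer the yes/no question directly; a sketch (reusing the example query above, with the literal replaced by a bind parameter) that never fetches or loops over rows:

```sql
-- Let the database report existence instead of looping client-side
SELECT CASE WHEN EXISTS (
           SELECT 1 FROM users WHERE username = @username
       ) THEN 1 ELSE 0 END AS user_exists;
```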

Usernames are not unique, but the username/group combination should be.

Similarly, if you’re logging in, it uses a “loop and compare”. Find all the rows for users with that username. Then, find all the groups for that username. Loop across all the groups and check if any of them match the user trying to log in. Then loop across all the stored plaintext passwords and see if they match.

But that raises the question: how do you tell if two strings match? Just use an equality comparison? Or a .equals? Of course not.

We use “loop and compare” on sequences of rows, so we should also use “loop and compare” on sequences of characters. What could be wrong with that?

  /**
   * Compares two given char arrays for equality.
   * @param password1
   *          The first password to compare.
   * @param password2
   *          The second password to compare.
   * @return True if the passwords are equal false otherwise.
   */
  public static boolean comparePasswords(char[] password1, char[] password2)
  {
    // assume false until prove otherwise
    boolean aSameFlag = false;
    if (password1 != null && password2 != null)
    {
      if (password1.length == password2.length)
      {
        for (int aIndex = 0; aIndex < password1.length; aIndex++)
        {
          aSameFlag = password1[aIndex] == password2[aIndex];
        }
      }
    }
    return aSameFlag;
  }

If the passwords are both non-null, if they’re both the same length, compare them one character at a time. For each character, set the aSameFlag to true if they match, false if they don’t.

Return the aSameFlag.

The end result of this is that only the last letter matters, so from the perspective of this code, there’s no difference between the word “ship” and a more accurate way to describe this code.
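For contrast, a corrected version (a sketch, not the vendor's actual fix) makes every character count. Rather than bailing out on the first mismatch, it accumulates differences with XOR, so the comparison also takes the same time regardless of where a mismatch occurs - a nice property for password checks:

```java
public class PasswordCompare {
    /**
     * Returns true only if both arrays are non-null, equal in length,
     * and match at every position. Differences are OR-accumulated so
     * the loop's running time does not depend on where a mismatch is.
     */
    public static boolean comparePasswords(char[] password1, char[] password2) {
        if (password1 == null || password2 == null
                || password1.length != password2.length) {
            return false;
        }
        int diff = 0;
        for (int i = 0; i < password1.length; i++) {
            diff |= password1[i] ^ password2[i];
        }
        return diff == 0;
    }
}
```

Unlike the original, "ship" no longer equals every other word ending in "p". (The standard library's java.security.MessageDigest.isEqual offers the same constant-time comparison for byte arrays.)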



Planet DebianDirk Eddelbuettel: #29: Easy, Reliable, Fast Linux CRAN Binaries via BSPM

Welcome to the 29th post in the randomly repeating R recommendations series, or R4 for short. Our last post #28 introduced RSPM, and just before that we also talked in #27 about binary installations on Ubuntu (which was also a T4 video). This post is joint with Iñaki Ucar and mainly about work we have done with his bspm package.


CRAN has been a cornerstone of the success of R in recent years. As a well-maintained repository with stringent quality control, it ensures users have access to the highest-quality statistical / analytical software that “just works”. Users on Windows and macOS also benefit from faster installation via pre-compiled binary packages.

Linux users generally install from source, which can be more tedious and, often, much slower. Those who know where to look have had access to (at least some) binaries for years as well (and one of us blogged and vlogged about this at length). Debian users get close to 1000 CRAN and BioConductor packages (and, true to Debian form, for well over a dozen hardware platforms). Michael Rutter maintains a PPA with 4600 binaries for three different Ubuntu flavors (see c2d4u4.0+). More recently, Fedora joined the party with 16000 (!!) binaries, essentially all of CRAN, via a Copr repository (see iucar/cran).

The buzz currently is however with RSPM, a new package manager by RStudio. An audacious project, it provides binaries for several Linux distributions and releases. It has already been tested in many RStudio Cloud sessions (including with some of our students) as well as some CI integrations.

RSPM cuts “across” and takes the breadth of CRAN across several Linux distributions, bringing installation of pre-built CRAN packages as binaries under their normal CRAN package names. Another nice touch is the integration with install.packages(): these binaries are installed in a way that is natural for R users—but as binaries. It is however entirely disconnected from the system package management. This means that the installation of a package requiring an external library may “succeed” and still fail, as a required library simply cannot be pulled in directly by RSPM.

So what is needed is a combination. We want binaries that are aware of their system dependencies but accessible directly from R just like RSPM offers it. Enter BSPM—the Bridge to System Package Manager package (also on CRAN).
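In practice, enabling the bridge is a one-liner (a sketch; it assumes bspm and its system-level requirements are already installed, e.g. from one of the repositories mentioned above), after which install.packages() transparently delegates to apt or dnf:

```shell
# Sketch: with bspm installed, install.packages() resolves via the
# system package manager, pulling in system dependencies as needed
Rscript -e 'bspm::enable(); install.packages("Cairo")'
```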

The first illustration (using Ubuntu 18.04) shows RSPM on the left, and BSPM on the right, both installing the graphics package Cairo (and both using custom Rocker containers).

This fails for RSPM as no binary is present and a source build fails for the familiar lack of a -dev package. It proceeds just fine on the right under BSPM.

A second illustration shows once again RSPM on the left, and BSPM on the right (this time on Fedora), both installing the units package without a required system dependency.

The binary installation of units succeeds in both cases, but the missing system dependency (the UDUNITS2 library) is pulled in only by BSPM. Consequently, the package loads fine under BSPM but fails to load under RSPM.


To conclude, highlights of BSPM are:

  • direct installation of binary packages from R via R commands under their normal CRAN names (just like RSPM);
  • full integration with the system package manager, delivering system installations (improving upon RSPM);
  • full dependency resolution for R packages, including system requirements (improving upon RSPM).

This offers easy, reliable, fast installation of R packages, and we invite you to pick all three. We recommend usage with either Ubuntu with the 4.6k packages via the Rutter PPA, or Fedora via the even more complete Copr repository (which already includes a specially-tailored version of BSPM called CoprManager).

We hope this short note whets your appetite to learn more about bspm (which is itself on CRAN) and the two sets of Rocker containers shown. The rocker/r-rspm container comes in two flavours for Ubuntu 18.04 and 20.04. Similarly, the rocker/r-bspm container comes in the same two flavours for Ubuntu 18.04 and 20.04, as well as in a Debian testing variant.

Feedback is appreciated at the bspm or rocker issue trackers.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityConfessions of an ID Theft Kingpin, Part I

At the height of his cybercriminal career, the hacker known as “Hieupc” was earning $125,000 a month running a bustling identity theft service that siphoned consumer dossiers from some of the world’s top data brokers. That is, until his greed and ambition played straight into an elaborate snare set by the U.S. Secret Service. Now, after more than seven years in prison, Hieupc is back in his home country and hoping to convince other would-be cybercrooks to use their computer skills for good.

Hieu Minh Ngo, in his teens.

For several years beginning around 2010, a lone teenager in Vietnam named Hieu Minh Ngo ran one of the Internet’s most profitable and popular services for selling “fullz,” stolen identity records that included a consumer’s name, date of birth, Social Security number and email and physical address.

Ngo got his treasure trove of consumer data by hacking and social engineering his way into a string of major data brokers. By the time the Secret Service caught up with him in 2013, he’d made over $3 million selling fullz data to identity thieves and organized crime rings operating throughout the United States.

Matt O’Neill is the Secret Service agent who in February 2013 successfully executed a scheme to lure Ngo out of Vietnam and into Guam, where the young hacker was arrested and sent to the mainland U.S. to face prosecution. O’Neill now heads the agency’s Global Investigative Operations Center, which supports investigations into transnational organized criminal groups.

O’Neill said he opened the investigation into Ngo’s identity theft business after reading about it in a 2011 KrebsOnSecurity story, “How Much is Your Identity Worth?” According to O’Neill, what’s remarkable about Ngo is that to this day his name is virtually unknown among the pantheon of infamous convicted cybercriminals, the majority of whom were busted for trafficking in huge quantities of stolen credit cards.

Ngo’s businesses enabled an entire generation of cybercriminals to commit an estimated $1 billion worth of new account fraud, and to sully the credit histories of countless Americans in the process.

“I don’t know of any other cybercriminal who has caused more material financial harm to more Americans than Ngo,” O’Neill told KrebsOnSecurity. “He was selling the personal information on more than 200 million Americans and allowing anyone to buy it for pennies apiece.”

Freshly released from the U.S. prison system and deported back to Vietnam, Ngo is currently finishing up a mandatory three-week COVID-19 quarantine at a government-run facility. He contacted KrebsOnSecurity from inside this facility with the stated aim of telling his little-known story, and to warn others away from following in his footsteps.


Ten years ago, then 19-year-old hacker Ngo was a regular on the Vietnamese-language computer hacking forums. Ngo says he came from a middle-class family that owned an electronics store, and that his parents bought him a computer when he was around 12 years old. From then on out, he was hooked.

In his late teens, he traveled to New Zealand to study English at a university there. By that time, he was already an administrator of several dark web hacker forums, and between his studies he discovered a vulnerability in the school’s network that exposed payment card data.

“I did contact the IT technician there to fix it, but nobody cared so I hacked the whole system,” Ngo recalled. “Then I used the same vulnerability to hack other websites. I was stealing lots of credit cards.”

Ngo said he decided to use the card data to buy concert and event tickets from Ticketmaster, and then sell the tickets at a New Zealand auction site called TradeMe. The university later learned of the intrusion and Ngo’s role in it, and the Auckland police got involved. Ngo’s travel visa was not renewed after his first semester ended, and in retribution he attacked the university’s site, shutting it down for at least two days.

Ngo said he started taking classes again back in Vietnam, but soon found he was spending most of his time on cybercrime forums.

“I went from hacking for fun to hacking for profits when I saw how easy it was to make money stealing customer databases,” Ngo said. “I was hanging out with some of my friends from the underground forums and we talked about planning a new criminal activity.”

“My friends said doing credit cards and bank information is very dangerous, so I started thinking about selling identities,” Ngo continued. “At first I thought well, it’s just information, maybe it’s not that bad because it’s not related to bank accounts directly. But I was wrong, and the money I started making very fast just blinded me to a lot of things.”


His first big target was a consumer credit reporting company in New Jersey called MicroBilt.

“I was hacking into their platform and stealing their customer database so I could use their customer logins to access their [consumer] databases,” Ngo said. “I was in their systems for almost a year without them knowing.”

Very soon after gaining access to MicroBilt, Ngo says, he stood up Superget[.]info, a website that advertised the sale of individual consumer records. Ngo said initially his service was quite manual, requiring customers to request specific states or consumers they wanted information on, and he would conduct the lookups by hand.

Ngo’s former identity theft service, superget[.]info

“I was trying to get more records at once, but the speed of our Internet in Vietnam then was very slow,” Ngo recalled. “I couldn’t download it because the database was so huge. So I just manually search for whoever need identities.”

But Ngo would soon work out how to use more powerful servers in the United States to automate the collection of larger amounts of consumer data from MicroBilt’s systems, and from other data brokers. As I wrote of Ngo’s service back in November 2011:

“Superget lets users search for specific individuals by name, city, and state. Each “credit” costs USD$1, and a successful hit on a Social Security number or date of birth costs 3 credits each. The more credits you buy, the cheaper the searches are per credit: Six credits cost $4.99; 35 credits cost $20.99, and $100.99 buys you 230 credits. Customers with special needs can avail themselves of the “reseller plan,” which promises 1,500 credits for $500.99, and 3,500 credits for $1000.99.

“Our Databases are updated EVERY DAY,” the site’s owner enthuses. “About 99% nearly 100% US people could be found, more than any sites on the internet now.”

Ngo’s intrusion into MicroBilt eventually was detected, and the company kicked him out of their systems. But he says he got back in using another vulnerability.

“I was hacking them and it was back and forth for months,” Ngo said. “They would discover [my accounts] and fix it, and I would discover a new vulnerability and hack them again.”


This game of cat and mouse continued until Ngo found a much more reliable and stable source of consumer data: A U.S. based company called Court Ventures, which aggregated public records from court documents. Ngo wasn’t interested in the data collected by Court Ventures, but rather in its data sharing agreement with a third-party data broker called U.S. Info Search, which had access to far more sensitive consumer records.

Using forged documents and more than a few lies, Ngo was able to convince Court Ventures that he was a private investigator based in the United States.

“At first [when] I sign up they asked for some documents to verify,” Ngo said. “So I just used some skill about social engineering and went through the security check.”

Then, in March 2012, something even more remarkable happened: Court Ventures was purchased by Experian, one of the big three major consumer credit bureaus in the United States. And for nine months after the acquisition, Ngo was able to maintain his access.

“After that, the database was under control by Experian,” he said. “I was paying Experian good money, thousands of dollars a month.”

Whether anyone at Experian ever performed due diligence on the accounts grandfathered in from Court Ventures is unclear. But it wouldn’t have taken a rocket surgeon to figure out that this particular customer was up to something fishy.

For one thing, Ngo paid the monthly invoices for his customers’ data requests using wire transfers from a multitude of banks around the world, but mostly from new accounts at financial institutions in China, Malaysia and Singapore.

O’Neill said Ngo’s identity theft website generated tens of thousands of queries each month. For example, the first invoice Court Ventures sent Ngo in December 2010 was for 60,000 queries. By the time Experian acquired the company, Ngo’s service had attracted more than 1,400 regular customers, and was averaging 160,000 monthly queries.

More importantly, Ngo’s profit margins were enormous.

“His service was quite the racket,” he said. “Court Ventures charged him 14 cents per lookup, but he charged his customers about $1 for each query.”

By this time, O’Neill and his fellow Secret Service agents had served dozens of subpoenas tied to Ngo’s identity theft service, including one that granted them access to the email account he used to communicate with customers and administer his site. The agents discovered several emails from Ngo instructing an accomplice to pay Experian using wire transfers from different Asian banks.


Working with the Secret Service, Experian quickly zeroed in on Ngo’s accounts and shut them down. Aware of an opportunity here, the Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” O’Neill recalled.

The U.K. cybercriminal, acting at the behest of the Secret Service and U.K. authorities, told Ngo that if he wanted to maintain his access, he could agree to meet up in person. But Ngo didn’t immediately bite on the offer.

Instead, he weaseled his way into another huge data store. In much the same way he’d gained access to Court Ventures, Ngo got an account at a company called TLO, another data broker that sells access to extremely detailed and sensitive information on most Americans.

TLO’s service is accessible to law enforcement agencies and to a limited number of vetted professionals who can demonstrate they have a lawful reason to access such information. In 2014, TLO was acquired by Trans Union, one of the other three big U.S. consumer credit reporting bureaus.

And for a short time, Ngo used his access to TLO to power a new iteration of his business — an identity theft service rebranded as usearching[.]info. This site also pulled consumer data from a payday loan company that Ngo hacked into, as documented in my Sept. 2012 story, ID Theft Service Tied to Payday Loan Sites. Ngo said the hacked payday loans site gave him instant access to roughly 1,000 new fullz records each day.

Ngo’s former ID theft service usearching[.]info.


By this time, Ngo was a multi-millionaire: His various sites and reselling agreements with three Russian-language cybercriminal stores online had earned him more than USD $3 million. He told his parents his money came from helping companies develop websites, and even used some of his ill-gotten gains to pay off the family’s debts (its electronics business had gone belly up, and a family member had borrowed but never paid back a significant sum of money).

But mostly, Ngo said, he spent his money on frivolous things, although he says he’s never touched drugs or alcohol.

“I spent it on vacations and cars and a lot of other stupid stuff,” he said.

When TLO locked Ngo out of his account there, the Secret Service used it as another opportunity for their cybercriminal mouthpiece in the U.K. to turn the screws on Ngo yet again.

“He told Ngo he’d locked him out again, and that he could do this all day long,” O’Neill said. “And if he truly wanted lasting access to all of these places he used to have access to, he would agree to meet and form a more secure partnership.”

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. Ngo says he understood at the time that Guam is an unincorporated territory of the United States, but that he discounted the chances that this was all some kind of elaborate law enforcement sting operation.

“I was so desperate to have a stable database, and I got blinded by greed and started acting crazy without thinking,” Ngo said. “Lots of people told me ‘Don’t go!,’ but I told them I have to try and see what’s going on.”

But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

This is Part I of a multi-part series. Part II in this series is available at this link.

Planet DebianAndrew Cater: The Debconf20 song

The DebConf 20 song - a sea shanty - to the tune of "Fathom the bowl"

Here's to DebConf 20, the brightest and best
Now it's this year's orga team getting no rest
We're not met in Haifa - it's all doom and gloom
And I'm sat like a lifer here trapped in my room

I'm sat in my room, it's all doom and gloom
And I'm sat at my keyboard here trapped in my room

Now there's IRC rooms and there's jitsi and all
But no fun conversations as we meet in the hall
No hugs for old friends, no shared wine and cheese
Just shared indigestion as we take our ease

I'm sat in my room, it's all doom and gloom
And I'm sat with three screens around me in my room

But there's people to chat to, and faces we know
And new things to learn and we're all on the go
Algo en espanol - there's no cause for alarm
An Indic track showcasing Malayalam

I'm sat in my room, it's all doom and gloom
And I'm sat with my Thinkpads and cats in my room

With webcams and buffering, with lag and delay
It's as well that there's Debconf time all through the day
The effects of tiredness are hard to foresee
For the Debian clocks all are timezone UTC

I'm sat in my room, it's all doom and gloom
And I'll sing out of tune as I'm sat in my room

There's no social drinking, there's no games of Mao
Keeping social distance, we can't think quite how
This year is still friendly though minus some fun
We'll catch up next year when we'll all get some sun

I'm sat in my room, it's all doom and gloom
I'm sat with my friends around here in my room

There's loopy@debconf and snippets and such
To cheer us all up, sure, it doesn't take much
For we're all one big family, though we each code alone
And we sometimes switch off or just complain and moan

I'm sat in my room, it's all doom and gloom
And there's space for us all in the debconf chat room

This is my first DebConf - hope it won't be my last
And we'll meet up somewhere when this COVID is past
To all who have done this - we deserve the credit
Now if you'll excuse me - I've web pages to edit

I'm sat in my room, it's not all doom and gloom
And we're met as one Debian here in my room


CryptogramAmazon Supplier Fraud

Interesting story of an Amazon supplier fraud:

According to the indictment, the brothers swapped ASINs for items Amazon ordered to send large quantities of different goods instead. In one instance, Amazon ordered 12 canisters of disinfectant spray costing $94.03. The defendants allegedly shipped 7,000 toothbrushes costing $94.03 each, using the code for the disinfectant spray, and later billed Amazon for over $650,000.

In another instance, Amazon ordered a single bottle of designer perfume for $289.78. In response, according to the indictment, the defendants sent 927 plastic beard trimmers costing $289.79 each, using the ASIN for the perfume. Prosecutors say the brothers frequently shipped and charged Amazon for more than 10,000 units of an item when it had requested fewer than 100. Once Amazon detected the fraud and shut down their accounts, the brothers allegedly tried to open new ones using fake names, different email addresses, and VPNs to obscure their identity.

It all worked because Amazon is so huge that everything is automated.

Worse Than FailureCodeSOD: Where to Insert This

If you run a business of any size, you need some sort of resource-management/planning software. Really small businesses use Excel. Medium businesses use Excel. Enterprises use Excel. But in addition to that, the large businesses also pay through the nose for a gigantic ERP system, like Oracle or SAP, that they can wire up to Excel.

Small and medium businesses can’t afford an ERP, but they might want to purchase a management package in the niche realm of “SMB software”- small and medium business software. Much like their larger cousins, these SMB tools have… a different idea of code quality.

Cassandra’s company had deployed such a product, and with it came a slew of tickets. The performance was bad. There were bugs everywhere. While the company provided support, Cassandra’s IT team was expected to also do some diagnosing.

While digging around in one nasty performance problem, Cassandra found that one button in the application would generate and execute this block of SQL code using a SQLCommand object in C#.

DECLARE @tmp TABLE (Id uniqueidentifier)

--{ Dynamic single insert statements, may be in the hundreds. }

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)
    INSERT INTO SomeTable (PK, SomeDate) SELECT Id, getdate() as SomeDate FROM @tmp 
    UPDATE st
        SET SomeDate = getdate()
        FROM @tmp t
        LEFT JOIN SomeTable AS st ON t.Id = st.PK AND SomeDate = NULL

At its core, the purpose of this is to take a temp-table full of rows and perform an “upsert” for all of them: insert if a record with that key doesn’t exist, update if a record with that key does. Now, this code is clearly SQL Server code, so a MERGE statement would handle that.
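For reference, the whole insert-or-update could be expressed in a single statement; a sketch against the table and columns named in the snippet above:

```sql
MERGE SomeTable AS st
USING @tmp AS t
   ON st.PK = t.Id
WHEN MATCHED THEN
    UPDATE SET SomeDate = getdate()
WHEN NOT MATCHED THEN
    INSERT (PK, SomeDate) VALUES (t.Id, getdate());
```

MERGE decides per row, so a batch that mixes new and existing keys is handled correctly.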

But okay, maybe they’re trying to be as database agnostic as possible, and don’t want to use something that, while widely supported, has some dialect differences across databases. Fine, but there’s another problem here.

Whoever built this understood that in SQL Server land, cursors are frowned upon, so they didn’t want to iterate across every row. But here’s their problem: some of the records may exist, some of them may not, so they need to check that.

As you saw, this was their approach:

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)

This is wrong. This will be true only if none of the rows in the dynamically generated INSERT statements exist in the base table. If some of the rows exist and some don’t, you aren’t going to get the results you were expecting, because this code only goes down one branch: it either inserts or updates.

There are other things wrong with this code. For example, SomeDate = NULL is going to have different behavior based on whether the ANSI_NULLS database flag is OFF (in which case it works), or ON (in which case it doesn’t). There’s a whole lot of caveats about whether you set it at the database level, on the connection string, during your session, but in Cassandra’s example, ANSI_NULLS was ON at the time this ran, so that also didn’t work.

There are other weird choices and performance problems with this code, but the important thing is that this code doesn’t work. This is in a shipped product, installed by over 4,000 businesses (the vendor is quite happy to cite that number in their marketing materials). And it ships with code that can’t work.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Debian: Alexandre Viau: Setting up Nightscout using MongoDB Atlas

Nightscout is an Open Source web-based CGM (Continuous Glucose Monitor) that allows multiple caregivers to remotely view a patient’s glucose data in real time.

It is often deployed by non-technical people for their own use. The traditional method used a MongoDB Addon on Heroku that is now deprecated. I have sent patches to adapt the documentation on the relevant projects:

This app is life-changing. Some Nightscout users may be impatient, so I am writing this blog post to guide them in the meantime.

Setting up Nightscout

If you want to set up Nightscout from scratch using MongoDB Atlas, please follow this modified guide.

However, note that you will have to make one modification to the steps in the guide. At the start of Step 4, you will need to use this repository instead: it is my own version of Nightscout and contains small modifications that will allow you to set it up with MongoDB Atlas easily.

I will keep this blog post updated as I receive feedback. Come back here for more instructions.

Planet Debian: Jacob Adams: Get A Command Line

How to access a command line on your laptop without installing Linux.

Linux is great, and I recommend trying it out, whether on real hardware or in a virtual machine, to anyone interested in Computer Science.

However, the process can be quite involved, and sometimes you don’t want to change your whole operating system or sort out installing virtual machines.

Fortunately, these days you can try out one of Linux’s greatest features, the command line, without going through all that hassle.


If you have a Mac, all you need to do is open Terminal, which is usually found under /Applications/Utilities. Note that macOS now defaults to zsh instead of bash, which is usually the shell used on Linux. This shouldn't matter, but it's something you should be aware of.


On Windows, things are much more complex. There’s always Powershell, but if you want a true Unix shell experience like you’d get on Linux, you’ll need to install the Windows Subsystem for Linux. This allows you to run Linux programs on your Windows 10 computer.

This boils down to opening Powershell (open the start menu, search for “powershell”) as an administrator (right-click, then “Run as Administrator,” then click “Yes” or enter an administrator’s password when the UAC prompt appears). In this new Powershell window, you need to run the following command to enable WSL:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

When this command executes successfully, you should see output reporting that the operation completed (the original post includes a "WSL Success" screenshot here).

After this output appears, you’ll need to reboot your machine before you can continue.

Once your machine is rebooted, you need to install a Linux distribution. Different communities and companies ship their own versions of Linux and the various services and utilities required to use it, and these versions are called “distributions.”

If you’re not sure which one to use, I would recommend Ubuntu, as it was the distribution first integrated into WSL, and it’s a common distribution for new users.

After installing your chosen distribution, you'll need to perform the first-time setup. Just run it - it's now installed as a program on your computer - and it will walk you through the setup process, which requires you to create a new user account for your Linux distribution.

This does not need to be the same as your Windows username and password, and it’s probably safer if it isn’t. You’ll need to remember that password for running administrative commands with sudo.
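Once you're at the prompt, a few harmless commands will confirm everything works (these are standard Unix commands, the same ones you'd use on a real Linux machine):

```shell
pwd                   # print which directory you are in
ls -l /               # list the root directory in long format
whoami                # show the user account you just created
echo "hello, world"   # print a line of text
```

From here you can explore the filesystem, install packages with your distribution's package manager, and follow along with any Linux command-line tutorial.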


Kevin Rudd: AFR: How Mitch Hooke Axed the Mining Tax and Climate Action

Published by The Australian Financial Review on 25 August 2020

The Australian political arena is full of reinventions.

Tony Abbott has gone from pushing emissions cuts under the Paris climate agreement to demanding Australia withdraw from the treaty altogether. And Scott Morrison, who accused Labor of presiding over “crippling” debt, now binges on wasteful debt-fuelled spending that makes our government’s stimulus look like a rounding error.

However, neither of these metamorphoses comes close to the transformation of Mitch Hooke, the former Minerals Council chief and conservative political operative, who now pretends he is a lifelong evangelist of carbon pricing.

Writing in The Australian Financial Review, (Ken Henry got it wrong on climate wars, mining tax on August 11) Hooke said he supported emissions trading throughout the mid-2000s until my government came to power in 2007.

I then supposedly “trashed that consensus” by using the proceeds of a carbon price to compensate motorists, low-income households and trade-exposed industries.

How dreadful to help those most impacted by a carbon price! The very point of an emissions trading scheme is that it can change consumers’ behaviour without making people on low to middle incomes worse off. That’s why you increase the price of emissions-intensive goods and services (relative to less polluting alternatives) then give that money back to people through the tax or benefits system so they’re no worse off. But they are then able to choose a more climate-friendly product.

The alternative is the government just pockets the cash – thereby defeating the entire purpose of a market-based scheme. Obviously this is pure rocket science for Mitch.

Hooke also seems to have forgotten that such compensation was not only appropriate, but it was exactly what Malcolm Turnbull was demanding in exchange for Liberal support for our proposal in the Senate. Without it, any emissions trading scheme would be a non-starter.

When that deal was tested in the Liberal party room, it was defeated by a single vote. Even so, enough Liberal senators crossed the floor to give the Green political party the balance of power.

Showing their true colours, Bob Brown's senators sided with Tony Abbott and Barnaby Joyce to kill the legislation. If they hadn't, Australia would now be 10 years down the path of steady decarbonisation. The Green party has, to this day, been unable to adequately explain that decision to voters.

For Hooke, the reality is that he never wanted an emissions trading scheme if he could avoid one. But rather than state this outright, he just insists on impossible preconditions. As for Hooke’s most beloved Howard government, John Winston would in all probability have gone even further than Labor in compensating people affected by his own proposed emissions trading scheme, given Howard’s legendary ability to bake middle-class welfare into any national budget. Just ask Peter Costello.

Hooke has, like Abbott, been one of the most destructive voices in Australian national climate change action. He also expresses zero remorse for his deceptive campaign of misinformation, in partnership with those wonderful corporate citizens at Rio, targeting my government’s efforts to introduce a profits-based tax for minerals, mirroring the petroleum resource rent tax implemented by the Hawke government in the 1980s.

Our Resource Super Profits Tax would have funded new infrastructure to address looming capacity constraints affecting the sector as well as an across-the-board company tax cut to 28 per cent. Most importantly it sought to fairly spread the proceeds of mining profits when they vastly exceeded the industry norms – such as during commodity price booms – with the broader Australian public. Lest we forget, they actually own those resources. Rio just rents them.

In response, Hooke and his mates at Rio and BHP accumulated a $90 million war chest, and $22.2 million of shareholders' funds was poured into a political advertising campaign over six weeks.

Another $1.9 million was tipped into Liberal and National party coffers to keep conservative politicians on side. All to keep Rio and BHP happy, while ignoring the deep structural interests of the rest of our mining sector, many of whom supported our proposal.

At their height, Hooke’s television ads were screening around 33 times per day on free-to-air channels. Claims the tax would be a “hand grenade” to retirement savings were blasted by the Australian Institute of Superannuation Trustees which referred the “irresponsible” and “scaremongering” campaign to regulators.

This was not an exercise in public debate to refine aspects of the tax’s design; it was a systematic effort to use the wealth of two multinational mining companies to bludgeon the government into submission.

And when Gillard and Swan capitulated as the first act of their new government, they essentially turned over the drafting pen to Hooke to write a new rent tax that collected almost zero revenue.

The industry, however, was far from unified. Fortescue Metals Group chairman Andrew “Twiggy” Forrest understood what we were trying to achieve, having circumvented Hooke’s spin machine to deal directly with my resources minister Martin Ferguson.

We ultimately agreed that Forrest would stand alongside me and pledge to support the tax. The next day, Gillard and Swan struck. And Hooke has been a happy man ever since, even though Australia is the poorer for it.

It doesn’t matter where you sit on the political spectrum, everyone involved in public debate should hope that they’ve helped to improve the lives of ordinary people.

That is not Hooke's legacy. Nor his interest. However much he may now seek to rationalise his conduct, Hooke's stock-in-trade was brutal, destructive politics in direct service of BHP, Rio and the carbon lobby.

He was paid handsomely to thwart climate change action and ensure wealthy multinationals didn’t pay a dollar more in tax than was absolutely necessary. He succeeded. But I’m not sure his grandchildren will be all that proud of his destructive record.

Congratulations, Mitch.


Planet Debian: Jonas Meurer: cryptsetup-suspend

Introducing cryptsetup-suspend

Today, we're introducing cryptsetup-suspend, whose job is to protect the content of your harddrives while the system is sleeping.


  • You can lock your encrypted harddrives during suspend mode by installing cryptsetup-suspend
  • For cryptsetup-suspend to work properly, at least Linux kernel 5.6 is required
  • We hope that in a bright future, everything will be available out-of-the-box in Debian and its derivatives






What does this mean and why should you care about it?

If you don't use full-disk encryption, don't read any further. Instead, think about what would happen if you lost your notebook on the train and a random person picked it up and browsed through all your personal pictures, e-mails, and tax records. Then encrypt your system and come back.

If you believe full-disk encryption is necessary, you might know that it only protects you while your machine is powered off. Once you turn on the machine and decrypt your harddrive, your encryption key stays in RAM and can potentially be extracted by malicious software or physical access. Even if these attacks are non-trivial, they're enough to worry about: an attacker who extracts your disk encryption keys from memory can read the contents of your disk.

Sadly, in 2020, we hardly power off our laptops anymore. The sleep mode, also known as "suspend mode", is just too convenient. Just close the lid to freeze the system state and lift it anytime later in order to continue. Well, convenience usually comes with a cost: during suspend mode, your system memory is kept powered, all your data - including your encryption keys - stays there, waiting to be extracted by a malicious person.  Unfortunately, there are practical attacks to extract the data of your powered memory.

Cryptsetup-suspend expands the protection of your full-disk encryption to all those times when your computer sleeps in suspend mode. Cryptsetup-suspend utilizes the suspend feature of LUKS volumes and integrates it with your Debian system. Encryption keys are evicted from memory before suspend mode and the volumes have to be re-opened after resuming - potentially prompting for the required passphrases.

By now, we have a working prototype which we want to introduce today. We did quite some testing, both on virtualized and bare-metal Debian and Ubuntu systems, with and without graphical stack, so we dare to unseal and set free the project and ask you - the community - to test, review, criticize and give feedback.

Here's a screencast of cryptsetup-suspend in action:

State of the implementation: where are we?

If you're interested in the technical details, here's how cryptsetup-suspend works internally. It basically consists of three parts:


  1. cryptsetup-suspend: A C program that takes a list of LUKS devices as arguments, suspends them via luksSuspend and suspends the system afterwards. Also, it tries to reserve some memory for decryption, which we'll explain below.
  2. cryptsetup-suspend-wrapper: A shell wrapper script which works the following way:
    1. Extract the initramfs into a ramfs
    2. Run (systemd) pre-suspend scripts, stop udev, freeze almost all cgroups
    3. Chroot into the ramfs and run cryptsetup-suspend
    4. Resume initramfs devices inside chroot after resume
    5. Resume non-initramfs devices outside chroot
    6. Thaw cgroups, start udev, run (systemd) post-suspend scripts
    7. Unmount the ramfs
  3. A systemd unit drop-in file overriding the ExecStart property of systemd-suspend.service so that it invokes the script cryptsetup-suspend-wrapper.
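As an illustration, such a drop-in could look like the following (the file name and wrapper path here are assumptions for the sketch, not taken from the package):

```ini
# /etc/systemd/system/systemd-suspend.service.d/cryptsetup-suspend.conf
[Service]
# First clear the stock ExecStart, then substitute the wrapper script.
ExecStart=
ExecStart=/usr/sbin/cryptsetup-suspend-wrapper
```

An empty ExecStart= line is required because systemd otherwise appends, rather than replaces, ExecStart entries from drop-ins.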

Reusing large parts of the existing cryptsetup-initramfs implementation has some positive side-effects: Out-of-the-box, we support all LUKS block device setups that have been supported by the Debian cryptsetup packages before.

Freezing most processes/cgroups is necessary to prevent possible race-conditions and dead-locks after the system resumes. Processes will try to access data on the locked/suspended block devices eventually leading to buffer overflows and data loss.

Technical challenges and caveats

  • Dead-locks at suspend: In order to prevent possible dead-locks between suspending the encrypted LUKS disks and suspending the system, we have to tell the Linux kernel to not sync() before going to sleep. A corresponding patch got accepted upstream in Linux 5.6. See section What about the kernel patch? below for details.
  • Race conditions at resume: Likewise, there's a risk of race conditions between resuming the system and unlocking the encrypted LUKS disks. We went with freezing as many processes as possible as a countermeasure. See last part of section State of the implementation: where are we? for details.
  • Memory management: Memory management is definitely a challenge. Unlocking disks might require a lot of memory (if key derivation function is argon2i) and the swap device most likely is locked at that time. See section All that matters to me is the memories! below for details.
  • systemd dependency: Our implementation depends on systemd. It uses a unit drop-in file for systemd-suspend.service for hooking into the system suspend process and depends on systemd's cgroup management to freeze and thaw processes. If you're using a different init system, sorry, you're currently out of luck.

What about the kernel patch?

The problem is simple: the Linux kernel suspend implementation enforces a final filesystem sync() before the system goes to sleep in order to prevent potential data loss. While that's sensible in most scenarios, it may result in dead-locks in our situation, since the block device that holds the filesystem is already suspended. The sync() call will block forever, waiting for the suspended block device to finish the operation. So we need a way to conditionally disable this sync() call in the Linux kernel suspend path. That's what our patch does, by introducing a run-time switch at /sys/power/sync_on_suspend, but it only got accepted into the Linux kernel recently and was first released with Linux kernel 5.6.
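In user space, the switch would be used roughly like this (an illustrative sketch; writing the sysfs file requires root and Linux 5.6+, so the write is guarded here):

```shell
# Flush filesystems ourselves while the block devices are still live...
sync
# ...then tell the kernel to skip its own sync() on the next suspend.
if [ -w /sys/power/sync_on_suspend ]; then
    echo 0 > /sys/power/sync_on_suspend
fi
```

The point is the ordering: user space syncs first, then suspends the LUKS devices, and only then suspends the system with the kernel's own sync disabled.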

Since release 4.3, the Linux kernel at least provides a build-time flag to disable the sync(): CONFIG_SUSPEND_SKIP_SYNC (that was called SUSPEND_SKIP_SYNC first and renamed to CONFIG_SUSPEND_SKIP_SYNC in kernel release 4.9). Enabling this flag at build-time protects you against the dead locks perfectly well. But while that works on an individual basis, it's a non-option for the distribution Linux kernel defaults. In most cases you still want the sync() to happen, except if you have user-space code that takes care of the sync() just before suspending your system - just like our cryptsetup-suspend implementation does.

So in order to properly test cryptsetup-suspend, you're strongly advised to run Linux kernel 5.6 or newer. Fortunately, Linux 5.6 is available in buster-backports thanks to the Debian Kernel Team.

All that matters to me is the memories!

One of the tricky parts is memory management. Since version 2, LUKS uses argon2i as default key derivation function. Argon2i is a memory-hard hash function and LUKS2 assigns the minimum of half of your system's memory or 1 GB to unlocking your device. While this is usually unproblematic during system boot - there's not much in the system memory anyway - it can become problematic when suspending. When cryptsetup tries to unlock a device and wants 1 GB of memory for this, but everything is already occupied by your browser and video player, there are only two options:

  1. Kill a process to free some memory
  2. Move some of the data from memory to swap space

The first option is certainly not what you expect when suspending your system. The second option is impossible, because swap is located on your harddrive which we have locked before. Our current solution is to allocate the memory after freezing the other processes, but before locking the disks. At this time, the system can still move data to swap, but it won't be accessed anymore. We then release the memory just in time for cryptsetup to claim it again. The implementation of this is still subject to change.
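The reservation dance described above can be sketched in C roughly like this (illustrative only - the function names are made up for this sketch, and the real implementation may differ):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Reserve `bytes` of RAM while swap is still usable: touching every page
 * forces the kernel to push other data out to swap *now*, so the memory
 * is free for argon2i once the disks are locked. */
static void *reserve_memory(size_t bytes)
{
    unsigned char *buf = malloc(bytes);
    if (buf == NULL)
        return NULL;
    memset(buf, 1, bytes);  /* fault in every page */
    mlock(buf, bytes);      /* best effort: keep the pages resident */
    return buf;
}

static void release_memory(void *buf, size_t bytes)
{
    /* Hand the memory back just before cryptsetup needs it. */
    munlock(buf, bytes);
    free(buf);
}
```

The reservation happens after the other processes are frozen (so nothing else grabs the memory) and is released just before cryptsetup claims it again.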


What's missing: A proper user interface

As mentioned before, we consider cryptsetup-suspend usable, but it certainly still has bugs and shortcomings. The most obvious one is the lack of a proper user interface. Currently, we switch over to a tty command-line interface to prompt for passphrases when unlocking the LUKS devices. It certainly would be better to replace this with a graphical user interface later, probably by using plymouth or something similar. Unfortunately, it seems rather impossible to spawn a real graphical environment for the passphrase prompt. That would mean loading the full graphical stack into the ramfs, raising the required amount of memory significantly. Lack of memory is currently our biggest concern and source of trouble.

We'd definitely appreciate your ideas on how to improve the user experience here.

Let's get practical: how to use

TL;DR: On Debian Bullseye (Testing), all you need to do is to install the cryptsetup-suspend package from experimental. It's not necessary to upgrade the other cryptsetup packages. On Debian Buster, cryptsetup packages from backports are required.

  1. First, be sure that you're running Linux kernel 5.6 or newer. For Buster systems, it's available in buster-backports.
  2. Second, if you're on Debian Buster, install the cryptsetup 2:2.3.3-2~bpo10+1 packages from buster-backports.
  3. Third, install the cryptsetup-suspend package from experimental. Beware that cryptsetup-suspend depends on cryptsetup-initramfs (>= 2:2.3.3-1~). Either you need the cryptsetup packages from testing/unstable, or the backports from buster-backports.
  4. Now that you have the cryptsetup-suspend package installed, everything should be in place: Just send your system to sleep. It should switch to a virtual text terminal before going to sleep, ask for a passphrase to unlock your encrypted disk(s) after resume and switch back to your former working environment (most likely your graphical desktop environment) afterwards.

Security considerations

Suspending LUKS devices basically means to remove the corresponding encryption keys from system memory. This protects against all sorts of attacks trying to read them from there, e.g. cold-boot attacks. But cryptsetup-suspend only protects the encryption keys of your LUKS devices. Most likely there's more sensitive data in system memory, like all kinds of private keys (e.g. OpenPGP, OpenSSH) or documents with sensitive content.

We hope that the community will help improve this situation by providing useful pre-/post-suspend scripts. A positive example is KeepassXC, which is able to lock itself when going to suspend mode.

Feedback and Comments

We'd be more than happy to learn about your thoughts on cryptsetup-suspend. For specific issues, don't hesitate to open a bug report against cryptsetup-suspend. You can also reach us via mail - see the next section for contact addresses. Last but not least, comments below the blog post work as well.


  • Tim (tim at
  • Jonas (jonas at

Long Now: The Alchemical Brothers: Brian Eno & Roger Eno Interviewed

Long Now co-founder Brian Eno on time, music, and contextuality in a recent interview, rhyming on Gregory Bateson’s definition of information as “a difference that makes a difference”:

If a Martian came to Earth and you played her a late Beethoven String Quartet and then another written by a first-year music student, it is unlikely that she would a) understand what the point of listening to them was at all, and b) be able to distinguish between them.

What this makes clear is that most of the listening experience is constructed in our heads. The ‘beauty’ we hear in a piece of music isn’t something intrinsic and immutable – like, say, the atomic weight of a metal is intrinsic – but is a product of our perception interacting with that group of sounds in a particular historical context. You hear the music in relation to all the other experiences you’ve had of listening to music, not in a vacuum. This piece you are listening to right now is the latest sentence in a lifelong conversation you’ve been having. What you are hearing is the way it differs from, or conforms to, the rest of that experience. The magic is in our alertness to novelty, our attraction to familiarity, and the alchemy between the two.

The idea that music is somehow eternal, outside of our interaction with it, is easily disproven. When I lived for a few months in Bangkok I went to the Chinese Opera, just because it was such a mystery to me. I had no idea what the other people in the audience were getting excited by. Sometimes they’d all leap up from their chairs and cheer and clap at a point that, to me, was effectively identical to every other point in the performance. I didn’t understand the language, and didn’t know what the conversation had been up to that point. There could be no magic other than the cheap thrill of exoticism.

So those poor deluded missionaries who dragged gramophones into darkest Africa because they thought the experience of listening to Bach would somehow ‘civilise the natives’ were wrong in just about every way possible: in thinking that ‘the natives’ were uncivilised, in not recognising that they had their own music, and in assuming that our Western music was culturally detachable and transplantable – that it somehow carried within it the seeds of civilisation. This cultural arrogance has been attached to classical music ever since it lost its primacy as the popular centre of the Western musical universe, as though the soundtrack of the Austro-Hungarian Empire in the 19th Century was somehow automatically universal and superior.

Google Adsense: AdSense Reports Technical Lead Manager

The new AdSense reporting is live

Cryptogram: Identifying People by Their Browsing Histories

Interesting paper: "Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories":

We examine the threat to individuals' privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns [48]. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper's core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.

One of the authors of the original study comments on the replication.

Worse Than Failure: CodeSOD: Wait a Minute

Hanna's co-worker implemented a new service, got it deployed, and then left for vacation someplace where there's no phones or Internet. So, of course, Hanna gets a call from one of the operations folks: "That new service your team deployed keeps crashing on startup, but there's nothing in the log."

Hanna took it on herself to check into the VB.Net code.

Public Class Service
    Private mContinue As Boolean = True
    Private mServiceException As System.Exception = Nothing
    Private mAppSettings As AppSettings

    '// ... snip ... //

    Private Sub DoWork()
        Try
            Dim aboutNowOld As String = ""
            Dim starttime As String = DateTime.Now.AddSeconds(5).ToString("HH:mm")
            While mContinue
                Threading.Thread.Sleep(1000)
                Dim aboutnow As String = DateTime.Now.ToString("HH:mm")
                If starttime = aboutnow And aboutnow <> aboutNowOld Then
                    '// ... snip ... //
                    starttime = DateTime.Now.AddMinutes(mAppSettings.pollingInterval).ToString("HH:mm")
                End If
                aboutNowOld = aboutnow
            End While
        Catch ex As Exception
            mServiceException = ex
        End Try
        If mServiceException IsNot Nothing Then
            EventLog.WriteEntry(mServiceException.ToString, Diagnostics.EventLogEntryType.Error)
            Throw mServiceException
        End If
    End Sub
End Class

Presumably whatever causes the crash is behind one of those "snip"s, but Hanna didn't include that information. Instead, let's focus on our unique way to idle.

First, we pick our starttime to be the minute 5 seconds into the future. Then we enter our work loop. Sleep for one second, and then check which minute we're on. If that minute is our starttime and this loop hasn't run during this minute before, we can get into our actual work (snipped), and then calculate the next starttime, based on our app settings.

If there are any exceptions, we break the loop, log and re-throw it- but don't do that from the exception handler. No, we store the exception in a member variable and then if it IsNot Nothing we log it out.

Hanna writes: "After seeing this I gave up immediately before I caused a time paradox. Guess we'll have to wait till she's back from the future to fix it."

It's not quite a paradox, but it's certainly far more complex than it ever needs to be. First, we have the stringly-typed date handling. That's just awful. Then, we have the once-per-second polling, but we expect pollingInterval to be in minutes. But AddMinutes takes doubles, so it could be seconds, expressed as fractional minutes. But wait, if we know how long we want to wait between executions, couldn't we just Sleep that long? Why poll every second? Does this job absolutely have to run in the first second of every minute? Even if it does, we could easily calculate that sleep time with reasonable accuracy if we actually looked at the seconds portion of the current time.
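Put differently, the entire loop could shrink to something like this (a sketch, reusing the member names from the original; DoPolledWork stands in for the snipped work):

While mContinue
    ' Do the snipped work, then sleep for the whole interval at once
    ' (pollingInterval is in minutes, as the original code assumes).
    DoPolledWork()
    Threading.Thread.Sleep(TimeSpan.FromMinutes(mAppSettings.pollingInterval))
End While

No stringly-typed clock comparisons, no once-per-second wakeups.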

The developer who wrote this saw the problem of "execute this code once every polling interval" and just called it a day.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Rondam Ramblings: They knew. They still know.

Never forget what conservatives were saying about Donald Trump before he cowed them into submission. (Sorry about the tiny size of the embedded video. That's the default that Blogger gave me and I can't figure out how to adjust the size. If it bothers you, click on the link above to see the original.)

Rondam Ramblings: Republicans officially endorse a Trump dictatorship

The Republican party has formally decided not to adopt a platform this year, instead passing a resolution that says essentially, "we will support whatever the Dear Leader says".  Since the resolution calls out the media for its biased reporting, I will quote the resolution here in its entirety, with the salient portions highlighted: WHEREAS, The Republican National Committee (RNC) has

Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 14)

Here’s part fourteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Planet DebianJonathan Dowland: Out of control (21 minutes of madness mix)

Chemical Brothers — Out Of Control (21 Minutes of Madness remix) Copy 1143 of 1999


Also known as The Secret Psychedelic Mix. I picked this up last year. It was issued to promote the 20th anniversary re-issue of the parent album "Surrender". I remember liking this song back when it came out. At that time I didn't know who the guest singer was — Bernard Sumner — and if I had it wouldn't mean anything to me.

This is a pretty good mix. There's nothing "extra" in it, really: it's the same elements as the original seven-minute version, stretched over 21 minutes this time, with perhaps some more production elements (more dubby stuff), but it doesn't overstay its welcome.

Planet DebianJonathan Carter: DebConf 20 Sessions

DebConf20 is happening from 23 August to 29 August. The full schedule is available on the DebConf20 website.

I’m preparing (or helping to prepare) 3 sessions for this DebConf. I wish I had the time for more, but with my current time constraints, even preparing for these sessions took some careful planning!

Bits from the DPL

Time: Aug 24 (Mon): 16:00 UTC.

The traditional DebConf talk from the DPL, where we take a look at the state of the Debian project and where we’re heading. This talk is pre-recorded, but there will be a few minutes after the talk for questions.

Leadership in Debian BOF/Panel

Time: Aug 27 (Thu): 18:00 UTC.

In this session, we will host a panel of people who hold (or who have held) leadership positions within Debian.

We’ll go through a few questions for the panel and then continue with open questions and discussion.

Local Teams

Time: Aug 29 (Sat): 19:00 UTC.

We already have a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

In this BoF, we’ll discuss the possibility of setting up a local group support team or a new delegation that will keep track of local teams, manage budgets and get new local teams bootstrapped.


CryptogramDiceKeys

DiceKeys is a physical mechanism for creating and storing a 192-bit key. The idea is that you roll a special set of twenty-five dice, put them into a plastic jig, and then use an app to convert those dice into a key. You can then use that key for a variety of purposes, and regenerate it from the dice if you need to.

This week Stuart Schechter, a computer scientist at the University of California, Berkeley, is launching DiceKeys, a simple kit for physically generating a single super-secure key that can serve as the basis for creating all the most important passwords in your life for years or even decades to come. With little more than a plastic contraption that looks a bit like a Boggle set and an accompanying web app to scan the resulting dice roll, DiceKeys creates a highly random, mathematically unguessable key. You can then use that key to derive master passwords for password managers, as the seed to create a U2F key for two-factor authentication, or even as the secret key for cryptocurrency wallets. Perhaps most importantly, the box of dice is designed to serve as a permanent, offline key to regenerate that master password, crypto key, or U2F token if it gets lost, forgotten, or broken.


Schechter is also building a separate app that will integrate with DiceKeys to allow users to write a DiceKeys-generated key to their U2F two-factor authentication token. Currently the app works only with the open-source SoloKey U2F token, but Schechter hopes to expand it to be compatible with more commonly used U2F tokens before DiceKeys ship out. The same API that allows that integration with his U2F token app will also allow cryptocurrency wallet developers to integrate their wallets with DiceKeys, so that with a compatible wallet app, DiceKeys can generate the cryptographic key that protects your crypto coins too.

Here's the DiceKeys website and app. Here's a short video demo. Here's a longer SOUPS talk.

Preorder a set here.

Note: I am an adviser on the project.

Another news article. Slashdot thread. Hacker News thread. Reddit thread.

Planet DebianSven Hoexter: google cloud buster images without python 2


I have to stand corrected. noahm@ wrote me: the Debian Cloud Image maintainers only ever included python explicitly in the Azure images. The most likely explanation for the change in the Google images is that Google just ported the last parts of their own software to python 3, and subsequently removed python 2.

With some relief, one can conclude that it's only our own fault that we did not build our own images including all of our own dependencies. Take it as a reminder to always build your own images. Always. Be it VMs or Docker. Build your own image.

Original Post

Fun in the morning: we realized that the Debian Cloud image builds dropped python 2, and that propagated to the Google-provided Debian/buster images. So if you use something like ansible, so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3.

We're just trying to suffer through it for now, and have set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also

The other lesson to learn here: the GCE Debian stable images are not stable. That blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.

Worse Than FailureCodeSOD: Sudon't

There are a few WTFs in today's story. Let's get the first one out of the way: Jan S downloaded a shell script and ran it as root, without reading it. Now, let's be fair, that's honestly a pretty mild WTF; we've all done something similar, and popular software tools still tell you to install them with a curl … | sh, and then sudo themselves extra permissions in the script.

The software being installed in this case is a tool for accessing Bitlocker encrypted drives from Linux. And the real WTF for this one is the install script, which we'll dig into in a moment. This is not, however, some small scale open source project thrown together by hobbyists, but instead released by Initech's "Data Recovery" business. In this case, this is the open source core of a larger data recovery product- if you're willing to muck around with low level commands and configs, you can do it for free, but if you want a vaguely usable UI, get ready to pony up $40.

With that in mind, let's take a look at the script. We're going to do this in chunks, because nearly everything is wrong. You might think I'm exaggerating, but here's the first two lines of the script:

#!/bin/bash
home_dir="/home/"${USER}"/initech.bitlocker"

That is not how you find out the user's home directory. We'll usually use ${HOME}, or since the shebang tells us this is definitely bash, we could just use ~. Jan also points out that while a username probably shouldn't have a space, it's possible, and since the ${USER} isn't in quotes, this breaks in that case.
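For comparison, a version of that line that avoids both problems, using the shell's own ${HOME} and quoting the expansion (a sketch; the directory name is taken from the script):

```shell
#!/bin/bash
# Use the shell's own notion of the home directory, and quote the
# expansion so a username (or home path) containing spaces still works.
home_dir="${HOME}/initech.bitlocker"
echo "${home_dir}"
```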

echo ${home_dir}
install_dir=$1
if [ ! -d "${install_dir}" ]; then
install_dir=${home_dir}
if [ ! -d "${install_dir}" ]; then
echo "create dir : "${install_dir}
mkdir ${install_dir}

Who wants indentation in their scripts? And if a script supports arguments, should we tell the user about it? Of course not! Just check to see if they supplied an argument, and if they did, we'll treat that as the install directory.

As a bonus, the mkdir line protects people like Jan who run this script as root, at least if their home directory is /root, which is common. When it tries to mkdir /home/root/initech.bitlocker, the script fails there.

echo "Install software to ${install_dir}"
cp -rf ./* ${install_dir}"/"

Once again, the person who wrote this script doesn't seem to understand what the double quotes in Bash are for, but the real magic is the next segment:

echo "Copy runtime environment ..."
sudo cp -f ./ /usr/lib/
sudo cp -f ./ /usr/lib64
sudo cp -f ./ /usr/lib/
sudo cp -f ./ /usr/lib64

Did you have libssl already installed in your system? Well now you have this version! Hope that's okay for you. We like our version of libssl and libcrypto so much we're copying them into your library directories twice. They probably meant to copy libcrypto and libssl to both lib and lib64, but messed up.

Well, that is assuming you already have a lib64 directory, because if you don't, you now have a lib64 file which contains the data from

This is the installer for a piece of software which has been released as part of a product that Initech wants to sell, and they don't successfully install it.

sudo ln -s ${install_dir}/mount.bitlocker /usr/bin/mount.bitlocker
sudo ln -s ${install_dir}/bitlocker.creator /usr/bin/create.bitlocker
sudo ln -s ${install_dir}/ /usr/bin/
sudo ln -s ${install_dir}/ /usr/bin/initech.bitlocker.mount
sudo ln -s ${install_dir}/ /usr/bin/initech.bitlocker

Hey, here's an install step with no critical mistakes, assuming that no other package or tool has tried to claim those names in /usr/bin, which is probably true (Jan actually checked this using dpkg -S … to see if any packages wanted to use that path).
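Jan's dpkg -S check can be sketched as a short loop; this is an illustration, not part of the original installer, and the names come from the ln -s lines above:

```shell
#!/bin/bash
# For each name the installer wants to claim in /usr/bin, ask dpkg
# which package (if any) already owns that path. dpkg -S exits 0
# when some package owns the file, non-zero otherwise.
for name in mount.bitlocker create.bitlocker initech.bitlocker; do
    if dpkg -S "/usr/bin/${name}" >/dev/null 2>&1; then
        echo "conflict: /usr/bin/${name} is owned by a package"
    else
        echo "free: /usr/bin/${name}"
    fi
done
```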

source /etc/os-release
case $ID in
debian|ubuntu|devuan)
    echo "Installing dependent package - curl ..."
    sudo apt-get install curl -y
    echo "Installing dependent package - openssl ..."
    sudo apt-get install openssl -y
    echo "Installing dependent package - fuse ..."
    sudo apt-get install fuse -y
    echo "Installing dependent package - gksu ..."
    sudo apt-get install gksu -y
    ;;

Here's the first branch of our case. They've learned to indent. They've chosen to slap the -y flag on all the apt-get commands, which means the user isn't going to get a choice about installing these packages, which is mildly annoying. It's also worth noting that sourcing /etc/os-release can be considered harmful, but clearly "not doing harm" isn't high on this script's agenda.

centos|fedora|rhel)
    yumdnf="yum"
    if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
        yumdnf="dnf"
    fi
    echo "Installing dependent package - curl ..."
    sudo $yumdnf install -y curl
    echo "Installing dependent package - openssl ..."
    sudo $yumdnf install -y openssl
    echo "Installing dependent package - fuse ..."
    sudo $yumdnf install -y fuse3-libs.x86_64
    ;;

So maybe they just don't think if deserves indentation? They indent the case branches fine. I'm not sure what their thinking is.

Speaking of if, look closely at that version check: test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0.

Now, this is almost clever. If your Linux version number uses decimal values, like 18.04, you can't do a simple if [ "$VERSION_ID" -ge 22 ]…: you'd get an integer expression expected error. So using bc does make sense…ish. It would be good to check whether, y'know, bc were actually installed (it probably is, but you don't know), and it might be better to actually think about the purpose of the check.

They don't actually care what version of Redhat Linux you're running. What they're checking is if your version uses yum for package management, or its successor dnf. A more reliable check would be to simply see if dnf is a valid command, and if not, fallback to yum.
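A sketch of that more reliable check (not from the original script): probe for the dnf command directly with command -v, and fall back to yum:

```shell
#!/bin/bash
# Prefer dnf when it exists, fall back to yum otherwise; no need to
# know which Fedora/RHEL version introduced dnf, and no bc dependency.
if command -v dnf >/dev/null 2>&1; then
    yumdnf="dnf"
else
    yumdnf="yum"
fi
echo "using ${yumdnf}"
```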

Let's finish out the case statement:

*)
    exit 1
    ;;
esac

So if your system doesn't use an apt based package manager or a yum/dnf based package manager, this just bails at this point. No error message, just an error number. You know it failed, and you don't know why, and it failed after copying a bunch of crap around your system.

So first it mostly installs itself, then it checks to see if it can successfully install all of its dependencies. And if it fails, does it clean up the changes it made? You better believe it doesn't!

echo ""
echo "Initech BitLocker Loader has been installed to "${install_dir}" successfully."
echo "Run initech.bitlocker --help to learn more about Initech BitLocker Loader"

This is a pretty optimistic statement, and while yes, it has theoretically been installed to ${install_dir}, assuming that we've gotten this far, it's really installed to your /usr/bin directory.

The real extra super-special puzzle to me is that it interfaces with your package manager to install dependencies. But it also installs its own versions of libcrypto and libssl, which don't come from your package manager. Ignoring the fact that it probably *installs them into the wrong places*, it seems bad. Suspicious, bad, and troubling.

Jan didn't send us the uninstall script, and honestly, I assume there isn't one. But if there is one, you know it probably tries to do rm -rf /${SOME_VAR_THAT_MIGHT_BE_EMPTY} somewhere in there. Which, in consideration, is probably the safest way to uninstall this software anyway.
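For what it's worth, bash has a built-in guard against exactly that mistake: the ${var:?} expansion aborts the command with an error when the variable is unset or empty, instead of silently expanding to nothing. A sketch, with a made-up variable name:

```shell
#!/bin/bash
# If install_dir is unset or empty, ${install_dir:?...} makes the
# shell print an error and abort instead of running `rm -rf /`.
# The subshell keeps the abort from killing this demo script.
install_dir=""
( rm -rf "/${install_dir:?refusing to run with empty install_dir}" ) 2>/dev/null \
    || echo "guard worked: nothing was deleted"
```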

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianUlrike Uhlig: Code reviews: from nitpicking to cooperation

After we gave our talk at DebConf 20, Doing things together, there were 5 minutes left for the live Q&A. Pollo asked a question that I think is interesting and deserves a longer answer: How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

I find it useful to first disentangle what code reviews are good for, how we do them, why we do them that way, and how we can potentially improve processes.

What are code reviews good for?

Code review and peer review are great methods for cooperation aiming at:

  • Ensuring that the code works as intended
  • Ensuring that the task was fully accomplished and no detail left out
  • Ensuring that no security issues have been introduced
  • Making sure the code fits the practices of the team and is understandable and maintainable by others
  • Sharing insights, transferring knowledge between code author and code reviewer
  • Helping the code author to learn to write better code

Looking at this list, the last point seems to be more like a nice side effect of all the other points. :)

How do code reviews happen in our communities?

It seems to be a common assumption that code reviews are—and have to be—picky and perfectionist. To me, this does not actually seem to be a necessity to accomplish the above mentioned goals. We might want to work with precision—a quality which is different from perfection. Perfection can hardly be a goal: perfection does not exist.

Perfectionist dynamics can lead to failing to call something "good enough" or "done". Sometimes, a disproportionate amount of time is invested in writing (several) code reviews for minor issues. In some cases, strong perfectionist dynamics of a reviewer can create a feeling of never being good enough along with a loss of self esteem for otherwise skilled code authors.

When do we cross the line?

When going from cooperation, precision, and learning to write better code, to nitpicking, we are crossing a line: nitpicking means to pedantically search for others' faults. For example, I once got one of my Git commits at work criticized merely for its commit message that was said to be "ugly" because I "use[d] the same word twice" in it.

When we are nitpicking, we might not give feedback in an appreciative, cooperative way, we become fault finders instead. From there it's a short way to operating on the level of blame.

Are you nitpicking to help or are you nitpicking to prove something? Motivations matter.

How can we improve code reviewing?

When we did something wrong, we can do better next time. When we are told that we are wrong, the underlying assumption is that we cannot change (See Brené Brown, The difference between blame and shame). We can learn to go beyond blame.

Negative feedback rarely leads to improvement if the environment in which it happens lacks general appreciation and confirmation. We can learn to give helpful feedback. It might be harder to create an appreciative environment in which negative feedback is a possibility for growth. One can think of it like of a relationship: in a healthy relationship we can tell each other when something does not work and work it out—because we regularly experience that we respect, value, and support each other.

To be able to work precisely, we need guidelines, tools, and time. It's not possible to work with precision if we are in a hurry, burnt out, or working under a permanent state of exception. The same is true for receiving picky feedback.

On DebConf's IRC channel, after our talk, marvil07 said: On picky code reviews, something that I find useful is automation on code reviews; i.e. when a bot is stating a list of indentation/style errors it feels less personal, and also saves time to humans to provide more insightful changes. Indeed, we can set up routines that do automatic fault checking (linting). We can set up coding guidelines. We can define what we call "done" or "good enough".
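As a sketch of that kind of automation (the file name and the choice of shellcheck are illustrative, not something from the talk): a Git pre-commit hook that lets a linter do the nitpicking before a human reviewer ever sees the code:

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: lint the staged shell scripts
# with shellcheck, and refuse the commit if the linter complains.
status=0
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.sh'); do
    shellcheck "$f" || status=1
done
exit $status
```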

We can negotiate with each other how we would like code to be reviewed. For example, one could agree that a particularly perfectionist reviewer should point out only functional faults. They can spare their time and refrain from writing lengthy reviews about minor esthetic issues that have never made it into a guideline. If necessary, author and reviewer can talk about what can be improved on the long term during a retrospective. Or, on the contrary, one could explicitly ask for a particularly detailed review including all sorts of esthetic issues to learn the best practices of a team applied to one's own code.

In summary: let's not lose sight of what code reviews are good for, let's have a clear definition of "done", let's not confuse precision with perfection, let's create appreciative work environments, and negotiate with each other how reviews are made.

I'm sure you will come up with more ideas. Please do not hesitate to share them!

Planet DebianNorbert Preining: Social Equality and Free Software – BoF at DebConf20

Shortly after yesterday’s start of the Debian Conference 2020, I had the honor to participate in a BoF on social equality in free software, led by the OSI vice president and head of the FOSSASIA community, Hong Phuc Dang. The group of discussants consisted of OSS representatives from a wide variety of countries (India, Indonesia, China, Hong Kong, Germany, Vietnam, Singapore, Japan).

After a short introduction by Hong Phuc, we turned to a round of self-introductions and "what is equality for me". This already brought up a wide variety of issues that need to be addressed if we want to counter inequality in free software (culture differences, language barriers, internet connection, access to services, onboarding difficulties, political restrictions, …).

Unfortunately, on-air time was rather restricted, but even after the DebConf related streaming time slot was finished, we continued discussing problems and possible approaches for another two hours. We have agreed to continue our collaboration and meetings in the hope that we, in particular the FOSSASIA community, can support those in need to counter inequality.

Concluding, I have to say I am very happy to be part of the FOSSASIA community – where real diversity is lived and everyone strives for and tries to increase social equality. In the DebConf IRC chat I was asked why at FOSSASIA we have about a 50:50 ratio between women and men, in contrast to the usual 10:90 split predominant in most software communities including Debian. For me this boils down to many reasons, one being competent female leadership: Hong Phuc is inspiring and competent to a degree I haven’t seen in anyone else. Another reason is of course that software development is, especially in developing countries, one of the few “escape pods” for any gender, and thus fully embraced by normally underrepresented groups. Finally – but this is a typical chicken-and-egg problem – the FOSSASIA community is not doing any specific gender politics, but simply remains open and friendly to everyone. I think Debian – and in particular the diversity movement in Debian – can learn a lot from the FOSSASIA community. In the end we are all striving for more equality in our projects and in the realm of free software as a whole!

Thanks again for all the participants for the very inspiring discussion, and I am looking forward to our next meetings!

Planet DebianArnaud Rebillout: Send emails from your terminal with msmtp

In this tutorial, we'll configure everything needed to send emails from the terminal. We'll use msmtp, a lightweight SMTP client. For the sake of the example, we'll use a GMail account, but any other email provider can do. Your OS is expected to be Debian, as usual on this blog, although it doesn't really matter. We will also see how to store the credentials for the email account in the system keyring. And finally, we'll go the extra mile, and see how to configure various command-line utilities so that they automatically use msmtp to send emails. Even better, we'll make msmtp the default email sender, to actually avoid configuring these utilities one by one.


Strong prerequisites (if you don't recognize yourself here, you probably landed on the wrong page):

  • You run Linux on your computer (let's assume a Debian-like distro).
  • You want to send emails from your terminal.

Weak prerequisites (if your setup doesn't match those points exactly, that's fine, you can still read on):

  • Your email account is a GMail one.
  • Your desktop environment is GNOME.

GMail account setup

For a GMail account, there's a bit of configuration to do. For other email providers, I have no idea, maybe you can just skip this part, or maybe you will have to go through a similar procedure.

If you want an external program (msmtp in this case) to talk to the GMail servers on your behalf, and send emails, you can't just use your usual GMail password. Instead, GMail requires you to generate so-called app passwords, one for each application that needs to access your GMail account.

This approach has several advantages:

  • it will basically work, GMail won't block you because it thinks that you're trying to sign in from an unknown device, a weird location or whatever.
  • your main GMail password remains secret, you won't have to write it down in any configuration file or anywhere else.
  • you can change your main GMail password, no breakage, apps will still work as each of them use their own passwords.
  • you can revoke an app password anytime, without impacting anything else.

So app passwords are a good idea, it just requires a bit of work to set it up. Let's see what it takes.

First, 2-Step Verification must be enabled on your GMail account. Visit, and if that's not the case, enable it. You'll need to authorize all of your devices (computer(s), phone(s) and so on), and it can be a bit tedious, granted. But you only have to do it once in a lifetime, and after it's done, you're left with a more secure account, so it's not that bad, right?

Enabling the 2-Step Verification will unlock the feature we need: App passwords. Visit, and under "Signing in to Google", click "App passwords", and generate one. An app password is a 16-character string, something like qwertyuiopqwerty. It's supposed to be used from only one place, i.e. from ONE application that is installed on ONE device. That's why it's common to give it a name of the form application@device, so in our case it could be msmtp@laptop, but really it's free form, choose whatever name suits you, as long as it makes sense to you.

So let's give a name to this app password, write it down for now, and we're done with the GMail config.

Send your first email

Time to get started with msmtp.

First thing first, installation, trivial:

sudo apt install msmtp

Let's try to send an email. At this point, we did not create any configuration file for msmtp yet, so we have to provide every details on the command line.

# Write a dummy email
cat << EOF > message.txt
From: YOUR_LOGIN@gmail.com
To: DEST_ADDRESS
Subject: Cafe Sua Da

Iced-coffee with condensed milk
EOF

# Send it
cat message.txt | msmtp \
    --auth=on --tls=on \
    --host smtp.gmail.com \
    --port 587 \
    --user YOUR_LOGIN \
    --read-envelope-from \
    --read-recipients

# msmtp prompts you for your password:
# this is where the app password goes!

Obviously, in this example you should replace the uppercase words with the real thing, that is, your email login, and real email addresses.

Also, let me insist, you must enter the app password that was generated previously, not your real GMail password.

And it should work already, this email should have been sent and received by now.

So let me explain quickly what happened here.

In the file message.txt, we provided From: (the email address of the person sending the email) and To: (the destination email address). Then we asked msmtp to re-use those values to set the envelope of the email with --read-envelope-from and --read-recipients.

What about the other parameters?

  • --auth=on because we want to authenticate with the server.
  • --tls=on because we want to make sure that the communication with the server is encrypted.
  • --host and --port tell msmtp where to find the server. If you don't use GMail, adjust that accordingly.
  • --user is obviously your GMail username.

For more details, you should refer to the msmtp documentation.

Write a configuration file

So we could send an email, that's cool already.

However the command to do that was a bit long, and we don't want to juggle with all these arguments every time we send an email. So let's write down all of that into a configuration file.

msmtp supports two locations: ~/.msmtprc and ~/.config/msmtp/config, at your preference. In this tutorial we'll use ~/.msmtprc for brevity:

cat << 'EOF' > ~/.msmtprc
defaults
tls on

account gmail
auth on
host smtp.gmail.com
port 587
user YOUR_LOGIN
from YOUR_LOGIN@gmail.com

account default : gmail
EOF

And for a quick explanation:

  • under defaults are the default values for all the following accounts.
  • under account are the settings specific to this account, until another account line is found.
  • finally, the last line defines which account is the default.

All in all it's pretty simple, and it's becoming easier to send an email:

# Write a dummy email. Note that the
# header 'From:' is no longer needed,
# it's already in '~/.msmtprc'.
cat << 'EOF' > message.txt
To: DEST_ADDRESS
Subject: Flat White

The milky way for coffee
EOF

# Send it
cat message.txt | msmtp \
    --account default \
    --read-recipients

Actually, --account default is not needed, as it's the default anyway if you don't provide a --account argument. Furthermore, --read-recipients can be shortened to -t. So we can make it really short now:

msmtp -t < message.txt

At this point, life is good! Except for one thing maybe: we still have to type the password every time we send an email. Surely it must be possible to avoid that annoyance...

Store your password in the system keyring

For this part, we'll make use of the libsecret tool to store the password in the system keyring via the Secret Service API. It means that your desktop environment should implement the Secret Service specification, which is the case for both GNOME and KDE.

Note that GNOME provides Seahorse to have a look at your secrets, KDE has the KDE Wallet. There's also KeePassXC, which I have only heard of but never used. I guess it can be your password manager of choice if you use neither GNOME nor KDE.

For those running an up-to-date Debian unstable, you should have msmtp >= 1.8.11-2, and you're all good to go. For those having an older version than that however, you will have to install the package msmtp-gnome in order to have msmtp built with libsecret support. Note that this package depends on seahorse, hence it pulls in a good part of the GNOME stack when you install it. For those not running GNOME, that's unfortunate. All of this was discussed and fixed in #962689.

Alright! So let's just make sure that the libsecret tools are installed:

sudo apt install libsecret-tools

And now we can store our password in the system keyring with this command:

secret-tool store --label msmtp \
    host smtp.gmail.com \
    service smtp \
    user YOUR_LOGIN

If this looks a bit too magic, and you want something more visual, you can actually fire a GUI like seahorse (for GNOME users), or kwalletmanager5 (for KDE users), and then you will see what passwords are stored in there.

Here's a screenshot of Seahorse, with a msmtp password stored:

seahorse with msmtp password

Let's try to send an email again:

msmtp -t < message.txt

No need for a password anymore, msmtp got it from the system keyring!

For more details on how msmtp handles passwords, and to see what other methods are supported, refer to the extensive documentation.
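As an aside, msmtp can also be told explicitly how to fetch the password itself, via its passwordeval setting, which runs a command and uses its output as the password. A sketch of what that could look like in ~/.msmtprc, assuming the secret was stored with attributes matching the secret-tool store command above:

```
# In ~/.msmtprc, inside the gmail account section: run secret-tool
# at send time and use its output as the password for this account.
passwordeval "secret-tool lookup host smtp.gmail.com service smtp user YOUR_LOGIN"
```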

Use-cases and integration

Let's go over a few use-cases, situations where you might end up sending emails from the command-line, and what configuration is required to make it work with msmtp.

Git Send-Email

Sending emails with git is a common workflow for some projects, like the Linux kernel. How does git send-email actually send emails? From the git-send-email manual page:

the built-in default is to search for sendmail in /usr/sbin, /usr/lib and $PATH if such program is available

It is possible to override this default though:

[...] Alternatively it can specify a full pathname of a sendmail-like program instead; the program must support the -i option.

So in order to use msmtp here, you'd add a snippet like that to your ~/.gitconfig file:

[sendemail]
    smtpserver = /usr/bin/msmtp

For a full guide, you can also refer to

Debian developer tools

Tools like bts or reportbug are also good examples of command-line tools that need to send emails.

From the bts manual page:

Specify the sendmail command [...] Default is /usr/sbin/sendmail.

So if you want bts to send emails with msmtp instead of sendmail, you must use bts --sendmail='/usr/bin/msmtp -t'.

Note that bts also loads settings from the file /etc/devscripts.conf and ~/.devscripts, so you could also set BTS_SENDMAIL_COMMAND='/usr/bin/msmtp -t' in one of those files.

From the reportbug manual page:

Specify an alternate MTA, instead of /usr/sbin/sendmail (the default).

In order to use msmtp here, you'd write reportbug --mta=/usr/bin/msmtp.

Note that reportbug reads it settings from /etc/reportbug.conf and ~/.reportbugrc, so you could as well set mta /usr/bin/msmtp in one of those files.

So who is this sendmail again?

By now, you probably noticed that sendmail seems to be considered the default tool for the job, the "traditional" command that has been around for ages.

Rather than configuring every tool to use something else than sendmail, wouldn't it be simpler to actually replace sendmail by msmtp? Like, create a symlink that points to msmtp, something like ln -sr /usr/bin/msmtp /usr/sbin/sendmail? So that msmtp acts as a drop-in replacement for sendmail, and there's nothing else to configure?

Answer is yes, kind of. Actually, the first msmtp feature that is listed on the homepage is "Sendmail compatible interface (command line options and exit codes)". Meaning that msmtp is a drop-in replacement for sendmail, that seems to be the intent.

However, you should refrain from creating or modifying anything in /usr, as it's the territory of the package manager, apt. Any change in /usr might be overwritten by apt the next time you run an upgrade or install new packages.

In the case of msmtp, there is actually a package named msmtp-mta that will create this symlink for you. So if you really want a definitive replacement for sendmail, there you go:

sudo apt install msmtp-mta

From this point, sendmail is now a symlink /usr/sbin/sendmail → /usr/bin/msmtp, and there's no need to configure git, bts, reportbug or any other tool that would rely on sendmail. Everything should work "out of the box".


I hope that you enjoyed reading this article! If you have any comment, feel free to send me a short email, preferably from your terminal!


Planet DebianEnrico Zini: Doing things /together/

Here are the slides of Ulrike's and my talk Doing things /together/.

Our thoughts about cooperation aspects of doing things together.

Sometimes in Debian we do work together with others, and sometimes we are a number of people who work alone, and happen to all upload their work in the same place.

In times when we have needed to take important decisions together, this distinction has become crucial, and some of us might have found that we were not as good at cooperation as we would have thought.

This talk is intended for everyone who is part of a larger community. We will show concepts and tools that we think could help understand and shape cooperation.

Video of the talk:

The slides have extensive notes: you can use ViewNotes in LibreOffice Impress to see them.

Here are the Inkscape sources for the graphs:

Here are links to resources quoted in the talk:

In the Q&A, pollo asked:

How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

Ulrike wrote a more detailed answer: Code reviews: from nitpicking to cooperation

Planet DebianVincent Bernat: Zero-Touch Provisioning for Cisco IOS

The official documentation for automatically upgrading and configuring a Cisco switch running IOS on first boot, like a Cisco Catalyst 2960-X Series switch, is scarce on details. This note explains how to configure the ISC DHCP Server for this purpose.

When booting for the first time, Cisco IOS sends a DHCP request on all ports:

Dynamic Host Configuration Protocol (Discover)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address:
    Your (client) IP address:
    Next server IP address:
    Relay agent IP address:
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Discover)
    Option: (57) Maximum DHCP Message Size
    Option: (61) Client identifier
        Length: 25
        Type: 0
        Client Identifier: cisco-b414.896c.12c0-Vl1
    Option: (55) Parameter Request List
        Length: 12
        Parameter Request List Item: (1) Subnet Mask
        Parameter Request List Item: (66) TFTP Server Name
        Parameter Request List Item: (6) Domain Name Server
        Parameter Request List Item: (15) Domain Name
        Parameter Request List Item: (44) NetBIOS over TCP/IP Name Server
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (67) Bootfile name
        Parameter Request List Item: (12) Host Name
        Parameter Request List Item: (33) Static Route
        Parameter Request List Item: (150) TFTP Server Address
        Parameter Request List Item: (43) Vendor-Specific Information
        Parameter Request List Item: (125) V-I Vendor-specific Information
    Option: (255) End

It requests a number of options, including the Bootfile name option 67, the TFTP server address option 150 and the Vendor-Identifying Vendor-Specific Information Option 125—or VIVSO. Option 67 provides the name of the configuration file located on the TFTP server identified by option 150. Option 125 includes the name of the file describing the Cisco IOS image to use to upgrade the switch. This file only contains the name of the tarball embedding the image.1

Configuring the ISC DHCP Server to answer with the TFTP server address and the name of the configuration file is simple enough:

filename "";
option tftp-server-address;

However, if you want to also provide the image for upgrade, you have to specify a hexadecimal-encoded string:2

option vivso 00:00:00:09:24:05:22:63:32:39:36:30:2d:6c:61:6e:62:61:73:65:6b:39:2d:74:61:72:2e:31:35:30:2d:32:2e:53:45:31:31:2e:74:78:74;

Having a large hexadecimal-encoded string inside a configuration file is quite unsatisfying. Instead, the ISC DHCP Server allows you to express this information in a more readable way using the option space statement:

# Create option space for Cisco and encapsulate it in VIVSO/vendor space
option space cisco code width 1 length width 1;
option code 5 = text;
option code 9 = encapsulate cisco;

# Image description for Cisco IOS ZTP
option = "c2960-lanbasek9-tar.150-2.SE11.txt";

# Workaround for VIVSO option 125 not being sent
option vendor.iana code 0 = string;
option vendor.iana = 01:01:01;

Without the workaround mentioned in the last block, the ISC DHCP Server would not send back option 125. With such a configuration, it returns the following answer, including a harmless additional enterprise 0 encapsulated into option 125:

Dynamic Host Configuration Protocol (Offer)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address:
    Your (client) IP address:
    Next server IP address:
    Relay agent IP address:
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name:
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Offer)
    Option: (54) DHCP Server Identifier (
    Option: (51) IP Address Lease Time
    Option: (1) Subnet Mask (
    Option: (6) Domain Name Server
    Option: (3) Router
    Option: (150) TFTP Server Address
        Length: 4
        TFTP Server Address:
    Option: (125) V-I Vendor-specific Information
        Length: 49
        Enterprise: Reserved (0)
        Enterprise: ciscoSystems (9)
            Length: 36
            Option 125 Suboption: 5
                Length: 34
                Data: 63323936302d6c616e626173656b392d7461722e3135302d…
    Option: (255) End

  1. The reason for this indirection still puzzles me. I suppose it could be because updating the image name directly in option 125 is quite a hassle. ↩︎

  2. It contains the following information:

    • 0x00000009: Cisco’s Enterprise Number,
    • 0x24: length of the enclosed data,
    • 0x05: Cisco’s auto-update sub-option,
    • 0x22: length of the sub-option data, and
    • filename of the image description (c2960-lanbasek9-tar.150-2.SE11.txt).
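The byte layout in footnote 2 can be double-checked with a short Python sketch that rebuilds the option 125 payload from the filename:

```python
# Rebuild the hex-encoded VIVSO (option 125) payload described above:
# enterprise number 9 (Cisco), length of the enclosed data, auto-update
# sub-option 5, sub-option length, then the image description filename.
filename = "c2960-lanbasek9-tar.150-2.SE11.txt"

suboption = bytes([5, len(filename)]) + filename.encode("ascii")
payload = (9).to_bytes(4, "big") + bytes([len(suboption)]) + suboption

print(":".join(f"{b:02x}" for b in payload))
# -> 00:00:00:09:24:05:22:63:32:39:36:30:2d:...
```

The printed string matches the hexadecimal value used in the dhcpd.conf snippet above, which is a handy way to adapt it for a different image name.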


Planet DebianPhilipp Kern: Self-service buildd givebacks now use Salsa auth

As client certificates are on the way out and Debian's SSO solution is effectively not maintained any longer, I switched self-service buildd givebacks over to Salsa authentication. It lives again at For authorization you still need to be in the "debian" group for now, i.e. be a regular Debian member.

For convenience the package status web interface now features an additional column "Actions" with generated "giveback" links.

Please remember to file bugs if you give builds back because of flakiness of the package rather than the infrastructure and resist the temptation to use this excessively to let your package migrate. We do not want to end up with packages that require multiple givebacks to actually build in stable, as that would hold up both security and stable updates needlessly and complicate development.


Planet DebianNorbert Preining: Converting html to mp4

Such an obvious problem: convert a piece of html/js/css, often with animations, to a video (mp4 or similar). We were faced with this problem for the TUG 2020 online conference. Searching the internet mostly turned up web services, some of them quite expensive. In the end (below I will give a short history) it turned out to be rather simple.

The key is to use timesnap, a tool to take screenshots from web pages. It is actively maintained, and internally uses puppeteer, which in turn uses Google Chrome browser headless. This also means that rendering quality is very high.

So having an html file available, with all the necessary assets, either online or local, one simply creates enough single screenshots per second so that they can be assembled later on into a video with ffmpeg.

In our case, we wanted our leaders to last 10 seconds before the actual presentation video starts. I decided to render at 30 fps, which left me with the simple invocation:

timesnap Leader.html --viewport=1920,1080 --fps=30 --duration=10 --output-pattern="leader-%03d.png"

followed by conversion of the various png images to an mp4:

ffmpeg -r 30 -f image2 -s 1920x1080 -i leader-%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p leader.mp4

The -r value is the fps, so it needs to agree with the --fps above; the --viewport and -s values should also agree. -crf sets the video quality, and -pix_fmt the pixel format.

With that very simple and quick invocation a nice leader video was ready!
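Since -r must match --fps (and -s must match --viewport), a tiny hypothetical wrapper can derive both commands from shared variables so they can never drift apart. This is a dry-run sketch: it only prints the two commands rather than running them.

```shell
# Derive both invocations from shared settings so the frame rate and
# frame size always agree (dry run -- the commands are only printed).
FPS=30
WIDTH=1920
HEIGHT=1080
TIMESNAP_CMD="timesnap Leader.html --viewport=$WIDTH,$HEIGHT --fps=$FPS --duration=10 --output-pattern=leader-%03d.png"
FFMPEG_CMD="ffmpeg -r $FPS -f image2 -s ${WIDTH}x${HEIGHT} -i leader-%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p leader.mp4"
echo "$TIMESNAP_CMD"
echo "$FFMPEG_CMD"
```

Dropping the echo quotes and running `$TIMESNAP_CMD` and then `$FFMPEG_CMD` (with timesnap and ffmpeg installed) reproduces the two invocations shown above.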


It was actually more complicated than usual. For similar problems, it usually takes me about 5 minutes of googling and a bit of scripting, but this time it was a long way. Simply searching for “convert html to mp4” turns up little but web services, often paid for. At some point I came up with the idea of using Electron, which led me to Electron Recorder; it looked promising, but didn't work.

A bit more searching led me to PhantomJS, which is no longer developed, but there was an explanation of how to dump frames using phantomjs and merge them using ffmpeg, very similar to the above. Unfortunately, phantomjs's rendering of the html page was broken, and thus not usable.

Thus I ventured off in search of alternatives to PhantomJS, which brought me to puppeteer, and from there it wasn't long before I was pointed at timesnap.

It still surprises me that such a basic task is not well documented, so hopefully this page helps some users.

Planet DebianJelmer Vernooij: Debian Janitor: > 60,000 Lintian Issues Automatically Fixed

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Scheduling Lintian Fixes

To determine which packages to process, the Janitor looks at the import of lintian output across the archive that is available in UDD [1]. It prioritizes the packages with the most numerous and most severe issues that it has fixers for.

Once a package is selected, it will clone the packaging repository and run lintian-brush on it. Lintian-brush provides a framework for applying a set of “fixers” to a package. It runs each fixer in a pristine version of the repository and handles most of the heavy lifting.

The Inner Workings of a Fixer

Each fixer is just an executable which gets run in a clean checkout of the package, and can make changes there. Most of the fixers are written in Python or shell, but they can be in any language.

The contract for fixers is pretty simple:

  • If the fixer exits with a non-zero status, the changes are reverted and the fixer is considered to have failed
  • If it exits with zero and has made changes, it should write a summary of those changes to standard out

If a fixer is uncertain about the changes it has made, it should report so on standard output using a pseudo-header. By default, lintian-brush will discard any changes with uncertainty, but if you are running it locally you can still apply them by specifying --uncertain.

The summary message on standard out will be used for the commit message and (possibly) the changelog message, if the package doesn’t use gbp dch.
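As a minimal illustration of this contract, here is a hypothetical shell fixer for the trailing-whitespace tag (one of the tags listed below), sketched as a self-contained demo: a real fixer runs in an actual package checkout, whereas this sketch fabricates one first.

```shell
# Hypothetical fixer sketch for the trailing-whitespace tag.
# A real fixer operates on the current checkout; here we fabricate a
# tiny package tree under demo/ so the sketch is self-contained.
set -e
mkdir -p demo/debian
printf 'build:   \n\techo done\n' > demo/debian/rules   # note trailing spaces
if grep -q ' $' demo/debian/rules; then
    sed -i 's/ *$//' demo/debian/rules
    # Per the contract: the summary on stdout becomes the commit message;
    # a non-zero exit would mean the fixer failed and changes are reverted.
    echo "Remove trailing whitespace from debian/rules."
fi
```

If nothing needed fixing, the sketch would print nothing and exit zero, which lintian-brush interprets as "no changes".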

Example Fixer

Let’s look at an example. The package priority “extra” has been deprecated since Debian Policy 4.0.1 (released August 2017) – see Policy 2.5 "Priorities". Instead, most packages should use the “optional” priority.

Lintian will warn when a package uses the deprecated “extra” value for the “Priority” - the associated tag is priority-extra-is-replaced-by-priority-optional. Lintian-brush has a fixer script that can automatically replace “extra” with “optional”.

On systems that have lintian-brush installed, the source for the fixer lives in /usr/share/lintian-brush/fixers/, but here is a copy of it for reference:


from debmutate.control import ControlEditor
from lintian_brush.fixer import report_result, fixed_lintian_tag

with ControlEditor() as updater:
    for para in updater.paragraphs:
        if para.get("Priority") == "extra":
            para["Priority"] = "optional"
            fixed_lintian_tag(
                para, 'priority-extra-is-replaced-by-priority-optional')

report_result("Change priority extra to priority optional.")

This fixer is written in Python and uses the debmutate library to easily modify control files while preserving formatting — or back out if it is not possible to preserve formatting.

All the current fixers come with tests, e.g. for this particular fixer the tests can be found here:

For more details on writing new fixers, see the README for lintian-brush.

For more details on debugging them, see the manual page.

Successes by fixer

Here is a list of the fixers currently available, with the number of successful merges/pushes per fixer:

Lintian Tag Previously merged/pushed Ready but not yet merged/pushed
uses-debhelper-compat-file 4906 4161
upstream-metadata-file-is-missing 4281 3841
package-uses-old-debhelper-compat-version 4256 3617
upstream-metadata-missing-bug-tracking 2438 2995
out-of-date-standards-version 2062 2936
upstream-metadata-missing-repository 1936 2987
trailing-whitespace 1720 2295
insecure-copyright-format-uri 1791 1093
package-uses-deprecated-debhelper-compat-version 1391 1287
vcs-obsolete-in-debian-infrastructure 872 782
homepage-field-uses-insecure-uri 527 1111
vcs-field-not-canonical 850 655
debian-changelog-has-wrong-day-of-week 224 376
debian-watch-uses-insecure-uri 314 242
useless-autoreconf-build-depends 112 428
priority-extra-is-replaced-by-priority-optional 315 194
debian-rules-contains-unnecessary-get-orig-source-target 35 428
tab-in-license-text 125 320
debian-changelog-line-too-long 186 190
debian-rules-sets-dpkg-architecture-variable 69 166
debian-rules-uses-unnecessary-dh-argument 42 182
package-lacks-versioned-build-depends-on-debhelper 125 95
unversioned-copyright-format-uri 43 136
package-needs-versioned-debhelper-build-depends 127 50
binary-control-field-duplicates-source 34 134
renamed-tag 73 69
vcs-field-uses-insecure-uri 14 109
uses-deprecated-adttmp 13 91
debug-symbol-migration-possibly-complete 12 88
copyright-refers-to-symlink-license 51 48
debian-control-has-unusual-field-spacing 33 66
old-source-override-location 32 62
out-of-date-copyright-format 20 62
public-upstream-key-not-minimal 43 30
older-source-format 17 54
custom-compression-in-debian-source-options 12 57
copyright-refers-to-versionless-license-file 29 39
tab-in-licence-text 33 31
global-files-wildcard-not-first-paragraph-in-dep5-copyright 28 33
out-of-date-copyright-format-uri 9 50
field-name-typo-dep5-copyright 29 29
copyright-does-not-refer-to-common-license-file 13 42
debhelper-but-no-misc-depends 9 45
debian-watch-file-is-missing 11 41
debian-control-has-obsolete-dbg-package 8 40
possible-missing-colon-in-closes 31 13
unnecessary-testsuite-autopkgtest-field 32 9
missing-debian-source-format 7 33
debhelper-tools-from-autotools-dev-are-deprecated 9 29
vcs-field-mismatch 8 29
debian-changelog-file-contains-obsolete-user-emacs-setting 33 0
patch-file-present-but-not-mentioned-in-series 24 9
copyright-refers-to-versionless-license-file 22 9
debian-control-has-empty-field 25 6
missing-build-dependency-for-dh-addon 10 20
obsolete-field-in-dep5-copyright 15 13
xs-testsuite-field-in-debian-control 20 7
ancient-python-version-field 13 12
unnecessary-team-upload 19 5
misspelled-closes-bug 6 16
field-name-typo-in-dep5-copyright 1 20
transitional-package-not-oldlibs-optional 4 17
maintainer-script-without-set-e 9 11
dh-clean-k-is-deprecated 4 14
no-dh-sequencer 14 4
missing-vcs-browser-field 5 12
space-in-std-shortname-in-dep5-copyright 6 10
xc-package-type-in-debian-control 4 11
debian-rules-missing-recommended-target 4 10
desktop-entry-contains-encoding-key 1 13
build-depends-on-obsolete-package 4 9
license-file-listed-in-debian-copyright 1 12
missing-built-using-field-for-golang-package 9 4
unused-license-paragraph-in-dep5-copyright 4 7
missing-build-dependency-for-dh_command 6 4
comma-separated-files-in-dep5-copyright 3 6
systemd-service-file-refers-to-var-run 4 5
copyright-not-using-common-license-for-apache2 3 5
debian-tests-control-autodep8-is-obsolete 2 6
dh-quilt-addon-but-quilt-source-format 2 6
no-homepage-field 3 5
font-packge-not-multi-arch-foreign 1 6
homepage-in-binary-package 1 4
vcs-field-bitrotted 1 3
built-using-field-on-arch-all-package 2 1
copyright-should-refer-to-common-license-file-for-apache-2 1 2
debian-pyversions-is-obsolete 3 0
debian-watch-file-uses-deprecated-githubredir 1 1
executable-desktop-file 1 1
skip-systemd-native-flag-missing-pre-depends 1 1
vcs-field-uses-not-recommended-uri-format 1 1
init.d-script-needs-depends-on-lsb-base 1 0
maintainer-also-in-uploaders 1 0
public-upstream-keys-in-multiple-locations 1 0
wrong-debian-qa-group-name 1 0
Total 29656 32209


[1] Temporarily unavailable due to Debian bug #960156 – but the Janitor is relying on historical data

For more information about the Janitor's lintian-fixes efforts, see the landing page


CryptogramFriday Squid Blogging: Rhode Island's State Appetizer Is Calamari

Rhode Island has an official state appetizer, and it's calamari. Who knew?

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityFBI, CISA Echo Warnings on ‘Vishing’ Threat

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on Thursday issued a joint alert to warn about the growing threat from voice phishing or “vishing” attacks targeting companies. The advisory came less than 24 hours after KrebsOnSecurity published an in-depth look at a crime group offering a service that people can hire to steal VPN credentials and other sensitive data from employees working remotely during the Coronavirus pandemic.

“The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification,” the alert reads. “In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting — with the end goal of monetizing the access.”

As noted in Wednesday’s story, the agencies said the phishing sites set up by the attackers tend to include hyphens, the target company’s name, and certain words — such as “support,” “ticket,” and “employee.” The perpetrators focus on social engineering new hires at the targeted company, and impersonate staff at the target company’s IT helpdesk.

The joint FBI/CISA alert (PDF) says the vishing gang also compiles dossiers on employees at the specific companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research. From the alert:

“Actors first began using unattributed Voice over Internet Protocol (VoIP) numbers to call targeted employees on their personal cellphones, and later began incorporating spoofed numbers of other offices and employees in the victim company. The actors used social engineering techniques and, in some cases, posed as members of the victim company’s IT help desk, using their knowledge of the employee’s personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee.”

“The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA [2-factor authentication] or OTP [one-time passwords]. The actor logged the information provided by the employee and used it in real-time to gain access to corporate tools using the employee’s account.”

The alert notes that in some cases the unsuspecting employees approved the 2FA or OTP prompt, either accidentally or believing it was the result of the earlier access granted to the help desk impersonator. In other cases, the attackers were able to intercept the one-time codes by targeting the employee with SIM swapping, which involves social engineering people at mobile phone companies into giving them control of the target’s phone number.

The agencies said crooks use the vished VPN credentials to mine the victim company databases for their customers’ personal information to leverage in other attacks.

“The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed,” the alert reads. “The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cashout scheme.”

The advisory includes a number of suggestions that companies can implement to help mitigate the threat from these vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network where a second factor is used to authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.

CryptogramYet Another Biometric: Bioacoustic Signatures

Sound waves through the body are unique enough to be a biometric:

"Modeling allowed us to infer what structures or material features of the human body actually differentiated people," explains Joo Yong Sim, one of the ETRI researchers who conducted the study. "For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum."


Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

"We were very surprised that people's bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly," says Sim. "These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day."

It's not great. A 97% accuracy is worse than fingerprints and iris scans, and while they were able to reproduce the biometric in a month it almost certainly changes as we age, gain and lose weight, and so on. Still, interesting.

Worse Than FailureError'd: Just a Suggestion

"Sure thing Google, I guess I'll change my language to... let's see...Ah, how about English?" writes Peter G.


Marcus K. wrote, "Breaking news: tt tttt tt,ttt!"


Tim Y. writes, "Nothing makes my day more than someone accidentially leaving testing mode enabled (and yes, the test number went through!)"


"I guess even thinning brows and psoriasis can turn political these days," Lawrence W. wrote.


Strahd I. writes, "It was evident at the time that King Georges VI should have gone asked for a V12 instead."


"Well, gee, ZDNet, why do you think I enabled this setting in the first place?" Jeroen V. writes.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianReproducible Builds (diffoscope): diffoscope 157 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 157. This version includes the following changes:

[ Chris Lamb ]

* Try ostensibly "data" files named .pgp against pgpdump to determine whether
  they are PGP files. (Closes: reproducible-builds/diffoscope#211)
* Don't raise an exception when we encounter XML files with "<!ENTITY>"
  declarations inside the DTD, or when a DTD or entity references an external
  resource. (Closes: reproducible-builds/diffoscope#212)
* Temporarily drop gnumeric from Build-Depends as it has been removed from
  testing due to Python 2.x deprecation. (Closes: #968742)
* Codebase changes:
  - Add support for multiple file extension matching; we previously supported
    only a single extension to match.
  - Move generation of debian/tests/control.tmp to an external script.
  - Move to our assert_diff helper entirely in the PGP tests.
  - Drop some unnecessary control flow, unnecessary dictionary comprehensions
    and some unused imports found via pylint.
* Include the filename in the "... not identified by any comparator"
  logging message.

You find out more by visiting the project homepage.


Planet DebianBits from Debian: Lenovo, Infomaniak, Google and Amazon Web Services (AWS), Platinum Sponsors of DebConf20

We are very pleased to announce that Lenovo, Infomaniak, Google and Amazon Web Services (AWS), have committed to supporting DebConf20 as Platinum sponsors.


As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.


Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).


Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.


Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

With these commitments as Platinum Sponsors, Lenovo, Infomaniak, Google and Amazon Web Services are contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much for your support of DebConf20!

Participating in DebConf20 online

The 21st Debian Conference is being held Online, due to COVID-19, from August 23rd to 29th, 2020. There are 7 days of activities, running from 10:00 to 01:00 UTC. Visit the DebConf20 website at to learn about the complete schedule, watch the live streaming and join the different communication channels for participating in the conference.

Planet DebianRitesh Raj Sarraf: LUKS Headless Laptop

As we grow old, so do our computing machines. And just as we don’t decommission ourselves, neither should we decommission our machines. They should be semi-retired, delegating major tasks to newer machines while they can still serve some less demanding work: file servers, UPnP servers, et cetera.

It is common on Debian installer based derivatives, and elsewhere too, to use block encryption on Linux. With machines from this decade, I think we’ve always had CPU extensions for encryption.

So, as would be the usual case, all my laptops are block encrypted. But as they reach the phase of their life where they retire to serve as a headless box, it becomes cumbersome to keep feeding them a password and to handle all the logistics involved. As such, I wanted to get rid of the password prompt.

Then, there’s also the case of bad/faulty hardware, much of which can temporarily regain its functionality when reset, which usually means rebooting the machine. I still recollect the words of my Linux guru, Dhiren Raj Bhandari, that many unexplainable errors can be resolved by just rebooting the machine. That was more than 20 years ago, in the prime era of Microsoft Windows, and the context back then was quite different, but yes, some bits of that saying still apply today.

So I wanted my laptop, which had LUKS set up on 2 disks, to go password-less. I stumbled across a slightly dated article where the author achieved similar results with a keyscript, so the thing was doable.

To my delight, Debian's cryptsetup has the best setup and documentation in place to do this by just adding key files:

rrs@lenovo:~$ dd if=/dev/random of=sda7.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00540209 s, 94.8 kB/s
19:19 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ dd if=/dev/random of=sdb1.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00536747 s, 95.4 kB/s
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
[sudo] password for rrs: 
Enter any existing passphrase: 
No key available with this passphrase.
19:20 ♒♒♒    ☹ 😟=> 2  

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
Enter any existing passphrase: 
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sdb1 sdb1.key 
Enter any existing passphrase: 
19:21 ♒♒♒   ☺ 😄    

and nice integration in crypttab to ensure your keys propagate into the initramfs:

rrs@lenovo:~$ cat /etc/cryptsetup-initramfs/conf-hook 
# Configuration file for the cryptroot initramfs hook.

# The value of this variable is interpreted as a shell pattern.
# Matching key files from the crypttab(5) are included in the initramfs
# image.  The associated devices can then be unlocked without manual
# intervention.  (For instance if /etc/crypttab lists two key files
# /etc/keys/{root,swap}.key, you can set KEYFILE_PATTERN="/etc/keys/*.key"
# to add them to the initrd.)
# If KEYFILE_PATTERN if null or unset (default) then no key file is
# copied to the initramfs image.
# Note that the glob(7) is not expanded for crypttab(5) entries with a
# 'keyscript=' option.  In that case, the field is not treated as a file
# name but given as argument to the keyscript.
# WARNING: If the initramfs image is to include private key material,
# you'll want to create it with a restrictive umask in order to keep
# non-privileged users at bay.  For instance, set UMASK=0077 in
# /etc/initramfs-tools/initramfs.conf

19:44 ♒♒♒   ☺ 😄    
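For completeness, the remaining wiring is just a pair of crypttab entries pointing at the key files, plus the KEYFILE_PATTERN knob described in the hook above. A sketch, assuming the key files are moved to /etc/keys/ (the UUIDs and mapper names here are placeholders, not my actual values):

```
# /etc/crypttab -- hypothetical entries; substitute your own UUIDs
# <target>    <source>           <key file>           <options>
sda7_crypt    UUID=<sda7-uuid>   /etc/keys/sda7.key   luks
sdb1_crypt    UUID=<sdb1-uuid>   /etc/keys/sdb1.key   luks

# /etc/cryptsetup-initramfs/conf-hook
KEYFILE_PATTERN="/etc/keys/*.key"

# /etc/initramfs-tools/initramfs.conf -- keep the key material
# unreadable to non-root users inside the initrd
UMASK=0077
```

followed by update-initramfs -u to rebuild the image with the keys inside.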

The whole thing took me around 20-25 minutes, including drafting this post. From a retired head and a password prompt to headless and password-less. The beauty of Debian and FOSS.

LongNowPeople slept on comfy grass beds 200,000 years ago

The oldest beds known to science now date back nearly a quarter of a million years: traces of silicate from woven grasses found in the back of Border Cave (in South Africa, which has a nearly continuous record of occupation dating back to 200,000 BCE).

Ars Technica reports:

Most of the artifacts that survive from more than a few thousand years ago are made of stone and bone; even wooden tools are rare. That means we tend to think of the Paleolithic in terms of hard, sharp stone tools and the bones of butchered animals. Through that lens, life looks very harsh—perhaps even harsher than it really was. Most of the human experience is missing from the archaeological record, including creature comforts like soft, clean beds.

Given recent work on the epidemic of modern orthodontic issues caused in part by sleeping with “bad oral posture” due to too-soft bedding, it seems like the bed may be another frontier for paleo re-thinking of high-tech life. (See also the controversies over barefoot running, prehistoric diets, and countless other forms of atavism emerging from our future-shocked society.) When technological innovation shuffles the “pace layers” of human existence, changing the built environment faster than bodies can adapt, sometimes comfort’s micro-scale horizon undermines the longer, slower beat of health.

Another plus to making beds of grass is their disposability and integration with the rest of ancient life:

Besides being much softer than the cave floor, these ancient beds were probably surprisingly clean. Burning dirty bedding would have helped cut down on problems with bedbugs, lice, and fleas, not to mention unpleasant smells. [Paleoanthropologist Lyn] Wadley and her colleagues suggest that people at Border Cave may even have raked some extra ashes in from nearby hearths ‘to create a clean, odor-controlled base for bedding.’

And charcoal found in the bedding layers includes bits of an aromatic camphor bush; some modern African cultures use another closely related camphor bush in their plant bedding as an insect repellent. The ash may have helped, too; Wadley and her colleagues note that ‘several ethnographies report that ash repels crawling insects, which cannot easily move through the fine powder because it blocks their breathing and biting apparatus and eventually leaves them dehydrated.’

Finding beds as old as Homo sapiens itself revives the (not quite as old) debate about what makes us human. Defining our humanity as “artists” or “ritualists” seems to weave together modern definitions of technology and craft, ceremony and expression, just as early people wove together sedges for a place to sleep. At least, they are the evidence of a much more holistic, integrated way of life — one that found every possible synergy between day and night, cooking and sleeping:

Imagine that you’ve just burned your old, stale bedding and laid down a fresh layer of grass sheaves. They’re still springy and soft, and the ash beneath is still warm. You curl up and breathe in the tingly scent of camphor, reassured that the mosquitoes will let you sleep in peace. Nearby, a hearth fire crackles and pops, and you stretch your feet toward it to warm your toes. You nudge aside a sharp flake of flint from the blade you were making earlier in the day, then drift off to sleep.

CryptogramCopying a Key by Listening to It in Action

Researchers are using recordings of keys being used in locks to create copies.

Once they have a key-insertion audio file, SpiKey's inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock's pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key's inter-ridge distances and what locksmiths call the "bitting depth" of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.

The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. "Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door," says Ramesh.
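The core timing intuition can be sketched in a few lines (a simplification with hypothetical numbers, not SpiKey's actual inference code):

```python
# Simplified illustration of the timing idea: if the key moves at a
# roughly constant speed, the time between successive clicks is
# proportional to the distance between adjacent key ridges.
def inter_ridge_distances(click_times_s, speed_mm_per_s):
    intervals = [b - a for a, b in zip(click_times_s, click_times_s[1:])]
    return [dt * speed_mm_per_s for dt in intervals]

# Hypothetical click timestamps (seconds), assuming 50 mm/s insertion speed:
distances = inter_ridge_distances([0.00, 0.02, 0.05, 0.07], 50.0)
```

Candidate keys then come from matching those distances and the inferred bitting depths against the publicly known key profile, which is how the search space collapses from hundreds of thousands of keys to a handful.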

Worse Than FailureCodeSOD: A Backwards For

Aurelia is working on a project where some of the code comes from a client. In this case, it appears that the client has very good reasons for hiring an outside vendor to actually build the application.

Imagine you have some Java code which needs to take an array of integers and iterate across them in reverse, to concatenate a string. Oh, and you need to add one to each item as you do this.

You might be thinking about some combination of a map/reverse and a String.join operation, or maybe a for loop with an i-- style decrementer.

I’m almost certain you aren’t thinking about this.

public String getResultString(int numResults) {
	StringBuffer sb = null;
	for (int result[] = getResults(numResults); numResults-- > 0;) {
		int i = result[numResults];
		if( i == 0){
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
			sb.append(j);
		} else {
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
			sb.append(j);
		}
	}
	return sb.toString();
}

I really, really want you to look at that for loop: for (int result[] = getResults(numResults); numResults-- > 0;)

Just look at that. It’s… not wrong. It’s not… bad. It’s just written by an alien wearing a human skin suit. Our initializer actually populates the array we’re going to iterate across. Our bounds check also contains the decrement operation. We don’t have a decrement clause.

Then, if i == 0 we’ll do the exact same thing as if i isn’t 0, since our if and else branches contain the same code.

Increment i, and store the result in j. Why we don’t use the ++i or some other variation to be in-line with our weird for loop, I don’t know. Maybe they were done showing off.

Then, if our StringBuffer is null, we create one, otherwise we append a ",". This is one solution to the concatenator’s comma problem. Again, it’s not wrong, it’s just… unusual.

But this brings us to the thing which is actually, objectively, honestly bad. The indenting.

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);

Look at that last line. Does that make you angry? Look more closely. Look for the curly brackets. Oh, you don’t see any? Very briefly, when I was looking at this code, I thought, “Wait, does this discard the first item?” No, it just eschews brackets and then indents wrong to make sure we’re nice and confused when we look at the code.

It should read:

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
			sb.append(j);
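For contrast, here is roughly how the conventional i-- version might look (a sketch; the class wrapper, array parameter, and use of StringJoiner are mine, not the original code's):

```java
import java.util.StringJoiner;

public class ResultStrings {
    // Iterate the results in reverse, add one to each element,
    // and join the values with commas.
    static String getResultString(int[] results) {
        StringJoiner sj = new StringJoiner(",");
        for (int i = results.length - 1; i >= 0; i--) {
            sj.add(Integer.toString(results[i] + 1));
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        System.out.println(getResultString(new int[]{0, 1, 2})); // prints 3,2,1
    }
}
```

Same behavior, no alien skin suit required, and StringJoiner handles the comma problem for free.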
[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


CryptogramUsing Disinformation to Cause a Blackout

Interesting paper: "How weaponizing disinformation can bring down a city's power grid":

Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.

I'm not sure the attack is practical, but it's an interesting idea.

Krebs on SecurityVoice Phishers Targeting Corporate VPNs

The COVID-19 epidemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.

According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.

And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.

“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”


A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.

The employee phishing page bofaticket[.]com. Image:

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.

Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.

“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”


The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.

Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.

Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.

But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.

A phishing page (helpdesk-att[.]com) targeting AT&T employees. Image:

Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.

And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.

Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.

“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.


All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.

“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.

More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.

This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.

“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”

A phishing page (github-ticket[.]com) aimed at siphoning credentials for a target organization’s access to the software development platform Github. Image:


Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.

Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.

Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.

“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”

A phishing page (vzw-employee[.]com) targeting employees of Verizon. Image: DomainTools


While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.

“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”

For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.

Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.

“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”


Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.

Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.

A U2F device made by Yubikey, plugged into the USB port on a computer.

One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
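The origin binding can be illustrated with a toy model (purely illustrative; the real FIDO/U2F protocol uses challenge-response public-key cryptography, not the plain tuples below):

```python
# Toy model of U2F origin binding -- NOT the real protocol. The point is
# only that the origin the client actually saw is bound into what the
# server later verifies, so a phished credential fails.
def sign_assertion(origin_seen, challenge):
    # Stand-in for the security key's signature over (origin, challenge).
    return ("sig", origin_seen, challenge)

def verify_assertion(assertion, expected_origin, challenge):
    tag, origin, ch = assertion
    return tag == "sig" and origin == expected_origin and ch == challenge

legit = sign_assertion("", "nonce-123")
phished = sign_assertion("vpn-ticket-example[.]com", "nonce-123")

assert verify_assertion(legit, "", "nonce-123")        # real site: OK
assert not verify_assertion(phished, "", "nonce-123")  # impostor: fails
```

A password or one-time code carries no notion of where it was typed; the signed origin does, which is why this factor survives both phone and web phishing.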

In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.

Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer Mac OS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].

Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.

“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”

Kevin RuddWashington Post: China’s thirst for coal is economically shortsighted and environmentally reckless

First published in the Washington Post on 19 August 2020

Carbon emissions have fallen in recent months as economies have been shut down and put into hibernation. But whether the world will emerge from the pandemic in a stronger or weaker position to tackle the climate crisis rests overwhelmingly on the decisions that China will take.

China, as part of its plans to restart its economy, has already approved the construction of new coal-fired power plants accounting for some 17 gigawatts of capacity this year, sending a collective shiver down the spines of environmentalists. This is more coal plants than it approved in the previous two years combined, and the total capacity now under development in China is larger than the remaining fleet operating in the United States.

At the same time, China has touted investments in so-called “new infrastructure,” such as electric-vehicle charging stations and rail upgrades, as integral to its economic recovery. But frankly, none of this will matter much if these new coal-fired power plants are built.

To be fair, the decisions to proceed with these coal projects largely rest in the hands of China’s provincial and regional governments and not in Beijing. However, this does not mean the central government has no power, nor that it won’t wear the reputational damage if the plants become a reality.

First, it is hard to see how China could meet one of its own commitments under the 2015 Paris climate agreement to peak its emissions by 2030 if these new plants are built. The pledge relies on China retiring much of its existing and relatively young coal fleet, which has been operational only for an average of 14 years. Bringing yet more coal capacity online now is therefore either economically shortsighted or environmentally reckless.

It would also put at risk the world’s collective long-term goal under the Paris agreement to keep temperature increases within 1.5 degrees Celsius, which the Intergovernmental Panel on Climate Change has said requires halving of global emissions between 2018 and 2030 and reaching net-zero emissions by the middle of the century.

It also is completely contrary to China’s own domestic interests, including President Xi Jinping’s desire to grow the economy, improve energy security and clean up the environment (or, as he says, to “make our skies blue again”).

But perhaps most importantly for the geopolitical hard heads in Beijing, it also risks unravelling the goodwill China has built up in recent years for staying the course on the fight against climate change in the face of the Trump administration’s retreat. This will especially be the case in the eyes of many vulnerable developing countries, including the world’s lowest-lying island nations that could face even greater risks if these plants are built.

For his part, former vice president Joe Biden has already got China’s thirst for coal in his sights. He speaks of the need for the United States to focus on how China is “exporting more dirty coal” through its support of carbon-intensive projects in its Belt and Road Initiative. Studies have found a Chinese role in more than 100 gigawatts of additional coal plants under construction across Asia and Africa, and even in Eastern Europe. It is hard to see how the first few months of a Biden administration would not make this an increasingly uncomfortable reality for Beijing at precisely the time the world would be welcoming with open arms the return of U.S. climate leadership.

As a new paper published by the Asia Society Policy Institute highlights, China’s decisions on coal will also be among the most closely watched as it finalizes its next five-year plan, due out in 2021, as well as its mid-century decarbonization strategy and enhancements to its Paris targets ahead of the 2021 United Nations Climate Change Conference in Glasgow, Scotland. And although China may also have an enormously positive story to tell — continuing to lead the world in the deployment of renewable energy in 2019 — it is China’s decisions on coal that will loom large.

(Photo: Gwendolyn Stansbury/IFPRI)

The post Washington Post: China’s thirst for coal is economically shortsighted and environmentally reckless appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Shallow Perspective

There are times where someone writes code which does nothing. There are times where someone writes code which does something, but nothing useful. This is one of those times.

Ray H was going through some JS code, and found this “useful” method.

mapRowData (data) {
  if (isNullOrUndefined(data)) return null;
  return => x);
}

Technically, this isn’t a “do nothing” method. It converts undefined values to null, and it returns a shallow copy of an array, assuming that you passed in an array.

The fact that it can return a null value or an array is one of those little nuisances that we accept, but probably should code around (without more context, it’s probably fine if this returned an empty array on bad inputs, for example).

But Ray adds: “Where this is used, it could just use the array data directly and get the same result.” Yes, it’s used in a handful of places, and in each of those places, there’s no functional difference between the original array and the shallow copy.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Rondam RamblingsHere we go again

Here is a snapshot of the current map of temporary flight restrictions (TFRs) issued by the FAA across the western U.S.:Almost every one of those red shapes is a major fire burning.  Compare that to a similar snapshot taken two years ago at about this same time of year.The regularity of these extreme heat and fire events is starting to get really scary.


Planet DebianLisandro Damián Nicanor Pérez Meyer: Stepping down as Qt 6 maintainers

After quite some time maintaining Qt in Debian, both Dmitry Shachnev and I decided not to maintain Qt 6 when it's published (expected in December 2020). We will do our best to keep the Qt 5 codebase up and running.

We **love** Qt, but it's a huge codebase and requires time and build power, both things that we are currently lacking, so we decided it's time for us to step down and pass the torch. And a new major version seems the right point to do that.

We will be happy to review and/or sponsor other people's work or even occasionally do uploads, but we can't promise to do it regularly.

Some things we think potential Qt 6 maintainers should be familiar with are, of course, C++ packaging (specially symbols files) and CMake, as Qt 6 will be built with it.

We also encourage prospective maintainers to remove the source's -everywhere-src suffixes and just keep the base names as source package names: qtbase6, qtdeclarative6, etc.

It has been an interesting ride all these years, we really hope you enjoyed using Qt.

Thanks for everything,

Dmitry and Lisandro.

Note 20200818 12:12 ARST: I was asked if the move has anything to do with code quality or licensing. The answer is a huge no, Qt is a **great** project which we love. As stated before it's mostly about lack of free time to properly maintain it.


Planet DebianMolly de Blanc: Updates

We are currently working on a second draft of the Declaration of Digital Autonomy. We’re also working on some next steps, which I hadn’t really thought about existing before. Videos from GUADEC and HOPE are now online. We’ll be speaking at DebConf on August 29th.

I’ll be starting school soon, so I expect a lot of the content of what I’ll be writing (as well as the style) to shift a bit to reflect what I’m studying and how I’m expected to write for my program.

Kevin RuddMonocle 24 Radio: The Big Interview


The post Monocle 24 Radio: The Big Interview appeared first on Kevin Rudd.

CryptogramVaccine for Emotet Malware

Interesting story of a vaccine for the Emotet malware:

Through trial and error and thanks to subsequent Emotet updates that refined how the new persistence mechanism worked, Quinn was able to put together a tiny PowerShell script that exploited the registry key mechanism to crash Emotet itself.

The script, cleverly named EmoCrash, effectively scanned a user's computer and generated a correct -- but malformed -- Emotet registry key.

When Quinn tried to purposely infect a clean computer with Emotet, the malformed registry key triggered a buffer overflow in Emotet's code and crashed the malware, effectively preventing users from getting infected.

When Quinn ran EmoCrash on computers already infected with Emotet, the script would replace the good registry key with the malformed one, and when Emotet would re-check the registry key, the malware would crash as well, preventing infected hosts from communicating with the Emotet command-and-control server.


The Binary Defense team quickly realized that news about this discovery needed to be kept under complete secrecy, to prevent the Emotet gang from fixing its code, but they understood EmoCrash also needed to make its way into the hands of companies across the world.

Compared to many of today's major cybersecurity firms, all of which have decades of history behind them, Binary Defense was founded in 2014, and despite being one of the industry's up-and-comers, it doesn't yet have the influence and connections to get this done without news of its discovery leaking, either by accident or because of a jealous rival.

To get this done, Binary Defense worked with Team CYMRU, a company that has a decades-long history of organizing and participating in botnet takedowns.

Working behind the scenes, Team CYMRU made sure that EmoCrash made its way into the hands of national Computer Emergency Response Teams (CERTs), which then spread it to the companies in their respective jurisdictions.

According to James Shank, Chief Architect for Team CYMRU, the company has contacts with more than 125 national and regional CERT teams, and also manages a mailing list through which it distributes sensitive information to more than 6,000 members. Furthermore, Team CYMRU also runs a biweekly group dedicated to dealing with Emotet's latest shenanigans.

This broad and well-orchestrated effort has helped EmoCrash make its way around the globe over the course of the past six months.


Either by accident or by figuring out there was something wrong in its persistence mechanism, the Emotet gang did, eventually, change its entire persistence mechanism on Aug. 6 -- exactly six months after Quinn made his initial discovery.

EmoCrash may not be useful to anyone anymore, but for six months, this tiny PowerShell script helped organizations stay ahead of malware operations -- a truly rare sight in today's cyber-security field.

Kevin RuddABC Late Night Live: US-China Relations

17 AUGUST 2020

Main topic: Foreign Affairs article ‘Beware the Guns of August — in Asia’


Image: The USS Ronald Reagan steams through the San Bernardino Strait, July 3, 2020, crossing from the Philippine Sea into the South China Sea. (Navy Petty Officer 3rd Class Jason Tarleton)

The post ABC Late Night Live: US-China Relations appeared first on Kevin Rudd.

Planet DebianJonathan Dowland: Come Together

Primal Scream — Come Together

This one rarely returns to its proper place, instead living in the small pile of records permanently next to my turntable. I'm a late convert to Primal Scream: I first heard the 10 minute Andrew Weatherall mix of Come Together on Tom Robinson's 6Music show. It's a remarkable record, more so to think that it's quite hard, in isolation, to actually hear Primal Scream's contribution. This is very much Weatherall's track, and (to me, at least) it does a great job of encapsulating the house music explosion of the time.

It's interesting to hear Terry Farley's mix, partially because the band's contribution is more evident, so you can get a glimpse of the material that Weatherall had to work with.

RIP Andrew Weatherall, 1963-2020.

Worse Than FailureCodeSOD: Carbon Copy

I avoid writing software that needs to send emails. It's just annoying code to build, interfacing with mailservers is shockingly frustrating, and honestly, users don't tend to like the emails that my software tends to send. Once upon a time, it was a system which would tell them it was time to calibrate a scale, and the business requirements were basically "spam them like three times a day the week a scale comes due," which shockingly everyone hated.

But Krista inherited some code that sends email. The previous developer was a "senior", but probably could have had a little more supervision and maybe some mentoring on the C# language.

One commit added this method, for sending emails:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

That's not so bad, as these things go, though one has to wonder about parameters like fileName1 and fileName2. Do they only ever send exactly two files? Well, maybe when this method was written, but a few commits later, an overloaded version gets added:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

And then, a few commits later, someone decided that they needed to send four files, sometimes.

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3, String fileName4)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            if (File.Exists(fileName4)) mailMsg.Attachments.Add(new Attachment(fileName4));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

Each time someone discovered a new case where they wanted to include a different number of attachments, the previous developer copy/pasted the same code, with minor revisions.

Krista wrote a single version which used a params array, which replaced all of these versions (and any other possible versions), without changing the calling semantics.
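Her fix might have looked something like this minimal sketch (the ExportData type, WriteToLog, and send() helpers are assumed from the snippets above); C#'s params keyword lets existing call sites like SendEmail(data, subject, f1, f2) compile unchanged:

```csharp
// Sketch only: one method covers every arity the old overloads handled.
private void SendEmail(ExportData exportData, String subject, params string[] fileNames)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
            return;
        }
        using (MailMessage mailMsg = new MailMessage())
        {
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                foreach (string address in exportData.EmailCC.Split(';'))
                    mailMsg.CC.Add(new MailAddress(address));
            }
            // One loop replaces the copy/pasted fileName1..fileNameN checks.
            foreach (string fileName in fileNames)
            {
                if (File.Exists(fileName))
                    mailMsg.Attachments.Add(new Attachment(fileName));
            }
            send(mailMsg);
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}
```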

Though the real WTF is probably still forcing the BodyEncoding to be ASCII at this point in time. There's a whole lot of assumptions about your dataset which are probably not true, or at least not reliably true.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Rondam RamblingsIrit Gat, Ph.D. 25 November 1966 - 11 August 2020

With a heavy heart I bear witness to the untimely passing of Dr. Irit Gat last Tuesday at the age of 53.  Irit was the Dean of Behavioral and Social Sciences at Antelope Valley College in Lancaster, California.  She was also my younger sister.  She died peacefully of natural causes. I am going to miss her.  A lot.  I'm going to miss her smile.  I'm going to miss the way she said "Hey bro" when we

Planet DebianIan Jackson: Doctrinal obstructiveness in Free Software

Any software system has underlying design principles, and any software project has process rules. But I seem to be seeing, more and more often, a pathological pattern where abstract and shakily-grounded broad principles, and even contrived and sophistic objections, are used to block sensible changes.

Today I will go through an example in detail, before ending with a plea:

PostgreSQL query planner, WITH [MATERIALIZED] optimisation fence

Background history

PostgreSQL has a sophisticated query planner which usually gets the right answer. For good reasons, the pgsql project has resisted providing lots of knobs to control query planning. But there are a few ways to influence the query planner, for when the programmer knows more than the planner.

One of these is the use of a WITH common table expression. In pgsql versions prior to 12, the planner would first make a plan for the WITH clause; and then, it would make a plan for the second half, counting the WITH clause's likely output as a given. So WITH acts as an "optimisation fence".

This was documented in the manual - not entirely clearly, but a careful reading of the docs reveals this behaviour:

The WITH query will generally be evaluated as written, without suppression of rows that the parent query might discard afterwards.

Users (authors of applications which use PostgreSQL) have been using this technique for a long time.

New behaviour in PostgreSQL 12

In PostgreSQL 12 upstream were able to make the query planner more sophisticated. In particular, it is now often capable of looking "into" the WITH common table expression. Much of the time this will make things better and faster.

But if WITH was being used for its side-effect as an optimisation fence, this change will break things: queries that ran very quickly in earlier versions might now run very slowly. Helpfully, pgsql 12 still has a way to specify an optimisation fence: specifying WITH ... AS MATERIALIZED in the query.
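For example (table and column names here are purely illustrative), a query that relied on the fence can pin the old behaviour on version 12 like this:

```sql
-- PostgreSQL 12+ only: AS MATERIALIZED restores the pre-12 fence, so the
-- planner plans the CTE separately instead of inlining it into the parent.
WITH candidates AS MATERIALIZED (
    SELECT id, payload
    FROM jobs
    WHERE expensive_filter(payload)
)
SELECT * FROM candidates WHERE id = 42;
```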

So far so good.

Upgrade path for existing users of WITH fence

But what about the upgrade path for existing users of the WITH fence behaviour? Such users will have to update their queries to add AS MATERIALIZED. This is a small change. Having to update a query like this is part of routine software maintenance and not in itself very objectionable. However, this change cannot be made in advance because pgsql versions prior to 12 will reject the new syntax.

So the users are in a bit of a bind. The old query syntax can be unusably slow with the new database and the new syntax is rejected by the old database. Upgrading both the database and the application, in lockstep, is a flag day upgrade, which every good sysadmin will want to avoid.

A solution to this problem

Colin Watson proposed a very simple solution: make the earlier PostgreSQL versions accept the new MATERIALIZED syntax. This is correct since the new syntax specifies precisely the actual behaviour of the old databases. It has no deleterious effect on any users of older pgsql versions. It makes it possible to add the new syntax to the application, before doing the database upgrade, decoupling the two upgrades.

Colin Watson even provided an implementation of this proposal.

The solution is rejected by upstream

Unfortunately upstream did not accept this idea. You can read the whole thread yourself if you like. But in summary, the objections were (italic indicates literal quotes):

  • New features don't gain a backpatch. This is a project policy. Of course this is not a new feature, and even if it were, an exception should be made. This was explained clearly in the thread.
  • I'm not sure the "we don't want to upgrade application code at the same time as the database" is really tenable. This is quite an astonishing statement, particularly given the multiple users who said they wanted to do precisely that.
  • I think we could find cases where we caused worse breaks between major versions. Paraphrasing: "We've done worse things in the past so we should do this bad thing too". WTF?
  • One disadvantage is that this will increase confusion for users, who'll get used to the behavior on 12, and then they'll get confused on older releases. This seems highly contrived. Surely the number of people who are likely to suffer this confusion is tiny. Providing the new syntax in old versions (including of course the appropriate changes to the docs everywhere) might well make such confusion less rather than more likely.
  • [Poster is concerned about] 11.6 and up allowing a syntax that 11.0-11.5 don't. People are likely to write code relying on this and then be surprised when it doesn't work on a slightly older server. And, similarly: we'll then have a lot more behavior differences between minor releases. Again this seems a contrived and unconvincing objection. As that first poster even notes: Still, is that so much different from cases where we fix a bug that prevented some statement from working? No, it isn't.
  • if we started looking we'd find many changes every year that we could justify partially or completely back-porting on similar grounds ... we'll certainly screw it up sometimes. This is a slippery slope argument. But there is no slippery slope: in particular, the proposed change does not change any of the substantive database logic, and the upstream developers will hardly have any difficulty rejecting future more risky backport proposals.
  • if you insist on using the same code with pre-12 and post-12 releases, this should be achievable (at least in most cases) by using the "offset 0" trick. What? This was the first I had heard of it, but it in fact turns out to be true! Read more about this, below...

I find these extremely unconvincing, even taken together. Many of them are very unattractive things to hear one's upstream saying.

At best they are knee-jerk and inflexible application of very general principles. The authors of these objections seem to have lost sight of the fact that these principles have a purpose. When these kind of software principles work against their purposes, they should be revised, or exceptions made.

At worst, it looks like a collective effort to find reasons - any reasons, no matter how bad - not to make this change.

The OFFSET 0 trick

One of the responses in the thread mentions OFFSET 0. As part of writing the queries in the Xen Project CI system, and preparing for our system upgrade, I had carefully read the relevant pgsql documentation. This OFFSET 0 trick was new to me.

But, now that I know the answer, it is easy to provide the right search terms and find, for example, this answer on stackmumble. Apparently adding a no-op OFFSET 0 to the subquery defeats the pgsql 12 query planner's ability to see into the subquery.
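In other words (again with illustrative table and column names), the portable version of the same fence looks like this:

```sql
-- Accepted by PostgreSQL both before and after version 12: the no-op
-- OFFSET 0 stops the planner from flattening the subquery into the
-- outer query, so the subquery is planned on its own.
SELECT *
FROM (
    SELECT id, payload
    FROM jobs
    WHERE expensive_filter(payload)
    OFFSET 0
) AS candidates
WHERE id = 42;
```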

I think OFFSET 0 is the better approach since it's more obviously a hack showing that something weird is going on, and it's unlikely we'll ever change the optimiser behaviour around OFFSET 0 ... wheras hopefully CTEs will become inlineable at some point (and indeed, CTEs became inlineable by default in PostgreSQL 12).

So in fact there is a syntax for an optimisation fence that is accepted by both earlier and later PostgreSQL versions. It's even recommended by pgsql devs. It's just not documented, and is described by pgsql developers as a "hack". Astonishingly, the fact that it is a "hack" is given as a reason to use it!

Well, I have therefore deployed this "hack". No doubt it will stay in our codebase indefinitely.

Please don't be like that!

I could come up with a lot more examples of other projects that have exhibited similar arrogance. It is becoming a plague! But every example is contentious, and I don't really feel I need to annoy a dozen separate Free Software communities. So I won't make a laundry list of obstructiveness.

If you are an upstream software developer, or a distributor of software to users (eg, a distro maintainer), you have a lot of practical power. In theory it is Free Software so your users could just change it themselves. But for a user or downstream, carrying a patch is often an unsustainable amount of work and risk. Most of us have patches we would love to be running, but which we haven't even written because simply running a nonstandard build is too difficult, no matter how technically excellent our delta.

As an upstream, it is very easy to get into a mindset of defending your code's existing behaviour, and to turn your project's guidelines into inflexible rules. Constant exposure to users who make silly mistakes, and rudely ask for absurd changes, can lead to core project members feeling embattled.

But there is no need for an upstream to feel embattled! You have the vast majority of the power over the software, and over your project communication fora. Use that power consciously, for good.

I can't say that arrogance will hurt you in the short term. Users of software with obstructive upstreams do not have many good immediate options. But we do have longer-term choices: we can choose which software to use, and we can choose whether to try to help improve the software we use.

After reading Colin's experience, I am less likely to try to help improve the experience of other PostgreSQL users by contributing upstream. It doesn't seem like there would be any point. Indeed, instead of helping the PostgreSQL community I am now using them as an example of bad practice. I'm only half sorry about that.

comment count unavailable comments

CryptogramRobocall Results from a Telephony Honeypot

A group of researchers set up a telephony honeypot and tracked robocall behavior:

NCSU researchers said they ran 66,606 telephone lines between March 2019 and January 2020, during which time they said to have received 1,481,201 unsolicited calls -- even if they never made their phone numbers public via any source.

The research team said they usually received an unsolicited call every 8.42 days, but most of the robocall traffic came in sudden surges they called "storms" that happened at regular intervals, suggesting that robocallers operated using a tactic of short-burst and well-organized campaigns.

In total, the NCSU team said it tracked 650 storms over 11 months, with most storms being of the same size.

Research paper. USENIX talk. Slashdot thread.

Planet DebianNorbert Preining: KDE Apps 20.08 now available for Debian

KDE Apps bundle 20.08 has been released recently, and some of the packages are already updated in Debian/unstable. I have updated also all my packages to 20.08 and they are now available for x86_64, i586, and hopefully aarch64 (some issues remaining here still).

With the new release 20.08 I have also switched to versioned app repositories, so you need to update the apt sources directive. The new one is

deb ./

and similar for Testing.

Packages from the “other” repo that depend on apps, that is in particular Digikam, are currently being rebuilt and will be coinstallable soon.

Just to make sure, here is the full set of repositories I use on my computers:

deb ./
deb ./
deb ./
deb ./
deb ./


Worse Than FailureCodeSOD: Perls Can Change

Tomiko* inherited some web-scraping/indexing code from Dennis. The code started out just scanning candidate profiles for certain keywords, but grew, mutated, and eventually turned into something that also needed to download their CVs.

Now, Dennis was, as Tomiko puts it, "an interesting engineer". "Any agreed upon standard, he would aggressively oppose, and this can be seen in this code."

"This code" also happens to be in Perl, the "best" language for developers who don't like standards. And, it also happens to be connected to this infrastructure.

So let's start with the code, because this is the rare CodeSOD where the code itself isn't the WTF:

foreach my $n (0 .. @{$lines} - 1) {
    next if index($lines->[$n], 'RT::Spider::Deepweb::Controller::Factory->make(') == -1;

    # Don't let other cv_id survive.
    $lines->[$n] =~ s/,\s*cv_id\s*=>[^,)]+//;
    $lines->[$n] =~ s/,\s*cv_type\s*=>[^,)]+// if defined $cv_type;

    # Insert the new options.
    $lines->[$n] =~ s/\)/$opt)/;
}

Okay, so it's a pretty standard for-each loop. We skip lines if they contain… wait, that looks like a Perl expression- RT::Spider::Deepweb::Controller::Factory->make('? Well, let's hold onto that thought, but keep trucking on.

Next, we do a few find-and-replace operations to ensure that we Don't let other cv_id survive. I'm not really sure what exactly that's supposed to mean, but Tomiko says, "Dennis never wrote a single meaningful comment".

Well, the regexes are pretty standard character-salad expressions; ugly, but harmless. If you take this code in isolation, it's not good, but it doesn't look terrible. Except, there's that next if line. Why are we checking to see if the input data contains a Perl expression?

Because our input data is a Perl script. Dennis was… efficient. He already had code that would download the candidate profiles. Instead of adding new code to download CVs, instead of refactoring the existing code so that it was generic enough to download both, Dennis instead decided to load the profile code into memory, scan it with regexes, and then eval it.
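The overall shape of that scheme, reconstructed here as a hedged sketch (the script name and the spliced-in option are hypothetical; only the cv_id regex comes from the snippet above), would be something like:

```perl
# Sketch of the load-patch-eval pattern described above.
open my $fh, '<', 'profile_spider.pl' or die "open: $!";
my $src = do { local $/; <$fh> };     # slurp the whole script into memory
close $fh;

$src =~ s/,\s*cv_id\s*=>[^,)]+//g;    # drop any existing cv_id option
$src =~ s/\)/, cv_type => 'pdf')/;    # splice new options into the first call

eval $src;                            # run the patched script in-process
die $@ if $@;
```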

As Tomiko says: "You can't get more Perl than that."


Krebs on SecurityMicrosoft Put Off Fixing Zero Day for 2 Years

A security flaw in the way Microsoft Windows guards users against malicious files was actively exploited in malware attacks for two years before last week, when Microsoft finally issued a software update to correct the problem.

One of the 120 security holes Microsoft fixed on Aug. 11’s Patch Tuesday was CVE-2020-1464, a problem with the way every supported version of Windows validates digital signatures for computer programs.

Code signing is the method of using a certificate-based digital signature to sign executable files and scripts in order to verify the author’s identity and ensure that the code has not been changed or corrupted since it was signed by the author.

Microsoft said an attacker could use this “spoofing vulnerability” to bypass security features intended to prevent improperly signed files from being loaded. Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.

In fact, CVE-2020-1464 was first spotted in attacks used in the wild back in August 2018. And several researchers informed Microsoft about the weakness over the past 18 months.

Bernardo Quintero is the manager at VirusTotal, a service owned by Google that scans any submitted files against dozens of antivirus services and displays the results. On Jan. 15, 2019, Quintero published a blog post outlining how Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer files (those ending in .MSI) signed by any software developer.

Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.

“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows,” Quintero wrote.

But according to Quintero, while Microsoft’s security team validated his findings, the company chose not to address the problem at the time.

“Microsoft has decided that it will not be fixing this issue in the current versions of Windows and agreed we are able to blog about this case and our findings publicly,” his blog post concluded.

Tal Be’ery, founder of Zengo, and Peleg Hadar, senior security researcher at SafeBreach Labs, penned a blog post on Sunday that pointed to a file uploaded to VirusTotal in August 2018 that abused the spoofing weakness, which has been dubbed GlueBall. The last time that August 2018 file was scanned at VirusTotal (Aug 14, 2020), it was detected as a malicious Java trojan by 28 of 59 antivirus programs.

More recently, others would likewise call attention to malware that abused the security weakness, including this post in June 2020 from the Security-in-bits blog.


Be’ery said the way Microsoft has handled the vulnerability report seems rather strange.

“It was very clear to everyone involved, Microsoft included, that GlueBall is indeed a valid vulnerability exploited in the wild,” he wrote. “Therefore, it is not clear why it was only patched now and not two years ago.”

Asked to comment on why it waited two years to patch a flaw that was actively being exploited to compromise the security of Windows computers, Microsoft dodged the question, saying Windows users who have applied the latest security updates are protected from this attack.

“A security update was released in August,” Microsoft said in a written statement sent to KrebsOnSecurity. “Customers who apply the update, or have automatic updates enabled, will be protected. We continue to encourage customers to turn on automatic updates to help ensure they are protected.”

Update, 12:45 a.m. ET: Corrected attribution on the June 2020 blog article about GlueBall exploits in the wild.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 13)

Here’s part thirteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Planet DebianArnaud Rebillout: Modify Vim syntax files for your taste

In this short how-to, we'll see how to make small modifications to a Vim syntax file, in order to change how a particular file format is highlighted. We'll go for a simple use-case: modify the Markdown syntax file, so that H1 and H2 headings (titles and subtitles, if you prefer) are displayed in bold. Of course, this won't be exactly as easy as expected, but no worries, we'll succeed in the end.

The calling

Let's start with a screenshot: how Vim displays Markdown files for me, someone who uses the GNOME terminal with the Solarized light theme.

Vim - Markdown file with original highlighting

I'm mostly happy with that, except for one or two little details. I'd like to have the titles displayed in bold, for example, so that they're easier to spot when I skim through a Markdown file. It seems like a simple thing to ask, so I hope there can be a simple solution.

The first steps

Let's learn the basics.

In Vim world, the rules to highlight files formats are defined in the directory /usr/share/vim/vim82/syntax (I bet you'll have to adjust this path depending on the version of Vim that is installed on your system).

And so, for the Markdown file format, the rules are defined in the file /usr/share/vim/vim82/syntax/markdown.vim.

The first thing we could do is to have a look at this file, try to make sense of it, and maybe start to make some modifications.

But wait a moment. You should know that modifying a system file is not a great idea. First because your changes will be lost as soon as an update kicks in and the package manager replaces this file by a new version. Second, because you will quickly forget what files you modified, and what were your modifications, and if you do that too much, you might experience what is called "maintenance headache" in the long run.

So instead, maybe you DO NOT modify this file, and instead you copy it in your personal Vim folder, more precisely in ~/.vim/syntax. Create this directory if it does not exist:

mkdir -p ~/.vim/syntax
cp /usr/share/vim/vim82/syntax/markdown.vim ~/.vim/syntax

The file in your personal folder takes precedence over the system file of the same name in /usr/share/vim/vim82/syntax/, it is a replacement for the existing syntax files. And so from now on, Vim uses the file ~/.vim/syntax/markdown.vim, and this is where we can make our modifications.

(And by the way, this is explained in the Vim faq-24.12)

And so, it's already nice to know all of that, but wait, there's even better.

There is another location of interest, and it is ~/.vim/after/syntax. You can drop syntax files in this directory, and these files are treated as additions to the existing syntax. So if you only want to make slight modifications, that's the way to go.

(And by the way, this is explained in the Vim faq-24.11)

So let's forget about a syntax replacement in ~/.vim/syntax/markdown.vim, and instead let's go for some syntax additions in ~/.vim/after/syntax/markdown.vim.

mkdir -p ~/.vim/after/syntax
touch ~/.vim/after/syntax/markdown.vim

Now, let's answer the initial question: how do we modify the highlighting rules for Markdown files, so that the titles are displayed in bold? First, we have to understand where the rules that define the highlighting for titles live. Here they are, from the file /usr/share/vim/vim82/syntax/markdown.vim:

hi def link markdownH1 htmlH1
hi def link markdownH2 htmlH2
hi def link markdownH3 htmlH3

You should know that H1 means Heading 1, and so on; we want to make H1 and H2 bold. What we can see here is that the headings in the Markdown files are highlighted like the headings in HTML files, and this is obviously defined in the file /usr/share/vim/vim82/syntax/html.vim. So let's have a look into this file:

hi def link htmlH1 Title
hi def link htmlH2 htmlH1
hi def link htmlH3 htmlH2

Let's keep digging a bit. Where is Title defined? For those using the default color scheme like me, this is defined straight in the Vim source code, in the file src/highlight.c.

CENT("Title term=bold ctermfg=DarkMagenta",
     "Title term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta"),

And for those using custom color schemes, it might be defined in a file under /usr/share/vim/vim82/colors/.

Alright, so how do we override that? We can just define rules like these in our syntax additions file at ~/.vim/after/syntax/markdown.vim:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
hi markdownHxBold  term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold

As you can see, the only addition we made, compared to what's defined in src/highlight.c, is cterm=bold. And that's already enough to achieve the initial goal: make the titles (i.e. H1 and H2) bold. The result can be seen in the following screenshot:

Vim - Markdown file with modified highlighting

The rabbit hole

So we could stop right here, and life would be easy and good.

However, with this solution there's still something that is not perfect. We use the color DarkMagenta as defined in the default color scheme. What I didn't mention, however, is that this only applies to a light background. If you have a dark background, dark magenta won't be easy to read.

Actually, if you look a bit more into src/highlight.c, you will see that the default color scheme comes in two variants, one for a light background, and one for a dark background.

And so the definition of Title for a dark background is as follows:

CENT("Title term=bold ctermfg=LightMagenta",
     "Title term=bold ctermfg=LightMagenta gui=bold guifg=Magenta"),

Hmmm, so how do we do that in our syntax file? How can we support both light and dark background, so that the color is right in both cases?

After a bit of research, and after looking at other syntax files, it seems that the solution is to check for the value of the background option, and so our syntax file becomes:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

In case you wonder, in Vim script you prefix Vim options with &, so you get the value of the background option by writing &background. You can learn this kind of thing in the Vim scripting cheatsheet.
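
As a tiny illustration (the echoed values depend on your setup):

```vim
" Options read like variables once prefixed with &
echo &background          " prints 'light' or 'dark'
if &background == "dark"
  echo "dark variant in use"
endif
```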

And so, it's easy enough, except for one thing: it doesn't work. The headings always show up in DarkMagenta, even for a dark background.

This is why I called this paragraph "the rabbit hole", by the way.

So... well, after trying a few things, I noticed that in order to make it work, I would have to reload the syntax files with :syntax on.

At this point, the most likely explanation is that the background option is not yet set when the syntax files are loaded at startup, hence the need to reload them manually afterward.

And after muuuuuuch research, I found out that it's actually possible to set a hook for when an option is modified. Meaning, it's possible to execute a function when the background option is modified. Quite cool actually.

And so, there it goes in my ~/.vimrc:

" Reload syntax when the background changes 
autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif

For humans, this line reads as:

  1. when the background option is modified -- autocmd OptionSet background
  2. check if the syntax is on -- if exists("g:syntax_on")
  3. if that's the case, reload it -- syntax on

With that in place, my Markdown syntax overrides work for both dark and light backgrounds. Champagne!

The happy end

To finish, let me share my actual additions to the markdown.vim syntax. It makes H1 and H2 bold, along with their delimiters, and it also colors the inline code and the code blocks.

" H1 and H2 headings -> bold
hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
" Heading delimiters (eg '#') and rules (eg '----', '====') -> bold
hi link markdownHeadingDelimiter markdownHxBold
hi link markdownRule markdownHxBold
" Code blocks and inline code -> highlighted
hi link markdownCode htmlH1

" The following test requires this addition to your vimrc:
" autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

And here's how it looks with a light background:

Vim - Markdown file with final highlighting (light)

And a dark background:

Vim - Markdown file with final highlighting (dark)

That's all. These are very small changes compared to the highlighting from the original syntax file, and now that we understand how it's supposed to be done, it's not much effort to achieve.

It's just that finding the workaround to make it work for both light and dark backgrounds took forever, and it leaves the usual, unanswered question: bug or feature?


Planet DebianEnrico Zini: Historical links

Saint Guinefort was a dog who lived in France in the 13th century, worshipped through history as a saint until less than a century ago. The recurrence is soon, on the 22nd of August.

Many think the Middle Ages were about superstition, and were generally a bad period. Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages tells a different, fascinating story.

Another fascinating medieval story is that of Christine de Pizan, author of The Book of the City of Ladies. This is a very good lecture about her (in Italian): Come pensava una donna nel Medioevo? 2 - Christine de Pizan. You can read some of her books at the Memory of the World library.

If you understand Italian, Alessandro Barbero gives fascinating lectures. You can find them indexed in a timeline, or on a map.

Still from around the Middle Ages, we get playing cards: see Playing Cards Around the World and Through the Ages.

If you want to go have a look in person, and you overshoot with your time machine, here's a convenient route planner for antique Roman roads.

View all historical links that I have shared.

LongNowKathryn Cooper’s Wildlife Movement Photography

Amazing wildlife photography by Kathryn Cooper reveals the brushwork of birds and their flocks through sky, hidden by the quickness of the human eye.

“Staple Newk” by Kathryn Cooper.

Ever since Eadweard Muybridge’s pioneering photography of animal locomotion in 01877 and 01878 (including the notorious “horse shot by pistol” frames from an era less concerned with animal experiments), the trend has been to unpack our lived experience of movement into serial, successive frames. The movie camera smears one pace layer out across another, lets the eye scrub over one small moment.

“UFO” by Kathryn Cooper.

In contrast, time-lapse and long exposure camerawork implodes the arc of moments, an integral calculus that gathers the entire gesture. Cooper’s flock photography is less the autopsy of high-speed video and more the graceful ensō drawn by a Zen master.

Learn More

LongNowPuzzling artifacts found at Europe’s oldest battlefield

Bronze-Age crime scene forensics: newly discovered artifacts only deepen the mystery of a 3,300-year-old battle. What archaeologists previously thought to be a local skirmish looks more and more like a regional conflict that drew combatants in from hundreds of kilometers away…but why?

Much like the total weirdness of the Ediacaran fauna of 580 million years ago, this oldest Bronze-Age battlefield is the earliest example of its kind in the record…and firsts are always difficult to analyze:

Among the stash are also three bronze cylinders that may have been fittings for bags or boxes designed to hold personal gear—unusual objects that until now have only been discovered hundreds of miles away in southern Germany and eastern France.

‘This was puzzling for us,’ says Thomas Terberger, an archaeologist at the University of Göttingen in Germany who helped launch the excavation at Tollense and co-authored the paper. To Terberger and his team, that lends credence to their theory that the battle wasn’t just a northern affair.

Anthony Harding, an archaeologist and Bronze Age specialist who was not involved with the research: ‘Why would a warrior be going round with a lot of scrap metal?’ he asks. To interpret the cache—which includes distinctly un-warlike metalworking gear—as belonging to warriors is ‘a bit far-fetched to me,’ he says.


CryptogramFriday Squid Blogging: Editing the Squid Genome

Scientists have edited the genome of the Doryteuthis pealeii squid with CRISPR. A first.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityMedical Debt Collection Firm R1 RCM Hit in Ransomware Attack

R1 RCM Inc. [NASDAQ:RCM], one of the nation’s largest medical debt collection companies, has been hit in a ransomware attack.

Formerly known as Accretive Health Inc., Chicago-based R1 RCM brought in revenues of $1.18 billion in 2019. The company has more than 19,000 employees and contracts with at least 750 healthcare organizations nationwide.

R1 RCM acknowledged taking down its systems in response to a ransomware attack, but otherwise declined to comment for this story.

The “RCM” portion of its name refers to “revenue cycle management,” an industry which tracks profits throughout the life cycle of each patient, including patient registration, insurance and benefit verification, medical treatment documentation, and bill preparation and collection from patients.

The company has access to a wealth of personal, financial and medical information on tens of millions of patients, including names, dates of birth, Social Security numbers, billing information and medical diagnostic data.

It’s unclear when the intruders first breached R1’s networks, but the ransomware was unleashed more than a week ago, right around the time the company was set to release its 2nd quarter financial results for 2020.

R1 RCM declined to discuss the strain of ransomware it is battling or how it was compromised. Sources close to the investigation tell KrebsOnSecurity the malware is known as Defray.

Defray was first spotted in 2017, and its purveyors have a history of specifically targeting companies in the healthcare space. According to Trend Micro, Defray usually is spread via booby-trapped Microsoft Office documents sent via email.

“The phishing emails the authors use are well-crafted,” Trend Micro wrote. For example, in an attack targeting a hospital, the phishing email was made to look like it came from a hospital IT manager, with the malicious files disguised as patient reports.

Email security company Proofpoint says the Defray ransomware is somewhat unusual in that it is typically deployed in small, targeted attacks as opposed to large-scale “spray and pray” email malware campaigns.

“It appears that Defray may be for the personal use of specific threat actors, making its continued distribution in small, targeted attacks more likely,” Proofpoint observed.

A recent report (PDF) from Corvus Insurance notes that ransomware attacks on companies in the healthcare industry have slowed in recent months, with some malware groups even dubiously pledging they would refrain from targeting these firms during the COVID-19 pandemic. But Corvus says that trend is likely to reverse in the second half of 2020 as the United States moves cautiously toward reopening.

Corvus found that while services that scan and filter incoming email for malicious threats can catch many ransomware lures, an estimated 75 percent of healthcare companies do not use this technology.

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

CryptogramDrovorub Malware

The NSA and FBI have jointly disclosed Drovorub, a Russian malware suite that targets Linux.

Detailed advisory. Fact sheet. News articles. Reddit thread.

LongNowHow to Be in Time

Photograph: Scott Thrift.

“We already have timepieces that show us how to be on time. These are timepieces that show us how to be in time.”

– Scott Thrift

Slow clocks are growing in popularity, perhaps as a tonic for or revolt against the historical trend of ever-faster timekeeping mechanisms.

Given that bell tower clocks were originally used to keep monastic observances of the sacred hours, it seems appropriate to restore some human agency in timing and give kairos back some of the territory it lost to the minute and second hands so long ago…

Scott Thrift’s three conceptual timepieces measure with only one hand each, counting 24 hour, one-month, and one-year cycles with each revolution. Not quite 10,000 years, but it’s a consumer-grade start.

“Right now we’re living in the long-term effects of short-term thinking. I don’t think it’s possible really for us to commonly think long term if the way that we tell time is with a short-term device that just shows the seconds, minutes, and hours. We’re precluded to seeing things in the short term.”

-Scott Thrift

Worse Than FailureError'd: New Cat Nullness

"Honest! If I could give you something that had a 'cat' in it, I would!" wrote Gordon P.


"You'd think Outlook would have told me sooner about these required updates," Carlos writes.


Colin writes, "Asking for a friend, does balsamic olive oil still have to be changed every 3,000 miles?"


"I was looking for Raspberry Pi 4 cases on my local when I stumbled upon a pretty standard, boring WTF. Desperate to find an actual picture of the case I was after, I changed to and I guess I got what I wanted," George wrote.


Kevin wrote, "Ah, I get it. Shiny and blinky ads are SO last decade. Real container advertisers nowadays get straight to the point!"


"I noticed this in the footer of an email from my apartment management company and well, I'm intrigued at the possibility of 'rewards'," wrote Peter C.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


CryptogramThe NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It's hard to be clandestine in our always connected world.

News articles.

CryptogramUAE Hack and Leak Operations

Interesting paper on recent hack-and-leak operations attributed to the UAE:

Abstract: Four hack-and-leak operations in U.S. politics between 2016 and 2019, publicly attributed to the United Arab Emirates (UAE), Qatar, and Saudi Arabia, should be seen as the "simulation of scandal" ­-- deliberate attempts to direct moral judgement against their target. Although "hacking" tools enable easy access to secret information, they are a double-edged sword, as their discovery means the scandal becomes about the hack itself, not about the hacked information. There are wider consequences for cyber competition in situations of constraint where both sides are strategic partners, as in the case of the United States and its allies in the Persian Gulf.

Kevin RuddBBC World: US-China Tensions

13 AUGUST 2020

Topics: Foreign Affairs article ‘Beware the Guns of August – in Asia’

Mike Embley
Beijing’s crackdown on Hong Kong’s democracy movement has attracted strong criticism, with both Washington and Beijing hitting key figures with sanctions and closing consulates in recent weeks. But that’s not the only issue where the two countries don’t see eye to eye; tensions have been escalating on a range of fronts, including the Chinese handling of the pandemic, the American decision to ban Huawei, and Washington’s allegations of human rights abuses against Uighur Muslims in Xinjiang. So where is all this heading? Let’s try to find out as we speak to Kevin Rudd, former Australian Prime Minister, of course, now the president of the Asia Society Policy Institute. Welcome, very good to talk to you. You’ve been very vocal about China’s attitudes to democracy in Hong Kong, and also about the tit-for-tat sanctions between the US and China. Where do you think all this is heading?

Kevin Rudd
Well, if our prism for analysis is where the US-China relationship goes, the bottom line is we haven’t seen this relationship in such fundamental disrepair in about half a century. And as a result, whether it’s Hong Kong, or whether it’s Taiwan, or events unfolding in the South China Sea, this is pushing the relationship into greater and greater levels of crisis. What concerns those of us who study this professionally, and who know both systems of government reasonably well, both in Beijing and Washington, is that the probability of a crisis unfolding, either in the Taiwan Straits or in the South China Sea, is now growing. And the probability of escalation into a serious shooting match is now real. And the lesson of history is that it’s very difficult to de-escalate under those circumstances.

Mike Embley
Yes, I think you’ve spoken in terms of the risk of a hot war, actual war between the US and China. Are you serious?

Kevin Rudd
I am serious, and I’ve not said this before. I’ve been a student of US-China relations for the last 35 years, and I take a genuinely sceptical approach to people who have sounded the alarms in previous periods of the relationship. But those of us who have observed this through the prism of history have, I think, got a responsibility to say to decision makers both in Washington and in Beijing right now: be careful what you wish for, because this is catapulting in a particular direction. When you look at the South China Sea in particular, there you have a huge amount of metal on metal, that is, a large number of American ships and a large number of People’s Liberation Army Navy ships, and a similar number of aircraft. The rules of engagement and the standard operating procedures of these vessels are unbeknownst to the rest of us, and we’ve had near misses before. What I’m pointing to is that if we actually have a collision, or a sinking, or a crash, what then ensues in terms of crisis management on both sides? When we last had this, in 2001-2002 in the Bush administration, the state of the US-China relationship was pretty good. Right now, 20 years later, it is fundamentally appalling. That’s why many of us are deeply concerned, and are sounding this concern both to Beijing and Washington.

Mike Embley
And yet you know, of course, China is such a power economically and is making its presence felt in so many places in the world. There is a sense that really China can pretty much do what it wants, how do you avoid the kind of situation you’re describing?

Kevin Rudd
Well, the government in Beijing needs to understand the importance of restraint as well, in terms of its own calculus of its own long-term national interests. China’s current course of action across a range of fronts is in fact causing a massive international reaction against China, unprecedented against the measures of the last 40 or 50 years. You now have fundamental dislocations in the relationship not just with Washington, but with Canada, with Australia, with the United Kingdom, with Japan, with the Republic of Korea, and a whole bunch of others as well, including those in various parts of continental Europe. And so, looking at this from the prism of Beijing’s own interests, there are those in Beijing who will be raising the argument: are we pushing too far, too hard, too fast? And the responsibility of the rest of us is to say to that cautionary advice within Beijing, all power to your arm in restraining China from this course of action, but also, in equal measure, to say to our friends in Washington, particularly in a presidential election season where Republicans and Democrats are seeking to outflank each other to the right on China strategy, that this is no time to engage in, shall we say, symbolic acts for a domestic political purpose in the United States presidential election context, which can have real national security consequences in Southeast Asia and then globally.

Mike Embley
Mr. Rudd, you say very clearly what you hope will happen what you hope China will realize, what do you think actually will happen? Are you optimistic in a nutshell or pessimistic?

Kevin Rudd
The reason for me writing the piece I’ve just done in Foreign Affairs magazine, which is entitled “Beware the Guns of August”, for those of us obviously familiar with what happened in August of 1914, is that on balance I am pessimistic that the political cultures in both capitals right now are fully seized of the risks that they are playing with on the high seas and over Taiwan as well. Hong Kong, and the matters you were referring to before, frankly add further to the deterioration of the surrounding political relationship between the two countries. But in terms of incendiary actions of a national security nature, it’s events in the Taiwan Straits, and it’s events on the high seas in the South China Sea, which are most likely to trigger this. And to answer your question directly: right now, until we see the other side of the US presidential election, I remain on balance concerned and pessimistic.

Mike Embley
Right. Kevin Rudd, thank you very much for talking to us.

Kevin Rudd
Good to be with you.

The post BBC World: US-China Tensions appeared first on Kevin Rudd.

Kevin RuddAustralian Jewish News: Michael Gawenda and ‘The Powerbroker’

With the late Shimon Peres in 2012.

This article was first published by The Australian Jewish News on 13 August 2020.

The factional manoeuvrings of Labor’s faceless men a decade ago are convoluted enough without demonstrable misrepresentations by authors like Michael Gawenda in his biography of Mark Leibler, The Powerbroker.

Gawenda claims my memoir, The PM Years, blames the leadership coup on Leibler’s hardline faction of Australia’s Israel lobby, “plotting” in secret with Julia Gillard – a vision of “extreme, verging on conspiratorial darkness”. This is utter fabrication on his part. My simple challenge to Gawenda is to specify where I make such claims. He can’t. If he’d bothered to call me before publishing, I would have told him so.

Let me be clear: I have never claimed, nor do I believe, that Leibler or AIJAC were involved in the coup. It was conceived and executed almost entirely by factional warlords who blamed me for stymieing their individual ambitions.

It’s true my relationship with Leibler was strained in 2010 after Mossad agents stole the identities of four Australians living in Israel. Using false passports, they slipped into Dubai to assassinate a Hamas operative. They broke our laws and breached our trust.

The Mossad also jeopardised the safety of every Australian who travels on our passports in the Middle East. Unless this stopped, any Australian would be under suspicion, exposing them to arbitrary detention or worse.

More shocking, this wasn’t their first offence. The Mossad explicitly promised to stop abusing Australian passports after an incident in 2003, in a memorandum kept secret to spare Israel embarrassment. It didn’t work. They reoffended because they thought Australia was weak and wouldn’t complain.

We needed a proportional response to jolt Israeli politicians to act, without fundamentally damaging our valued relationship. Australia’s diplomatic, national security and intelligence establishments were unanimous: we should expel the Mossad’s representative in Canberra. This would achieve our goal but make little practical difference to Australia-Israel cooperation. Every minister in the national security committee agreed, including Gillard.

But obdurate elements of Australia’s Israel lobby accused us of overreacting. How could we treat our friend Israel like this? How did we know it was them? Wasn’t this just the usual murky business of espionage? According to Leibler, Diaspora leaders should “not criticise any Israeli government when it comes to questions of Israeli security”. Any violation of law, domestic or international, is acceptable. Never mind every citizen’s duty to uphold our laws and protect Australian lives.

I invited Leibler and others to dinner at the Lodge to reassure them the affair, although significant, wouldn’t derail the relationship. I sat politely as Leibler berated me. Boasting of his connections, he wanted to personally arrange meetings with the Mossad to smooth things over. We had, of course, already done this.

Apropos of nothing, Leibler then leaned over and, in what seemed to me a slightly menacing manner, suggested Julia was “looking very good in the polls” and “a great friend of Israel”. This surprised me, not least because I believed, however foolishly, that my deputy was loyal.

Leibler’s denials are absorbed wholly by Gawenda, solely on the basis of his notes. Give us a break, Michael – why would Leibler record such behaviour? It’s also meaningless that others didn’t hear him since, as often happens at dinners, multiple conversations occur around the table. The truth is it did happen, hence why I recorded it in my book. I have no reason to invent such an anecdote.

In fairness to Gillard, her eagerness to befriend Leibler reflected the steepness of her climb on Israel. She emerged from organisations that historically antagonised Israel – the Socialist Left and Australian Union of Students – and often overcompensated by swinging further towards AIJAC than longstanding Labor policy allowed.

By contrast, my reputation was well established, untainted by the anti-Israel sentiment sometimes found on the political left. A lifelong supporter of Israel and security for its people, I defied Labor critics by proudly leading Parliament in praise of the Jewish State’s achievements. I have consistently denounced the BDS campaign targeting Israeli businesses, both in office and since. My government blocked numerous shipments of potential nuclear components to Iran, and commissioned legal advice on charging president Mahmoud Ahmadinejad with incitement to genocide against the Jewish people. I’m as proud of this record as I am of my longstanding support for a two-state solution.

I have never considered that unequivocal support for Israel means unequivocal support for the policies of the Netanyahu government. For example, the annexation plan in the West Bank would be disastrous for Israel’s future security and fundamentally breach international law – a view shared by UK Conservative PM Boris Johnson. Israel, like the Australian Jewish community, is not monolithic; my concerns are shared by ordinary Israelis as well as many members of the Knesset.

Michael Gawenda is free to criticise me for things I’ve said and done (ironically, as editor of The Age, he didn’t consider me left-wing enough!), but his assertions in this account are flatly untrue.

The post Australian Jewish News: Michael Gawenda and ‘The Powerbroker’ appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Don't Stop the Magic

Magic strings and numbers are bad, right? From the perspective of readability and future maintenance, constants are better. We all know this is true, and we all know that it can sometimes go too far.

Douwe Kasemier has a co-worker that has taken that a little too far.

For example, they have a Java method with a signature like this:

Document addDocument(Action act, boolean createNotification);

The Action type contains information about what action to actually perform, but it will result in a Document. Sometimes this creates a notification, and sometimes it doesn’t.

Douwe’s co-worker was worried about the readability of addDocument(myAct, true) and addDocument(myAct, false), so they went ahead and added some constants:

    private static final boolean NO_NOTIFICATION = false;
    private static final boolean CREATE_NOTIFICATION = true;

Okay, now, I don’t love this, but it’s not the worst thing…

public Document doActionWithNotification(Action act) {
  return addDocument(act, CREATE_NOTIFICATION);
}

public Document doActionWithoutNotification(Action act) {
  return addDocument(act, NO_NOTIFICATION);
}

Okay, now we’re just getting silly. This is at least diminishing returns of readability, if not actively harmful to making the code clear.

    private static final int SIX = 6;
    private static final int FIVE = 5;
    public String findId(String path) {
      String[] folders = path.split("/");
      if (folders.length >= SIX && (folders[FIVE].startsWith(PREFIX_SR) || folders[FIVE].startsWith(PREFIX_BR))) {
          return folders[FIVE].substring(PREFIX_SR.length());
      }
      return null;
    }

Ah, there we go. The logical conclusion: constants for 5 and 6. And yet they didn’t feel the need to make a constant for "/"?

At least this is maintainable, so that when the value of FIVE changes, the method doesn't need to change.
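
For what it's worth, the usual way to keep such call sites readable without inventing boolean constants is a small enum instead of a boolean parameter. A minimal sketch (all names here are hypothetical, not from the original codebase):

```java
// Hypothetical sketch: a two-value enum replaces the boolean flag,
// so call sites are self-documenting without fake boolean constants.
enum NotificationMode { CREATE, NONE }

class DocumentService {
    // Stand-in for the real addDocument(); returns a string for demonstration.
    static String addDocument(String action, NotificationMode mode) {
        return action + (mode == NotificationMode.CREATE
                ? " [with notification]"
                : " [no notification]");
    }

    public static void main(String[] args) {
        // The intent is visible at every call site:
        System.out.println(addDocument("archive", NotificationMode.CREATE));
        System.out.println(addDocument("archive", NotificationMode.NONE));
    }
}
```

Unlike booleans, an enum also leaves room for a third mode later without changing every caller's `true`/`false` to something new.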



CryptogramSmart Lock Vulnerability

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec's $139.99 UltraLoq is marketed as a "secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code."

Users can share temporary codes and 'Ekeys' to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device's MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they're doing.

EDITED TO ADD (8/12): More.

CryptogramCybercrime in the Age of COVID-19

The Cambridge Cybercrime Centre has a series of papers on cybercrime during the coronavirus pandemic.

EDITED TO ADD (8/12): Interpol report.

CryptogramTwitter Hacker Arrested

A 17-year-old Florida boy was arrested and charged with last week's Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

EDITED TO ADD (8/1): Two others have been charged as well.

EDITED TO ADD (8/11): The online bail hearing was hacked.

Krebs on SecurityWhy & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.


First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.


I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers because if you don’t someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.


Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were massively registering people at the IRS’s website to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state drivers licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to let them know when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.


Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or NCTUE. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800 number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.

CryptogramCryptanalysis of an Old Zip Encryption Algorithm

Mike Stay broke an old zipfile encryption algorithm to recover $300,000 in bitcoin.

DefCon talk here.

Worse Than FailureTeleconference Horror

Jcacweb cam

In the spring of 2020, with very little warning, every school in the United States shut down due to the ongoing global pandemic. Classrooms had to move to virtual meeting software like Zoom, which was never intended to be used as the primary means of educating grade schoolers. The teachers did wonderfully on such short notice, and most kids finished out the year with at least a little more knowledge than they started. This story takes place years before then, when online schooling was seen as an optional add-on and not a necessary backup plan in case of plague.

TelEdu provided their take on such a thing in the form of a free third-party add-on for Moodle, a popular e-learning platform. Moodle provides space for teachers to upload recordings and handouts; TelEdu takes it one step further by adding a "virtual classroom" complete with a virtual whiteboard. The catch? You have to pay a subscription fee to use the free module, otherwise it's nonfunctional.

Initech decided they were on a tight schedule to implement a virtual classroom feature for their corporate training, so they went ahead and bought the service without testing it. They then scheduled a demonstration to the client, still without testing it. The client's 10-man team all joined to test out the functionality, and it wasn't long before the phone started ringing off the hook with complaints: slowness, 504 errors, blank pages, the whole nine yards.

That's where Paul comes in to our story. Paul was tasked with finding what had gone wrong and completing the integration. The most common complaint was that Moodle was being slow, but upon testing it himself, Paul found that only the TelEdu module pages were slow, not the rest of the install. So far so good. The code was open-source, so he went digging through to find out what in view.php was taking so long:

$getplan = telEdu_get_plan();
$paymentinfo = telEdu_get_payment_info();
$getclassdetail = telEdu_get_class($telEduclass->class_id);
$pricelist = telEdu_get_price_list($telEduclass->class_id);

Four calls to get info about the class, three of them to do with payment. Not a great start, but not necessarily terrible, either. So, how was the info fetched?

function telEdu_get_plan() {
    $data['task'] = TELEDU_TASK_GET_PLAN;
    $result = telEdu_get_curl_info($data);
    return $result;
}

"They couldn't possibly ... could they?" Paul wondered aloud.

function telEdu_get_payment_info() {
    $data['task'] = TELEDU_TASK_GET_PAYMENT_INFO;
    $result = telEdu_get_curl_info($data);
    return $result;
}

Just to make sure, Paul next checked what telEdu_get_curl_info actually did:

function telEdu_get_curl_info($data) {
    global $CFG;
    require_once($CFG->libdir . '/filelib.php');

    $key = $CFG->mod_telEdu_apikey;
    $baseurl = $CFG->mod_telEdu_baseurl;

    $urlfirstpart = $baseurl . "/" . $data['task'] . "?apikey=" . $key;

    if (($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) || ($data['task'] == TELEDU_TASK_GET_PLAN)) {
        $location = $baseurl;
    } else {
        $location = telEdu_post_url($urlfirstpart, $data);
    }

    $postdata = '';
    if ($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) {
        $postdata = 'task=getPaymentInfo&apikey=' . $key;
    } else if ($data['task'] == TELEDU_TASK_GET_PLAN) {
        $postdata = 'task=getplan&apikey=' . $key;
    }

    $options = array( /* ... */ );

    $curl = new curl();
    $result = $curl->post($location, $postdata, $options);

    $finalresult = json_decode($result, true);
    return $finalresult;
}

A remote call to another API using, of all things, a shell call out to cURL, which queried URLs from the command line. Then it waited for the result, which was clocking in at anywhere between 1 and 30 seconds ... each call. The result wasn't used anywhere, either. It seemed to be just a precaution in case somewhere down the line they wanted these things.

After another half a day of digging through the rest of the codebase, Paul gave up. Sales told the client that "Due to the high number of users, we need more time to make a small server calibration."

The calibration? Replacing TelEdu with BigBlueButton. Problem solved.



Krebs on SecurityMicrosoft Patch Tuesday, August 2020 Edition

Microsoft today released updates to plug at least 120 security holes in its Windows operating systems and supported software, including two newly discovered vulnerabilities that are actively being exploited. Yes, good people of the Windows world, it’s time once again to backup and patch up!

At least 17 of the bugs squashed in August’s patch batch address vulnerabilities Microsoft rates as “critical,” meaning they can be exploited by miscreants or malware to gain complete, remote control over an affected system with little or no help from users. This is the sixth month in a row Microsoft has shipped fixes for more than 100 flaws in its products.

The most concerning of these appears to be CVE-2020-1380, which is a weakness in Internet Explorer that could result in system compromise just by browsing with IE to a hacked or malicious website. Microsoft’s advisory says this flaw is currently being exploited in active attacks.

The other flaw enjoying active exploitation is CVE-2020-1464, which is a “spoofing” bug in virtually all supported versions of Windows that allows an attacker to bypass Windows security features and load improperly signed files. For more on this flaw, see Microsoft Put Off Fixing Zero Day for 2 Years.

Trend Micro’s Zero Day Initiative points to another fix — CVE-2020-1472 — which involves a critical issue in Windows Server versions that could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

“It’s rare to see a Critical-rated elevation of privilege bug, but this one deserves it,” said ZDI’s Dustin Childs. “What’s worse is that there is not a full fix available.”

Perhaps the most “elite” vulnerability addressed this month earned the distinction of being named CVE-2020-1337, and refers to a security hole in the Windows Print Spooler service that could allow an attacker or malware to escalate their privileges on a system if they were already logged on as a regular (non-administrator) user.

Satnam Narang at Tenable notes that CVE-2020-1337 is a patch bypass for CVE-2020-1048, another Windows Print Spooler vulnerability that was patched in May 2020. Narang said researchers found that the patch for CVE-2020-1048 was incomplete and presented their findings for CVE-2020-1337 at the Black Hat security conference earlier this month. More information on CVE-2020-1337, including a video demonstration of a proof-of-concept exploit, is available here.

Adobe has graciously given us another month’s respite from patching Flash Player flaws, but it did release critical security updates for its Acrobat and PDF Reader products. More information on those updates is available here.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and back up your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And as ever, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cory DoctorowTerra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

LongNowScientists Have a Powerful New Tool to Investigate Triassic Dark Ages

The time-honored debate between catastrophists and gradualists (those who believe major Earth changes were due to sudden violent events or happened over long periods of time) has everything to do with the coarse grain of the geological record. When paleontologists only have a series of thousand-year flood deposits to study, it’s almost impossible to say what was really going on at shorter timescales. So many of the great debates of natural history hinge on the resolution at which data can be collected, and boil down to something like, “Was it a meteorite impact that caused this extinction, or the inexorable climate changes caused by continental drift?”

One such gap in our understanding is in the Late Triassic — a geological shadow during which major regime changes in terrestrial fauna took place, setting the stage for The Age of Dinosaurs. But the curtains were closed during that scene change…until, perhaps, now:

By determining the age of the rock core, researchers were able to piece together a continuous, unbroken stretch of Earth’s history from 225 million to 209 million years ago. The timeline offers insight into what has been a geologic dark age and will help scientists investigate abrupt environmental changes from the peak of the Late Triassic and how they affected the plants and animals of the time.

Cool new detective work on geological “tree rings” from the Petrified Forest National Park (where I was lucky enough to do some revolutionary paleontological reconstruction work under Dr. Bill Parker back in 2005).

CryptogramCollecting and Selling Mobile Phone Location Data

The Wall Street Journal has an article about a company called Anomaly Six LLC that has an SDK that's used by "more than 500 mobile applications." Through that SDK, the company collects location data from users, which it then sells.

Anomaly Six is a federal contractor that provides global-location-data products to branches of the U.S. government and private-sector clients. The company told The Wall Street Journal it restricts the sale of U.S. mobile phone movement data only to nongovernmental, private-sector clients.


Anomaly Six was founded by defense-contracting veterans who worked closely with government agencies for most of their careers and built a company to cater in part to national-security agencies, according to court records and interviews.

Just one of the many Internet companies spying on our every move for profit. And I'm sure they sell to the US government; it's legal and why would they forgo those sales?

Kevin RuddCNN: South China Sea and the US-China Tech War

11 AUGUST 2020

Topics: Foreign Affairs article; US-China tech war

Zain Asher
In a sobering assessment in Foreign Affairs magazine, the former Australian Prime Minister Kevin Rudd warns that diplomatic relations are crumbling and raises the possibility of armed conflict. Mr Rudd, who is president of the Asia Society Policy Institute, joins us live now. So Mr Rudd, just walk us through this. You believe that armed conflict is possible and, is this relationship at this point, in your opinion, quite frankly, beyond repair?

Kevin Rudd
It’s not beyond repair, but we’ve got to be blunt about the fact that the level of deterioration has been virtually unprecedented at least in the last half-century. And things are moving at a great pace in terms of the scenarios, the two scenarios which trouble us most are the Taiwan straits and the South China Sea. In the Taiwan straits, we see consistent escalation of tensions between Washington and Beijing. And certainly, in the South China Sea, the pace and intensity of naval and air activity in and around that region increases the possibility, the real possibility, of collisions at sea and collisions in the air. And the question then becomes: do Beijing and Washington really have an intention to de-escalate or then to escalate, if such a crisis was to unfold?

Zain Asher
How do they de-escalate? Is the only way at this point, or how do they reverse the sort of tensions between them? Is the main way at this point that, you know, a new administration comes in in November and it can be reset? If Trump gets re-elected, can there be de-escalation? If so, how?

Kevin Rudd
Well the purpose of my writing the article in Foreign Affairs, which you referred to before, was to, in fact, talk about the real dangers we face in the next three months. That is, before the US presidential election. We all know that in the US right now, that tensions or, shall I say, political pressure on President Trump are acute. But what people are less familiar of within the West is the fact that in Chinese politics there is also pressure on Xi Jinping for a range of domestic and external reasons as well. So what I have simply said is: in this next three months, where we face genuine political pressure operating on both political leaders, if we do have an incident, that is an unplanned incident or collision in the air or at sea, we now have a tinderbox environment. Therefore, the plans which need to be put in place between the grown-ups in the US and Chinese militaries is to have a mechanism to rapidly de-escalate should a collision occur. I’m not sure that those plans currently exist.

Zain Asher
Let’s talk about tech because President Donald Trump, as you know, is forcing ByteDance, the company that owns TikTok, to sell its assets and no longer operate in the US. The premise is that there are national security fears and also this idea that TikTok is handing over user data from American citizens to the Chinese government. How real and concrete are those fears, or is this purely politically motivated? Are the fears justified, in other words?

Kevin Rudd
As far as TikTok is concerned, this is way beyond my paygrade in terms of analysing the technological capacities of a) the company and b) the ability of the Chinese security authorities to backdoor them. What I can say is this a deliberate decision on the part of the US administration to radically escalate the technology war. In the past, it was a war about Huawei and 5G. It then became an unfolding conflict over the question of the future access to semiconductors, computer chips. And now we have, as it were, the unfolding ban imposed by the administration on Chinese-sourced computer apps, including this one, for TikTok. So this is a throwing-down of the gauntlet by the US administration. What I believe we will see, however, is Chinese retaliation. I think they will find a corporate mechanism to retaliate, given the actions taken not just against ByteDance and TikTok, but of course against WeChat. And so the pattern of escalation that we were talking about earlier in technology, the economy, trade, investment, finance, and the hard stuff in national security continues to unfold, which is why we need sober heads to prevail in the months ahead.

The post CNN: South China Sea and the US-China Tech War appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: The Concatenator

In English, there's much debate over the "Oxford Comma": in a list of items, do you put a comma between the penultimate item and the "and" before the final one? For example: "The conference featured bad programmers, Remy and TheDailyWTF readers" versus "The conference featured bad programmers, Remy, and TheDailyWTF readers."

I'd like to introduce a subtly different one: "the concatenator's comma", or if we want to be generic "the concatenator's separator character", but that doesn't have the same ring to it. If you're planning to list items as a string, you might do something like this pseudocode:

for each item in items
    result.append(item + ", ")

This naive approach does pose a problem: we'll have an extra comma. So maybe you have to add logic to decide if you're on the first or last item, and insert (or fail to insert) commas as appropriate. Or maybe it isn't a problem: if we're generating JSON, for example, we can just leave the trailing commas. This isn't universally true, of course, but many formats will ignore extra separators. Edit: I was apparently hallucinating when I wrote this; one of the most annoying things about JSON is that you can't do this.
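The editor's note is right: standard JSON parsers reject trailing commas. A quick check (in Python here purely for illustration, since the article's snippet is pseudocode):

```python
import json

# Per RFC 8259, JSON forbids a comma after the last element.
try:
    json.loads('[1, 2, 3,]')
except json.JSONDecodeError as err:
    print("rejected:", err)

# Without the trailing comma, it parses fine.
print(json.loads('[1, 2, 3]'))
```

A lenient dialect like JSON5 will accept the trailing comma, but nothing that claims strict JSON compliance will.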

Like, for example, URL query strings, which don't require a "sub-delim" like "&" to have anything following it.

But fortunately for us, no matter what language we're using, there's almost certainly an API that makes it so that we don't have to do string concatenation anyway, so why even bring it up?

Well, because Mike has a co-worker that has read the docs well enough to know that PHP has a substr method, but not well enough to know it has an http_build_query method. Or even an implode method, which handles string concats for you. Instead, they wrote this:

$query = '';
foreach ($postdata as $var => $val) {
    $query .= $var .'='. $val .'&';
}
$query = substr($query, 0, -1);

This code exploits a little-observed feature of substr: a negative length reads back from the end. So this lops off that trailing "&", which is both unnecessary and one of the most annoying ways to do this.
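The idiomatic fixes the article names (PHP's http_build_query, or implode for plain joining) have direct analogues in most languages. A sketch of both approaches in Python, with hypothetical parameter values:

```python
from urllib.parse import urlencode

postdata = {"task": "getplan", "apikey": "abc123"}

# Let the standard library build (and percent-encode) the query string:
print(urlencode(postdata))

# Or, if you insist on concatenating, join() places the separators
# *between* items, so there's nothing to lop off afterward:
print("&".join(f"{k}={v}" for k, v in postdata.items()))
```

Either way, the separator bookkeeping disappears, and so does the negative-length substr trick.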

Maybe it's not enough to RTFM, as Mike puts it, maybe you need to "RTEFM": read the entire manual.



Worse Than FailureI'm Blue

Designers are used to getting vague direction from clients. "It should have more pop!" or "Can you make the blue more blue?" But Kevin was a contractor who worked on embedded software, so he didn't really expect to have to deal with that, even if he did have to deal with colors a fair bit.

Kevin was taking over a contract from another developer to build software for a colorimeter, a device to measure color. When companies, like paint companies, care about color, they tend to really care about color, and need to be able to accurately observe a real-world color. Once you start diving deep into color theory, you start having to think about things like observers, illuminants, tristimulus models, and "perceptual color spaces".

The operating principle of the device was fairly simple. It had a bright light, of a well known color temperature. It had a brightness sensor. It had a set of colored filter gels that would pass in front of the sensor. Place the colorimeter against an object, and the bright light would reflect off the surface, through each of the filters in turn and record the brightness. With a little computation, you can determine, with a high degree of precision, what color something is.
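The core computation is just a ratio per filter channel. A minimal sketch of that principle, with all numbers and channel names hypothetical (the real device would then map these reflectance ratios through tristimulus math):

```python
def reflectance(sample_readings, white_readings):
    """Per-filter reflectance: sample brightness over reference brightness."""
    return {f: sample_readings[f] / white_readings[f] for f in sample_readings}

# Brightness through red/green/blue filter gels, in arbitrary sensor units.
white  = {"r": 980.0, "g": 1000.0, "b": 950.0}   # calibration against a white tile
sample = {"r": 245.0, "g": 500.0, "b": 855.0}    # object under test

# Each channel: e.g. r = 245 / 980 = 0.25 of the reference brightness.
print(reflectance(sample, white))
```

Because every reading is normalized against the known reference, drift in the lamp or sensor cancels out, which is exactly why any remaining fudge factor in such code demands documentation.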

Now, this is a scientific instrument, and that means that the code which runs it, even though it's proprietary, needs to be vetted by scientists. The device needs to be tested against known samples. Deviations need to be corrected for, and then carefully justified. There should be no "magic numbers" in the code that aren't well documented and explained. If, for example, the company gets its filter gels from a new vendor and they filter slightly different frequencies, the commit needs to link to the datasheets for those gels to explain the change. Similarly, if a sensor has a frequency response that means that the samples may be biased, you commit that with a link to the datasheet showing that to be the case.

Which is why Kevin was a little surprised by the commit by his predecessor. The message read: "Nathan wants the blue 'more blue'? Fine. the blue is more blue." Nathan was the product owner.

The corresponding change was a line which read:

blue += 20;

Well, Nathan got what he wanted. It's a good thing he didn't ask for it to "pop" though.



LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few minutes and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin a layer surface life occupies. Just as with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal’s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.


CryptogramFriday Squid Blogging: New SQUID

There's a new SQUID:

A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.

"In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors," said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. "We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin RuddThe Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap

Published in the Guardian on 7 August 2020

Throughout our country’s modern history, the treatment of our Aboriginal and Torres Strait Islander brothers and sisters has been appalling. It has also been inconsistent with the original instructions from the British Admiralty to treat the Indigenous peoples of this land with proper care and respect. From first encounter to the frontier wars, the stolen generations and ongoing institutionalised racism, First Nations people have been handed a raw deal. The gaps between Indigenous and non-Indigenous Australians’ outcomes in areas of education, employment, health, housing and justice are a product of historical, intergenerational maltreatment.

In 2008, I apologised to the stolen generations and Indigenous Australians for the racist laws and policies of successive Australian governments. The apology may have been 200 years late, but it was an important part of the reconciliation process.

But the apology meant nothing if it wasn’t backed by action. For this reason, my government acted on Aboriginal and Torres Strait Islander social justice commissioner Tom Calma’s call to Close the Gap. We worked hard to push this framework through the Council of Australian governments so that all states and territories were on board with the strategy. We also funded it, with $4.6bn committed to achieve each of the six targets we set. While the targets and funding were critical to any improvements in the lives of Indigenous Australians, we suspected the Coalition would scrap our programs once they returned to government. After all, only a few years earlier, John Howard’s Indigenous affairs minister was denying the very existence of the stolen generations. Howard himself had refused to deliver an apology for a decade. And then both he and Peter Dutton decided to boycott the official apology in 2008.

To ensure that the Closing the Gap strategy would not be abandoned, we made it mandatory for the prime minister to stand before the House of Representatives each year and account for the success and failures in reaching the targets that were set.

Had we not adopted the Closing the Gap framework, would we now be on target to have 95% of Indigenous four-year-olds enrolled in early childhood education? I think not. Would we have halved the gap for young Indigenous adults to have completed year 12 by 2020? I think not. And would we see progress on closing the gap in child mortality, and literacy and numeracy skills? No, I think not.

Despite these achievements, the most recent Closing the Gap report nonetheless showed Australia was not on track to meet four of the deadlines we’d originally set. A major reason for this is that federal funding for the closing the gap strategy collapsed under Tony Abbott, the great wrecking-ball of Australian politics, whose government cut $534.4m from programs dedicated to improving the lives of Indigenous Australians. And it’s never been restored by Abbott’s successors. It’s all there in the budget papers.

Whatever targets are put in place, governments must commit to physical resourcing of Closing the Gap. They are not going to be delivered by magic.

On Thursday last week, the new national agreement on Closing the Gap was announced. I applaud Pat Turner and other Indigenous leaders who will now sit with the leaders of the commonwealth, states, territories and local government to devise plans to achieve the new targets they have negotiated.

Scott Morrison, however, sought to discredit our government’s targets, rather than coming clean about the half-billion-dollar funding cuts that had made it impossible to achieve these targets under any circumstances. His argument that the original targets were conjured out of thin air by my government is demonstrably untrue. The truth is, Jenny Macklin, the responsible minister, spoke widely with Indigenous leaders to prioritise the areas that needed to be urgently addressed in the original Closing the Gap targets. Furthermore, if Morrison is now truly awakened to the intrinsic value of listening to Indigenous Australians, I look forward to him enshrining an Indigenous voice to parliament in the Constitution, given this is the universal position of all Indigenous groups.

Yet amid the welter of news coverage of the new closing the gap agreement, the central question remains: who will be paying the bill? While shared responsibility to close the gap between all levels of government and Indigenous organisations might sound like good news, this will quickly unravel into a political blame game if the commonwealth continues to shirk its financial duty.

The announcement this week that the commonwealth would allocate $45m over four years is just a very bad joke. This is barely 10% of what the Liberals cut from our national Closing the Gap strategy. And barely 1% of our total $4.5bn national program to meet our targets agreed to with the states and territories in 2009.

The Liberals want you to believe they care about racial injustice. But they don’t believe there are any votes in it. This is well understood by Scotty From Marketing, a former state director of the Liberal party, who lives and breathes polling and focus groups. That’s why they are not even pretending to fund the realisation of the new more “realistic” targets they have so loudly proclaimed.

The post The Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap appeared first on Kevin Rudd.

Worse Than FailureError'd: All Natural Errors

"I'm glad the asdf is vegan. I'm really thinking of going for the asasdfsadf, though. With a name like that, you know it's got to be 2 1/2 times as good for you," writes VJ.


Phil G. wrote, "Get games twice as fast with Epic's new multidimensional downloads!"


"DOES!" Zed writes.


John M. wrote, "I appreciate the helpful suggestion, but I think I'll take a pass."


"java.lang.IllegalStateException...must be one of those edgy indie games! I just hope it's not actually illegal," writes Matthijs.


"For added flavor, I received this reminder two hours after I'd completed my checkout and purchased that very same item_name," Aaron K. writes.




Krebs on SecurityHacked Data Broker Accounts Fueled Phony COVID Loans, Unemployment Claims

A group of thieves thought to be responsible for collecting millions in fraudulent small business loans and unemployment insurance benefits from COVID-19 economic relief efforts gathered personal data on people and businesses they were impersonating by leveraging several compromised accounts at a little-known U.S. consumer data broker, KrebsOnSecurity has learned.

In June, KrebsOnSecurity was contacted by a cybersecurity researcher who discovered that a group of scammers was sharing highly detailed personal and financial records on Americans via a free web-based email service that allows anyone who knows an account’s username to view all email sent to that account — without the need of a password.

The source, who asked not to be identified in this story, said he’s been monitoring the group’s communications for several weeks and sharing the information with state and federal authorities in a bid to disrupt their fraudulent activity.

The source said the group appears to consist of several hundred individuals who collectively have stolen tens of millions of dollars from U.S. state and federal treasuries via phony loan applications with the U.S. Small Business Administration (SBA) and through fraudulent unemployment insurance claims made against several states.

KrebsOnSecurity reviewed dozens of emails the fraud group exchanged, and noticed that a great many consumer records they shared carried a notation indicating they were cut and pasted from the output of queries made at Interactive Data LLC, a Florida-based data analytics company.

Interactive Data, also known as IDI, markets access to a “massive data repository” on U.S. consumers to a range of clients, including law enforcement officials, debt recovery professionals, and anti-fraud and compliance personnel at a variety of organizations.

The consumer dossiers obtained from IDI and shared by the fraudsters include a staggering amount of sensitive data, including:

-full Social Security number and date of birth;
-current and all known previous physical addresses;
-all known current and past mobile and home phone numbers;
-the names of any relatives and known associates;
-all known associated email addresses;
-IP addresses and dates tied to the consumer’s online activities;
-vehicle registration and property ownership information;
-available lines of credit and amounts, and dates they were opened;
-bankruptcies, liens, judgments, foreclosures and business affiliations.

Reached via phone, IDI Holdings CEO Derek Dubner acknowledged that a review of the consumer records sampled from the fraud group’s shared communications indicates “a handful” of authorized IDI customer accounts had been compromised.

“We identified a handful of legitimate businesses who are customers that may have experienced a breach,” Dubner said.

Dubner said all customers are required to use multi-factor authentication, and that everyone applying for access to its services undergoes a rigorous vetting process.

“We absolutely credential businesses and have several ways do that and exceed the gold standard, which is following some of the credit bureau guidelines,” he said. “We validate the identity of those applying [for access], check with the applicant’s state licensor and individual licenses.”

Citing an ongoing law enforcement investigation into the matter, Dubner declined to say if the company knew for how long the handful of customer accounts were compromised, or how many consumer records were looked up via those stolen accounts.

“We are communicating with law enforcement about it,” he said. “There isn’t much more I can share because we don’t want to impede the investigation.”

The source told KrebsOnSecurity he’s identified more than 2,000 people whose SSNs, DoBs and other data were used by the fraud gang to file for unemployment insurance benefits and SBA loans, and that a single payday can land the thieves $20,000 or more. In addition, he said, it seems clear that the fraudsters are recycling stolen identities to file phony unemployment insurance claims in multiple states.


Hacked or ill-gotten accounts at consumer data brokers have fueled ID theft and identity theft services of various sorts for years. In 2013, KrebsOnSecurity broke the news that the U.S. Secret Service had arrested a 24-year-old man named Hieu Minh Ngo for running an identity theft service out of his home in Vietnam.

Ngo’s service, variously named superget[.]info and findget[.]me, gave customers access to personal and financial data on more than 200 million Americans. He gained that access by posing as a private investigator to a data broker subsidiary acquired by Experian, one of the three major credit bureaus in the United States.

Ngo’s ID theft service

Experian was hauled before Congress to account for the lapse, and assured lawmakers there was no evidence that consumers had been harmed by Ngo’s access. But as follow-up reporting showed, Ngo’s service was frequented by ID thieves who specialized in filing fraudulent tax refund requests with the Internal Revenue Service, and was relied upon heavily by an identity theft ring operating in the New York-New Jersey region.

Also in 2013, KrebsOnSecurity broke the news that ssndob[.]ms, then a major identity theft service in the cybercrime underground, had infiltrated computers at some of America’s large consumer and business data aggregators, including LexisNexis Inc., Dun & Bradstreet, and Kroll Background America Inc.

The now defunct SSNDOB identity theft service.

In 2006, The Washington Post reported that a group of five men used stolen or illegally created accounts at LexisNexis subsidiaries to look up SSNs and other personal information on more than 310,000 individuals. And in 2004, it emerged that identity thieves masquerading as customers of data broker Choicepoint had stolen the personal and financial records of more than 145,000 Americans.

Those compromises were noteworthy because the consumer information warehoused by these data brokers can be used to find the answers to so-called knowledge-based authentication (KBA) questions used by companies seeking to validate the financial history of people applying for new lines of credit.

In that sense, thieves involved in ID theft may be better off targeting data brokers like IDI and their customers than the major credit bureaus, said Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley.

“This means you have access not only to the consumer’s SSN and other static information, but everything you need for knowledge-based authentication because these are the types of companies that are providing KBA data.”

The fraud group communications reviewed by this author suggest they are cashing out primarily through financial instruments like prepaid cards and a small number of online-only banks that allow consumers to establish accounts and move money just by providing a name and associated date of birth and SSN.

While most of these instruments place daily or monthly limits on the amount of money users can deposit into and withdraw from the accounts, some of the more popular instruments for ID thieves appear to be those that allow spending, sending or withdrawal of between $5,000 and $7,000 per transaction, with high limits on the overall number or dollar value of transactions allowed in a given time period.

KrebsOnSecurity is investigating the extent to which a small number of these financial instruments may be massively over-represented in the incidence of unemployment insurance benefit fraud at the state level, and in SBA loan fraud at the federal level. Anyone in the financial sector or state agencies with information about these apparent trends may confidentially contact this author at krebsonsecurity @ gmail dot com, or via the encrypted message service Wickr at “krebswickr“.

The looting of state unemployment insurance programs by identity thieves has been well documented of late, but far less public attention has centered on fraud targeting Economic Injury Disaster Loan (EIDL) and advance grant programs run by the U.S. Small Business Administration in response to the COVID-19 crisis.

Late last month, the SBA Office of Inspector General (OIG) released a scathing report (PDF) saying it has been inundated with complaints from financial institutions reporting suspected fraudulent EIDL transactions, and that it has so far identified $250 million in loans given to “potentially ineligible recipients.” The OIG said many of the complaints were about credit inquiries for individuals who had never applied for an economic injury loan or grant.

The figures released by the SBA OIG suggest the financial impact of the fraud may be severely under-reported at the moment. For example, the OIG said nearly 3,800 of the 5,000 complaints it received came from just six financial institutions (out of several thousand across the United States). One credit union reportedly told the U.S. Justice Department that 59 out of 60 SBA deposits it received appeared to be fraudulent.

LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-rising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.
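The explore–exploit trade-off Gopnik references has a standard computational toy model: a "bandit" learner that tries random options with some probability and otherwise exploits its best-known one. As a purely illustrative sketch (the decay schedule and names here are my assumptions, not the paper's model), an agent whose exploration rate decays with age samples broadly in "childhood" and settles into exploitation in "adulthood":

```java
import java.util.Random;

public class ExploreExploit {
    // Probability of exploring (trying a random option) at a given age:
    // starts near 1 ("childhood") and decays toward a small floor ("adulthood").
    public static double explorationRate(int age, int maxAge) {
        double floor = 0.05;
        return floor + (1.0 - floor) * (1.0 - (double) age / maxAge);
    }

    // Pick an arm: explore uniformly at random with probability eps,
    // otherwise exploit the arm with the highest estimated value.
    public static int chooseArm(double[] estimates, double eps, Random rng) {
        if (rng.nextDouble() < eps) {
            return rng.nextInt(estimates.length);
        }
        int best = 0;
        for (int i = 1; i < estimates.length; i++) {
            if (estimates[i] > estimates[best]) best = i;
        }
        return best;
    }
}
```

In this toy, an agent at age 0 explores on essentially every trial, while one at maxAge explores only 5% of the time: a broad early hypothesis search giving way to goal-directed exploitation.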

Worse Than FailureCodeSOD: A Slow Moving Stream

We’ve talked about Java’s streams in the past. It’s hardly a “new” feature at this point, but its blend of “being really useful” and “based on functional programming techniques” and “different than other APIs” means that we still have developers struggling to figure out how to use it.

Jeff H has a co-worker, Clarence, who is very “anti-stream”. “It creates too many copies of our objects, so it’s terrible for memory, and it’s so much slower. Don’t use streams unless you absolutely have to!” So in many a code review, Jeff submits some very simple, easy to read, and fast-performing bit of stream code, and Clarence objects. “It’s slow. It wastes memory.”

Sometimes, another team member goes to bat for Jeff’s code. Sometimes they don’t. But then, in a recent review, Clarence submitted his own bit of stream code:

schedules.stream().forEach(schedule -> visitors.stream().forEach(scheduleVisitor -> {
    scheduleVisitor.visitSchedule(schedule);

    if (schedule.getDays() != null && !schedule.getDays().isEmpty()) {
        schedule.getDays().stream().forEach(day -> visitors.stream().forEach(dayVisitor -> {
            dayVisitor.visitDay(schedule, day);

            if (day.getSlots() != null && !day.getSlots().isEmpty()) {
                day.getSlots().stream().forEach(slot -> visitors.stream().forEach(slotVisitor -> {
                    slotVisitor.visitSlot(schedule, day, slot);
                }));
            }
        }));
    }
}));

That is six nested “for each” operations, and they’re structured so that we iterate across the same list multiple times. For each schedule, we look at each visitor on that schedule, then we look at each day for that schedule, and then we look at every visitor again, then we look at each day’s slots, and then we look at each visitor again.

Well, if nothing else, we understand why Clarence thinks the Java Streams API is slow. This code did not pass code review.
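To see how the redundant re-iteration multiplies work, here is a minimal model (hypothetical, with strings standing in for visitors and days) that just counts inner calls. Re-running the visitor loop inside itself at the day level produces |visitors|² × |days| calls, where pairing each day with each visitor once needs only |visitors| × |days|:

```java
import java.util.List;

public class NestedStreams {
    // Mirrors the nested shape: the visitors loop wraps the days loop, which
    // wraps the visitors loop again -> |visitors|^2 * |days| inner calls.
    public static int nestedVisitCount(List<String> visitors, List<String> days) {
        int[] calls = {0};
        visitors.stream().forEach(v ->
            days.stream().forEach(day ->
                visitors.stream().forEach(dv -> calls[0]++)));
        return calls[0];
    }

    // The flat alternative: visit each (day, visitor) pair exactly once.
    public static int flatVisitCount(List<String> visitors, List<String> days) {
        int[] calls = {0};
        days.stream().forEach(day ->
            visitors.stream().forEach(v -> calls[0]++));
        return calls[0];
    }
}
```

With three visitors and four days, the nested shape fires 36 times where the flat one fires 12 — and the blowup is in the loop structure, not in the Streams API.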



Chaotic IdealismTo a Newly Diagnosed Autistic Teenager

I was recently asked by a 14-year-old who had just been diagnosed autistic what advice I had to give. This is what I said.

The thing that helped me most was understanding myself and talking to other autistic people, so you’re already well on that road.

The more you learn about yourself, the more you learn about how you *learn*… meaning that you can become better at teaching yourself to communicate with neurotypicals.

Remember though: The goal is to communicate. Blending in is secondary, or even irrelevant, depending on your priorities. If you can get your ideas from your brain to theirs, and understand what they’re saying, and live in the world peacefully without hurting anyone and without putting yourself in danger, then it does not matter how different you are or how differently you do things.

Autistic is not better and not worse than neurotypical; it’s simply different. Having a disability is a normal part of human life; it’s nothing to be proud of and nothing to be ashamed of. Disability doesn’t stop you from being talented or from becoming unusually skilled, especially with practice. Being different means that you see things from a different perspective, which means that as you grow and gain experience you will be able to provide solutions to problems that other people simply don’t see, to contribute skills that most people don’t have.

Learn to advocate for yourself. If you have an IEP, go to the meetings and ask questions about what help is available and what problems you have. When you are mistreated, go to someone you trust and ask for help; and if you can’t get help, protect yourself as best you can. Learn to stand up for yourself, to keep other people from taking advantage of you. Also learn to help other people stay safe.

Your best social connections now will be anyone who treats you with kindness. You can tell whether someone is kind by observing how they treat those they have power over when nobody, or nobody with much influence, is watching. You want people who are honest, or who only lie when they are trying to protect others’ feelings. Talk to these people; explain that you are not very good with social things and that you sometimes embarrass yourself or accidentally insult people, and that you would like them to tell you when you are doing something clumsy, offensive, confusing, or cringeworthy. Explain to these people that you would prefer to know about mistakes you are making, because if you are not told you will never be able to correct those mistakes.

Learn to apologize, and learn that an apology simply means, “I recognize I have made a mistake and shall work to correct it in the future.” An apology is not a sign of failure or an admission of inferiority. Sometimes an apology can even mean, “I have made a mistake that I could not control; if I had been able to control it, I would not have made the mistake.” Therefore, it is okay to apologize if you have simply made an honest mistake. The best apology includes an explanation of how you will fix your mistake or what you will change to keep it from happening in the future.

Learn not to apologize when you have done nothing wrong. Do not apologize for being different, for standing up for yourself or for other people, or for having an opinion others disagree with. You do not need to justify your existence. You should never give in to the pressure to say, “I am autistic, but that’s okay because I have this skill and that talent.” The correct statement is, “I am autistic, and that is okay.” You don’t need to do anything to be valuable. You just need to be human.

If someone uses you to fulfill their own desires but doesn’t give things back in return; if someone doesn’t care about your needs when you tell them; if someone can tell you are hurt and doesn’t care; then that is a person you cannot trust.

In general, you can expect your teen years to be harder than your young-adult years. As you grow and gain experience, you’ll gain skills and you’ll gather a library of techniques to help you navigate the social and sensory world, to help you deal with your emotions and with your relationships. You will never be perfect–but then, nobody is. What you’re aiming for is useful, functional skills, in whatever form they take, whether they are the typical way of doing things or not. As the saying goes: If it looks stupid but it works, it isn’t stupid.

Keep trying. Take good care of yourself. When you are tired, rest. Learn to push yourself to your limits, but not beyond; and learn where those limits are. When you are tired from something that would not tire a neurotypical, be unashamed about your need for down time. Learn to say “no” when you don’t want something, and learn to say “yes” when you want something but you are a little bit intimidated by it because it is new or complicated or unpredictable. Learn to accept failure and learn from it. Help others. Make your world better. Make your own way. Grow. Live.

You’ll be okay.

Krebs on SecurityPorn Clip Disrupts Virtual Court Hearing for Alleged Twitter Hacker

Perhaps fittingly, a Web-streamed court hearing for the 17-year-old alleged mastermind of the July 15 mass hack against Twitter was cut short this morning after mischief makers injected a pornographic video clip into the proceeding.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

The incident occurred at a bond hearing held via the videoconferencing service Zoom by the Hillsborough County, Fla. criminal court in the case of Graham Clark. The 17-year-old from Tampa was arrested earlier this month on suspicion of social engineering his way into Twitter’s internal computer systems and tweeting out a bitcoin scam through the accounts of high-profile Twitter users.

Notice of the hearing was available via public records filed with the Florida state attorney’s office. The notice specified the Zoom meeting time and ID number, essentially allowing anyone to participate in the proceeding.

Even before the hearing officially began it was clear that the event would likely be “zoom bombed.” That’s because while participants were muted by default, they were free to unmute their microphones and transmit their own video streams to the channel.

Sure enough, less than a minute had passed before one attendee not party to the case interrupted a discussion between Clark’s attorney and the judge by streaming a live video of himself adjusting his face mask. Just a few minutes later, someone began interjecting loud music.

It became clear that presiding Judge Christopher C. Nash was personally in charge of administering the video hearing when, after roughly 15 seconds’ worth of random chatter interrupted the prosecution’s response, Nash told participants he was removing the troublemakers as quickly as he could.

Judge Nash, visibly annoyed immediately after one of the many disruptions to today’s hearing.

What transpired a minute later was almost inevitable given the permissive settings of this particular Zoom conference call: Someone streamed a graphic video clip from Pornhub for approximately 15 seconds before Judge Nash abruptly terminated the broadcast.

With the ongoing pestilence that is the COVID-19 pandemic, the nation’s state and federal courts have largely been forced to conduct proceedings remotely via videoconferencing services. While Zoom and others do offer settings that can prevent participants from injecting their own audio and video into the stream unless invited to do so, those settings evidently were not enabled in today’s meeting.

At issue before the court today was a defense motion to modify the amount of the defendant’s bond, which has been set at $750,000. The prosecution had argued that Clark should be required to show that any funds used toward securing that bond were gained lawfully, and were not merely the proceeds from his alleged participation in the Twitter bitcoin scam or some other form of cybercrime.

Florida State Attorney Andrew Warren’s reaction as a Pornhub clip began streaming to everyone in today’s Zoom proceeding.

Mr. Clark’s attorneys disagreed, and spent most of the uninterrupted time in today’s hearing explaining why their client could safely be released under a much smaller bond and close supervision restrictions.

On Sunday, The New York Times published an in-depth look into Clark’s wayward path from a small-time cheater and hustler in online games like Minecraft to big-boy schemes involving SIM swapping, a form of fraud that involves social engineering employees at mobile phone companies to gain control over a target’s phone number and any financial, email and social media accounts associated with that number.

According to The Times, Clark was suspected of being involved in a 2019 SIM swapping incident which led to the theft of 164 bitcoins from Gregg Bennett, a tech investor in the Seattle area. That theft would have been worth around $856,000 at the time; these days 164 bitcoins is worth approximately $1.8 million.

The Times said that soon after the theft, Bennett received an extortion note signed by Scrim, one of the hacker handles alleged to have been used by Clark. From that story:

“We just want the remainder of the funds in the Bittrex,” Scrim wrote, referring to the Bitcoin exchange from which the coins had been taken. “We are always one step ahead and this is your easiest option.”

In April, the Secret Service seized 100 Bitcoins from Mr. Clark, according to government forfeiture documents. A few weeks later, Mr. Bennett received a letter from the Secret Service saying they had recovered 100 of his Bitcoins, citing the same code that was assigned to the coins seized from Mr. Clark.

Florida prosecutor Darrell Dirks was in the middle of explaining to the judge that investigators are still in the process of discovering the extent of Clark’s alleged illegal hacking activities since the Secret Service returned the 100 bitcoin when the porn clip was injected into the Zoom conference.

Ultimately, Judge Nash decided to keep the bond amount as is, but to remove the condition that Clark prove the source of the funds.

Clark has been charged with 30 felony counts and is being tried as an adult. Federal prosecutors also have charged two other young men suspected of playing roles in the Twitter hack, including a 22-year-old from Orlando, Fla. and a 19-year-old from the United Kingdom.

Kevin RuddABC RN: South China Sea

5 AUGUST 2020

Fran Kelly
Prime Minister Scott Morrison today will warn of the unprecedented militarization of the Indo-Pacific which he says has become the epicentre of strategic competition between the US and China. In his virtual address to the Aspen Security Forum in the United States, Scott Morrison will also condemn the rising frequency of cyber attacks and the new threats democratic nations are facing from foreign interference. This speech coincides with a grim warning from former prime minister Kevin Rudd that the threat of armed conflict in the region is especially high in the run-up to the US presidential election in November. Kevin Rudd, welcome back to breakfast.

Kevin Rudd
Thanks for having me on the program, Fran.

Fran Kelly
Kevin Rudd, you’ve written in the Foreign Affairs journal that the US-China tensions could lead to, quote, a hot war not just a cold one. That conflict, you say, is no longer unthinkable. It’s a fairly alarming assessment. Just how likely do you rate a confrontation in the Indo-Pacific over the coming three or four months?

Kevin Rudd
Well, Fran, I think it’s important to narrow our geographical scope here. Prime Minister Morrison is talking about a much wider theatre. My comments in Foreign Affairs are about crisis scenarios emerging over what will happen, or could happen, in Hong Kong over the next three months leading up to the presidential election. And I think things in Hong Kong are more likely to get worse than better. There’s also what’s happening in relation to the Taiwan Straits, where things have become much sharper than before in terms of actions on both sides, that’s the Chinese and the United States. But the thrust of my article is that the real problem area in terms of crisis management, crisis escalation, etc, lies in the South China Sea. And what I simply try to pull together is the fact that we now have a much greater concentration of military hardware, ships at sea, aircraft flying reconnaissance missions, together with changes in deployments of Chinese fighters and bombers now into the actual Paracel Islands themselves in the north part of the South China Sea, together with changes in the declaratory postures of both sides. So what I do in this article is pull these threads together and say to both sides: be careful what you wish for; you’re playing with fire.

Fran Kelly
And when you talk about a heightened risk of armed conflict, are you talking about it being confined to a flare-up in one very specific location like the South China Sea?

Kevin Rudd
What I try to do is to go to where could a crisis actually emerge?

If you go across the whole spectrum of conflicts, at the moment between China and the United States on a whole range of policies, all roads tend to lead back to the South China Sea because it’s effectively a ruleless environment at the moment. We have contested views of both territorial and maritime sovereignty. And that’s where my concern, Fran, is that we have a crisis, which comes about through a collision at sea, a collision in the air, and given the nationalist politics now in Washington because of the presidential election, but also the nationalist politics in China, as its own leadership go to their annual August retreat, Beidaihe, that it’s a very bad admixture which could result in a crisis for allies like Australia, which have treaty obligations with the United States through the ANZUS treaty. This is a deeply concerning set of developments because if the crisis erupts, what then does the Australian government do?

Fran Kelly
Well, what does it do, in your view, as a former Prime Minister? You know Australia tries to walk a very fine line between Washington and Beijing. That’s proved very difficult lately, but we are in the ANZUS alliance. Would we need to get involved militarily?

Kevin Rudd
Let me put it in these terms: Australia, like other countries dealing with China’s greater regional and international assertiveness, has had to adjust its strategy. We can have a separate debate, Fran, about what that strategy should be across the board in terms of the economy, technology, Huawei and the rest. But what I’ve sought to do in this article is go specifically to the possibility of a national security crisis. Now, if I were Prime Minister Morrison, what I’d be doing in the current circumstances is taking out the fire hose to those in Washington and, to the extent that you can, to those in Beijing, and simply making it as plain as possible through private diplomacy and public statements that the time has come for de-escalation, because the obligations under the treaty, Fran, to go to your direct question, are relatively clear. What it says in one of the operational clauses of the ANZUS treaty of 1951 is that if the armed forces of either of the contracting parties, namely Australia or the United States, come under attack in the Pacific area, then the allies shall meet and consult to meet the common danger. That, therefore, puts us as an Australian ally directly into this frame. Hence my call for people to be very careful about the months which lie ahead.

Fran Kelly
In terms of ‘the time has come for de-escalation’, did we see signs that that message was being given very clearly by the Foreign Minister and the Defense Minister when they were in Washington last week? Marise Payne didn’t buy into Secretary of State Mike Pompeo’s very elevated rhetoric aimed at China, kept a distance there. And is it your view that this danger period will be over come the first Tuesday in November, the presidential election?

Kevin Rudd
I think when we’re looking at the danger of genuine armed conflict between China and the United States, that is now with us for a long period of time, whoever wins in November, including under the Democrats. What I’m more concerned about, however, is that President Trump is in desperate domestic political circumstances at present in Washington, and that there will be a temptation to continue to elevate. And also domestic politics are playing their role in China itself, where Xi Jinping is under real pressure because of the state of the Chinese economy, because of COVID and a range of other factors. On Australia, you asked directly about what Marise Payne was doing in Washington. I think finally the penny dropped with Prime Minister Morrison and Foreign Minister Payne that the US presidential election campaign strategy was beginning to directly influence the content of rational national security policy. I think wisely they decided to step back slightly from that.

Fran Kelly
Former Prime Minister Kevin Rudd is our guest. Kevin Rudd, this morning Scott Morrison, the Prime Minister, is addressing the US Aspen Security Forum. He’s also talking about rising tensions in the Indo-Pacific. He’s pledged that Australia won’t be a bystander, quote, who will leave it to others in the region. He wants other like-minded democracies of the region to step up and act in some kind of alliance. Is that the best way to counter Beijing’s rising aggression and assertiveness?

Kevin Rudd
Well, Prime Minister Morrison seems to like making speeches but I’ve yet to see evidence of a coherent Australian national China strategy in terms of what the government is operationally doing as opposed to what it continues to talk about. So my concern on his specific proposal is: what are you doing, Mr Morrison? The talk of an alliance I think is misplaced. The talk of, shall we say, a common policy approach to the challenges which China now represents, that is an entirely appropriate course of action and something which we sought to do during our own period in government, but it’s a piece of advice which Morrison didn’t bother listening to himself when he unilaterally went out and called for an independent global investigation into the origins of COVID-19. Far wiser, if Morrison had taken his own counsel and brought together a coalition of the policy willing first and said: do we have a group of 10 robust states standing behind this proposal? And the reason for that, Fran, is that makes it much harder then for Beijing to unilaterally pick off individual countries.

Fran Kelly
Kevin Rudd, thank you very much for joining us on Breakfast.

Kevin Rudd
Good to be with you, Fran.

Fran Kelly
Former Prime Minister Kevin Rudd. He’s president of the Asia Society Policy Institute in New York, and the article that he’s just penned is in the Foreign Affairs journal.

The post ABC RN: South China Sea appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Private Code Review

Jessica has worked with some cunning developers in the past. To help cope with some of that “cunning”, they’ve recently gone out searching for new developers.

Now, there were some problems with their job description and salary offer: specifically, they were asking for developers who do too much and get paid too little. Which is how Jessica started working with Blair. Jessica was hoping to staff up her team with some mid-level or junior developers with a background in web development. Instead, she got Blair, a 13+ year veteran who had just started doing web development in the past six months.

Now, veteran or not, there is a code review process, so everything Blair does goes through code review. And that catches some… annoying habits, but every once in a while, something might sneak through. For example, he thinks static is a code smell, and thus removes the keyword any time he sees it. He’ll rewrite most of the code to work around it, except once the method was called from a cshtml template file, so no one discovered that it didn’t work until someone reported the error.

Blair also laments that with all the JavaScript and loosely typed languages, kids these days don’t understand the importance of separation of concerns and putting a barrier between interface and implementation. To prove his point, he submitted his MessageBL class. BL, of course, is to remind you that this class is “business logic”, which is easy to forget because it’s in an assembly called theappname.businesslogic.

Within that class, he implemented a bunch of data access methods, and this pair of methods lays out the pattern he followed.

public async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingModelAndUpdate(int id, Msg msg)
{
    return await GetLinkAndContentTrackingAndUpdate(id, msg);
}

/// <summary>
/// LinkTrackingUpdateLinks
/// returns: HasAnalyticsConfig, LinkTracks, ContentTracks
/// </summary>
/// <param name="id"></param>
/// <param name="msg"></param>
private async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingAndUpdate(int id, Msg msg)

Here, we have one public method, and one private method. Their names, as you can see, are very similar. The public method does nothing but invoke the private method. This public method is, in fact, the only place the private method is invoked. The public method, in turn, is called only twice, from one controller.

This method also doesn’t ever need to be called, because the same block of code which constructs this object also fetches the relevant model objects. So instead of going back to the database with this thing, we could just use the already fetched objects.

But the real magic here is that Blair was veteran enough to know that he should put some “thorough” documentation using Visual Studio’s XML comment features. But he put the comments on the private method.

Jessica was not the one who reviewed this code, but adds:

I won’t blame the code reviewer for letting this through. There’s only so many times you can reject a peer review before you start questioning yourself. And sometimes, because Blair has been here so long, he checks code in without peer review as it’s a purely manual process.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


LongNowTraditional Ecological Knowledge

Archaeologist Stefani Crabtree writes about her work to reconstruct Indigenous food and use networks for the National Park Service:

Traditional Ecological Knowledge gets embedded in the choices that people make when they consume, and how TEK can provide stability of an ecosystem. Among Martu, the use of fire for hunting and the knowledge of the habits of animals are enshrined in the Dreamtime stories passed inter-generationally; these Dreamtime stories have material effects on the food web, which were detected in our simulations. The ecosystem thrived with Martu; it was only through their removal that extinctions began to cascade through the system.

Kevin RuddForeign Affairs: Beware the Guns of August — in Asia

U.S. Navy photo by Mass Communication Specialist 2nd Class Taylor DiMartino

Published in Foreign Affairs on August 3, 2020.

In just a few short months, the U.S.-Chinese relationship seems to have returned to an earlier, more primal age. In China, Mao Zedong is once again celebrated for having boldly gone to war against the Americans in Korea, fighting them to a truce. In the United States, Richard Nixon is denounced for creating a global Frankenstein by introducing Communist China to the wider world. It is as if the previous half century of U.S.-Chinese relations never happened.

The saber rattling from both Beijing and Washington has become strident, uncompromising, and seemingly unending. The relationship lurches from crisis to crisis—from the closures of consulates to the most recent feats of Chinese “wolf warrior” diplomacy to calls by U.S. officials for the overthrow of the Chinese Communist Party (CCP). The speed and intensity of it all has desensitized even seasoned observers to the scale and significance of change in the high politics of the U.S.-Chinese relationship. Unmoored from the strategic assumptions of the previous 50 years but without the anchor of any mutually agreed framework to replace them, the world now finds itself at the most dangerous moment in the relationship since the Taiwan Strait crises of the 1950s.

The question now being asked, quietly but nervously, in capitals around the world is, where will this end? The once unthinkable outcome—actual armed conflict between the United States and China—now appears possible for the first time since the end of the Korean War. In other words, we are confronting the prospect of not just a new Cold War, but a hot one as well.

Click here to read the rest of the article at Foreign Affairs.

The post Foreign Affairs: Beware the Guns of August — in Asia appeared first on Kevin Rudd.

Worse Than FailureA Massive Leak

"Memory leaks are impossible in a garbage collected language!" is one of my favorite lies. It feels true, but it isn't. Sure, it's much harder to make them, and they're usually much easier to track down, but you can still create a memory leak. Most times, it's when you create objects, dump them into a data structure, and never empty that data structure. Usually, it's just a matter of finding out what object references are still being held. Usually.

A few months ago, I discovered a new variation on that theme. I was working on a C# application that was leaking memory faster than bad waterway engineering in the Imperial Valley.

A large, glowing, computer-controlled chandelier

I don't exactly work in the "enterprise" space anymore, though I still interact with corporate IT departments and get to see some serious internal WTFs. This is a chandelier we built for the Allegheny Health Network's Cancer Institute which recently opened in Pittsburgh. It's 15 meters tall, weighs about 450kg, and is broken up into 30 segments, each with hundreds of addressable LEDs in a grid. The software we were writing was built to make them blink pretty.

Each of those 30 segments is home to a single-board computer with their GPIO pins wired up to addressable LEDs. Each computer runs a UDP listener, and we blast them with packets containing RGB data, which they dump to the LEDs using a heavily tweaked version of LEDScape.

This is our standard approach to most of our lighting installations. We drop a Beaglebone onto a custom circuit board and let it drive the LEDs, then we have a render-box someplace which generates frame data and chops it up into UDP packets. Depending on the environment, we can drive anything from 30-120 frames per second this way (and probably faster, but that's rarely useful).

Apologies to the networking folks, but this works very well. Yes, we're blasting many megabytes of raw bitmap data across the network, but we're usually on our own dedicated network segment. We use UDP because, well, we don't care about the data that much. A dropped packet or an out of order packet isn't going to make too large a difference in most cases. We don't care if our destination Beaglebone is up or down, we just blast the packets out onto the network, and they get there reliably enough that the system works.

Now, normally, we do this from Python programs on Linux. For this particular installation, though, we have an interactive kiosk which provides details about cancer treatments and patient success stories, and lets the users interact with the chandelier in real time. We wanted to show them a 3D model of the chandelier on the screen, and show them an animation on the UI that was mirrored in the physical object. After considering our options, we decided this was a good case for Unity and C#. After a quick test of doing multitouch interactions, we also decided that we shouldn't deploy to Linux (Unity didn't really have good Linux multitouch support), so we would deploy a Windows kiosk. This meant we were doing most of our development on MacOS, but our final build would be for Windows.

Months go by. We worked on the software while building the physical pieces, which meant the actual testbed hardware wasn't available for most of the development cycle. Custom electronics were being refined and physical designs were changing as we iterated to the best possible outcome. This is normal for us, but it meant that we didn't start getting real end-to-end testing until very late in the process.

Once we started test-hanging chandelier pieces, we started basic developer testing. You know how it is: you push the run button, you test a feature, you push the stop button. Tweak the code, rinse, repeat. Eventually, though, we had about 2/3rds of the chandelier pieces plugged in, and started deploying to the kiosk computer, running Windows.

We left it running, and the next time someone walked by and decided to give the screen a tap… nothing happened. It was hung. Well, that could be anything. We rebooted and checked again, and everything seems fine, until a few minutes later, when it's hung… again. We checked the task manager- which hey, everything is really slow, and sure enough, RAM is full and the computer is so slow because it's constantly thrashing to disk.

We're only a few weeks before we actually have to ship this thing, and we've discovered a massive memory leak, and it's such a sudden discovery that it feels like the draining of Lake Agassiz. No problem, though, we go back to our dev machines, fire it up in the profiler, and start looking for the memory leak.

Which wasn't there. The memory leak only appeared in the Windows build, and never happened in the Mac or Linux builds. Clearly, there must be some different behavior, and it must be around object lifecycles. When you see a memory leak in a GCed language, you assume you're creating objects that the GC ends up thinking are in use. In the case of Unity, your assumption is that you're handing objects off to the game engine, and not telling it you're done with them. So that's what we checked, but we just couldn't find anything that fit the bill.

Well, we needed to create some relatively large arrays to use as framebuffers. Maybe that's where the problem lay? We keep digging through the traces, we added a bunch of profiling code, we spent days trying to dig into this memory leak…

… and then it just went away. Our memory leak just became a Heisenbug, our shipping deadline was even closer, and we officially knew less about what was going wrong than when we started. For bonus points, once this kiosk ships, it's not going to be connected to the Internet, so if we need to patch the software, someone is going to have to go onsite. And we aren't going to have a suitable test environment, because we're not exactly going to build two gigantic chandeliers.

The folks doing assembly had the whole chandelier built up, hanging in three sections (we don't have any 14m tall ceiling spaces), and all connected to the network for a smoke test. There wasn't any smoke, but they needed to do more work. Someone unplugged a third of the chandelier pieces from the network.

And the memory leak came back.

We use UDP because we don't care if our packet sends succeed or not. Frame-by-frame, we just want to dump the data on the network and hope for the best. On MacOS and Linux, our software usually uses a sender thread that just, at the end of the day, wraps around calls to the send system call. It's simple, it's dumb, and it works. We ignore errors.

In C#, though, we didn't do things exactly the same way. Instead, we used the .NET UdpClient object and its SendAsync method. We assumed that it would do roughly the same thing.

We were wrong.

await client.SendAsync(packet, packet.Length, hostip, port);

Async operations in C# use Tasks, which are like promises or futures in other environments. It lets .NET manage background threads without the developer worrying about the details. The await keyword is syntactic sugar which lets .NET know that it can hand off control to another thread while we wait. While we await here, we don't actually await the results of the await, because again: we don't care about the results of the operation. Just send the packet, hope for the best.
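A minimal sketch of that fire-and-forget pattern, where the class name and the error-swallowing continuation are hypothetical additions (UdpClient and its SendAsync overload are the real .NET APIs):

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class FrameSender
{
    private readonly UdpClient client = new UdpClient();

    // Fire-and-forget: start the send and deliberately discard the Task.
    // The continuation touches t.Exception so a failed send doesn't raise
    // an unobserved-task exception -- we genuinely don't care if it fails.
    public void SendFrame(byte[] packet, string hostip, int port)
    {
        _ = client.SendAsync(packet, packet.Length, hostip, port)
                  .ContinueWith(t => { _ = t.Exception; },
                                TaskContinuationOptions.OnlyOnFaulted);
    }
}
```

Note that discarding the Task instead of awaiting it doesn't change how long the runtime keeps the pending operation alive; it only changes who is watching it.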

We don't care- but Windows does. After a load of investigation, what we discovered is that Windows would first try and resolve the IP address. Which, if a host was down, obviously it couldn't. But Windows was friendly, Windows was smart, and Windows wasn't going to let us down: it kept the Task open and kept trying to resolve the address. It held the task open for 3 seconds before finally deciding that it couldn't reach the host and errored out.

An error which, as I stated before, we were ignoring, because we didn't care.

Still, if you can count and have a vague sense of the linear passage of time, you can see where this is going. We had 30 hosts. We sent each of them 30 packets every second. When one or more of those hosts were down, Windows would keep each of those packets "alive" for 3 seconds. By the time that one expired, 90 more had queued up behind it.

That was the source of our memory leak, and our Heisenbug. If every Beaglebone was up, we didn't have a memory leak. If only one of them was down, the leak was pretty slow. If ten or twenty were out, the leak was a waterfall.
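The arithmetic behind that waterfall fits in a toy model (a hypothetical helper, not code from the installation):

```csharp
// Toy model of the backlog: packets leave at a fixed rate, and Windows
// holds each failed send's Task alive for holdSeconds before erroring out,
// so roughly rate * holdSeconds tasks are pending per down host at any moment.
static class LeakEstimate
{
    public static int TasksAlivePerDownHost(int packetsPerSecond, int holdSeconds)
        => packetsPerSecond * holdSeconds;
}
```

With 30 packets per second and a 3-second hold, that's about 90 pending tasks per down host, and the backlog persists for as long as the host stays down.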

I spent a lot of time reading up on Windows networking after this. Despite digging through the socket APIs, I honestly couldn't figure out how to defeat this behavior. I tried various timeout settings. I tried tracking each task myself and explicitly timing them out if they took longer than a few frames to send. I was never able to tell Windows, "just toss the packet and hope for the best".

Well, my co-worker was building health monitoring on the Beaglebones anyway. While the kiosk wasn't going to be on the Internet via a "real" Internet connection, we did have a cellular modem attached, which we could use to send health info, so getting pings that say "hey, one of the Beaglebones failed" is useful. So my co-worker hooked that into our network sending layer: don't send frames to Beaglebones which are down. Recheck the down Beaglebones every five minutes or so. Continue to hope for the best.
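A sketch of that workaround, with all names hypothetical: mark a host down when health monitoring says it's unreachable, drop frames destined for it, and only retry after a cooldown.

```csharp
using System;
using System.Collections.Generic;

class HostHealthTracker
{
    private readonly Dictionary<string, DateTime> downSince = new Dictionary<string, DateTime>();
    private readonly TimeSpan retryAfter;

    public HostHealthTracker(TimeSpan retryAfter) => this.retryAfter = retryAfter;

    // Called when health monitoring reports a host as unreachable.
    public void MarkDown(string host) => downSince[host] = DateTime.UtcNow;

    // Called once per frame, per host: should we bother sending to it?
    public bool ShouldSend(string host)
    {
        if (!downSince.TryGetValue(host, out var since))
            return true;                        // healthy: send normally
        if (DateTime.UtcNow - since < retryAfter)
            return false;                       // in cooldown: drop the frame
        downSince.Remove(host);                 // cooldown elapsed: try again
        return true;
    }
}
```

Dropping frames to a down host is safe here for the same reason UDP was: the next frame supersedes the last one anyway.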

This solution worked. We shipped. The device looks stunning, and as patients and guests come to use it, I hope they find some useful information, a little joy, and maybe some hope while playing with it. And while there may or may not be some ugly little hacks still lurking in that code, this was the one thing which made me say: WTF.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Krebs on SecurityRobocall Legal Advocate Leaks Customer Data

A California company that helps telemarketing firms avoid getting sued for violating a federal law that seeks to curb robocalls has leaked the phone numbers, email addresses and passwords of all its customers, as well as the mobile phone numbers and other data on people who have hired lawyers to go after telemarketers.

The Blacklist Alliance provides technologies and services to marketing firms concerned about lawsuits under the Telephone Consumer Protection Act (TCPA), a 1991 law that restricts the making of telemarketing calls through the use of automatic telephone dialing systems and artificial or prerecorded voice messages. The TCPA prohibits contact with consumers — even via text messages — unless the company has “prior express consent” to contact the consumer.

With statutory damages of $500 to $1,500 per call, the TCPA has prompted a flood of lawsuits over the years. From the telemarketer’s perspective, the TCPA can present something of a legal minefield in certain situations, such as when a phone number belonging to someone who’d previously given consent gets reassigned to another subscriber.

Enter The Blacklist Alliance, which promises to help marketers avoid TCPA legal snares set by “professional plaintiffs and class action attorneys seeking to cash in on the TCPA.” According to the Blacklist, one of the “dirty tricks” used by TCPA “frequent filers” includes “phone flipping,” or registering multiple prepaid cell phone numbers to receive calls intended for the person to whom a number was previously registered.

Lawyers representing TCPA claimants typically redact their clients’ personal information from legal filings to protect them from retaliation and to keep their contact information private. The Blacklist Alliance researches TCPA cases to uncover the phone numbers of plaintiffs and sells this data in the form of list-scrubbing services to telemarketers.

“TCPA predators operate like malware,” The Blacklist explains on its website. “Our Litigation Firewall isolates the infection and protects you from harm. Scrub against active plaintiffs, pre litigation complainers, active attorneys, attorney associates, and more. Use our robust API to seamlessly scrub these high-risk numbers from your outbound campaigns and inbound calls, or adjust your suppression settings to fit your individual requirements and appetite for risk.”

Unfortunately for the Blacklist paying customers and for people represented by attorneys filing TCPA lawsuits, the Blacklist’s own Web site until late last week leaked reams of data to anyone with a Web browser. Thousands of documents, emails, spreadsheets, images and the names tied to countless mobile phone numbers all could be viewed or downloaded without authentication from the domain

The directory also included all 388 Blacklist customer API keys, as well as each customer’s phone number, employer, username and password (scrambled with the relatively weak MD5 password hashing algorithm).

The leaked Blacklist customer database points to various companies you might expect to see using automated calling systems to generate business, including real estate and life insurance providers, credit repair companies and a long list of online advertising firms and individual digital marketing specialists.

The very first account in the leaked Blacklist user database corresponds to its CEO Seth Heyman, an attorney in southern California. Mr. Heyman did not respond to multiple requests for comment, although The Blacklist stopped leaking its database not long after that contact request.

Two other accounts marked as administrators were among the third and sixth registered users in the database; those correspond to two individuals at Riip Digital, a California-based email marketing concern that serves a diverse range of clients in the lead generation business, from debt relief and timeshare companies, to real estate firms and CBD vendors.

Riip Digital did not respond to requests for comment. But according to Spamhaus, an anti-spam group relied upon by many Internet service providers (ISPs) to block unsolicited junk email, the company has a storied history of so-called “snowshoe spamming,” which involves junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

The irony of this data leak is that marketers who constantly scrape the Web for consumer contact data may not realize the source of the information, and end up feeding it into automated systems that peddle dubious wares and services via automated phone calls and text messages. To the extent this data is used to generate sales leads that are then sold to others, such a leak could end up causing more legal problems for The Blacklist’s customers.

The Blacklist and their clients talk a lot about technologies that they say separate automated telephonic communications from dime-a-dozen robocalls, such as software that delivers recorded statements that are manually selected by a live agent. But for your average person, this is likely a distinction without a difference.

Robocalls are permitted for political candidates, but beyond that if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute for an incredibly low cost.

In fiscal year 2019, the FTC received 3.78 million complaints about robocalls. Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter all automated calls — particularly scam calls that spoof their real number. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider. In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Obviously, not all telemarketing is spammy or scammy. I have friends and relatives who’ve worked at non-profits that rely a great deal on fundraising over the phone. Nevertheless, readers who are fed up with telemarketing calls may find some catharsis in the Jolly Roger Telephone Company, which offers subscribers a choice of automated bots that keep telemarketers engaged for several minutes. The service lets subscribers choose which callers should get the bot treatment, and then records the result.

For my part, the volume of automated calls hitting my mobile number got so bad that I recently enabled a setting on my smart phone to simply send to voicemail all calls from numbers that aren’t already in my contacts list. This may not be a solution for everyone, but since then I haven’t received a single spammy jingle.

CryptogramBlackBerry Phone Cracked

Australia is reporting that a BlackBerry device has been cracked after five years:

An encrypted BlackBerry device that was cracked five years after it was first seized by police is poised to be the key piece of evidence in one of the state's longest-running drug importation investigations.

In April, new technology "capabilities" allowed authorities to probe the encrypted device....

No details about those capabilities.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 12)

Here’s part twelve of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Worse Than FailureCodeSOD: A Unique Choice

There are many ways to mess up doing unique identifiers. It's a hard problem, and that's why we've sorta agreed on a few distinct ways to do it. First, we can just autonumber. Easy, but it doesn't always scale that well, especially in distributed systems. Second, we can use something like UUIDs: mix a few bits of real data in with a big pile of random data, and you can create a unique ID. Finally, there are some hashing-related options, where the data itself generates its ID.
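The three approaches can be sketched in a few lines of Python (illustrative only; the application in this story isn't written in Python, and the `record` value here is just sample data):

```python
import hashlib
import itertools
import uuid

# Strategy 1: autonumbering. Simple, but a single counter doesn't
# scale well across distributed systems.
counter = itertools.count(1)
auto_id = next(counter)  # 1

# Strategy 2: UUIDs. A version-4 UUID is mostly random bits, unique
# with overwhelming probability, and needs no coordination.
random_id = uuid.uuid4()

# Strategy 3: content-addressing. The data itself generates its ID,
# so identical records always hash to the same identifier.
record = b"Defects"
content_id = hashlib.sha256(record).hexdigest()

print(auto_id)
print(random_id)   # e.g. 8461aa9b-ba38-4201-a717-cee257b73af0 (36 chars)
print(content_id)  # 64 hex characters
```

Each has trade-offs: autonumbers are compact but centralized, UUIDs are decentralized but opaque, and content hashes are deterministic but collide when two records hold identical data.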

Tiffanie was digging into some weird crashes in a database application, and discovered that their MODULES table couldn't decide which approach was correct, so it opted for two: MODULE_ID, an autonumbered field, and MODULE_UUID, which, one would assume, held a UUID. There were also the requisite MODULE_NAME and similar fields. A quick scan of the table looked like:

0 Defects 8461aa9b-ba38-4201-a717-cee257b73af0 Defects
1 Test Plan 06fd18eb-8214-4431-aa66-e11ae2a6c9b3 Test Plan

Now, using both UUIDs and autonumbers is a bit suspicious, but there might be a good reason for that (the UUIDs might be used for tracking versions of installed modules, while the ID is the local database-reference for that, so the ID shouldn't change ever, but the UUID might). Still, given that MODULE_NAME and MODULE_DESC both contain exactly the same information in every case, I suspect that this table was designed by the Department of Redundancy Department.

Still, that's hardly the worst sin you could commit. What would be really bad would be using the wrong datatype for a column. This is a SQL Server database, and so we can safely expect that the MODULE_ID is numeric, the MODULE_NAME and MODULE_DESC must be text, and clearly the MODULE_UUID field should be the UNIQUEIDENTIFIER type, right?

Well, let's look at one more row from this table:

11 Releases Releases does not have a UUID Releases

Oh, well. I think I have a hunch what was causing the problems. Sure enough, the program was expecting the UUID field to contain UUIDs, and was failing when a field contained something that couldn't be converted into a UUID.
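The failure mode is easy to reproduce. A sketch in Python (not the application's actual language; the row values are the ones shown above) of what happens when code that expects UUIDs meets a text column that doesn't enforce them:

```python
import uuid

# MODULE_NAME, MODULE_UUID pairs as they appear in the table.
rows = [
    ("Defects", "8461aa9b-ba38-4201-a717-cee257b73af0"),
    ("Releases", "Releases does not have a UUID"),  # the offending row
]

for name, raw in rows:
    try:
        parsed = uuid.UUID(raw)
        print(f"{name}: ok ({parsed})")
    except ValueError:
        # A text column happily stored this; a native UNIQUEIDENTIFIER
        # column would have rejected it at insert time.
        print(f"{name}: not a valid UUID, application crashes here")
```

Using the database's native UUID type moves the error from a runtime crash deep in the application to a constraint violation at the moment the bad data is written, which is exactly where you want it.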



LongNowPredicting the Animals of the Future

Jim Cooke / Gizmodo

Gizmodo asks half a dozen natural historians to speculate on who is going to be doing what jobs on Earth after the people disappear. One of the streams that runs wide and deep through this series of fun thought experiments is how so many niches stay the same through catastrophic changes in the roster of Earth’s animals. Dinosaurs die out but giant predatory birds evolve to take their place; butterflies took over from (unrelated) dot-winged, nectar-sipping giant lacewing pollinator forebears; before orcas there were flippered ocean-going crocodiles, and there will probably be more one day.

In Annie Dillard’s Pulitzer Prize-winning Pilgrim at Tinker Creek, she writes about a vision in which she witnesses glaciers rolling back and forth “like blinds” over the Appalachian Mountains. In this Gizmodo piece, Alexis Mychajliw of the La Brea Tar Pits & Museum talks about how fluctuating sea levels connected island chains or made them, fusing and splitting populations in great oscillating cycles, shrinking some creatures and giantizing others. There’s something soothing in the view from orbit that paleontologists, like other deep-time mystics, possess, embody, and transmit: a sense for the clockwork of the cosmos and its orderliness, an appreciation for the powerful resilience of life even in the face of the ephemerality of life-forms.

While everybody interviewed here has obvious pet futures owing to their areas of interest, hold all of them superimposed together and you’ll get a clearer image of the secret teachings of biology…

(This article must have been inspired deeply by Dougal Dixon’s book After Man, but doesn’t mention him – perhaps a fair turn, given Dixon was accused of plagiarizing Wayne Barlowe for his follow-up, Man After Man.)


MELinks July 2020

iMore has an insightful article about Apple’s transition to the ARM instruction set for new Mac desktops and laptops [1]. I’d still like to see them do something for the server side.

Umair Haque wrote an insightful article about How the American Idiot Made America Unlivable [2]. We are witnessing the destruction of a once great nation.

Chris Lamb wrote an interesting blog post about comedy shows with the laugh tracks edited out [3]. He then compares that to social media with the like count hidden, which is an interesting perspective. I’m not going to watch TV shows edited in that way (I’ve enjoyed BBT in spite of all the bad things about it) and I’m not going to try to hide like counts on social media. But it’s interesting to consider these things.

Cory Doctorow wrote an interesting Locus article suggesting that we could have full employment by a transition to renewable energy and methods for cleaning up the climate problems we are too late to prevent [4]. That seems plausible, but I think we should still get a Universal Basic Income.

The Thinking Shop has posters and decks of cards with logical fallacies and cognitive biases [5]. Every company should put some of these in meeting rooms. They also have free PDFs so you can download and print your own posters. [6] is a site that lists powerful homophobic people who hurt GLBT people but then turned out to be gay. It’s presented in an amusing manner; people who hurt others deserve to be mocked.

Wired has an insightful article about the shutdown of Backpage [7]. The owners of Backpage weren’t nice people and they did some stupid things which seem bad (like editing posts to remove terms like “lolita”). But they also worked well with police to find criminals. The opposition to what Backpage was doing conflates sex trafficking, child prostitution, and legal consenting adult sex work. Taking down Backpage seems to be a bad thing for the victims of sex trafficking, for consenting adult sex workers, and for society in general.

Cloudflare has an interesting blog post about short-lived certificates for ssh access [8]. Instead of having users’ ssh keys stored on servers, each user connects to an SSO server to obtain a temporary key before connecting, so revoking an account is easy.
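The underlying OpenSSH mechanism can be sketched with `ssh-keygen` alone (this is not Cloudflare’s product, just the standard certificate-signing feature it builds on; the names `user_ca`, `id_temp`, and `alice` are made up for the example):

```shell
# One-time setup: create the CA key. Servers trust it via the
# TrustedUserCAKeys option in sshd_config.
ssh-keygen -t ed25519 -f user_ca -N '' -C 'user CA'

# Per-login: in the Cloudflare model an SSO server does this step.
# Generate a fresh user key and sign it with a 5-minute validity window.
ssh-keygen -t ed25519 -f id_temp -N '' -C 'alice'
ssh-keygen -s user_ca -I alice -n alice -V +5m id_temp.pub

# Inspect the resulting certificate; note the "Valid:" window.
ssh-keygen -L -f id_temp-cert.pub
```

Because the certificate expires on its own after five minutes, there is no key to remove from every server when an account is disabled; revocation is just a matter of refusing to issue the next certificate.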