Planet Russell


Planet Debian / Vincent Fourmond: Define a function with inline Ruby code in QSoas

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file loaded using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, x, vm and km, corresponding to the formula vm/(1 + km/x).

You can then test that the function has been correctly defined by running, for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For x = km, the result is vm/2 (here 0.5). For x much larger than km, the result is almost vm. You can use the newly defined my_func in any place you would use any ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)
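A quick way to convince yourself of the limiting values shown above is to run the same function body in plain Ruby, outside QSoas (the definition is copied from the block above):

```ruby
# Michaelis-Menten rate law, same body as in the QSoas ruby block above
def my_func(x, vm, km)
  vm / (1 + km / x)
end

puts my_func(1.0, 1.0, 1.0)  # x = km, so the result is vm/2 = 0.5
puts my_func(1e4, 1.0, 1.0)  # x much larger than km, so the result approaches vm
```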

To redefine my_func, just run the ruby code again with a new definition, such as:
ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end
The previous version is just erased, and all new uses of my_func will refer to your new definition.

See for yourself

The code for this example can be found there. Browse the qsoas-goodies github repository for more goodies!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1. You can download its source code or buy precompiled versions for MacOS and Windows there.

Planet Debian / Vincent Fourmond: Release 2.2 of QSoas

The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.

Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Worse Than Failure / CodeSOD: A Random While

A bit ago, Aurelia shared with us a backwards for loop. Code which wasn’t wrong, but was just… weird. Well, now we’ve got some code which is just plain wrong, in a number of ways.

The goal of the following Java code is to generate some number of random numbers between 1 and 9, and pass them off to a space-separated file.

StringBuffer buffer = new StringBuffer();
long count = 0;
long numResults = GetNumResults();

while (count < numResults)
{
    ArrayList<BigDecimal> numbers = new ArrayList<BigDecimal>();
    while (numbers.size() < 1)
    {
        int randInt = random.nextInt(10);
        long randLong = randInt & 0xffffffffL;
        if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))
        {
            buffer.append(" ");
            numbers.add(new BigDecimal(randLong));
        }
        System.out.println("Random Integer: " + randInt + ", Long Integer: " + randLong);
    }
    buffer = new StringBuffer();
}

Pretty quickly, we get a sense that something is up, with the while (count < numResults): this begs to be a for loop. It's not wrong to while this, but it's suspicious.

Then, right away, we create an ArrayList<BigDecimal>. There is no reasonable purpose to using a BigDecimal to hold a value between 1 and 9. But the rails don’t really start to come off until we get into the inner loop.

while (numbers.size() < 1)
    int randInt = random.nextInt(10);
    long randLong = randInt & 0xffffffffL;
    if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))

This loop condition guarantees that we’ll only ever have one element in the list, which means our numbers.contains check doesn’t mean much, does it?

But honestly, that doesn’t hold a candle to the promotion of randInt to randLong, complete with an & 0xffffffffL, which guarantees… well, nothing. It’s completely unnecessary here. We might do that sort of thing when we’re bitshifting and need to mask out for certain bytes, but here it does nothing.

Also note the (randLong != 0) check. Because they use random.nextInt(10), that generates a number in the range 0–9, but we want 1 through 9, so if we draw a zero, we need to re-roll. A simple, and common solution to this would be to do random.nextInt(9) + 1, but at least we now understand the purpose of the while (numbers.size() < 1) loop- we keep trying until we get a non-zero value.

And honestly, I should probably point out that they include a println to make sure that both the int and the long versions match, but how could they not?

Nothing here is necessary. None of this code has to be this way. You don’t need the StringBuffer. You don’t need nested while loops. You don’t need the ArrayList<BigDecimal>, you don’t need the conversion between integer types. You don’t need the debugging println.
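For contrast, a minimal sketch of what the task actually needs, using the random.nextInt(9) + 1 trick mentioned above (the class and method names here are our own invention, not from the original code):

```java
import java.util.Random;
import java.util.StringJoiner;

public class RandomDigits {
    // Generate n random digits between 1 and 9, space-separated.
    static String generateRandomDigits(int n) {
        Random random = new Random();
        StringJoiner joiner = new StringJoiner(" ");
        for (int i = 0; i < n; i++) {
            // nextInt(9) yields 0..8; adding 1 shifts the range to 1..9,
            // so there is no zero to re-roll and no inner loop needed
            joiner.add(Integer.toString(random.nextInt(9) + 1));
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(generateRandomDigits(10));
    }
}
```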


Planet Debian / Russell Coker: Qemu (KVM) and 9P (Virtfs) Mounts

I’ve tried setting up the Qemu (in this case KVM as it uses the Qemu code in question) 9P/Virtfs filesystem for sharing files to a VM. Here is the Qemu documentation for it [1].

VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=mapped-xattr,id=zz,writeout=immediate,fmode=0600,dmode=0700,mount_tag=zz"
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=passthrough,id=zz,writeout=immediate,mount_tag=zz"

Above are the 2 configuration snippets I tried on the server side. The first uses mapped xattrs (which means that all files will have the same UID/GID and on the host XATTRs will be used for storing the Unix permissions) and the second uses passthrough, which requires KVM to run as root and gives the same permissions on the host as in the VM. The advantages of passthrough are better performance through writing less metadata and having the same permissions in host and VM. The advantages of mapped XATTRs are running KVM/Qemu as non-root, and that a SUID file in the VM doesn't imply a SUID file on the host.
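On the guest side, a share exported as above is mounted via its mount_tag. A sketch, assuming the tag zz from the snippets above and an arbitrary mount point of /mnt/virtfs:

```shell
# /etc/fstab line in the VM for the export above (mount_tag=zz);
# version selects the 9p protocol dialect, msize the 9p message size
zz /mnt/virtfs 9p trans=virtio,version=9p2000.L,msize=524288 0 0

# one-off equivalent, run as root in the VM:
# mount -t 9p -o trans=virtio,version=9p2000.L zz /mnt/virtfs
```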

Here is the link to Bonnie++ output comparing Ext3 on a KVM block device (stored on a regular file in a BTRFS RAID-1 filesystem on 2 SSDs on the host), an NFS share from the host from the same BTRFS filesystem, and virtfs shares of the same filesystem. The only tests that Ext3 doesn't win are some of the latency tests; latency is based on the worst case, not the average. I expected Ext3 to win most tests, but didn't expect it to lose any latency tests.

Here is a link to Bonnie++ output comparing just NFS and Virtfs. It’s obvious that Virtfs compares poorly, giving about half the performance on many tests. Surprisingly the only tests where Virtfs compared well to NFS were the file creation tests which I expected Virtfs with mapped XATTRs to do poorly due to the extra metadata.

Here is a link to Bonnie++ output comparing only Virtfs. The options are mapped XATTRs with default msize, mapped XATTRs with 512k msize (I don’t know if this made a difference, the results are within the range of random differences), and passthrough. There’s an obvious performance benefit in passthrough for the small file tests due to the less metadata overhead, but as creating small files isn’t a bottleneck on most systems a 20% to 30% improvement in that area probably doesn’t matter much. The result from the random seeks test in passthrough is unusual, I’ll have to do more testing on that.

SE Linux

On Virtfs the XATTR used for SE Linux labels is passed through to the host. So every label used in a VM has to be valid on the host and accessible to the context of the KVM/Qemu process. That’s not really an option so you have to use the context mount option. Having the mapped XATTR mode work for SE Linux labels is a necessary feature.


The msize mount option in the VM doesn't appear to do anything, and it doesn't appear in /proc/mounts; I don't know if it's even supported in the kernel I'm using.

The passthrough and mapped XATTR modes give near enough performance that there doesn’t seem to be a benefit of one over the other.

NFS gives significant performance benefits over Virtfs while also using less CPU time in the VM. It has the issue of files named .nfs* hanging around if the VM crashes while programs were using deleted files. It's also better known: ask for help with an NFS problem and you are more likely to get advice than when asking for help with a virtfs problem.

Virtfs might be a better option for accessing databases than NFS, due to its internal operation probably being a better map to Unix filesystem semantics, but running database servers on the host is probably a better choice anyway.

Virtfs generally doesn’t seem to be worth using. I had hoped for performance that was better than NFS but the only benefit I seemed to get was avoiding the .nfs* file issue.

The best options for storage for a KVM/Qemu VM seem to be Ext3 for files that are only used on one VM and for which the size won’t change suddenly or unexpectedly (particularly the root filesystem) and NFS for everything else.


Cryptogram Documented Death from a Ransomware Attack

A Düsseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

Long Now: Sleeping Beauties of Prehistory and the Present Day

Changmiania liaoningensis, buried while sleeping by a prehistoric volcano. Image Source.

Although the sensitive can feel it in all seasons, Autumn seems to thin the veil between the living and the dead. Writing from the dying cusp of summer and the longer bardo marking humankind’s uneasy passage into a new world age (a transit paradoxically defined by floating signifiers and eroded, fluid categories), it seems right to constellate a set of sleeping beauties, both extant and extinct, recently discovered and newly understood. Much like the “sleeping beauties” of forgotten scientific research, as described by Sidney Redner in his 02005 Physics Today paper and elaborated on by David Byrne’s Long Now Seminar, these finds provide an opportunity to contemplate time’s cycles — and the difference between the dead, and merely dormant.

We start 125 million years ago in the unbelievably fossiliferous Liaoning Province of China, one of the world’s finest lagerstätten (an area of unusually rich floral or faunal deposits — such as Canada’s famous Burgess Shale, which captured the transition into the first bloom of complex, hard-shelled, eye-bearing life; or the Solnhofen Limestone in Germany, from which flew the “Urvogel” feathered dinosaur Archaeopteryx, one of the most significant fossil finds in scientific history). Liaoning’s Lujiatan Beds just offered up a pair of perfectly-preserved herbivorous small dinosaurs, named Changmiania or “Sleeping Beauty” for how they were discovered buried in repose within their burrows by what was apparently volcanic ash, a kind of prehistoric Pompeii:

Changmiania in its eternal repose. Image Source.

There’s something especially poignant about flash-frozen remains that render ancient life in its sweet, quiet moments — a challenge to the reigning iconography of the Dawn Ages with their battling giants and their bloodied teeth. Like the Lovers of Valdaro, or the family of Pinacosaurus buried together huddling up against a sandstorm, Changmiania makes the alien past familiar and reminds us of the continuity of life through all its transmutations.

Similarly precious is the new discovery of a Titanosaurid (long-necked dinosaur) embryo from Late Cretaceous Patagonia. These creatures laid the largest eggs known in natural history — a requisite to house what would become some of the biggest animals to ever walk the land. Even so, their contents were so small and delicate it is a marvelous surprise to find the face of this pre-natal “Littlefoot” in such great shape, preserving what looks like an “egg tooth”-bearing structure that would have helped it break free:

The face of a baby dinosaur brings the ancient and brand new into stereoscopic focus. Image Source.

From ancient Argentina to the present day, we move from dinosaurs caught sleeping by fossilization to the “lizard popsicles” of modern reptiles who have managed to adapt to freezing night-time temperatures in their alpine environment. Legends tell of these outliers in the genus Liolaemus walking on the Perito Moreno glacier, a very un-lizard-like haunt; they’re regularly studied well above 13,000 feet, where they may be using an internal form of anti-freeze to live through frigid night-time temperatures. The Andes, young by montane standards, offer a heterogeneous environment that may function like a “species pump,” making Liolaemus one of the most diverse lizard genera; 272 distinct varieties have been described, some of which give live birth to help their young survive where eggs would just freeze solid.

Liolaemus in half-popsicle mode. Image Source.

Even further south and further back in time, mammal-like reptile Lystrosaurus hibernated in Antarctica 250 million years ago — a discovery made when examining the pulsing growth captured in the records of ringed bone in its extraordinary tusks, much like the growth rings of a redwood:

Lystrosaurus tusks looking remarkably like a redwood tree cross-section. Image Source.

This prehistoric beast, however, lived in a world predating woody trees. Toothless, beaked, and tusked, its positively foreign face nonetheless slept through winter just like modern bears and turtles…which might be why it managed to endure the unimaginable hardships of the Permo-Triassic extinction, which wiped out even more life than the meteor that killed the dinosaurs roughly 185 million years later. Slowing down enough to imitate the dead appears to be, poetically, a strategy for dodging draft into their ranks. And likely living on cuisine like roots and tubers during long Antarctic nights, it may have thrived both in and on the underworld. Lystrosaurus seems to have weathered the Great Dying by playing dead and preying on the kinds of flora that could also play dead through a crisis on the surface.

Lystrosaurus in the woodless landscape of the Triassic. Image Source.

And while we’re on the subject of the blurry boundary between the worlds of life and death, Yale University researchers recently announced they managed to restore activity in some parts of a pig’s brain four hours after death. In the 18th Century when the first proto-CPR resuscitation methods were invented, humans started adding horns to coffins so the not-quite-dead could call for help upon awakening, if necessary; perhaps we’re due for more of this, now that scientists have managed to turn certain regions of a dead brain back on just by simulating blood flow with a special formula called “BrainEx.”

Revived cells in a dead pig’s brain. Image Source.

At no point did the team observe coordinated patterns of electrical activity like those now correlated with awareness, but the research may deliver new techniques for studying an intact mammal brain that lead to innovations in brain injury repair. Discoveries like these suggest that life and death, once thought a binary, is a continuum instead — a slope more shallow every year.

If life and death are ultimately separated only by the paces at which the many layers of biology align, the future seems like it will be a twilight zone: a weird and wide slow Styx akin to Arthur C. Clarke’s 3001: The Final Odyssey, rife with de-extincted mammoths and revived celebrities of history, digital uploads and doubles and uncanny androids, cryogenic mummies in space ark sarcophagi — and more than a few hibernating Luddites waiting for a sign that it is safe to re-emerge.

Rondam Ramblings: Can facts be racist?

Here's a fact: [D]ifferences in home and neighborhood quality do not fully explain the devaluation of homes in black neighborhoods. Homes of similar quality in neighborhoods with similar amenities are worth 23 percent less ($48,000 per home on average, amounting to $156 billion in cumulative losses) in majority black neighborhoods, compared to those with very few or no black residents. (And

Planet Debian / Steve Kemp: Using a FORTH-like language for something useful

So my previous post was all about implementing a simple FORTH-like language. Of course the obvious question is then "What do you do with it"?

So I present one possible use - turtle-graphics:

\ Draw a square of the given length/width
\ (the forward word, which consumes the length, and the loop/; terminators
\  were lost in transit and are restored here)
: square
  dup dup dup dup
  4 0 do
    forward
    90 turn
  loop
;

\ pen down
1 pen

\ move to the given pixel
100 100 move

\ draw a square of width 50 pixels
50 square

\ save the result (png + gif)

Exciting times!

Planet Debian / Daniel Lange: Getting rid of the Google cookie consent popup

If you clear your browser cookies regularly (as you should do), Google will annoy you with a full screen cookie consent overlay these days. And - of course - there is no "no tracking consent, technically required cookies only" button. You may log in to Google to set your preference. Yeah, I'm sure this is totally following the intent of the EU Directive 2009/136/EC (the "cookie law").

Google cookie consent pop-up

Unfortunately none of the big "anti-annoyances" filter lists seem to have picked that one up yet but the friendly folks from the Computerbase Forum [German] to the rescue. User "Sepp Depp" has created the following filter set that WFM:

Add this to your uBlock Origin "My filters" tab:

! Google - remove cookie-consent-popup and restore scroll functionality
google.*##html:style(overflow: visible !important;)

Worse Than Failure / CodeSOD: A Cutt Above

We just discussed ViewState last week, and that may have inspired Russell F to share with us this little snippet.

private ConcurrentQueue<AppointmentCuttOff> lstAppointmentCuttOff
{
    get
    {
        object o = ViewState["lstAppointmentCuttOff"];
        if (o == null)
            return null;
        else
            return (ConcurrentQueue<AppointmentCuttOff>)o;
    }
    set { ViewState["lstAppointmentCuttOff"] = value; }
}

This pattern is used for pretty much all of the ViewState data that this code interacts with, and if you look at the null check, you can see that it's unnecessary. Our code checks for a null, and if we have one… returns null. The entire get block could just be: return (ConcurrentQueue<AppointmentCuttOff>)ViewState["lstAppointmentCuttOff"]

The bigger glitch here is the data-type. While there is a queue of appointments, that queue is never accessed across threads, so there's no need for a threadsafe ConcurrentQueue.

But I really love the name of the variable we store in ViewState. We have Hungarian notation, which calls it a lst. That isn't technically correct, though a queue is iterable, so maybe that's what they meant; but if the point of Hungarian notation is to make the code more clear, this isn't helping.

But what I really love is that these are CuttOffs, which just sounds like some retail brand attempting to sell uncomfortably short denim. It'll be next year's summer trend, mark my words!


Planet Debian / Russ Allbery: Review: Unconquerable Sun

Review: Unconquerable Sun, by Kate Elliott

Series: Sun Chronicles #1
Publisher: Tor
Copyright: 2020
ISBN: 1-250-19725-2
Format: Kindle
Pages: 526

Sun is the daughter and heir of the mercurial Queen-Marshal Eirene, ruler of the Republic of Chaonia. Chaonia, thanks to Eirene and her ancestors, has carved out a fiercely independent position between the Yele League and the Phene Empire. Sun's father, Prince João, is one of Eirene's three consorts, all chosen for political alliances to shore up that fragile position. João is Gatoi, a civilization of feared fighters and supposed barbarians from outside Chaonia who normally ally with the Phene, which complicates Sun's position as heir. Sun attempts to compensate for that by winning battles for the Republic, following in the martial footsteps of her mother.

The publisher's summary of this book is not great (I'm a huge fan of Princess Leia, but that is... not the analogy that comes to mind), so let me try to help. This is gender-swapped Alexander the Great in space. However, it is gender-swapped Alexander the Great in space with her Companions, which means the DNA of this novel is half space opera and half heist story (without, to be clear, an actual heist, although there are some heist-like maneuvers). It's also worth mentioning that Sun, like Alexander, is not heterosexual.

The other critical thing to know before reading, mostly because it will get you through the rather painful start, is that the most interesting viewpoint character in this book is not Sun, the Alexander analogue. It's Persephone, who isn't introduced until chapter seven.

Significant disclaimer up front: I got a reasonably typical US grade school history of Alexander the Great, which means I was taught that he succeeded his father, conquered a whole swath of the middle of the Eurasian land mass at a very young age, and then died and left his empire to his four generals who promptly divided it into four uninteresting empires that no one's ever heard of, and that's why Rome is more important than Greece. (I put in that last bit to troll one specific person.)

I am therefore not the person to judge the parallels between this story and known history, or to notice any damage done to Greek pride, or to pick up on elements that might cause someone with a better grasp of that history to break out in hives. I did enough research to know that one scene in this book is lifted directly out of Alexander's life, but I'm not sure how closely the other parallels track. Yele is probably southern Greece and Phene is probably Persia, but I'm not certain even of that, and some of the details don't line up. If I had to hazard a guess, I'd say that Elliott has probably mangled history sufficiently to make it clear that this isn't intended to be a retelling, but if the historical parallels are likely to bother you, you may want to do more research before reading.

What I can say is that the space opera setup, while a bit stock, has all the necessary elements to make me happy. Unconquerable Sun is firmly in the "lost Earth" tradition: The Argosy fleet fled the now-mythical Celestial Empire and founded a new starfaring civilization without any contact with their original home. Eventually, they invented (or discovered; the characters don't know) the beacons, which allow for instantaneous travel between specific systems without the long (but still faster-than-light) journeys of the knnu drive. More recently, the beacon network has partly collapsed, cutting off the characters' known world from the civilization that was responsible for the beacons and guarded their secrets. It's a fun space opera history with lots of lost knowledge to reference and possibly discover, and with plot-enabling military choke points at the surviving beacons that link multiple worlds.

This is all background to the story, which is the ongoing war between Chaonia and the Phene Empire mixed with cutthroat political maneuvering between the great houses of the Chaonian Republic. This is where the heist aspects come in. Each house sends one representative to join the household of the Queen-Marshal and (more to the point for this story) another to join her heir. Sun has encouraged the individual and divergent talents of her Companions and their cee-cees (an unfortunate term that I suspect is short for Companion's Companion) and forged them into a good working team. A team that's about to be disrupted by the maneuverings of a rival house and the introduction of a new team member whom no one wants.

A problem with writing tactical geniuses is that they often aren't good viewpoint characters. Sun's tight third-person chapters, which is a little less than half the book, advance the plot and provide analysis of the interpersonal dynamics of the characters, but aren't the strength of the story. That lies with the interwoven first-person sections that follow Persephone, an altogether more interesting character.

Persephone is the scion of the house that is Sun's chief rival, but she has no interest in being part of that house or its maneuverings. When the story opens, she's a cadet in a military academy for recruits from the commoners, having run away from home, hidden her identity, and won a position through the open entrance exams. She of course doesn't stay there; her past catches up with her and she gets assigned to Sun, to a great deal of mutual suspicion. She also is assigned an impeccably dressed and stunningly beautiful cee-cee, Tiana, who has her own secrets and who was my favorite character in the book.

Somewhat unusually for the space opera tradition, this is a book that knows that common people exist and have interesting lives. It's primarily focused on the ruling houses, but that focus is not exclusive and the rulers do not have a monopoly on competence. Elliott also avoids narrowing the political field too far; the Gatoi are separate from the three rival powers, and there are other groups with traditions older than the Chaonian Republic and their own agendas. Sun and her Companions are following a couple of political threads, but there is clearly more going on in this world than that single plot.

This is exactly the kind of story I think of when I think space opera. It's not doing anything that original or groundbreaking, and it's not going to make any of my lists of great literature, but it's a fun romp with satisfyingly layered bits of lore, a large-scale setting with lots of plot potential, and (once we get through the confusing and somewhat tedious process of introducing rather too many characters in short succession) some great interpersonal dynamics. It's the kind of book in which the characters are in the middle of decisive military action in an interstellar war and are also near-teenagers competing for ratings in an ad hoc reality TV show, primarily as an excuse to create tactical distractions for Sun's latest scheme. The writing is okay but not great, and the first few chapters have some serious infodumping problems, but I thoroughly enjoyed the whole book and will pre-order the sequel.

One Amazon review complained that Unconquerable Sun is not a space opera like Hyperion or Use of Weapons. That is entirely true, but if that's your standard for space opera, the world may be a disappointing place. This is a solid entry in a subgenre I love, with some great characters, sarcasm, competence porn, plenty of pages to keep turning, a few twists, and the promise of more to come. Recommended.

Followed by the not-yet-published Furious Heaven.

Rating: 7 out of 10

Cryptogram Interview with the Author of the 2000 Love Bug Virus

No real surprises, but we finally have the story.

The story he went on to tell is strikingly straightforward. De Guzman was poor, and internet access was expensive. He felt that getting online was almost akin to a human right (a view that was ahead of its time). Getting access required a password, so his solution was to steal the passwords from those who’d paid for them. Not that de Guzman regarded this as stealing: He argued that the password holder would get no less access as a result of having their password unknowingly “shared.” (Of course, his logic conveniently ignored the fact that the internet access provider would have to serve two people for the price of one.)

De Guzman came up with a solution: a password-stealing program. In hindsight, perhaps his guilt should have been obvious, because this was almost exactly the scheme he’d mapped out in a thesis proposal that had been rejected by his college the previous year.


Planet Debian / Kees Cook: security things in Linux v5.7

Previously: v5.6

Linux v5.7 was released at the end of May. Here’s my summary of various security things that caught my attention:

arm64 kernel pointer authentication
While the ARMv8.3 CPU “Pointer Authentication” (PAC) feature landed for userspace already, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP which protects the saved stack pointer, similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well.

BPF LSM
The kernel’s Linux Security Module (LSM) API provides a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor to build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting).

execve() deadlock refactoring
There have been a number of long-standing races in the kernel’s process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring their uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn’t be able to.

slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED‘s freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer’s location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up.

RISC-V page table dumping
Following v5.6’s generic page table dumping work, Zong Li landed the RISC-V page table dumping code. This means it’s much easier to examine the kernel’s page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables.

array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don’t do this:

int foo[5];
foo[8] = bar;

The long version gets complicated by the evolution of “flexible array” structure members, so we’ll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, they don’t catch open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to “fake” flexible array members. Before flexible arrays were standardized, GNU C supported “zero sized” array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the “header” in front of some data blob that could be addressable through the last structure member:

/* 1-element array */
struct foo {
    char contents[1];
};

/* GNU C extension: 0-element array */
struct foo {
    char contents[0];
};

/* C standard: flexible array */
struct foo {
    char contents[];
};

instance = kmalloc(sizeof(struct foo) + content_size, GFP_KERNEL);

Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva’s goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel’s deprecation docs.

However, that will only catch the “visible at compile time” cases. For runtime checking, the Undefined Behavior Sanitizer has an option for adding runtime array bounds checking for catching things like this where the compiler cannot perform a static analysis of the index values:

int foo[5];
for (i = 0; i < some_argument; i++) {
    foo[i] = bar;
}

It was, however, not separately selectable (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP, which effectively turns these conditions into a BUG() without needing additional sysctl settings.

Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:

/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);

The risk is that if these calls eventually walk off the end of the string buffer, it will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:

/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);

So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array, and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes.

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

Cryptogram Matt Blaze on OTP Radio Stations

Matt Blaze discusses (also here) an interesting mystery about a Cuban one-time-pad radio station, and a random number generator error that probably helped arrest a pair of Russian spies in the US.

Planet Debian: Antoine Beaupré: PSA: Mailman used to harass people

It seems that Mailman instances are being abused to harass people with subscribe spam. If some random people complain to you that they "never wanted to subscribe to your mailing list", you may be a victim of that attack, even if you run the latest Mailman 2.


Make sure you have SUBSCRIBE_FORM_SECRET set in your mailman configuration:

SECRET=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30)
echo "SUBSCRIBE_FORM_SECRET = '$SECRET'" >> /etc/mailman/mm.cfg

This will add a magic token to all of Mailman's web forms, forcing the attacker to at least fetch a token before asking for registration. There are, of course, still other ways of performing the attack, but it's more expensive than a single request for the attacker and keeps most of the junk out.

Other solutions

I originally deployed a different fix, using referrer checks and an IP block list:

RewriteMap hosts-deny  txt:/etc/apache2/blocklist.txt
RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond %{HTTP_REFERER} !^$ [NC]
RewriteRule ^/cgi-bin/mailman/subscribe/ - [F]
# see also
Header always set Referrer-Policy "origin"

I kept those restrictions in place because they keep the spammers from even hitting the Mailman CGI, which is useful to preserve our server resources. And if "they" escalate with smarter crawlers, the block list will still be useful.

You can use this query to extract the top 10 IP addresses used for subscription attempts:

awk '{ print $NF }' /var/log/mailman/subscribe | sort | uniq -c | sort -n | tail -10  | awk '{ print $2 " " $1 }'

Note that this might include email-based registration, but in our logs those are extremely rare: only two in three weeks, out of over 73,000 requests. I also use this to keep an eye on the logs:

tail -f  /var/log/mailman/subscribe /var/log/apache2/ | grep -v 'GET /pipermail/'

The server-side mitigations might also be useful if you happen to run an extremely old version of Mailman, that is pre-2.1.18, but that release is now over 6 years old, and newer Mailman is part of every supported Debian release out there (all the way back to Debian 8 jessie).

Why does that attack work?

Because Mailman 2 doesn't have CSRF tokens in its forms by default, anyone can send a POST request to /mailman/subscribe/LISTNAME to have Mailman send an email to the user. In the old "Internet is for nice people" universe, that wasn't a problem: all it does is ask the victim if they want to subscribe to LISTNAME. Innocuous, right?

But in the brave, new, post-Eternal-September, "Internet is for stupid" universe, some assholes think it's a good idea to make a form that collects hundreds of mailing list URLs and spam them through an iframe. To see what that looks like, you can look at the rendered source code behind (not linking to avoid promoting it). That site performs what is basically a distributed cross-site request forgery attack against Mailman servers.

Obviously, CSRF protection should be enabled by default in Mailman, but there you go. Hopefully this will help some folks...

(The latest Mailman 3 release doesn't suffer from such idiotic defaults and ships with proper CSRF protection out of the box.)

Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 15)

Here’s part fifteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Planet Debian: Jonathan McDowell: Mainline Linux on the MikroTik RB3011

I upgraded my home internet connection to fibre (FTTP) last October. I’m still on an 80M/20M service, so it’s no faster than my old VDSL FTTC connection was, and as a result for a long time I continued to use my HomeHub 5A running OpenWRT. However the FTTP ONT meant I was using up an additional ethernet port on the router, and I was already short, so I ended up with a GigE switch in use as well. Also my wifi is handled by a UniFi, which takes its power via Power-over-Ethernet. That meant I had a router, a switch and a PoE injector all in close proximity. I wanted to reduce the number of devices, and ideally upgrade to something that could scale once I decide to upgrade my FTTP service speed.

Looking around I found the MikroTik RB3011UiAS-RM, which is a rack mountable device with 10 GigE ports (plus an SFP slot) and a dual core Qualcomm IPQ8064 ARM powering it. There’s 1G RAM and 128MB NAND flash, as well as a USB3 port. It also has PoE support. On paper it seemed like an ideal device. I wasn’t particularly interested in running RouterOS on it (the provided software), but that’s based on Linux and there was some work going on within OpenWRT to add support, so it seemed like a worthwhile platform to experiment with (what, you expected this to be about me buying an off the shelf device and using it with only the supplied software?). As an added bonus a friend said he had one he wasn’t using, and was happy to sell it to me for a bargain price.

RB3011 router in use

I did try out RouterOS to start with, but I didn’t find it particularly compelling. I’m comfortable configuring firewalling and routing at a Linux command line, and I run some additional services on the router like my MQTT broker, and mqtt-arp, my wifi device presence monitor. I could move things around such that they ran on the house server, but I consider them core services and as a result am happier with them on the router.

The first step was to get something booting on the router. Luckily it has an RJ45 serial console port on the back, and a reasonably featured bootloader that can manage to boot via tftp over the network. It wants an ELF binary rather than a plain kernel, but Sergey Sergeev had done the hard work of getting u-boot working for the IPQ8064, which meant I could just build normal u-boot images to try out.

Linux upstream already had basic support for a lot of the pieces I was interested in. There’s a slight fudge around AUTO_ZRELADDR because the network coprocessors want a chunk of memory at the start of RAM, but there’s ongoing discussions about how to handle this cleanly that I’m hopeful will eventually mean I can drop that hack. Serial, ethernet, the QCA8337 switches (2 sets of 5 ports, tied to different GigE devices on the processor) and the internal NOR all had drivers, so it was a matter of crafting an appropriate DTB to get them working. That left niggles.

First, the second switch is hooked up via SGMII. It turned out the IPQ806x stmmac driver didn’t initialise the clocks in this mode correctly, and neither did the qca8k switch driver. So I need to fix up both of those (Sergey had handled the stmmac driver, so I just had to clean up and submit his patch). Next it turned out the driver for talking to the Qualcomm firmware (SCM) had been updated in a way that broke the old method needed on the IPQ8064. Some git archaeology figured that one out and provided a solution. Ansuel Smith helpfully provided the DWC3 PHY driver for the USB port. That got me to the point I could put a Debian armhf image onto a USB stick and mount that as root, which made debugging much easier.

At this point I started to play with configuring up the device to actually act as a router. I make use of a number of VLANs on my home network, so I wanted to make sure I could support those. Turned out the stmmac driver wasn’t happy reconfiguring its MTU because the IPQ8064 driver doesn’t configure the FIFO sizes. I found what seem to be the correct values and plumbed them in. Then the qca8k driver only supported port bridging. I wanted the ability to have a trunk port to connect to the upstairs switch, while also having ports that only had a single VLAN for local devices. And I wanted the switch to handle this rather than requiring the CPU to bridge the traffic. Thankfully it’s easy to find a copy of the QCA8337 datasheet and the kernel Distributed Switch Architecture is pretty flexible, so I was able to implement the necessary support.

I stuck with Debian on the USB stick for actually putting the device into production. It makes it easier to fix things up if necessary, and the USB stick allows for a full Debian install which would be tricky on the 128M of internal NAND. That means I can use things like nftables for my firewalling, and use the standard Debian packages for things like collectd and mosquitto. Plus for debug I can fire up things like tcpdump or tshark. Which ended up being useful because when I put the device into production I started having weird IPv6 issues that turned out to be a lack of proper Ethernet multicast filter support in the IPQ806x ethernet device. The driver would try and setup the multicast filter for the IPv6 NDP related packets, but it wouldn’t actually work. The fix was to fall back to just receiving all multicast packets - this is what the vendor driver does.

Most of this work will be present once the 5.9 kernel is released - the basics are already in 5.8. The pieces I can think of that are not yet queued up are the following:

  • stmmac IPQ806x FIFO sizes. I sent out an RFC patch for these, but didn’t get any replies. I probably just need to submit this.
  • NAND. This is missing support for the QCOM ADM DMA engine. I’ve sent out the patch I found to enable this, and have had some feedback, so I’m hopeful it will get in at some point.
  • LCD. AFAICT the LCD is an ST7735 device, which has kernel support, but I haven’t spent serious effort getting the SPI configuration to work.
  • Touchscreen. Again, this seems to be a zt2046q or similar, which has a kernel driver, but the basic attempts I’ve tried don’t get any response.
  • Proper SFP functionality. The IPQ806x has a PCS module, but the stmmac driver doesn’t have an easy way to plumb this in. I have ideas about how to get it working properly (and it can be hacked up with a fixed link config) but it’s not been a high priority.
  • Device tree additions. Some of the later bits I’ve enabled aren’t yet in the mainline RB3011 DTB. I’ll submit a patch for that at some point.

Overall I consider the device a success, and it’s been entertaining getting it working properly. I’m running a mostly mainline kernel, it’s handling my house traffic without breaking a sweat, and the fact it’s running Debian makes it nice and easy to throw more things on it as I desire. However it turned out the RB3011 isn’t as perfect a device as I’d hoped. The PoE support is passive, and the UniFi wants 802.3af. So I was going to end up with 2 devices. As it happened I picked up a cheap D-Link DGS-1210-10P switch, which provides the PoE support as well as some additional switch ports. Plus it runs Linux, so more on that later…

Google AdSense: A guide to common AdSense policy questions

A guide to understand the digital advertising policies and resolving policy violations for AdSense.

Sociological Images: Survivors or Victims?

The #MeToo movement that began in 2017 has reignited a long debate about how to name people who have had traumatic experiences. Do we call individuals who have experienced war, cancer, crime, or sexual violence “victims”? Or should we call them “survivors,” as recent activists like #MeToo founder Tarana Burke have advocated?

Strong arguments can be raised for both sides. In the sexual violence debate, advocates of “survivor” argue the term places women at the center of their own narrative of recovery and growth. Defenders of victim language, meanwhile, argue that victim better describes the harm and seriousness of violence against women and identifies the source of violence in systemic misogyny and cultures of patriarchy.

Unfortunately, while there has been much debate about the use of these terms, there has been little documentation of how service and advocacy organizations that work with individuals who have experienced trauma actually use these terms. Understanding the use of survivor and victim is important because it tells us what these terms mean in practice and where barriers to change are.

We sought to remedy this problem in a recent paper published in Social Currents. We used data from nonprofit mission statements to track language change among 3,756 nonprofits that once talked about victims in the 1990s. We found, in general, that relatively few organizations adopted survivor as a way to talk about trauma even as some organizations have moved away from talking about victims. However, we also found that, increasingly, organizations that focus on issues related to women tend to use victim and survivor interchangeably. In contrast, organizations that do not work with women appear to be moving away from both terms.

These findings contradict the way we usually think about “survivor” and “victim” as opposing terms. Does this mean that survivor and victim are becoming the “extremely reduced form” through which women are able to enter the public sphere? Or does it mean that feminist service providers are avoiding binary thinking? These questions, as well as questions about the strategic, linguistic, and contextual reasons that organizations choose victim- or survivor-based language, give advocates and scholars of language plenty to re-examine.

Andrew Messamore is a PhD student in the Department of Sociology at the University of Texas at Austin. Andrew studies changing modes of local organizing at work and in neighborhoods and how the ways people associate shapes community, public discourse, and economic inequality in the United States.

Pamela Paxton is the Linda K. George and John Wilson Professor of Sociology at The University of Texas at Austin. With Melanie Hughes and Tiffany Barnes, she is the co-author of the 2020 book, Women, Politics, and Power: A Global Perspective.


Worse Than Failure: CodeSOD: Exceptional Standards Compliance

When we're laying out code standards and policies, we are, in many ways, relying on "policing by consent". We are trying to establish standards for behavior among our developers, but we can only do this with their consent. This means our standards have to have clear value, have to be applied fairly and equally. The systems we build to enforce those standards are meant to reduce conflict and de-escalate disagreements, not create them.

But that doesn't mean there won't always be developers who resist following the agreed upon standards. Take, for example, Daniel's co-worker. Their CI process also runs a static analysis step against their C# code, which lets them enforce a variety of coding standards.

One of those standards is: "Catch specific exceptions. Don't catch the generic Exception type unless explicitly necessary." If it is explicitly necessary, their CI system attaches "Comments" (not code comments) to the commit, so all you need to do is click the "resolve" button and provide a brief explanation of why it was necessary.

This wouldn't be an… exceptional standard. Specific is always better than vague, and in this case, the rule isn't hard and fast: you're allowed to violate it if you can explain why.

But explaining yourself sounds like a lot of work. Wouldn't it be easier to try and fool the static analysis tool?

try { ... } catch (Exception ex) when (ex is Exception exception) { ... }

C#'s catch block supports a when clause, which is meant to filter based on properties of the exception. The ex is Exception exception expression is a pattern match and also a cast: it's true if the type of ex is Exception, and it also casts ex to Exception, storing the result in exception.

Or to word it another way, we catch the exception if it is of the type Exception but only when the type is Exception in which case we cast the exception which we know is an Exception to Exception and store it in exception, and I take exception to all of that.

Presumably, they'll change the rule in the CI system to exclude these, but hopefully they'll also have a talk with the developer responsible about the purpose of standards. Maybe they'll get some "standards by consent", or maybe somebody'll be looking for a new job.


Planet Debian: Russ Allbery: Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217

Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities.

I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded.

I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures.

One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice.

The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor.

However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them.

The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation.

This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them.

The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment.

This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explains some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students.

It is hard to enroll in a university! More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them.

Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process is opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites).

It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this. But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn.

Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.

This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of employers' risk-shifting by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check.

There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork.

As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way.

I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools.

Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation.

Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed.

Rating: 8 out of 10


Planet DebianEnrico Zini: Relationships links

The Bouletcorp » Love & Dragons is a strip I like about fairytale relationships.

There are a lot of mainstream expectations about relationships. These links challenge a few of them:

More about emotional work, with some more links following up on a previous links post:

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.2: New upstream, awesome new stopwatch

Following up on the initial RcppSpdlog 0.0.1 release earlier this week, we are pumped to announce release 0.0.2. It contains upstream version 1.8.0 for spdlog which utilizes (among other things) a new feature in the embedded fmt library, namely completely automated formatting of high resolution time stamps which allows for gems like this (taken from this file in the package and edited down for brevity):

What we see is all there is: One instantiates a stopwatch object, and simply references it. The rest, as they say, is magic. And we get tic / toc-like behaviour in modern C++ at essentially no cost to us (as code authors). So nice. Output from the (included in the package) function exampleRsink() (again edited down just a little):

We see that the two simple logging instances come 10 and 18 microseconds into the call.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library, written by Gabi Melman, with all the bells and whistles you would want, and also includes fmt by Victor Zverovich.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.2 (2020-09-17)

  • Upgraded to upstream release 1.8.0

  • Switched Travis CI to using BSPM, also test on macOS

  • Added 'stopwatch' use to main R sink example

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppSpdlog page.

The only sour grapes, again, are once more over the CRAN processing. And just as 0.0.1 was delayed for no good reason for three weeks, 0.0.2 was delayed by three days just because … well that is how CRAN rules sometimes. I’d be even more mad if I had an alternative but I don’t. We remain grateful for all they do but they really could have let this one through even at one-day update delta. Ah well, now we’re three days wiser and of course nothing changed in the package.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndy Simpkins: Using an IP camera in conference calls

My webcam has broken, something that I have been using a lot during the last few months for some reason.

A friend of mine suggested that I use the mic and camera on my mobile phone instead.  There is a simple app ‘droidcam’ that makes the phone behave as a simple webcam; it also has a client application to run on your PC to capture the web-stream and present it as a video device.  All well and good, but I would like to keep proprietary software off my PCs (I have a hard enough time accepting it on a phone but I have to draw a line somewhere).

I decided that there had to be a simple way to ingest that stream and present it as a local video device on a Linux box.  It turns out that it is a lot simpler than I thought it would be.  I had it working within 10 minutes!

Packages needed:  ffmpeg, v4l2loopback-utils

sudo apt-get install ffmpeg v4l2loopback-utils

Start the loop-back device:

sudo modprobe v4l2loopback

Ingest video and present on loop-back device:

ffmpeg -re -i <URL_VideoStream> -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0


-re
Read input at native frame rate. Mainly used to simulate a grab
device, or live input stream (e.g. when reading from a file). Should
not be used with actual grab devices or live input streams (where it
can cause packet loss). By default ffmpeg attempts to read the
input(s) as fast as possible. This option will slow down the reading
of the input(s) to the native frame rate of the input(s). It is
useful for real-time output (e.g. live streaming).
 -i <source>
In this case my phone running droidcam
-vcodec rawvideo
Select the output video codec to raw video (as expected for /dev/video#)
-pix_fmt yuv420p
Set pixel format. In this example, tell ffmpeg that the video will
use the YUV colour space with 4:2:0 chroma subsampling (yuv420p)
-f v4l2 /dev/video0
Force the output to be in Video4Linux2 (v4l2) format, bound to
device /dev/video0
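A side note on yuv420p: the “420” names the 4:2:0 chroma subsampling scheme, not a resolution. To see what that means for raw frame sizes, here is a small illustrative Python helper (not part of the setup above):

```python
def yuv420p_frame_bytes(width, height):
    """Size in bytes of one raw yuv420p frame.

    YUV 4:2:0 planar stores a full-resolution luma (Y) plane plus
    two quarter-resolution chroma (U, V) planes, i.e. 1.5 bytes
    per pixel on average.
    """
    return width * height * 3 // 2

# One raw 1280x720 frame is 1382400 bytes
print(yuv420p_frame_bytes(1280, 720))
```

That is why raw video over the loop-back device is cheap on CPU but heavy on bandwidth compared to a compressed stream.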


Planet DebianVincent Bernat: Keepalived and unicast over multiple interfaces

Keepalived is a Linux implementation of VRRP. The usual role of VRRP is to share a virtual IP across a set of routers. For each VRRP instance, a leader is elected and gets to serve the IP address, ensuring the high availability of the attached service. Keepalived can also be used for a generic leader election, thanks to its ability to use scripts for healthchecking and run commands on state change.

A simple configuration looks like this:

vrrp_instance gateway1 {
  state BACKUP          # ❶
  interface eth0        # ❷
  virtual_router_id 12  # ❸
  priority 101          # ❹
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

The state keyword in ❶ instructs Keepalived to not take the leader role when starting. Otherwise, incoming nodes create a temporary disruption by taking over the IP address until the election settles. The interface keyword in ❷ defines the interface for sending and receiving VRRP packets. It is also the default interface to configure the virtual IP address. The virtual_router_id directive in ❸ is common to all nodes sharing the virtual IP. The priority keyword in ❹ helps choose which router will be elected as leader. If you need more information around Keepalived, be sure to check the documentation.

VRRP design is tied to Ethernet networks and requires a multicast-enabled network for communication between nodes. In some environments, notably public clouds, multicast is unavailable. In this case, Keepalived can send VRRP packets using unicast:

vrrp_instance gateway1 {
  state BACKUP
  interface eth0
  virtual_router_id 12
  priority 101
  unicast_peer {
    2001:db8::11
    2001:db8::12
  }
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}

Another process, like a BGP daemon, should advertise the virtual IP address to the “network”. If needed, Keepalived can trigger whatever action is needed for this by using notify_* scripts.

Until version 2.21 (not released yet), the interface directive is mandatory and Keepalived will transmit and receive VRRP packets on this interface only. If peers are reachable through several interfaces, like on a BGP on the host setup, you need a workaround. A simple one is to use a VXLAN interface:

$ ip -6 link add keepalived6 type vxlan id 6 dstport 4789 local 2001:db8::10 nolearning
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::11
$ bridge fdb append 00:00:00:00:00:00 dev keepalived6 dst 2001:db8::12
$ ip link set up dev keepalived6

Learning of MAC addresses is disabled and one generic entry for each peer is added in the forwarding database: transmitted packets are broadcast to all peers, notably VRRP packets. Have a look at “VXLAN & Linux” for additional details.

vrrp_instance gateway1 {
  state BACKUP
  interface keepalived6
  mcast_src_ip 2001:db8::10
  virtual_router_id 12
  priority 101
  virtual_ipaddress {
    2001:db8:ff/64 dev lo
  }
}


Starting from Keepalived 2.21, unicast_peer can be used without the interface directive. I think using VXLAN is still a neat trick applicable to other situations where communication using broadcast or multicast is needed, while the underlying network provides no support for this.

Rondam RamblingsGame over for the USA

I would like to think that Ruth Bader Ginsberg's untimely passing is not the catastrophe that it appears to be.  I would like to think that Mitch McConnell is a man of principle, and having once said that the Senate should not confirm a Supreme Court justice in an election year he will not brazenly expose himself as a hypocrite and confirm a Supreme Court justice in an election year. I would like

Planet DebianVincent Bernat: Syncing NetBox with a custom Ansible module

The netbox.netbox collection from Ansible Galaxy provides several modules to update NetBox objects:

- name: create a device in NetBox
  netbox.netbox.netbox_device:
    netbox_url: http://netbox.local
    netbox_token: s3cret
    data:
      device_type: QFX5110-48S
      device_role: Compute Switch
      site: SFO1

However, if NetBox is not your source of truth, you may want to ensure it stays in sync with your configuration management database1 by removing outdated devices or IP addresses. While it should be possible to glue together a playbook with a query, a loop and some filtering to delete unwanted elements, it feels clunky, inefficient and an abuse of YAML as a programming language. A specific Ansible module solves this issue and is likely more flexible.


I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a first simpler example.


The module has the following signature and it syncs NetBox with the content of the provided YAML file:

- netbox_sync:  # illustrative name for the custom module
    source: netbox.yaml
    api: http://netbox.local
    token: s3cret

The synchronized objects are:

  • sites,
  • manufacturers,
  • device types,
  • device roles,
  • devices, and
  • IP addresses.

In our environment, the YAML file is generated from our configuration management database and contains a set of devices and a list of IP addresses:

devices:
  …:
    datacenter: sfo1
    manufacturer: Cisco
    model: Catalyst 2960G-48TC-L
    role: net_tor_oob_switch
  …:
    datacenter: sfo1
    manufacturer: Juniper
    model: QFX5110-48S
    role: net_tor_gpu_switch
  # […]
ips:
  - device: …
    ip: …
    interface: oob
  - device: …
    ip: …
    interface: oob
  - device: …
    ip: …
    interface: lo0.0
# […]

The network team is not the sole tenant in NetBox. While adding new objects or modifying existing ones should be relatively safe, deleting unwanted objects can be risky. The module only deletes objects it did create or modify. To identify them, it marks them with a specific tag, cmdb. Most objects in NetBox accept tags.
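The safety rule can be sketched in a few lines; this is an illustration of the idea, not the module's actual code (the object shapes are made up):

```python
def deletion_candidates(existing, wanted, tag="cmdb"):
    # An object may only be deleted if we created or modified it
    # (it carries our tag) and it is absent from the wanted set.
    return [obj["name"] for obj in existing
            if tag in obj["tags"] and obj["name"] not in wanted]

existing = [
    {"name": "sw1", "tags": ["cmdb"]},  # ours, still wanted
    {"name": "sw2", "tags": ["cmdb"]},  # ours, no longer wanted
    {"name": "fw1", "tags": []},        # not ours: never touched
]
print(deletion_candidates(existing, wanted={"sw1"}))  # → ['sw2']
```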

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    source=dict(type='path', required=True),
    api=dict(type='str', required=True),
    token=dict(type='str', required=True, no_log=True),
    max_workers=dict(type='int', required=False, default=10)
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)
Compared to that skeleton, there is an additional optional argument, max_workers, defining the number of workers used to talk to NetBox and query the existing objects in parallel, speeding up the execution.
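The parallel lookup can be pictured with a standard thread pool; a minimal sketch where get_one stands in for a NetBox query (hypothetical names, not the module's code):

```python
from concurrent.futures import ThreadPoolExecutor

def get_existing(keys, get_one, max_workers=10):
    # Run one lookup per key concurrently; in the real module each
    # lookup would be an HTTP request to the NetBox API.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return dict(zip(keys, executor.map(get_one, keys)))

# Example with a fake lookup function:
print(get_existing(["sfo1", "chi1"], lambda k: {"name": k}))
```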

Abstracting synchronization

We need to synchronize different object types, but once we have a list of objects we want in NetBox, the grunt work is always the same:

  • check if the objects already exist,
  • retrieve them and put them in a form suitable for comparison,
  • retrieve the extra objects we don’t want anymore,
  • compare the two sets, and
  • add missing objects, update existing ones, delete extra ones.

We code these behaviours into a Synchronizer abstract class. For each kind of object, a concrete class is built with the appropriate class attributes to tune its behaviour and a wanted() method to provide the objects we want.

I am not explaining the abstract class code here. Have a look at the source if you want.
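For orientation only, the general shape of such a class might look like the following sketch; the attribute names come from the concrete classes shown later, but this is not the actual implementation:

```python
class Synchronizer:
    # Concrete subclasses tune behaviour with class attributes:
    app = None            # NetBox application, e.g. "dcim"
    table = None          # NetBox table, e.g. "sites"
    key = None            # attribute used to look up existing objects
    foreign = {}          # NetBox attribute -> synchronizer handling it
    only_on_create = ()   # attributes set at creation only
    remove_unused = None  # safety limit on deletions

    def __init__(self, module=None, netbox=None, source=None,
                 before=None, after=None):
        self.module = module
        self.netbox = netbox
        self.source = source

    def wanted(self):
        """Map object keys to the attributes we want them to have."""
        raise NotImplementedError

# A toy subclass, in the spirit of the concrete classes:
class SyncExample(Synchronizer):
    app, table, key = "extras", "tags", "name"

    def wanted(self):
        return {"example": dict(description="illustrative object")}

print(SyncExample().wanted())  # → {'example': {'description': 'illustrative object'}}
```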

Synchronizing tags and tenants

As a starter, here is how we define the class synchronizing the tags:

class SyncTags(Synchronizer):
    app = "extras"
    table = "tags"
    key = "name"

    def wanted(self):
        return {"cmdb": dict(
            description="synced by network CMDB")}

The app and table attributes define the NetBox objects we want to manipulate. The key attribute is used to determine how to look up existing objects. In this example, we want to look up tags using their names.

The wanted() method is expected to return a dictionary mapping object keys to the list of wanted attributes. Here, the keys are tag names and we create only one tag, cmdb, with the provided attributes. This is the tag we will use to mark the objects we create or modify.

If the tag does not exist, it is created. If it exists, the provided attributes are updated. Other attributes are left untouched.
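These update semantics (create if missing, update only the attributes we provide, leave everything else untouched) can be shown with a toy function; again an illustration, not the module's code:

```python
def plan(existing, provided):
    # existing: the object's current attributes in NetBox (None if absent)
    # provided: the attributes returned by wanted()
    if existing is None:
        return ("create", provided)
    changes = {k: v for k, v in provided.items() if existing.get(k) != v}
    return ("update", changes) if changes else ("noop", {})

print(plan(None, {"description": "synced by network CMDB"}))
print(plan({"description": "old", "color": "ff0000"},
           {"description": "synced by network CMDB"}))
# "color" is not in the provided attributes, so it is left untouched
```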

We also want to create a specific tenant for objects accepting such an attribute (devices and IP addresses):

class SyncTenants(Synchronizer):
    app = "tenancy"
    table = "tenants"
    key = "name"

    def wanted(self):
        return {"Network": dict(slug="network",
                                description="Network team")}

Synchronizing sites

We also need to synchronize the list of sites. This time, the wanted() method uses the information provided in the YAML file: it walks the devices and builds a set of datacenter names.

class SyncSites(Synchronizer):

    app = "dcim"
    table = "sites"
    key = "name"
    only_on_create = ("status", "slug")

    def wanted(self):
        result = set(details["datacenter"]
                     for details in self.source['devices'].values()
                     if "datacenter" in details)
        return {k: dict(slug=k,
                        status="planned")
                for k in result}

Thanks to the only_on_create attribute, the listed attributes (status and slug) are only set when the object is created and are not updated afterwards if they differ. The goal of this synchronizer is mostly to collect the references to the different sites for other objects.

>>> pprint(SyncSites(**sync_args).wanted())
{'sfo1': {'slug': 'sfo1', 'status': 'planned'},
 'chi1': {'slug': 'chi1', 'status': 'planned'},
 'nyc1': {'slug': 'nyc1', 'status': 'planned'}}

Synchronizing manufacturers, device types and device roles

The synchronization of manufacturers is pretty similar, except we do not use the only_on_create attribute:

class SyncManufacturers(Synchronizer):

    app = "dcim"
    table = "manufacturers"
    key = "name"

    def wanted(self):
        result = set(details["manufacturer"]
                     for details in self.source['devices'].values()
                     if "manufacturer" in details)
        return {k: {"slug": slugify(k)}
                for k in result}

Regarding the device types, we use the foreign attribute linking a NetBox attribute to the synchronizer handling it.

class SyncDeviceTypes(Synchronizer):

    app = "dcim"
    table = "device_types"
    key = "model"
    foreign = {"manufacturer": SyncManufacturers}

    def wanted(self):
        result = set((details["manufacturer"], details["model"])
                     for details in self.source['devices'].values()
                     if "model" in details)
        return {k[1]: dict(manufacturer=k[0],
                           slug=slugify(k[1]))
                for k in result}

The wanted() method refers to the manufacturer using its key attribute. In this case, this is the manufacturer name.

>>> pprint(SyncManufacturers(**sync_args).wanted())
{'Cisco': {'slug': 'cisco'},
 'Dell': {'slug': 'dell'},
 'Juniper': {'slug': 'juniper'}}
>>> pprint(SyncDeviceTypes(**sync_args).wanted())
{'ASR 9001': {'manufacturer': 'Cisco', 'slug': 'asr-9001'},
 'Catalyst 2960G-48TC-L': {'manufacturer': 'Cisco',
                           'slug': 'catalyst-2960g-48tc-l'},
 'MX10003': {'manufacturer': 'Juniper', 'slug': 'mx10003'},
 'QFX10002-36Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-36q'},
 'QFX10002-72Q': {'manufacturer': 'Juniper', 'slug': 'qfx10002-72q'},
 'QFX5110-32Q': {'manufacturer': 'Juniper', 'slug': 'qfx5110-32q'},
 'QFX5110-48S': {'manufacturer': 'Juniper', 'slug': 'qfx5110-48s'},
 'QFX5200-32C': {'manufacturer': 'Juniper', 'slug': 'qfx5200-32c'},
 'S4048-ON': {'manufacturer': 'Dell', 'slug': 's4048-on'},
 'S6010-ON': {'manufacturer': 'Dell', 'slug': 's6010-on'}}

The device roles are defined like this:

class SyncDeviceRoles(Synchronizer):

    app = "dcim"
    table = "device_roles"
    key = "name"

    def wanted(self):
        result = set(details["role"]
                     for details in self.source['devices'].values()
                     if "role" in details)
        return {k: dict(slug=slugify(k))
                for k in result}

Synchronizing devices

A device is mostly a name with references to a role, a model, a datacenter and a tenant. These references are declared as foreign keys using the synchronizers defined previously.

class SyncDevices(Synchronizer):
    app = "dcim"
    table = "devices"
    key = "name"
    foreign = {"device_role": SyncDeviceRoles,
               "device_type": SyncDeviceTypes,
               "site": SyncSites,
               "tenant": SyncTenants}
    remove_unused = 10

    def wanted(self):
        return {name: dict(device_role=details["role"],
                           device_type=details["model"],
                           site=details["datacenter"],
                           tenant="Network")
                for name, details in self.source['devices'].items()
                if {"datacenter", "model", "role"} <= set(details.keys())}

The remove_unused attribute is a safety net: the module fails if it has to delete more than 10 devices, as this may be the indication there is a bug somewhere, unless one of your datacenters suddenly caught fire.

>>> pprint(SyncDevices(**sync_args).wanted())
{'': {'device_role': 'net_tor_oob_switch',
                             'device_type': 'Catalyst 2960G-48TC-L',
                             'site': 'sfo1',
                             'tenant': 'Network'},
 '': {'device_role': 'net_tor_gpu_switch',
                             'device_type': 'QFX5110-48S',
                             'site': 'sfo1',
                             'tenant': 'Network'},

Synchronizing IP addresses

The last step is to synchronize IP addresses. We do not attach them to a device.2 Instead, we specify the device names in the description of the IP address:

class SyncIPs(Synchronizer):
    app = "ipam"
    table = "ip-addresses"
    key = "address"
    foreign = {"tenant": SyncTenants}
    remove_unused = 1000

    def wanted(self):
        wanted = {}
        for details in self.source['ips']:
            if details['ip'] in wanted:
                wanted[details['ip']]['description'] = \
                    f"{details['device']} (and others)"
            else:
                wanted[details['ip']] = dict(
                    tenant="Network",
                    dns_name="",        # information is present in DNS
                    description=f"{details['device']}: {details['interface']}")
        return wanted

There is a slight difficulty: NetBox allows duplicate IP addresses, so a simple lookup is not enough. In case of multiple matches, we choose the best by preferring those tagged with cmdb, then those already attached to an interface:

def get(self, key):
    """Grab IP address from NetBox."""
    # There may be duplicates. We need to grab the "best".
    results = super(Synchronizer, self).get(key)
    if len(results) == 0:
        return None
    if len(results) == 1:
        return results[0]
    scores = [0]*len(results)
    for idx, result in enumerate(results):
        if "cmdb" in result.tags:
            scores[idx] += 10
        if result.interface is not None:
            scores[idx] += 5
    return sorted(zip(scores, results),
                  reverse=True, key=lambda k: k[0])[0][1]

Getting the current and wanted states

Each synchronizer is initialized with a reference to the Ansible module, a reference to a pynetbox’s API object, the data contained in the provided YAML file and two empty dictionaries for the current and expected states:

source = yaml.safe_load(open(module.params['source']))
netbox = pynetbox.api(module.params['api'],
                      token=module.params['token'])

sync_args = dict(
    module=module,
    netbox=netbox,
    source=source,
    before={},
    after={}
)
synchronizers = [synchronizer(**sync_args) for synchronizer in [
    SyncTags,
    SyncTenants,
    SyncSites,
    SyncManufacturers,
    SyncDeviceTypes,
    SyncDeviceRoles,
    SyncDevices,
    SyncIPs
]]

Each synchronizer has a prepare() method whose goal is to compute the current and wanted states. It returns True in case of a difference:

# Check what needs to be synchronized
try:
    for synchronizer in synchronizers:
        result['changed'] |= synchronizer.prepare()
except AnsibleError as e:
    result['msg'] = e.message
    module.fail_json(**result)

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between these states. Each synchronizer registers the current and wanted states in sync_args["before"][table] and sync_args["after"][table] where table is the name of the table for a given NetBox object type. The diff object is a bit elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

# Compute the diff
if module._diff and result['changed']:
    result['diff'] = [
        dict(
            before_header=table,
            after_header=table,
            before=yaml.safe_dump(sync_args["before"][table]),
            after=yaml.safe_dump(sync_args["after"][table]))
        for table in sync_args["after"]
        if sync_args["before"][table] != sync_args["after"][table]
    ]

# Stop here if check mode is enabled or if no change
if module.check_mode or not result['changed']:
    module.exit_json(**result)

Each synchronizer also exposes a synchronize() method to apply changes and a cleanup() method to delete unwanted objects. Order is important due to the relation between the objects.

# Synchronize
for synchronizer in synchronizers:
    synchronizer.synchronize()
for synchronizer in synchronizers[::-1]:
    synchronizer.cleanup()
module.exit_json(**result)

The complete code is available on GitHub. Compared to using the netbox.netbox collection, the logic is written in Python instead of trying to glue Ansible tasks together. I believe this is both more flexible and easier to read, notably when trying to delete outdated objects. While I did not test it, it should also be faster. An alternative would have been to reuse code from the netbox.netbox collection, as it contains similar primitives. Unfortunately, I didn’t think of it until now. 😶

  1. In my opinion, a good option for a source of truth is to use YAML files in a Git repository. You get versioning for free and people can get started with a text editor. ↩

  2. This limitation is mostly due to laziness: we do not really care about this information. Our main motivation for putting IP addresses in NetBox is to keep track of the used IP addresses. However, if an IP address is already attached to an interface, we leave this association untouched. ↩

Planet DebianBits from Debian: New Debian Maintainers (July and August 2020)

The following contributors were added as Debian Maintainers in the last two months:

  • Chirayu Desai
  • Shayan Doust
  • Arnaud Ferraris
  • Fritz Reichwald
  • Kartik Kulkarni
  • François Mazen
  • Patrick Franz
  • Francisco Vilmar Cardoso Ruviaro
  • Octavio Alvarez
  • Nick Black


Planet DebianRussell Coker: Burning Lithium Ion Batteries

I had an old Nexus 4 phone that was expanding and decided to test some of the theories about battery combustion.

The first claim that often gets made is that if the plastic seal on the outside of the battery is broken then the battery will catch fire. I tested this by cutting the battery with a craft knife. With every cut the battery sparked a bit and then when I levered up layers of the battery (it seems to be multiple flat layers of copper and black stuff inside the battery) there were more sparks. The battery warmed up, it’s plausible that in a confined environment that could get hot enough to set something on fire. But when the battery was resting on a brick in my backyard that wasn’t going to happen.

The next claim is that a Li-Ion battery fire will be increased with water. The first thing to note is that Li-Ion batteries don’t contain Lithium metal (the Lithium high power non-rechargeable batteries do). Lithium metal will seriously go off if exposed to water. But lots of other Lithium compounds will also react vigorously with water (like Lithium oxide for example). After cutting through most of the center of the battery I dripped some water in it. The water boiled vigorously and the corners of the battery (which were furthest away from the area I cut) felt warmer than they did before adding water. It seems that significant amounts of energy are released when water reacts with whatever is inside the Li-Ion battery. The reaction was probably giving off hydrogen gas but didn’t appear to generate enough heat to ignite hydrogen (which is when things would really get exciting). Presumably if a battery was cut in the presence of water while in an enclosed space that traps hydrogen then the sparks generated by the battery reacting with air could ignite hydrogen generated from the water and give an exciting result.

It seems that a CO2 fire extinguisher would be best for a phone/tablet/laptop fire as that removes oxygen and cools it down. If that isn’t available then a significant quantity of water will do the job, water won’t stop the reaction (it can prolong it), but it can keep the reaction to below 100C which means it won’t burn a hole in the floor and the range of toxic chemicals released will be reduced.

The rumour that a phone fire on a plane could do a “China syndrome” type thing and melt through the Aluminium body of the plane seems utterly bogus. I gave it a good try and was unable to get a battery to burn through its plastic and metal foil case. A spare battery for a laptop in checked luggage could be a major problem for a plane if it ignited. But a battery in the passenger area seems unlikely to be a big problem if plenty of water is dumped on it to prevent the plastic case from burning and polluting the air.

I was not able to get a result that was even worthy of a photograph. I may do further tests with laptop batteries.

Cryptogram Nihilistic Password Security Questions

Planet DebianJelmer Vernooij: Debian Janitor: Expanding Into Improving Multi-Arch

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

As of dpkg 1.16.2 and apt 0.8.13, Debian has full support for multi-arch. To quote from the multi-arch implementation page:

Multiarch lets you install library packages from multiple architectures on the same machine. This is useful in various ways, but the most common is installing both 64- and 32-bit software on the same machine and having dependencies correctly resolved automatically. In general you can have libraries of more than one architecture installed together and applications from one architecture or another installed as alternatives.

The Multi-Arch specification describes a new Multi-Arch header which can be used to indicate how to resolve cross-architecture dependencies.

The existing Debian Multi-Arch hinter compares binary packages between architectures and suggests fixes to resolve multi-arch problems. It provides hints as to what Multi-Arch fields can be set, allowing the packages to be safely installed in a Multi-Arch world. The full list of almost 10,000 hints generated by the hinter is available online.

Recent versions of lintian-brush now include a command called apply-multiarch-hints that downloads and locally caches the hints and can apply them to a package maintained in Git. For example, to apply multi-arch hints to autosize.js:

 $ debcheckout autosize.js
 declared git repository at
 git clone autosize.js ...
 Cloning into 'autosize.js'...
 $ cd autosize.js
 $ apply-multiarch-hints
 Downloading new version of multi-arch hints.
 libjs-autosize: Add Multi-Arch: foreign.
 node-autosize: Add Multi-Arch: foreign.
 $ git log -p
 commit 3f8d1db5af4a87e6ebb08f46ddf79f6adf4e95ae (HEAD -> master)
 Author: Jelmer Vernooij <>
 Date:   Fri Sep 18 23:37:14 2020 +0000

     Apply multi-arch hints.
     + libjs-autosize, node-autosize: Add Multi-Arch: foreign.

     Changes-By: apply-multiarch-hints

 diff --git a/debian/changelog b/debian/changelog
 index e7fa120..09af4a7 100644
 --- a/debian/changelog
 +++ b/debian/changelog
 @@ -1,3 +1,10 @@
 +autosize.js (4.0.2~dfsg1-5) UNRELEASED; urgency=medium
 +  * Apply multi-arch hints.
 +    + libjs-autosize, node-autosize: Add Multi-Arch: foreign.
 + -- Jelmer Vernooij <>  Fri, 18 Sep 2020 23:37:14 -0000
  autosize.js (4.0.2~dfsg1-4) unstable; urgency=medium

    * Team upload
 diff --git a/debian/control b/debian/control
 index 01ca968..fbba1ae 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -20,6 +20,7 @@ Architecture: all
  Depends: ${misc:Depends}
  Recommends: javascript-common
  Breaks: ruby-rails-assets-autosize (<< 4.0)
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - NodeJS
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,
 @@ -32,6 +33,7 @@ Package: node-autosize
  Architecture: all
  Depends: ${misc:Depends}
   , nodejs
 +Multi-Arch: foreign
  Description: script to automatically adjust textarea height to fit text - Javascript
   Autosize is a small, stand-alone script to automatically adjust textarea
   height to fit text. The autosize function accepts a single textarea element,

The Debian Janitor also has a new multiarch-fixes suite that runs apply-multiarch-hints across packages in the archive and proposes merge requests. For example, you can see the merge request against autosize.js here.

For more information about the Janitor's lintian-fixes efforts, see the landing page.


Planet DebianDaniel Lange: Fixing the Nextcloud menu to show more than eight application icons

I have been late to adopt an on-premise cloud solution as the security of Owncloud a few years ago wasn't so stellar (cf. my comment from 2013 in Encryption files ... for synchronization across the Internet). But the follow-up product Nextcloud has matured quite nicely and we use it for collaboration both in the company and in FLOSS related work at multiple nonprofit organizations.

There is a very annoying "feature" in Nextcloud though that the designers think menu items for apps at the top need to be limited to eight or fewer to prevent information overload in the header. The whole issue discussion is worth reading as it is an archetypical example of design prevalence vs. user choice.

And of course designers think they are right. That's a feature of the trade.
And because they know better there is no user-configurable option to extend those 8 items to maybe 12 or so, which would prevent the annoying overflow menu we are seeing with 10 applications in use:

Screenshot of stock Nextcloud menu

Luckily code can be changed and there are many comments floating around the Internet suggesting to change const minAppsDesktop = 8. In this case it is slightly complicated by the fact that the JavaScript code is distributed in compressed form (aka "minified") as core/js/dist/main.js and you probably don't want to build the whole beast locally to change one constant.


const breakpoint_mobile_width = 1024;

const resizeMenu = () => {
    const appList = $('#appmenu li')
    const rightHeaderWidth = $('.header-right').outerWidth()
    const headerWidth = $('header').outerWidth()
    const usePercentualAppMenuLimit = 0.33
    const minAppsDesktop = 8
    let availableWidth = headerWidth - $('#nextcloud').outerWidth() - (rightHeaderWidth > 210 ? rightHeaderWidth : 210)
    const isMobile = $(window).width() < breakpoint_mobile_width
    if (!isMobile) {
        availableWidth = availableWidth * usePercentualAppMenuLimit
    }
    let appCount = Math.floor((availableWidth / $(appList).width()))
    if (isMobile && appCount > minAppsDesktop) {
        appCount = minAppsDesktop
    }
    if (!isMobile && appCount < minAppsDesktop) {
        appCount = minAppsDesktop
    }

    // show at least 2 apps in the popover
    if (appList.length - 1 - appCount >= 1) {
        appCount--
    }

    $('#more-apps a').removeClass('active')
    let lastShownApp
    for (let k = 0; k < appList.length - 1; k++) {
        const name = $(appList[k]).data('id')
        if (k < appCount) {
            $(appList[k]).removeClass('hidden')
            $('#apps li[data-id=' + name + ']').addClass('in-header')
            lastShownApp = appList[k]
        } else {
            $(appList[k]).addClass('hidden')
            $('#apps li[data-id=' + name + ']').removeClass('in-header')
            // move active app to last position if it is active
            if (appCount > 0 && $(appList[k]).children('a').hasClass('active')) {
                $(lastShownApp).addClass('hidden')
                $('#apps li[data-id=' + $(lastShownApp).data('id') + ']').removeClass('in-header')
                $(appList[k]).removeClass('hidden')
                $('#apps li[data-id=' + name + ']').addClass('in-header')
            }
        }
    }

    // show/hide more apps icon
    if ($('#apps li:not(.in-header)').length === 0) {
        $('#more-apps').hide()
        $('#navigation').hide()
    } else {
        $('#more-apps').show()
    }
}

This code gets compressed at build time to become part of one 15,000+ character line. The relevant portion reads:

var f=function(){var e=s()("#appmenu li"),t=s()(".header-right").outerWidth(),n=s()("header").outerWidth()-s()("#nextcloud").outerWidth()-(t>210?t:210),i=s()(window).width()<1024;i||(n*=.33);var r,o=Math.floor(n/s()(e).width());i&&o>8&&(o=8),!i&&o<8&&(o=8),e.length-1-o>=1&&o--,s()("#more-apps a").removeClass("active");for(var a=0;a<e.length-1;a++){var l=s()(e[a]).data("id");a<o?(s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header"),r=e[a]):(s()(e[a]).addClass("hidden"),s()("#apps li[data-id="+l+"]").removeClass("in-header"),o>0&&s()(e[a]).children("a").hasClass("active")&&(s()(r).addClass("hidden"),s()("#apps li[data-id="+s()(r).data("id")+"]").removeClass("in-header"),s()(e[a]).removeClass("hidden"),s()("#apps li[data-id="+l+"]").addClass("in-header")))}0===s()("#apps li:not(.in-header)").length?(s()("#more-apps").hide(),s()("#navigation").hide()):s()("#more-apps").show()}

Well, we can still patch that, can we?
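One possibility (a sketch, not from the original post): rewrite the two hard-coded comparisons with sed. MAIN_JS is a placeholder path, and the patterns below simply match the minified excerpt above; verify them against your actual main.js first, since they may differ between Nextcloud releases, and keep the .bak file sed leaves behind.

```shell
# Sketch: raise the hard-coded app-menu limit from 8 to 12 in the minified
# bundle. MAIN_JS is a placeholder; point it at core/js/dist/main.js in
# your Nextcloud tree. The patterns match the two comparisons visible in
# the minified excerpt above.
MAIN_JS="${MAIN_JS:-core/js/dist/main.js}"
bump_min_apps() {
    sed -i.bak \
        -e 's/i&&o>8&&(o=8)/i\&\&o>12\&\&(o=12)/' \
        -e 's/!i&&o<8&&(o=8)/!i\&\&o<12\&\&(o=12)/' \
        "$1"
}
if [ -f "$MAIN_JS" ]; then
    bump_min_apps "$MAIN_JS"
fi
```

After patching you may also need to force the browser to reload the cached main.js before the wider menu shows up.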

Continue reading "Fixing the Nextcloud menu to show more than eight application icons"

Planet DebianSven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard.

The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.

gh functions and aliases

# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"

#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"

### github support functions, most invoked with a PR ID as $1

#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}

# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}

# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}

# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}

# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}

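As a sanity check of the parsing in the ghlink function above, here is a hypothetical walk-through with a made-up remote line (the org/repo names are invented):

```shell
# Given a typical SSH push remote as printed by `git remote -v`, the two
# cut invocations strip everything but the "org/repo" part: the first
# keeps what follows the colon, the second drops ".git (push)".
remote_line='origin git@github.com:myorg/myrepo.git (push)'
repo="$(echo "$remote_line" | cut -d':' -f 2 | cut -d'.' -f 1)"
echo "$repo"   # myorg/myrepo
```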

I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:

github:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    openPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github

The -label:dependencies is used here to filter out dependabot PRs in the dashboard.


Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.

Security Considerations

The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access, everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

Planet Linux AustraliaFrancois Marier: Setting up and testing an NPR modem on Linux

After acquiring a pair of New Packet Radio modems on behalf of VECTOR, I set them up on my Linux machine and ran some basic tests to check whether they could achieve the advertised 500 kbps transfer rates, which are much higher than AX.25 packet radio.

The exact equipment I used was:

Radio setup

After connecting the modems to the power supply and their respective antennas, I connected both modems to my laptop via micro-USB cables and used minicom to connect to their console on /dev/ttyACM[01]:

minicom -8 -b 921600 -D /dev/ttyACM0
minicom -8 -b 921600 -D /dev/ttyACM1

To confirm that the firmware was the latest one, I used the following command:

ready> version
firmware: 2020_02_23
freq band: 70cm

then I immediately turned off the radio:

radio off

which can be verified with:


Following the British Columbia 70 cm band plan, I picked the following frequency, modulation (bandwidth of 360 kHz), and power (0.05 W):

set frequency 433.500
set modulation 22
set RF_power 7

and then did the rest of the configuration for the master:

set callsign VA7GPL_0
set is_master yes
set DHCP_active no
set telnet_active no

and the client:

set callsign VA7GPL_1
set is_master no
set DHCP_active yes
set telnet_active no

and that was enough to get the two modems to talk to one another.

On both of them, I ran the following:


and confirmed that they were able to successfully connect to each other:


Monitoring RF

To monitor what is happening on the air and quickly determine whether or not the modems are chatting, you can use a software-defined radio along with gqrx with the following settings:

frequency: 433.500 MHz
filter width: user (80k)
filter shape: normal
mode: Raw I/Q

I found it quite helpful to keep this running the whole time I was working with these modems. The background "keep alive" sounds are quite distinct from the heavy traffic sounds.

IP setup

The radio bits out of the way, I turned to the networking configuration.

On the master, I set the following so that I could connect the master to my home network ( without conflicts:

set def_route_active yes
set DNS_active no
set modem_IP
set IP_begin
set master_IP_size 29
set netmask

(My router's DHCP server is configured to allocate dynamic IP addresses from to

At this point, I connected my laptop to the client using a CAT-5 network cable and the master to the ethernet switch, essentially following Annex 5 of the Advanced User Guide.

My laptop got assigned IP address and so I used another computer on the same network to ping my laptop via the NPR modems:


This gave me a round-trip time of around 150-250 ms.

Performance test

Having successfully established an IP connection between the two machines, I decided to run a quick test to measure the available bandwidth in an ideal setting (i.e. the two antennas very close to each other).

On both computers, I installed iperf:

apt install iperf

and then set up the iperf server on my desktop computer:

sudo iptables -A INPUT -s -p TCP --dport 5001 -j ACCEPT
sudo iptables -A INPUT -s -p UDP --dport 5001 -j ACCEPT
iperf --server

On the laptop, I set the MTU to 750 in NetworkManager:

and restarted the network.

Then I created a new user account (npr with a uid of 1001):

sudo adduser npr

and made sure that only that account could access the network by running the following as root:

# Flush all chains.
iptables -F

# Set defaults policies.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Don't block localhost and ICMP traffic.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Don't re-evaluate already accepted connections.
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Allow connections to/from the test user.
iptables -A OUTPUT -m owner --uid-owner 1001 -m conntrack --ctstate NEW -j ACCEPT

# Log anything that gets blocked.
iptables -A INPUT -j LOG
iptables -A OUTPUT -j LOG
iptables -A FORWARD -j LOG

then I started the test as the npr user:

sudo -i -u npr
iperf --client


The results were as good as advertised both with modulation 22 (360 kHz bandwidth):

$ iperf --client --time 30
Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58462 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-34.5 sec  1.12 MBytes   274 Kbits/sec

Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58468 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-42.5 sec  1.12 MBytes   222 Kbits/sec

Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58484 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-38.5 sec  1.12 MBytes   245 Kbits/sec

and modulation 24 (1 MHz bandwidth):

$ iperf --client --time 30
Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58148 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-31.1 sec  1.88 MBytes   506 Kbits/sec

Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58246 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.5 sec  2.00 MBytes   550 Kbits/sec

Client connecting to, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local port 58292 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  2.00 MBytes   559 Kbits/sec

Worse Than FailureError'd: Just a Trial Run

"How does Netflix save money when making their original series? It's simple. They just use trial versions of VFX software," Nick L. wrote.


Chris A. writes, "Why get low quality pixelated tickets when you can have these?"


"Better make room! This USB disk enclosure is ready for supper and som really mega-bytes!" wrote Stuart L.


Scott writes, "Go Boncos!"


"With rewards like these, I can't believe more people don't pledge on Patreon!" writes Chris A.


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Krebs on SecurityChinese Antivirus Firm Was Part of APT41 ‘Supply Chain’ Attack

The U.S. Justice Department this week indicted seven Chinese nationals for a decade-long hacking spree that targeted more than 100 high-tech and online gaming companies. The government alleges the men used malware-laced phishing emails and “supply chain” attacks to steal data from companies and their customers. One of the alleged hackers was first profiled here in 2012 as the owner of a Chinese antivirus firm.

Image: FBI

Charging documents say the seven men are part of a hacking group known variously as “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider.” Once inside of a target organization, the hackers stole source code, software code signing certificates, customer account data and other information they could use or resell.

APT41’s activities span from the mid-2000s to the present day. Earlier this year, for example, the group was tied to a particularly aggressive malware campaign that exploited recent vulnerabilities in widely-used networking products, including flaws in Cisco and D-Link routers, as well as Citrix and Pulse VPN appliances. Security firm FireEye dubbed that hacking blitz “one of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years.”

The government alleges the group monetized its illicit access by deploying ransomware and “cryptojacking” tools (using compromised systems to mine cryptocurrencies like Bitcoin). In addition, the gang targeted video game companies and their customers in a bid to steal digital items of value that could be resold, such as points, powers and other items that could be used to enhance the game-playing experience.

APT41 was known to hide its malware inside fake resumes that were sent to targets. It also deployed more complex supply chain attacks, in which they would hack a software company and modify the code with malware.

“The victim software firm — unaware of the changes to its product — would subsequently distribute the modified software to its third-party customers, who were thereby defrauded into installing malicious software code on their own computers,” the indictments explain.

While the various charging documents released in this case do not mention it per se, it is clear that members of this group also favored another form of supply chain attacks — hiding their malware inside commercial tools they created and advertised as legitimate security software and PC utilities.

One of the men indicted as part of APT41 — now 35-year-old Tan DaiLin — was the subject of a 2012 KrebsOnSecurity story that sought to shed light on a Chinese antivirus product marketed as Anvisoft. At the time, the product had been “whitelisted” or marked as safe by competing, more established antivirus vendors, although the company seemed unresponsive to user complaints and to questions about its leadership and origins.

Tan DaiLin, a.k.a. “Wicked Rose,” in his younger years. Image: iDefense

Anvisoft claimed to be based in California and Canada, but a search on the company’s brand name turned up trademark registration records that put Anvisoft in the high-tech zone of Chengdu in the Sichuan Province of China.

A review of Anvisoft’s website registration records showed the company’s domain originally was created by Tan DaiLin, an infamous Chinese hacker who went by the aliases “Wicked Rose” and “Withered Rose.” At the time of story, DaiLin was 28 years old.

That story cited a 2007 report (PDF) from iDefense, which detailed DaiLin’s role as the leader of a state-sponsored, four-man hacking team called NCPH (short for Network Crack Program Hacker). According to iDefense, in 2006 the group was responsible for crafting a rootkit that took advantage of a zero-day vulnerability in Microsoft Word, and was used in attacks on “a large DoD entity” within the USA.

“Wicked Rose and the NCPH hacking group are implicated in multiple Office based attacks over a two year period,” the iDefense report stated.

When I first scanned Anvisoft at back in 2012, none of the antivirus products detected it as suspicious or malicious. But in the days that followed, several antivirus products began flagging it for bundling at least two trojan horse programs designed to steal passwords from various online gaming platforms.

Security analysts and U.S. prosecutors say APT41 operated out of a Chinese enterprise called Chengdu 404 that purported to be a network technology company but which served as a legal front for the hacking group’s illegal activities, and that Chengdu 404 used its global network of compromised systems as a kind of dragnet for information that might be useful to the Chinese Communist Party.

Chengdu404’s offices in China. Image: DOJ.

“CHENGDU 404 developed a ‘big data’ product named ‘SonarX,’ which was described…as an ‘Information Risk Assessment System,'” the government’s indictment reads. “SonarX served as an easily searchable repository for social media data that previously had been obtained by CHENGDU 404.”

The group allegedly used SonarX to search for individuals linked to various Hong Kong democracy and independence movements, and snoop on a U.S.-backed media outlet that ran stories examining the Chinese government’s treatment of Uyghur people living in its Xinjiang region.

As noted by TechCrunch, after the indictments were filed prosecutors said they obtained warrants to seize websites, domains and servers associated with the group’s operations, effectively shutting them down and hindering their operations.

“The alleged hackers are still believed to be in China, but the allegations serve as a ‘name and shame’ effort employed by the Justice Department in recent years against state-backed cyber attackers,” wrote TechCrunch’s Zack Whittaker.

Cryptogram Amazon Delivery Drivers Hacking Scheduling System

Amazon drivers — all gig workers who don’t work for the company — are hanging cell phones in trees near Amazon delivery stations, fooling the system into thinking that they are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to multiple nearby drivers in on the plot, according to drivers who have observed the process. They believe an unidentified person or entity is acting as an intermediary between Amazon and the drivers and charging drivers to secure more routes, which is against Amazon’s policies.

The perpetrators likely dangle multiple phones in the trees to spread the work around to multiple Amazon Flex accounts and avoid detection by Amazon, said Chetan Sharma, a wireless industry consultant. If all the routes were fed through one device, it would be easy for Amazon to detect, he said.

“They’re gaming the system in a way that makes it harder for Amazon to figure it out,” Sharma said. “They’re just a step ahead of Amazon’s algorithm and its developers.”

Cryptogram Friday Squid Blogging: Nano-Sized SQUIDS

SQUID news:

Physicists have developed a small, compact superconducting quantum interference device (SQUID) that can detect magnetic fields. The team focused on the instrument’s core, which contains two parallel layers of graphene.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet DebianRussell Coker: Dell BIOS Updates

I have just updated the BIOS on a Dell PowerEdge T110 II. The process isn’t too difficult: Google for the machine name and BIOS, download a shell-script-encoded firmware image and GPG signature, then run the script on the system in question.

One problem is that the Dell GPG key isn’t signed by anyone. How hard would it be to get a few well connected people in the Linux community to sign the key used for signing Linux scripts for updating the BIOS? I would be surprised if Dell doesn’t employ a few people who are well connected in the Linux community; they should just ask all employees to sign such GPG keys! Failing that there are plenty of other options. I’d be happy to sign the Dell key if contacted by someone who can prove that they are a responsible person in Dell. If I could phone Dell corporate, ask for the engineering department, and have someone tell me the GPG fingerprint, I’d sign the key and that problem would be partially solved (my key is well connected but you need more than one signature).

The next issue is how to determine that a BIOS update works. What you really don’t want is to have a BIOS update fail and brick your system! So the Linux update process loads the image into something (special firmware RAM maybe) and then reboots the system and the reboot then does a critical part of the update. If the reboot doesn’t work then you end up with the old version of the BIOS. This is overall a good thing.

The PowerEdge T110 II is a workstation with an NVidia video card (I tried an ATI card but that wouldn’t boot for unknown reasons). The Nouveau driver has some issues. One thing I have done to work around some Nouveau issues is to create a file “~/.config/plasma-workspace/env/” (for KDE sessions) with the following contents:


I previously wrote about using this just for Kmail to stop it crashing [1]. But after doing that I still had other problems with video and disabling all GL on the NVidia card was necessary.

The latest problem I’ve had is that even when using that configuration things don’t go well. When I run the “reboot” command I end up with a kernel message about the GPU not responding and then it doesn’t reboot. That means that the BIOS update doesn’t apply, a hard reboot signals to the system that the new BIOS wasn’t good, and I end up with the old BIOS again. I discovered that disabling sddm (the latest xdm program in Debian) from starting on boot meant that a reboot command would work. Then I ran the BIOS update script and its reboot command worked and gave a successful BIOS update.

So I’ve gone from a 2013 BIOS to a 2018 BIOS! The update list says that some CVEs have been addressed, but the spectre-meltdown-checker doesn’t report any fewer vulnerabilities.

Google AdsenseDirector of Global Partnerships Solutions

The GNI Digital Growth Program helps small and mid-sized publishers around the world grow their businesses online.

Kevin RuddMamamia: Are Australia and China Breaking Up?

Worse Than FailureConfiguration Errors

Automation and tooling, especially around continuous integration and continuous deployment is standard on applications, large and small.

Paramdeep Singh Jubbal works on a larger one, with a larger team, and all the management overhead such a team brings. It needs to interact with a REST API, and as you might expect, the URL for that API is different in production and test environments. This is all handled by the CI pipeline, so long as you remember to properly configure which URLs map to which environments.

Paramdeep mistyped one of the URLs when configuring a build for a new environment. The application passed all the CI checks, and when it was released to the new environment, it crashed and burned. Their error handling system detected the failure, automatically filed a bug ticket, Paramdeep saw the mistake, fixed it, and everything was back to the way it was supposed to be within five minutes.

But a ticket had been opened. For a bug. And the team lead, Emmett, saw it. And so, in their next team meeting, Emmett launched with, “We should talk about Bug 264.”

“Ah, that’s already resolved,” Paramdeep said.

“Is it? I didn’t see a commit containing a unit test attached to the ticket,” Emmett said.

“I didn’t write one,” Paramdeep said, getting a little confused at this point. “It was just a misconfiguration.”

“Right, so there should be a test to ensure that the correct URL is configured before we deploy to the environment.” That was, in fact, the policy: any bug ticket which was closed was supposed to have an associated unit test which would protect against regressions.

“You… want a unit test which confirms that the environment URLs are configured correctly?”

“Yes,” Emmett said. “There should never be a case where the application connects to an incorrect URL.”

“But it gets that URL from the configuration.”

“And it should check that configuration is correct. Honestly,” Emmett said, “I know I’m not the most technical person on this team, but that just sounds like common sense to me.”

“It’s just…” Paramdeep considered how to phrase this. “How does the unit test tell which are the correct or incorrect URLs?”

Emmett huffed. “I don’t understand why I’m explaining this to a developer. But the URLs have a pattern. URLs which match the pattern are valid, and they pass the test. URLs which don’t fail the test, and we should fallback to a known-good URL for that environment.”

“And… ah… how do we know what the known-good URL is?”

“From the configuration!”

“So you want me to write a unit test which checks the configuration to see if the URLs are valid, and if they’re not, it uses a valid URL from the configuration?”

“Yes!” Emmett said, fury finally boiling over. “Why is that so hard?”

Paramdeep thought the question explained why it was so hard, but after another moment’s thought, there was an even better reason that Emmett might actually understand.

“Because the configuration files are available during the release step, and the unit tests are run after the build step, which is before the release step.”

Emmett blinked and considered how their CI pipeline worked. “Right, fair enough, we’ll put a pin in that then, and come back to it in a future meeting.”

“Well,” Paramdeep thought, “that didn’t accomplish anything, but it was a good waste of 15 minutes.”

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.1: New and Exciting Logging Package

Very thrilled to announce a new package RcppSpdlog which is now on CRAN in its first release 0.0.1. We had tweeted once about the earliest version which had already caught the eyes of Gabi upstream.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

I had meant to package this for a few years now but didn’t find an (easy, elegant) way to completely lift the stdout / stderr and related uses which R wants us to remove for smoother operation from R itself, including synchronized input/output. It was only a few weeks ago that I realized I should subclass a logger (or, more concretely, a sink for a logger) which could then use R input/output. With the right idea the implementation was easy, and Gabi was most helpful in making sure R CMD check would not see one or two remaining C++ i/o operations (which we currently do by not activating a default logger, and substituting REprintf() in one call). So this is now clean and sweet, and a simple use is included in an example in the package we can show here too (in slightly shorter form minus the documentation header):

The NEWS entry for the first release follows.

Changes in RcppSpdlog version 0.0.1 (2020-09-08)

  • Initial release with added R/Rcpp logging sink example

The only sour grapes, if any, are over the CRAN processing. This was originally uploaded three weeks ago. As a new package, it got extra attention and some truly idiosyncratic attention to two details that were already supplied in the first uploaded version. Yet it needed two rounds of going back and forth for really no great net gain, wasting a week each time. I am not all that impressed by this, and not particularly pleased either, but I presume it is the “tax” we all pay in order to enjoy the unsurpassed richness of the CRAN repository system which continues to work just flawlessly.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianSteve Kemp: Implementing a FORTH-like language ..

Four years ago somebody posted a comment-thread describing how you could start writing a little reverse-polish calculator, in C, and slowly improve it until you had written a minimal FORTH-like system:

At the time I read that comment I'd just hacked up a simple FORTH REPL of my own, in Perl, and I said "thanks for posting". I was recently reminded of this discussion, and decided to work through the process.

Using only minimal outside resources the recipe worked as expected!

The end-result is I have a working FORTH-lite, or FORTH-like, interpreter written in around 2000 lines of golang! Features include:

  • Reverse-Polish mathematical operations.
  • Comments between ( and ) are ignored, as expected.
    • Single-line comments \ to the end of the line are also supported.
  • Support for floating-point numbers (anything that will fit inside a float64).
  • Support for printing the top-most stack element (., or print).
  • Support for outputting ASCII characters (emit).
  • Support for outputting strings (." Hello, World ").
  • Support for basic stack operations (drop, dup, over, swap)
  • Support for loops, via do/loop.
  • Support for conditional-execution, via if, else, and then.
  • Load any files specified on the command-line
    • If no arguments are included run the REPL
  • A standard library is loaded, from the present directory, if it is present.

To give a flavour, here we define a word called star which just outputs a single star-character:

: star 42 emit ;

Now we can call that (NOTE: We didn't add a newline here, so the REPL prompt follows it, that's expected):

> star

To make it more useful we define the word "stars" which shows N stars:

> : stars dup 0 > if 0 do star loop else drop then ;
> 0 stars
> 1 stars
*> 2 stars
**> 10 stars

This example uses both if to test that the parameter on the stack was greater than zero, as well as do/loop to handle the repetition.
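The core machinery behind words like star and stars — a data stack plus a dictionary of named words — can be sketched in a few lines. This is an illustrative sketch in TypeScript rather than the author's Go, covering only number-pushing, a few built-ins, and a user-defined word; it is not the author's actual implementation:

```typescript
// Minimal FORTH-style evaluator: a data stack and a dictionary of words.
type Word = (stack: number[], out: string[]) => void;

const dict: { [name: string]: Word } = {
  "dup":  (s) => s.push(s[s.length - 1]),
  "drop": (s) => s.pop(),
  "emit": (s, out) => out.push(String.fromCharCode(s.pop()!)),
};

// Evaluate a whitespace-separated token string against the dictionary.
function evalForth(src: string, stack: number[], out: string[]): void {
  for (const tok of src.trim().split(/\s+/)) {
    if (tok in dict) dict[tok](stack, out);
    else stack.push(Number(tok)); // any unknown token is treated as a number
  }
}

// A user definition like ": star 42 emit ;" just compiles a new entry
// into the dictionary whose body is the token sequence "42 emit".
dict["star"] = (s, o) => evalForth("42 emit", s, o);

const out: string[] = [];
evalForth("star star star", [], out);
console.log(out.join("")); // prints "***" (character 42 is '*')
```

Control flow is the part this sketch omits: do/loop and if/else/then require compiling word bodies into token lists with jump targets, which is presumably where much of the 2,000 lines goes.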

Finally we use that to draw a box:

> : squares 0 do over stars cr loop ;
> 4 squares

> 10 squares

For fun we allow decompiling the words too:

> #words 0 do dup dump loop
Word 'square'
 0: dup
 1: *
Word 'cube'
 0: dup
 1: square
 2: *
Word '1+'
 0: store 1.000000
 2: +
Word 'test_hot'
  0: store 0.000000
  2: >
  3: if
  4: [cond-jmp 7.000000]
  6: hot
  7: then

Anyway if that is at all interesting feel free to take a peek. There's a bit of hackery there to avoid the use of return-stacks, etc. Compared to gforth this is actually more featureful in some areas:

  • I allow you to use conditionals in the REPL - outside a word-definition.
  • I allow you to use loops in the REPL - outside a word-definition.

Find the code here:

Krebs on SecurityTwo Russians Charged in $17M Cryptocurrency Phishing Spree

U.S. authorities today announced criminal charges and financial sanctions against two Russian men accused of stealing nearly $17 million worth of virtual currencies in a series of phishing attacks throughout 2017 and 2018 that spoofed websites for some of the most popular cryptocurrency exchanges.

The Justice Department unsealed indictments against Russian nationals Danil Potekhin and Dmitrii Karasavidi, alleging the duo was responsible for a sophisticated phishing and money laundering campaign that resulted in the theft of $16.8 million in cryptocurrencies and fiat money from victims.

Separately, the U.S. Treasury Department announced economic sanctions against Potekhin and Karasavidi, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

According to the indictments, the two men set up fake websites that spoofed login pages for the currency exchanges Binance, Gemini and Poloniex. Armed with stolen login credentials, the men allegedly stole more than $10 million from 142 Binance victims, $5.24 million from 158 Poloniex users, and $1.17 million from 42 Gemini customers.

Prosecutors say the men then laundered the stolen funds through an array of intermediary cryptocurrency accounts — including compromised and fictitiously created accounts — on the targeted cryptocurrency exchange platforms. In addition, the two are alleged to have artificially inflated the value of their ill-gotten gains by engaging in cryptocurrency price manipulation using some of the stolen funds.

For example, investigators alleged Potekhin and Karasavidi used compromised Poloniex accounts to place orders to purchase large volumes of “GAS,” the digital currency token used to pay the cost of executing transactions on the NEO blockchain — China’s first open source blockchain platform.

“Using digital currency in one victim Poloniex account, they placed an order to purchase approximately 8,000 GAS, thereby immediately increasing the market price of GAS from approximately $18 to $2,400,” the indictment explains.

Potekhin and others then converted the artificially inflated GAS in their own fictitious Poloniex accounts into other cryptocurrencies, including Ethereum (ETH) and Bitcoin (BTC). From the complaint:

“Before the Eight Fictitious Poloniex Accounts were frozen, POTEKHIN and others transferred approximately 759 ETH to nine digital currency addresses. Through a sophisticated and layered manner, the ETH from these nine digital currency addresses was sent through multiple intermediary accounts, before ultimately being deposited into a Bitfinex account controlled by Karasavidi.”

The Treasury’s action today lists several of the cryptocurrency accounts thought to have been used by the defendants. Searching on some of those accounts at various cryptocurrency transaction tracking sites points to a number of phishing victims.

“I would like to blow your bitch ass away, if you even had the balls to show yourself,” exclaimed one victim, posting in a comment on the Etherscan lookup service.

One victim said he contemplated suicide after being robbed of his ETH holdings in a 2017 phishing attack. Another said he’d been relieved of funds needed to pay for his 3-year-old daughter’s medical treatment.

“You and your team will leave a trail and will be found,” wrote one victim, using the handle ‘Illfindyou.’ “You’ll only be able to hide behind the facade for a short while. Go steal from whales you piece of shit.”

There is potentially some good news for victims of these phishing attacks. According to the Treasury Department, millions of dollars in virtual currency and U.S. dollars traced to Karasavidi’s account were seized in a forfeiture action by the United States Secret Service.

Whether any of those funds can be returned to victims of this phishing spree remains to be seen. And assuming that does happen, it could take years. In February 2020, KrebsOnSecurity wrote about being contacted by an Internal Revenue Service investigator seeking to return funds seized seven years earlier as part of the government’s 2013 seizure of Liberty Reserve, a virtual currency service that acted as a $6 billion hub for the cybercrime world.

Today’s action is the latest indication that the Treasury Department is increasingly willing to use its authority to restrict the financial resources tied to various cybercrime activities. Earlier this month, the agency’s Office of Foreign Asset Control (OFAC) added three Russian nationals and a host of cryptocurrency addresses to its sanctions lists in a case involving efforts by Russian online troll farms to influence the 2018 mid-term elections.

In June, OFAC took action against six Nigerian nationals suspected of stealing $6 million from U.S. businesses and individuals through Business Email Compromise fraud and romance scams.

And in 2019, OFAC sanctioned 17 members allegedly associated with “Evil Corp.,” an Eastern European cybercrime syndicate that has stolen more than $100 million from small businesses via malicious software over the past decade.

A copy of the indictments against Potekhin and Karasavidi is available here (PDF).

Kevin RuddSky UK: Green Recovery, Tony Abbott and Donald Trump


Topics: Project Syndicate Conference; Climate Change; Tony Abbott; Trade; US Election

Kay Burley
The world’s green recovery from the pandemic is the centre of a virtual conference which will feature speakers from across the world, including former Prime Minister Gordon Brown. Also speaking at the event is former Australian Prime Minister and president of the Asia Society Policy Institute Kevin Rudd, and he is with us now. Hello to Mr Rudd, it’s a pleasure to have you on the program this morning. Tell us a bit more about this conference. What are you hoping to achieve?

Kevin Rudd
Well, the purpose of the conference is two-fold. There is a massive global investment program underway at present to bring about economic recovery, given the impact of the COVID-19 crisis. And secondly, therefore, we either engineer a green economic recovery, or we lose this massive opportunity. So, therefore, these opportunities do not come along often to radically change the dial. And we’re talking about investments in renewable energy, we’re talking about decommissioning coal-fired stations, and we’re talking about also investment in the next generation of R&D so that we can, in fact, bring global greenhouse gas emissions down to the extent necessary to keep temperature increases this century within 1.5 degrees centigrade.

Kay Burley
It’s difficult though isn’t it with the global pandemic Covid infiltrating almost everyone’s lives around the globe. How are you going to get people focused on climate change?

Kevin Rudd
It’s called — starts with L ends with P — and it’s called leadership. Look, when we went through the Global Financial Crisis of a decade or so ago, many of us, President Obama, myself and others, we elected to bring about a green recovery then as well. In our case, we legislated for renewable energies in Australia to go from a then base of 4% of total electricity supply to 20% by 2020. Everyone said you can’t do that in the midst of a global financial crisis, all too difficult. But guess what, a decade later, we now have 21% renewables in Australia. We did the same with solar subsidies for people’s houses and panels on their roofs, and with the insulation of people’s homes to bring down electricity demand. These are the practical things which can be done at scale when you’re seeking to invest to create new levels of economic activity, but at the same time doing so in a manner which keeps our greenhouse gas emissions heading south rather than north.

Kay Burley
Of course, you talk about leadership, the leader of the free world says that the planet is getting cooler.

Kevin Rudd
Yeah well President Trump on this question is stark raving mad. I realise that may not be regarded as your standard diplomatic reference to the leader of the free world but on the climate change question, he has abandoned the science altogether. And mainstream Republicans and certainly mainstream Democrats fundamentally disagree with him. If there is a change in US administration, I think you will see climate change action move to the absolute centre of the domestic priorities of a Biden administration. And importantly, the international priorities. You in the United Kingdom will be hosting before too much longer the Conference of the Parties No.26, which will be critical in terms of increasing the ambition of the contracting parties to the Paris Agreement on climate change action, to begin to go down the road necessary to keep those temperature increases within 1.5 degrees centigrade. American leadership will be important. European leadership has been positive so far, but we’ll need to see the United Kingdom put its best foot forward as well given that you are the host country.

Kay Burley
You say that but of course Tony Abbott, who succeeded you as Prime Minister of Australia, is also a climate change denier.

Kevin Rudd
The climate change denialists seem to find their way into a range of conservative parties around the world regrettably. Not always the case — if you look in continental Europe, there’s basically a bipartisan consensus on climate change action — but certainly, Mr Abbott is notorious in this country, Australia, for describing climate change as, quote, absolute crap, and has been a climate change denialist for the better part of a decade and a half. And so if I was the United Kingdom, the government of the United Kingdom, I would certainly not be letting Mr Abbott anywhere near a policy matter which has to do with the future of greenhouse gas emissions, or for that matter the highly contentious policy question of carbon tariffs which will be considered by the Europeans before much longer: tariffs imposed against those countries which are not lifting their weight globally to take their own national action on climate change.

Kay Burley
Very good at trade though isn’t he? He did a great deal with Japan, didn’t he when he was Prime Minister?

Kevin Rudd
Well, Mr Abbott is good at self-promotion on these questions. The reality is in the free trade agreements that we have agreed in recent years with China and with Japan, and with the Republic of Korea. These things are negotiations which spread over many years by many Australian governments. For example, the one we did with China took 10 years, begun by my predecessor, continued under me and concluded finally under the Abbott government. So I think for anyone to beat their chest and say that they are uniquely responsible as Prime Minister to bring about a particular trade agreement that belies the facts. And I think professional trade negotiators would have that view as well.

Kay Burley
You don’t like him very much then by the sounds of it.

Kevin Rudd
Well, I’m simply being objective. You asked me. I mean, I came to talk about climate change and you’ve asked me questions about Mr Abbott, and I’m simply being frank in my response. He’s a climate change denier. And on the trade policy front, ask your average trade negotiator who was material in bringing about these outcomes; perhaps the Australian trade ministers at the time, but also spread over multiple governments. These are complex negotiations. And they take a lot of effort to finally reach the point of agreement. So at a personal level, of course, Mr Abbott is a former Prime Minister of Australia, I wish him well, but you’ve asked me two direct questions and I’ve tried to give you a frank answer.

Kay Burley
And we’d like that from our politicians. We don’t always see that from politicians in the United Kingdom, I have to say, especially on this show. Let me ask you a final question before I let you go, if I may, please former prime minister. Why are you supporting Joe Biden? Is it all about climate change?

Kevin Rudd
Well, there are two reasons why on balance — I mean, I head an independent American think tank and I’m normally based in New York, so on questions of political alignment we are independent and we will work with any US administration. If you’re asking my personal view as a former Prime Minister of Australia and someone who’s spent 10 or 15 years working on climate change policy at home and abroad, then it is critical for the planet’s future that America return to the global leadership table. It’s the world’s second-largest emitter. Unless the Americans and the Chinese are able to bring about genuine global progress on bringing down their separate national emissions, then what we see in California, what we saw in Sydney in January, what you’ll see in other parts of the world in terms of extreme weather events, more intense drought, more intense cyclonic activity, et cetera, will simply continue. That’s a principal reason. But I think there’s a broader reason as well. It’s that we need to have America respected in the world again. America is a great country, enormous strengths, a vibrant democracy, a massive economy, technological prowess, but we need to see also American global leadership which can bring together friends and allies in the world to deal with the multiple global challenges that we face today, including the rise of China.

Kay Burley
Okay. It’s great to talk to you, Mr Rudd. First time I’ve interviewed you, I think, and thank you for being so honest and straightforward. And hopefully we’ll see you again before too long on the program. Thank you.

Kevin Rudd
Good to be with you.

The post Sky UK: Green Recovery, Tony Abbott and Donald Trump appeared first on Kevin Rudd.

Planet DebianBastian Blank: Salsa hosted 1e6 CI jobs

Today, Salsa hosted its 1,000,000th CI job. The prize for hitting the target goes to the Cloud team. The job itself was not that interesting, but it was successful.

Planet DebianBits from Debian: Debian Local Groups at DebConf20 and beyond

There are a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

There was a session about Debian Local Teams at DebConf20 and it generated quite a bit of constructive discussion in the live stream (a recording is available), in the session's Etherpad, and in the IRC channel (#debian-localgroups). This article is an attempt at summarizing the key points raised during that discussion, the plans for future actions to support new or existing Debian Local Groups, and the possibility of setting up a local group support team.

Pandemic situation

During a pandemic it may seem strange to discuss offline meetings, but this is a good time to be planning things for the future. At the same time, the current situation makes it more important than before to encourage local interaction.

Reasoning for local groups

Debian can seem scary for those outside. Already having a connection to Debian - especially to people directly involved in it - seems to be the way through which most contributors arrive. But if one doesn't have a connection, it is not that easy; Local Groups facilitate that by improving networking.

Local groups are incredibly important to the success of Debian since they often help with translations, making us more diverse, support, setting up local bug squashing sprints, establishing a local DebConf team along with miniDebConfs, getting sponsors for the project and much more.

The existence of local groups would also facilitate access to "swag" like stickers and mugs, since people do not always have the time to deal with the process of finding a supplier to actually get those made. Local groups can help by organizing the related logistics.

How to deal with local groups, how to define a local group

Debian gathers the information about Local Groups in its Local Groups wiki page (and subpages). Other organisations also have their own schemes, some of them featuring a map, blogs, or clear rules about what constitutes a local group. In the case of Debian there is not a predefined set of "rules", even about the group name. That is perfectly fine, we assume that certain local groups may be very small, or temporary (created around a certain time when they plan several activities, and then become silent). However, the way the groups are named and how they are listed on the wiki page sets expectations with regards to what kinds of activities they involve.

For this reason, we encourage all the Debian Local Groups to review their entries in the Debian wiki and keep them current (e.g. add a line "Status: Active (2020)"), and we encourage informal groups of Debian contributors that somehow "meet" to create a new entry in the wiki page, too.

What can Debian do to support Local Groups

Having a centralized database of groups is good (if up-to-date), but not enough. We'll explore other ways of propagation and increasing visibility, like organising the logistics of printing/sending swag and facilitate access to funding for Debian-related events.

Continuation of efforts

Efforts shall continue regarding Local Groups. Regular meetings are happening every two or three weeks; interested people are encouraged to explore some other relevant DebConf20 talks (Introducing Debian Brasil, Debian Academy: Another way to share knowledge about Debian, An Experience creating a local community on a small town), websites like Debian flyers (including other printed material as cube, stickers), visit the events section of the Debian website and the Debian Locations wiki page, and participate in the IRC channel #debian-localgroups at OFTC.

Sociological ImagesOfficer Friendly’s Adventures in Wonderland

Many of us know the Officer Friendly story. He epitomizes liberal police virtues. He seeks the public’s respect and willing cooperation to follow the law, and he preserves their favor with lawful enforcement.

Norman Rockwell’s The Runaway, 1958

The Officer Friendly story also inspired contemporary reforms that seek and preserve public favor, including what most people know as Community Policing. Norman Rockwell’s iconic painting is an idealized depiction of this narrative. Officer Friendly sits in full uniform. His blue shirt contrasts sharply with the black boots, gun, and small ticket book that blend together below the lunch counter. He is a paternalistic guardian. The officer’s eyes are fixed on the boy next to him. The lunch counter operator surveying the scene seems to smirk. All of them do, in fact. And all of them are White. The original was painted from the White perspective and highlighted the harmonious relationship between the officer and the boy. But for some it may be easy to imagine a different depiction: a hostile relationship between a boy of color and an officer in the 1950s and a friendly one between a White boy and an officer now.

Desmond Devlin (Writer) and Richard Williams’s (Artist) The Militarization of the Police Department, a painting parody of Rockwell’s The Runaway, 2014

The parody of Rockwell’s painting offers us a visceral depiction of contemporary urban policing. The two pictures depict different historical eras and demonstrate how police have changed. Officer Unfriendly is anonymous, of unknown race, and presumably male. He is prepared for battle, armed with several weapons that extend beyond his imposing frame. Officer Unfriendly is outfitted in tactical military gear with “POLICE” stamped across his back. The images also differ in their depictions of the boy’s race and his relationship to the officer. Officer Unfriendly appears more punitive than paternalistic. He looms over the Black boy sitting on the adjacent stool and peers at him through a tear gas mask. The boy and White lunch counter operator back away in fright. All of the tenderness in the original has given way to hostility in this parody.

Inspired by the critical race tradition, my new project “Officer Friendly’s Adventures in Wonderland: A Counter-Story of Race Riots, Police Reform, and Liberalism” employs composite counter-storytelling to narrate the experiences of young men of color in their explosive encounters with police. Counter-stories force dominant groups to see the world through the “Other’s” (non-White person’s) eyes, thereby challenging their preconceptions. I document the evolution of police-community relations in the last eighty years, and I reflect on the interrupted career of our protagonist, Officer Friendly. He worked with the Los Angeles Police Department (LAPD) for several stints primarily between the 1940s and 1990s.

My story focuses on Los Angeles, a city renowned for its police force and riot history. This story is richly informed by ethnographic field data and is further supplemented with archival and secondary historical data. It complicates the nature of so-called race riots, highlights how Officer Friendly was repeatedly evoked in the wake of these incidents, and reveals the pressures on LAPD officials to favor increasingly unfriendly police tactics. More broadly, the story of Officer Friendly’s embattled career raises serious questions about how to achieve racial justice. This work builds on my recently published coauthored book, The Limits of Community Policing, and can shape future critical race scholarship and historical and contemporary studies of police-community relations.

Daniel Gascón is an assistant professor of sociology at the University of Massachusetts Boston. For more on his latest work, follow him on Twitter.


LongNowTime’s Arrow Flies through 500 Years of Classical Music, Physicists Say

A new statistical study of 8,000 musical compositions suggests that there really is a difference between music and noise: time-irreversibility. From The Smithsonian:

Noise can sound the same played forwards or backward in time, but composed music sounds dramatically different in those two time directions.

Compared with systems made of millions of particles, a typical musical composition consisting of thousands of notes is relatively short. Counterintuitively, that brevity makes statistically studying most music much harder, akin to determining the precise trajectory of a massive landslide based solely on the motions of a few tumbling grains of sand. For this study, however, [Lucas Lacasa, a physicist at Queen Mary University of London] and his co-authors exploited and enhanced novel methods particularly successful at extracting patterns from small samples. By translating sequences of sounds from any given composition into a specific type of diagrams or graphs, the researchers were able to marshal the power of graph theory to calculate time irreversibility.

In a time-irreversible music piece, the sense of directionality in time may help the listener generate expectations. The most compelling compositions, then, would be those that balance between breaking those expectations and fulfilling them—a sentiment with which anyone anticipating a catchy tune’s ‘hook’ would agree.
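To make the notion concrete, here is a toy illustration of time-irreversibility — a simple up/down step-count asymmetry, which is deliberately not the visibility-graph statistic the researchers actually used. A sequence is time-irreversible to the extent that its statistics change when it is played backward:

```typescript
// Toy time-irreversibility score: compare the number of upward vs.
// downward steps in a sequence. A reversible (noise-like) signal has
// roughly equal counts, so the score is near 0; a strongly directional
// signal scores near 1. NOT the study's visibility-graph method —
// just a minimal stand-in for the idea.
function irreversibility(seq: number[]): number {
  let up = 0, down = 0;
  for (let i = 1; i < seq.length; i++) {
    if (seq[i] > seq[i - 1]) up++;
    else if (seq[i] < seq[i - 1]) down++;
  }
  const total = up + down;
  return total === 0 ? 0 : Math.abs(up - down) / total;
}

// A steadily rising "melody" is maximally directional...
console.log(irreversibility([1, 2, 3, 4, 5, 6, 7, 8])); // 1
// ...while a symmetric alternating pattern is indistinguishable from
// its own reversal.
console.log(irreversibility([1, 3, 1, 3, 1, 3, 1, 3, 1])); // 0
```

The graph-based approach in the paper captures far subtler structure than this step asymmetry, but the target quantity is the same: how different the forward and backward versions of the signal look statistically.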

Worse Than FailureCodeSOD: Get My Switch

You know how it is. The team is swamped, so you’ve pulled on some junior devs, given them the bare minimum of mentorship, and then turned them loose. Oh, sure, there are code reviews, but it’s like, you just glance at it, because you’re already so far behind on your own development tasks and you’re sure it’s fine.

And then months later, if you’re like Richard, the requirements have changed, and now you’ve got to revisit the junior’s TypeScript code to make some changes.

		switch (false) {
			case (this.fooIsConfigured() === false && this.barIsConfigured() === false):
				this.contactAdministratorText = 'Foo & Bar';
				break;
			case this.fooIsConfigured():
				this.contactAdministratorText = 'Bar';
				break;
			case this.barIsConfigured():
				this.contactAdministratorText = 'Foo';
				break;
		}

We’ve seen more positive versions of this pattern before, where we switch on true. We’ve even seen the false version of this switch before.

What makes this one interesting, to me, is just how dedicated it is to this inverted approach to logic: if it’s false that “foo” is false and “bar” is false, then obviously they’re all true, thus we output a message to that effect. If one of those is false, we need to figure out which one that is, and then do the opposite, because if “foo” is false, then clearly “bar” must be true, so we output that.
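Stated in the positive, the intended mapping is far more direct. A sketch of the straightforward version, with plain booleans foo/bar standing in for the results of the snippet's fooIsConfigured()/barIsConfigured() calls:

```typescript
// The same decision written directly: test the two flags with ordinary
// boolean logic instead of switching on `false`. foo/bar stand in for
// the snippet's fooIsConfigured()/barIsConfigured() results.
function contactAdministratorText(foo: boolean, bar: boolean): string {
  if (foo && bar) return 'Foo & Bar';
  if (bar) return 'Bar'; // only bar configured
  if (foo) return 'Foo'; // only foo configured
  return '';             // neither — a case the original leaves unhandled
}

console.log(contactAdministratorText(true, true));  // "Foo & Bar"
console.log(contactAdministratorText(false, true)); // "Bar"
```

Nothing about the inverted switch expresses intent that these four lines don't, which is presumably why the original proved so hard to revisit.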

Richard was able to remove this code, and then politely suggest that maybe they should be more diligent in their code reviews.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianJoey Hess: comically bad shipping estimates and middlemen

My inverter has unfortunately died, and I wanted to replace it with the same model. Ideally before I lose the contents of the fridge. It's a 24v inverter, which is not at all as easy to find a replacement for as a 12v inverter would be.

Somehow Walmart was the only retailer that had it available with a delivery estimate: Just 2 days.

It's the second day now, with no indication they've shipped it. I noticed the "sold and shipped by Zoro", so went and found it on that website.

So, the reality is it ships direct from China via container ship. As does every product from Zoro, which all show as 2 day delivery on Walmart's website.

I don't think this is a pandemic thing. I think it's a trying to compete with Amazon and failing thing.

My other comically bad shipping estimate this pandemic was from Amazon though. There was a run this summer on Kayaks, because social distancing is great on the water. I found a high quality inflatable kayak.

Amazon said "only 2 left in stock" and promised delivery in 1 week. One week later, it had not shipped, and they updated the delivery estimate forward 1 week. A week after that, ditto.

Eventually I bought a new model from the same manufacturer, Advanced Elements. Unfortunately, that kayak exploded the second time I inflated it, due to a manufacturing defect.

So I got in touch with Advanced Elements and they offered a replacement. I asked if, instead, they maybe still had any of the older model of kayak I had tried to order. They checked their warehouse, and found "the last one" in a corner somewhere.

No shipping estimate was provided. It arrived in 3 days.


Cryptogram Former NSA Director Keith Alexander Joins Amazon’s Board of Directors

Cryptogram New Bluetooth Vulnerability

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at one point. When they’ll be, they’ll most likely be integrated as firmware or operating system updates for Bluetooth capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Planet DebianMolly de Blanc: “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” W. Quinn

There are a lot of interesting and valid things to say about the philosophy and actual arguments of the “Actions, Inactions, and Consequences: Doctrine of Doing and Allowing” by Warren Quinn. Unfortunately for me, none of them are things I feel particularly inspired by. I’m much more attracted to the many things implied in this paper. Among them are the role of social responsibility in making moral decisions.

At various points in the text, Quinn makes brief comments about how we have roles that we need to fulfill for the sake of society. These roles carry with them responsibilities that may supersede our regular moral responsibilities. Examples Quinn makes include being a private lifeguard (and being responsible for the life of one particular person) and being a trolley driver (and being responsible for making sure the trolley doesn’t kill anyone). This is part of what has led to me brushing Quinn off as another classist. Still, I am interested in the question of whether social responsibilities are more important than moral ones, or whether there are times when this might be the case.

One of the things I maintain is that we cannot be the best versions of ourselves because we are not living in societies that value our best selves. We survive capitalism. We negotiate climate change. We make decisions to trade the ideal for the functional. For me, this frequently means I click through terms of service, agree to surveillance, and partake in the use and proliferation of oppressive technology. I also buy an iced coffee that comes in a single use plastic cup; I shop at the store with questionable labor practices; I use Facebook.  But also, I don’t give money to panhandlers. I see suffering and I let it pass. I do not get involved or take action in many situations because I have a pass to not. These things make society work as it is, and it makes me work within society.

This is a self-perpetuating, mutually-abusive, co-dependent relationship. I must tell myself stories about how it is okay that I am preferring the status quo, that I am buying into the system, because I need to do it to survive within it and that people are relying on the system as it stands to survive, because that is how they know to survive.

Among other things, I am worried about the psychic damage this causes us. When we view ourselves as social actors rather than moral actors, we tell ourselves it is okay to take non-moral actions (or inactions); however, we carry within ourselves intuitions and feelings about what is right, just, and moral. We ignore these in order to act in our social roles. From the perspective of the individual, we’re hurting ourselves and suffering for the sake of benefiting and perpetuating a caustic society. From the perspective of society, we are perpetuating something that is not just less than ideal, but actually not good because it is based on allowing suffering.[1]

[1] This is for the sake of this text. I don’t know if I actually feel that this is correct.

My goal was to make this only 500 words, so I am going to stop here.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.


Worse Than FailureCodeSOD: //article title here

Menno was reading through some PHP code and was happy to see that it was thoroughly commented:

function degToRad ($value) {
    return $value * (pi()/180); // convert excel timestamp to php date
}

Today's code is probably best explained in meme format:
expanding brain meme: write clear comments, comments recap the exact method name, write no comments, write wrong comments

As Menno summarizes: "It's a function that has the right name, does the right thing, in the right way. But I'm not sure how that comment came to be."
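For contrast, here is what the function looks like when the comment actually describes the code (a trivial re-sketch in Python, not taken from the original article):

```python
import math

def deg_to_rad(value: float) -> float:
    """Convert an angle in degrees to radians."""
    return value * (math.pi / 180)  # degrees -> radians (no Excel timestamps involved)

print(deg_to_rad(180))  # approximately 3.14159 (pi)
```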


Planet DebianRussell Coker: More About the PowerEdge T710

I’ve got the T710 (mentioned in my previous post [1]) online. When testing the T710 at home I noticed that sometimes the VGA monitor I was using would start flickering when in some parts of the BIOS setup; it seemed that the horizontal sync wasn’t working properly. It didn’t seem to be a big deal at the time. When I deployed it the KVM display that I had planned to use with it mostly didn’t display anything. When the display was working the KVM keyboard wouldn’t work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the T710 also wouldn’t work with my VGA->HDMI device so I couldn’t get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to change 0 to 9 when entering numbers; that would make it easier to enter for the DNS server. Another thing Dell should do is make the default gateway have a default value according to the IP address and netmask of the server.

When I got IDRAC going it was easy to set up a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller, and then reboot into the server image.

When I transferred the SSDs from the old server to the newer Dell server the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in so they are hanging in mid air attached only by the SATA/SAS connectors. Plugging them in this way took the space of the drive bay above, so instead of having 2*3.5″ disks I have 1*2.5″ SSD and need the extra space to get my hand in. The T710 is designed for 6*3.5″ disks and I’m going to have trouble if I ever want to have more than 3*2.5″ SSDs. Fortunately I don’t think I’ll need more SSDs.

After booting the system I started getting alerts about a “fault” in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it’s in a M.2 to regular SATA adaptor which might have some problems. The data seems fine though, a BTRFS scrub found no checksum errors. I guess I’ll have to buy a replacement SSD soon.

I configured the system to use the “nosmt” kernel command line option to disable hyper-threading (which won’t provide much performance benefit but which makes certain types of security attacks much easier). I’ve configured BOINC to run on 6/8 CPU cores and surprisingly that didn’t cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn’t need a lot of cooling when run with SSDs.
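A hypothetical sketch of how that looks on a Debian system booting via GRUB (the variable name is the stock Debian one; verify against your own /etc/default/grub before copying):

```shell
# /etc/default/grub -- append "nosmt" to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet nosmt"
```

After editing, run update-grub and reboot; cat /proc/cmdline should then include nosmt, and the sibling threads disappear from /proc/cpuinfo.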

Planet DebianNorbert Preining: GIMP washed out colors: Color to Alpha and layer recombination

Just to remind myself because I keep forgetting it again and again: If you get washed out colors when doing color to alpha and then recombine layers in GIMP, that is due to the new default in GIMP 2.10 that combines layers in linear RGB.

This creates problems because Color to Alpha works in perceptual RGB, and the recombination in linear creates washed out colors. The solution is to right-click on the respective layer, select “Composite Space”, and there select “RGB (perceptual)”. Here is the “bug” report that has been open for two years.
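The effect can be sketched numerically (a hypothetical illustration using a plain gamma-2.2 transfer curve as a stand-in for the exact sRGB formula):

```python
# Blend 50% gray over white at 50% opacity, first mixing the perceptual
# (sRGB-like) values directly, then mixing in linear light and converting back.
def to_linear(c: float) -> float:
    return c ** 2.2

def to_perceptual(c: float) -> float:
    return c ** (1 / 2.2)

fg, bg, alpha = 0.5, 1.0, 0.5

blend_perceptual = alpha * fg + (1 - alpha) * bg
blend_linear = to_perceptual(alpha * to_linear(fg) + (1 - alpha) * to_linear(bg))

print(round(blend_perceptual, 3))  # 0.75
print(round(blend_linear, 3))      # roughly 0.8 -- noticeably lighter, i.e. "washed out"
```

The linear-light blend comes out lighter than the perceptual one, which is exactly the washed-out look described above when a layer computed in perceptual space is recombined in linear space.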

Hoping that for next time I remember it.


Krebs on SecurityDue Diligence That Money Can’t Buy

Most of us automatically put our guard up when someone we don’t know promises something too good to be true. But when the too-good-to-be-true thing starts as our idea, sometimes that instinct fails to kick in. Here’s the story of how companies searching for investors to believe in their ideas can run into trouble.

Nick is an investment banker who runs a firm that helps raise capital for its clients (Nick is not his real name, and like other investment brokers interviewed in this story, he spoke with KrebsOnSecurity on condition of anonymity). Nick’s company works primarily in the mergers and acquisitions space, and his job involves advising clients about which companies and investors might be a good bet.

In one recent engagement, a client of Nick’s said they’d reached out to an investor from Switzerland — The Private Office of John Bernard — whose name was included on a list of angel investors focused on technology startups.

“We ran into a group that one of my junior guys found on a list of data providers that compiled information on investors,” Nick explained. “I told them what we do and said we were working with a couple of companies that were interested in financing, and asked them to send some materials over. The guy had a British accent, claimed to have made his money in tech and in the dot-com boom, and said he’d sold a company to Geocities that was then bought by Yahoo.”

But Nick wasn’t convinced Mr. Bernard’s company was for real. Nick and his colleagues couldn’t locate the company Mr. Bernard claimed to have sold, and while Bernard said he was based in Switzerland, virtually all of his staff were listed on LinkedIn as residing in Ukraine.

Nick told his clients about his reservations, but each nevertheless was excited that someone was finally interested enough to invest in their ideas.

“The CEO of the client firm said, ‘This is great, someone is willing to believe in our company’,” Nick said. “After one phone call, he made an offer to invest tens of millions of dollars. I advised them not to pursue it, and one of the clients agreed. The other was very gung ho.”

When companies wish to link up with investors, what follows involves a process known as “due diligence” wherein each side takes time to research the other’s finances, management, and any lurking legal liabilities or risks associated with the transaction. Typically, each party will cover their own due diligence costs, but sometimes the investor or the company that stands to benefit from the transaction will cover the associated fees for both parties.

Nick said he wasn’t surprised when Mr. Bernard’s office insisted that its due diligence fees of tens of thousands of dollars be paid up front by his client. And he noticed the website for the due diligence firm that Mr. Bernard suggested using — — also was filled with generalities and stock photos, just like John Bernard’s private office website.

“He said we used to use big accounting firms for this but found them to be ineffective,” Nick said. “The company they wanted us to use looked like a real accounting firm, but we couldn’t find any evidence that they were real. Also, we asked to see an investment portfolio. He said he’s invested in over 30 companies, so I would expect to see a document that says, ‘here’s the various companies we’ve invested in.’ But instead, we got two recommendation letters on letterhead saying how great these investors were.”

KrebsOnSecurity located two other investment bankers who had similar experiences with Mr. Bernard’s office.

“A number of us have been comparing notes on this guy, and he never actually delivers,” said one investment banker who asked not to be named because he did not have permission from his clients. “In each case, he agreed to invest millions with no push back, the documentation submitted from their end was shabby and unprofessional, and they seem focused on companies that will write a check for due diligence fees. After their fees are paid, the experience has been an ever increasing and inventive number of reasons why the deal can’t close, including health problems and all sorts of excuses.”

Mr. Bernard’s investment firm did not respond to multiple requests for comment. The one technology company this author could tie to Mr. Bernard was, a Swiss concern that provides encrypted email and data services. The domain was registered in 2015 by Inside Knowledge. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

SafeSwiss co-CEO and Secure Swiss Data founder David Bruno said he couldn’t imagine that Mr. Bernard would be involved in anything improper.

“I can confirm that I know John Bernard and have always found him very honourable and straight forward in my dealings with him as an investor,” Bruno said. “To be honest with you, I struggle to believe that he would, or would even need to be, involved in the activity you mentioned, and quite frankly I’ve never heard about those things.”


John Bernard is named in historic WHOIS domain name registration records from 2015 as the owner of the due diligence firm. Another “capital investment” company tied to John Bernard’s Swiss address is, which was registered in November 2017.

Curiously, in May 2018, its WHOIS ownership records switched to a new name with the same initials: one “Jonathan Bibi,” with an address in the offshore company haven of Seychelles. Likewise, Mr. Bibi was listed as a onetime owner of the domain for Mr. Bernard’s company  — — as well as

Running a reverse WHOIS search through [an advertiser on this site] reveals several other interesting domains historically tied to a Jonathan Bibi from the Seychelles. Among those is, a business that was blacklisted by French regulators in 2018 for promoting cryptocurrency scams.

Another Seychelles concern tied to Mr. Bibi was, which in 2017 and 2018 promoted sports betting via cryptocurrencies and offered tips on picking winners.

A Google search on Jonathan Bibi from Seychelles reveals he was listed as a respondent in a lawsuit filed in 2018 by the State of Missouri, which named him as a participant in an unlicensed “binary options” investment scheme that bilked investors out of their money.

Jonathan Bibi from Seychelles also was named as the director of another binary options scheme called the GoldmanOptions scam that was ultimately shut down by regulators in the Czech Republic.

Jason Kane is an attorney with Peiffer Wolf, a litigation firm that focuses on investment fraud. Kane said companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

“These are cases where you might win but you’ll never collect any money,” Kane said. “This seems like an investment twist on those fairly simple scams we all can’t believe people fall for, but as scams go this one is pretty good. Do this a few times a year and you can make a decent living and no one is really going to come after you.”

Planet DebianJonathan McDowell: onak 0.6.1 released

Yesterday I did the first release of my OpenPGP compatible keyserver, onak, in 4 years. Actually, 2 releases because I discovered my detection for various versions of libnettle needed some fixing.

It was largely driven by the need to get an updated package sorted for Debian due to the removal of dh-systemd, but it should have come sooner. This release has a number of clean-ups for dealing with the hostility shown to the keyserver network in recent years. In particular it implements some of dkg’s Abuse-Resistant OpenPGP Keystores, and finally adds support for verifying signatures fully. That opens up the ability to run a keyserver that will only allow verifiable updates to keys. This doesn’t tie in with folk who want to run PGP based systems because of the anonymity, but for those of us who think PGP’s strength is in the web of trust it’s pretty handy. And it’s all configurable to taste; you can turn off all the verification if you want, or verify everything but not require any signatures, or even enable v3 keys if you feel like it.

The main reason this release didn’t come sooner is that I’m painfully aware of the bits that are missing. In particular:

  • Documentation. It’s all out of date, it needs a lot of work.
  • FastCGI support. Presently you need to run the separate CGI binaries.
  • SKS Gossip support. onak only supports the email syncing option. If you run a stand alone server this is fine, but Gossip is the right approach for a proper keyserver network.

Anyway. Available locally or via GitHub.

0.6.0 - 13th September 2020

  • Move to CMake over autoconf
  • Add support for issuer fingerprint subpackets
  • Add experimental support for v5 keys
  • Add read-only OpenPGP keyring backed DB backend
  • Move various bits into their own subdirectories in the source tree
  • Add support for full signature verification
  • Drop v3 keys by default when cleaning keys
  • Various code cleanups
  • Implement pieces of draft-dkg-openpgp-abuse-resistant-keystore-03
  • Add support for a fingerprint blacklist (e.g. Evil32)
  • Deprecate the .conf configuration file format
  • Drop version info from armored output
  • Add option to deny new keys and only allow updates to existing keys
  • Various pieces of work removing support for 32 bit key IDs and coping with colliding 64 bit key IDs.
  • Remove support for libnettle versions that lack the full SHA2 suite

0.6.1 - 13th September 2020

  • Fixes for compilation without nettle + with later releases of nettle

Planet DebianEmmanuel Kasper: Quick debugging of a Linux printer via cups command line tools

Step by step cups debugging ( here with a network printer)

Which printer queue do I have configured ?
lpstat -p
printer epson is idle.  enabled since Sat Dec 24 13:18:09 2017
# here I have a printer called 'epson', doing nothing, that the cups daemon considers enabled

Which connection am I using to get to this printer ?
lpstat -v
device for epson: lpd://epson34dea0.local:515/PASSTHRU
# here the locally configured 'epson' printer queue is backed by a network device at the address epson34dea0.local, to which I am sending my print jobs via the lpd protocol

Is my printer ready?
lpq
epson is ready
no entries

# here my local print queue 'epson' is accepting print jobs (which does not say anything about the physical device; it might be offline)

If your local print queue 'epson' is not ready, you can try to re-enable it in the cups system with:

sudo cupsenable epson

If you notice that the printer is disabled all the time, because, for instance, of a flaky network, you can edit /etc/cups/printers.conf and change the ErrorPolicy for each printer from stop-printer to retry-job.
It should also be possible to set this parameter in cupsd.conf.
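For reference, the per-printer stanza looks roughly like this (a sketch based on the 'epson' queue above; note that cupsd rewrites printers.conf, so stop the daemon before hand-editing it):

```
<Printer epson>
  DeviceURI lpd://epson34dea0.local:515/PASSTHRU
  ErrorPolicy retry-job
</Printer>
```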

Finally you can print a test page with
lpr /usr/share/cups/data/testprint

Cory DoctorowIP

This week on my podcast, I read the first half of my latest Locus Magazine column, “IP,” the longest, most substantial column I’ve written in my 14 years on Locus‘s masthead.

IP explores the history of how we have allowed companies to control more and more of our daily lives, and how “IP” has come to mean, “any law that I can invoke that allows me to control the conduct of my competitors, critics, and customers.”

It represents a major realization on my part after decades of writing, talking and thinking about this stuff. I hope you give it a listen and/or a read.


Planet DebianEmmanuel Kasper: Using Debian and RHEL troubleshootings containers on Kubernetes & OpenShift

You can connect to a running pod with oc/kubectl rsh pod_name, or start a copy of a running pod with oc debug pod_name, but as best practices recommend unprivileged, slim container images, where do you get sosreport, kdump, dig and nmap for troubleshooting?

Fortunately you can start either a transient Debian troubleshooting container with:

oc run troubleshooting-pod --stdin --tty --rm --image=debian:stable

or a Red Hat Enterprise Linux one:

oc run troubleshooting-pod --stdin --tty --rm --image=registry.access.redhat.com/ubi8/ubi

Sociological ImagesStop Trying to Make Fetch Happen: Social Theory for Shaken Routines

It is hard to keep up habits these days. As the academic year starts up with remote teaching, hybrid teaching, and rapidly-changing plans amid the pandemic, many of us are thinking about how to design new ways to connect now that our old habits are disrupted. How do you make a new routine or make up for old rituals lost? How do we make them stick and feel meaningful?

Social science shows us how these things take time, and in a world where we would all very much like a quick solution to our current social problems, it can be tough to sort out exactly what new rules and routines can do for us.

For example, The New York Times recently profiled “spiritual consultants” in the workplace – teams that are tasked with creating a more meaningful and communal experience on the job. This is part of a larger social trend of companies and other organizations implementing things like mindfulness practices and meditation because they…keep workers happy? Foster a sense of community? Maybe just keep the workers just a little more productive in unsettled times?

It is hard to talk about the motives behind these programs without getting cynical, but that snark points us to an important sociological point. Some of our most meaningful and important institutions emerge from social behavior, and it is easy to forget how hard it is to design them into place.

This example reminded me of the classic Social Construction of Reality by Berger and Luckmann, who argue that some of our strongest and most established assumptions come from habit over time. Repeated interactions become habits, habits become routines, and suddenly those routines take on a life of their own that becomes meaningful to the participants in a way that “just is.” Trust, authority, and collective solidarity fall into place when people lean on these established habits. In other words: on Wednesdays we wear pink.

The challenge with emergent social institutions is that they take time and repetition to form. You have to let them happen on their own, otherwise they don’t take on the same sense of meaning. Designing a new ritual often invites cringe, because it skips over the part where people buy into it through their collective routines. This is the difference between saying “on Wednesdays we wear pink” and saying

“Hey team, we have a great idea that’s going to build office solidarity and really reinforce the family dynamic we’ve got going on. We’re implementing: Pink. Wednesdays.”

All of our usual routines are disrupted right now, inviting fear, sadness, anger, frustration, and disappointment. People are trying to persist with the rituals closest to them, sometimes to the extreme detriment of public health (see: weddings, rallies, and ugh). I think there’s some good sociological advice for moving through these challenges for ourselves and our communities: recognize those emotions, trust in the routines and habits that you can safely establish for yourself and others, and know that they will take a long time to feel really meaningful again, but that doesn’t mean they aren’t working for you. In other words, stop trying to make fetch happen.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.


Kevin RuddA Rational Fear Podcast: Climate Action


The post A Rational Fear Podcast: Climate Action appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Nice Save

Since HTTP is fundamentally stateless, developers have found a million ways to hack state into web applications. One of my "favorites" was the ASP.NET ViewState approach.

The ViewState is essentially a dictionary, where you can store any arbitrary state values you might want to track between requests. When the server outputs HTML to send to the browser, the contents of ViewState are serialized, hashed, and base-64 encoded and dumped into an <input type="hidden"> element. When the next request comes in, the server unpacks the hidden field and deserializes the dictionary. You can store most objects in it, if you'd like. The goal of this, and all the other WebForm state stuff was to make handling web forms more like handling forms in traditional Windows applications.
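The round trip can be sketched in Python (a loose analogy only: pickle stands in for the .NET serializer, an HMAC for the hash, and the function names are illustrative, not the real ASP.NET API):

```python
import base64
import hashlib
import hmac
import pickle

SECRET = b"machine-key"  # stands in for the ASP.NET machine key

def pack_viewstate(state: dict) -> str:
    """Serialize the state dictionary, sign it, and base64-encode it
    for embedding in an <input type="hidden"> field."""
    payload = pickle.dumps(state)
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.b64encode(mac + payload).decode()

def unpack_viewstate(blob: str) -> dict:
    """Decode the hidden field, verify the MAC, and deserialize."""
    raw = base64.b64decode(blob)
    mac, payload = raw[:32], raw[32:]  # SHA-256 MAC is 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("ViewState MAC validation failed")
    return pickle.loads(payload)

hidden_field = pack_viewstate({"sInstructions": ""})
print(unpack_viewstate(hidden_field))  # {'sInstructions': ''}
```

The point of the MAC check is that the client can read the blob but cannot tamper with it undetected, which is why the server is willing to trust state that made a round trip through the browser.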

It's "great". It's extra great when its default behavior is to include the full state for every UI widget on the page. The incident which inspired Remy's Law of Requirements Gathering was a case where our users wanted like 500 text boxes on a page, and we blew out our allowed request sizes due to gigundous ViewState because, at the time, we didn't know about that "helpful" feature.

Ryan N inherited some code which uses this, and shockingly, ViewState isn't the biggest WTF here:

protected void ValidateForm(object sender, EventArgs e)
{
    bool Save = true;
    if (sInstructions == string.Empty)
    {
        sInstructions = string.Empty;
    }
    else
    {
        Save = false;
    }
    if (Save) {...}
}

Let me just repeat part of that:

if (sInstructions == string.Empty)
{
    sInstructions = string.Empty;
}

If sInstructions is empty, set it to be empty. If sInstructions is not empty, then we set… Save to false? Reading this code, it implies if we have instructions we shouldn't save? What the heck is this sInstructions field anyway?

Well, Ryan explains: "sInstructions is a ViewState string variable, it holds error messages."

I had to spend a moment thinking "wait, why is it called 'instructions'?" But the method name is called ValidateForm, which explains it. It's because it's user instructions, as in, "please supply a valid email address". Honestly, I'm more worried about the fact that this approach is starting to make sense to me, than anything else.


Planet DebianRuss Allbery: Review: Who Do You Serve, Who Do You Protect?

Review: Who Do You Serve, Who Do You Protect?, edited by Maya Schenwar, et al.

Editor: Maya Schenwar
Editor: Joe Macaré
Editor: Alana Yu-lan Price
Publisher: Haymarket Books
Copyright: June 2016
ISBN: 1-60846-684-1
Format: Kindle
Pages: 250

Who Do You Serve, Who Do You Protect? is an anthology of essays about policing in the United States. It's divided into two sections: one that enumerates ways that police are failing to serve or protect communities, and one that describes how communities are building resistance and alternatives. Haymarket Books (a progressive press in Chicago) has made it available for free in the aftermath of the George Floyd killing and resulting protests in the United States.

I'm going to be a bit unfair to this book, so let me start by admitting that the mismatch between it and the book I was looking for is not entirely its fault.

My primary goal was to orient myself in the discussion on the left about alternatives to policing. I also wanted to sample something from Haymarket Books; a free book was a good way to do that. I was hoping for a collection of short introductions to current lines of thinking that I could selectively follow in longer writing, and an essay collection seemed ideal for that.

What I had not realized (which was my fault for not doing simple research) is that this is a compilation of articles previously published by Truthout, a non-profit progressive journalism site, in 2014 and 2015. The essays are a mix of reporting and opinion but lean towards reporting. The earliest pieces in this book date from shortly after the police killing of Michael Brown, when racist police violence was (again) reaching national white attention.

The first half of the book is therefore devoted to providing evidence of police abuse and violence. This is important to do, but it's sadly no longer as revelatory in 2020, when most of us have seen similar things on video, as it was to white America in 2014. If you live in the United States today, while you may not be aware of the specific events described here, you're unlikely to be surprised that Detroit police paid off jailhouse informants to provide false testimony ("Ring of Snitches" by Aaron Miguel Cantú), or that Chicago police routinely use excessive deadly force with no consequences ("Amid Shootings, Chicago Police Department Upholds Culture of Impunity" by Sarah Macaraeg and Alison Flowers), or that there is a long history of police abuse and degradation of pregnant women ("Your Pregnancy May Subject You to Even More Law Enforcement Violence" by Victoria Law). There are about eight essays along those lines.

Unfortunately, the people who excuse or disbelieve these stories are rarely willing to seek out new evidence, let alone read a book like this. That raises the question of intended audience for the catalog of horrors part of this book. The answer to that question may also be the publication date; in 2014, the base of evidence and example for discussion had not been fully constructed. This sort of reporting is also obviously relevant in the original publication context of web-based journalism, where people may encounter these accounts individually through social media or other news coverage. In 2020, they offer reinforcement and rhetorical evidence, but I'm dubious that the people who would benefit from this knowledge will ever see it in this form. Those of us who will are already sickened, angry, and depressed.

My primary interest was therefore in the second half of the book: the section on how communities are building resistance and alternatives. This is where I'm going to be somewhat unfair because the state of that conversation may have been different in 2015 than it is now in 2020. But these essays were lacking the depth of analysis that I was looking for.

There is a human tendency, when one becomes aware of an obvious wrong, to simply publicize the horrible thing that is happening and expect someone to do something about it. It's obviously and egregiously wrong, so if more people knew about it, certainly it would be stopped! That has happened repeatedly with racial violence in the United States. It's also part of the common (and school-taught) understanding of the Civil Rights movement in the 1960s: activists succeeded in getting the violence on the cover of newspapers and on television, people were shocked and appalled, and the backlash against the violence created political change.

Putting aside the fact that this is too simplistic of a picture of the Civil Rights era, it's abundantly clear at this point in 2020 that publicizing racist and violent policing isn't going to stop it. We're going to have to do something more than draw attention to the problem. Deciding what to do requires political and social analysis, not just of the better world that we want to see but of how our current world can become that world.

There is very little in that direction in this book. Who Do You Serve, Who Do You Protect? does not answer the question of its title beyond "not us" and "white supremacy." While those answers are not exactly wrong, they're also not pushing the analysis in the direction that I wanted to read.

For example (and this is a long-standing pet peeve of mine in US political writing), it would be hard to tell from most of the essays in this book that any country besides the United States exists. One essay ("Killing Africa" by William C. Anderson) talks about colonialism and draws comparisons between police violence in the United States and international treatment of African and other majority-Black countries. One essay talks about US military behavior overseas ("Beyond Homan Square" by Adam Hudson). That's about it for international perspective. Notably, there is no analysis here of what other countries might be doing better.

Police violence against out-groups is not unique to the United States. No one has entirely solved this problem, but versions of this problem have been handled with far more success than here. The US has a comparatively appalling record; many countries in the world, particularly among comparable liberal democracies in Europe, are doing far better on metrics of racial oppression by agents of the government and of law enforcement violence. And yet it's common to approach these problems as if we have to develop a solution de novo, rather than ask what other countries are doing differently and if we could do some of those things.

The US has some unique challenges, both historical and with the nature of endemic violence in the country, so perhaps such an analysis would turn up too many US-specific factors to copy other people's solutions. But we need to do the analysis, not give up before we start. Novel solutions can lead to new problems; other countries have tested, working improvements that could provide a starting framework and some map of potential pitfalls.

More fundamentally, only the last two essays of this book propose solutions more complex than "stop." The authors are very clear about what the police are doing, seem less interested in why, and are nearly silent on how to change it. I suspect I am largely in political agreement with most of the authors, but obviously a substantial portion of the country (let alone its power structures) is not, and therefore nothing is changing. Part of the project of ending police violence is understanding why the violence exists, picking apart the motives and potential fracture lines in the political forces supporting the status quo, and building a strategy to change the politics. That isn't even attempted here.

For example, the "who do you serve?" question of the book's title is more interesting than the essays give it credit for. Police are not a monolith. Why do Black people become police officers? What are their experiences? Are there police forces in the United States that are doing better than others? What makes them different? Why do police act with violence in the moment? What set of cultural expectations, training experiences, anxieties, and fears lead to that outcome? How do we change those factors?

Or, to take another tack, why are police not held accountable even when there is substantial public outrage? What political coalition supports that immunity from consequences, what are its fault lines and internal frictions, and what portions of that coalition could be broken off, peeled away, or removed from power? To whom, institutionally, are police forces accountable? What public offices can aspiring candidates run for that would give them oversight capability? This varies wildly throughout the United States; political approaches that work in large cities may not work in small towns, or with county sheriffs, or with the FBI, or with prison guards.

To treat these organizations as a monolith and their motives as uniform is bad political tactics. It gives up points of leverage.

I thought the best essays of this collection were the last two. "Community Groups Work to Provide Emergency Medical Alternatives, Separate from Police," by Candice Bernd, is a profile of several local emergency response systems that divert emergency calls from the police to paramedics, mental health experts, or social workers. This is an idea that's now relatively mainstream, and it seems to be finding modest success where it has been tried. It's more of a harm mitigation strategy than an attempt to deal with the root problem, but we're going to need both.

The last essay, "Building Community Safety" by Ejeris Dixon, is the only essay in this book that is pushing in the direction that I was hoping to read. Dixon describes building an alternative system that can intervene in violent situations without using the police. This is fascinating and I'm glad that I read it.

It's also frustrating in context because Dixon's essay should be part of a discussion. Dixon describes spending years learning de-escalation techniques, doing the hard work of community discussion and collective decision-making, and making deep investment in the skills required to handle violence without calling in a dangerous outside force. I greatly admire this approach (also common in parts of the anarchist community) and the people who are willing to commit to it. But it's an immense amount of work, and as Dixon points out, that work often falls on the people who are least able to afford it. Marginalized communities, for whom the police are often dangerous, are also likely to lack both time and energy to invest in this type of skill training. And many people simply will not do this work even if they do have the resources to do it.

More fundamentally, this approach conflicts somewhat with division of labor. De-escalation and social work are both professional skills that require significant time and practice to hone, and as much as I too would love to live in a world where everyone knows how to do some amount of this work, I find it hard to imagine scaling this approach without trained professionals. The point of paying someone to do this work as their job is that the money frees up their time to focus on learning those skills at a level that is difficult to do in one's free time. But once you have an organized group of professionals who do this work, you have to find a way to keep them from falling prey to the problems that plague the police, which requires understanding the origins of those problems. And that's putting aside the question of how large the residual of dangerous crime that cannot be addressed through any form of de-escalation might be, and what organization we should use to address it.

Dixon's essay is great; I wouldn't change anything about it. But I wanted to see the next essay engaging with Dixon's perspective and looking for weaknesses and scaling concerns, and then the next essay that attempts to shore up those weaknesses, and yet another essay that grapples with the challenging philosophical question of a government monopoly on force and how that can and should come into play in violent crime. And then essays on grass-roots organizing in the context of police reform or abolition, and on restorative justice, and on the experience of attempting police reform from the inside, and on how to support public defenders, and on the merits and weaknesses of focusing on electing reform-minded district attorneys. Unfortunately, none of those are here.

Overall, Who Do You Serve, Who Do You Protect? was a disappointment. It was free, so I suppose I got what I paid for, and I may have had a different reaction if I read it in 2015. But if you're looking for a deep discussion on the trade-offs and challenges of stopping police violence in 2020, I don't think this is the place to start.

Rating: 3 out of 10


Planet DebianEnrico Zini: Travel links

A few interesting places to visit. Traveling can be complicated these days, but internet searches can be interesting enough on their own.

For example, churches:

Or fascinating urbanistic projects, for which it's worth looking up photos:

Or nature, like Get Lost in Mega-Tunnels Dug by South American Megafauna

Planet DebianSteinar H. Gunderson: mlocate slowness

/usr/bin/mlocate asdf > /dev/null  23.28s user 0.24s system 99% cpu 23.536 total
/usr/local/sbin/mlocate-fast asdf > /dev/null  1.97s user 0.27s system 99% cpu 2.251 total

That is just changing mbsstr to strstr. Which I guess causes trouble for EUC_JP locales or something? But guys, scanning linearly through millions of entries is sort of outdated :-(

Planet DebianJonathan Carter: Wootbook / Tongfang laptop

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, support up to 64GB of RAM and clearly run Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that these are made by Tongfang and that its model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU and that’s nice and compact and that breaks the 16GB memory barrier, finding this one that jumped all the way to 64GB sealed the deal for me.

This is the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz Octa Core CPU (4MB L2 cache, 8MB L3 cache, 7nm process).
  • 64GB RAM (2x DDR4 2666MHz 32GB modules)
  • 1TB NVMe disk
  • 14″ 1920×1080 (16:9 aspect ratio) matte display.
  • Real ethernet port (gigabit)
  • Intel Wifi 6 AX200 wireless ethernet
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps, this one connects to my home network between 800-1000Mbps. At this speed I don’t bother connecting via cable at home.

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of my own questions about this device that I had before, that other people in the Debian community might also have if they’re interested in this device. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (that just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1KG instead of the 1.38KG of the X250. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook, you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.


On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this keyboard very comfortably in the above average range. I’ve been typing on it a lot over the last 3 days (including this blog post) and it started feeling natural very quickly and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. This is something normal users don’t care about, but since I use mc (where insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of my keyboard between alt and ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated pgup/pgdn buttons. I use those a lot in apps to switch between tabs. At least the Fn button and the ctrl buttons are next to each other, so pressing those together with up and down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor-button on the top left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left control + keyscan code 93. The airplane mode, volume and brightness buttons work fine.

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces, your hands don’t have to move far from the keyboard for quick pointer operations and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle click with tapping 3 fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, this keyboard and touchpad is completely adequate. Only time will tell whether the keyboard still works fine in a few years from now, but I really have no serious complaints about it.


The X250 had a brightness of 172 nits. That’s not very bright; I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently, my eyes are very photo-sensitive so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.


The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one HDMI port. Initially I was a bit concerned at the lack of DisplayPort (which my X250 has), but with an adapter it’s possible to convert the USB-C port to DisplayPort and it seems like it’s possible to connect up to 3 external displays without using something weird like display-over-USB3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, the biggest downfall it has is really its pointing device. It doesn’t have a trackpoint and the touchpad is OK and completely usable, but not great. I use my laptop on a desk most of the time so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double what I paid for this machine. It doesn’t seem like you can really buy the perfect laptop no matter how much money you want to spend; there’s some compromise no matter what you end up choosing. But this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this and with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

Kevin RuddDoorstop: Queensland’s Covid Response


Topic: The Palaszczuk Government’s Covid-19 response

Journalist: Kevin, what’s your stance on the border closures at the moment? Do you think it’s a good idea to open them up, or to keep them closed?

Kevin Rudd: Anyone who’s honest will tell you these are really hard decisions and it’s often not politically popular to do the right thing. But I think Annastacia Palaszczuk, faced with really hard decisions, I think has made the right on-balance judgment, because she’s putting first priority on the health security advice for all Queenslanders. That I think is fundamental. You see, in my case, I’m a Queenslander, proudly so, but the last five years I’ve lived in New York. Let me tell you what it’s like living in a city like New York, where they threw health security to the wind. It was a complete debacle. And so many thousands of people died as a result. So I think we’ve got to be very cautious about disregarding the health security advice. So when I see the Murdoch media running a campaign to force Premier Palaszczuk to open the borders of Queensland, I get suspicious about what’s underneath all that. Here’s the bottom line: Do you want your Premier to make a decision based on political advice, or based on media bullying? Or do you want your Premier to make a decision based on what the health professionals are saying? And from my point of view — as someone who loves this state, loves the Sunshine Coast, grew up here and married to a woman who spent her life in business employing thousands of people — on balance, what Queenslanders want is for them to have their health needs put first. And you know something? It may not be all that much longer that we need to wait. But as soon as we say, ‘well, it’s time for politics to take over, it’s time for the Murdoch media to take over and dictate when elected governments should open the borders or not’, I think that’s a bad day for Queensland.

Journalist: Do you think the recession and the economic impact is something that Australia will be able to mitigate post pandemic?

Kevin Rudd: Well, the responsibility for mitigating the national recession lies in the hands of Prime Minister Morrison. I know something about what these challenges mean. I was Prime Minister during the Global Financial Crisis when the rest of the world went into recession. So he has it within his hands to devise the economic plans necessary for the nation; state governments are responsible primarily for the health of their citizens. That’s the division of labour in our commonwealth. So, it’s either you have a situation where Premiers are told that it’s in your political interests, because the Murdoch media has launched a campaign to throw open your borders, or you’re attentive to the health policy advice from the health professionals to keep all Queenslanders safe. It’s hard, it’s tough, but on balance I think she’s probably made the right call. In fact, I’m pretty sure she has.

The post Doorstop: Queensland’s Covid Response appeared first on Kevin Rudd.

Planet DebianRussell Coker: Setting Up a Dell T710 Server with DRAC6

I’ve just got a Dell T710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step by step guide to doing stuff as the situation doesn’t work for that. I think that a list of how things are broken and what to avoid is more useful.

BIOS

Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something when it’s working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run, it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so upgrading your distribution while doing the install is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the T710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP, you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge T710 recommends IE 6.

To get a ssl cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC; the only field that matters is the CN (Common Name), the rest have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory; the SSL key for DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run letsencrypt the file name of the output file you want will be in the format “*_chain.pem“. You then have to upload that to IDRAC to have it used. Doing this for every renewal is a major pain given the short lifetime of Letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in.

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.
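A rough sketch of what such scripting could look like, assuming the CSR from “racadm sslcsrgen” has already been copied into the local directory (the hostname, password handling and the sslcertupload step are all illustrative, not tested here; ACME challenge handling for the DRAC’s hostname is glossed over, and some IDRAC6 firmware may only accept certificate uploads through the web interface):

```shell
# Hypothetical renewal helper for the workflow described above.
# Assumes ./drac.csr came from "racadm sslcsrgen" and that certbot can
# satisfy the ACME challenge for the DRAC's hostname.
certbot certonly --config-dir /etc/letsencrypt-drac \
    --csr drac.csr --standalone -d drac.example.com

# certbot writes the signed chain as NNNN_chain.pem in the current
# directory; upload the newest one with remote racadm (check that your
# firmware supports sslcertupload, otherwise use the web interface).
racadm -r drac.example.com -u root -p "$DRAC_PASSWORD" \
    sslcertupload -t 1 -f "$(ls -t ./*_chain.pem | head -n 1)"
```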

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.
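If stale sessions pile up, a loop along these lines could close them in bulk since “closessn -a” is useless. The getssninfo output format varies between firmware versions, so the awk pattern below is an assumption to check against your own output first; note also that each ssh invocation is itself a session, which could bite near the session limit:

```shell
# Hypothetical cleanup loop: close every session listed by
# "racadm getssninfo" one at a time. Adjust the awk pattern to match
# the actual session-table format on your firmware.
ssh root@drac 'racadm getssninfo' |
  awk '/^[0-9]+/ { print $1 }' |
  while read -r id; do
    ssh root@drac "racadm closessn -i $id"
  done
```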

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The T710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won’t run this, so you download the .jnlp file and pass it as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘, and Google didn’t turn up any solutions to this. Java 8 didn’t have that problem but had a “connection failed” error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal, so I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising Serial console setup for DRAC [1]. Once you have done that put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console from remote you can ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.
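To make those kernel arguments survive kernel upgrades on Debian, the usual route is via GRUB; this is a minimal sketch assuming the stock /etc/default/grub layout (back the file up first):

```shell
# Append the serial console settings to the default kernel command line.
# ttyS1 and 115200 must match the IDRAC serial configuration.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 console=tty0 console=ttyS1,115200n8"/' \
    /etc/default/grub
update-grub
```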

If I was supporting many Dell systems I’d probably setup a ssh to JavaScript gateway to provide a web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, then any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside having “racreset” being so close to “racresetcfg” (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management

deb xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going.

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0

RAID Control

The RAID controller is known as PERC (PowerEdge Raid Controller), and the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have been built against old versions of the libraries. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show” to give detailed information on controller 0 SEGVs and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are called from old libc code to work, or I could run code from a dubious source as root.

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely, mine doesn’t have that. As an aside the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find so blog posts from other sysadmins is the best source of information.

I have configured IDRAC to have all of the BIOS output go to the virtual serial console over ssh so I can see the BIOS prompt me for PERC administration but it didn’t accept my key presses when I tried to do so. In any case this requires downtime and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3], it uses the ptrace interface to emulate vsyscall on a per process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

Planet Linux AustraliaSimon Lyall: Talks from KubeCon + CloudNativeCon Europe 2020 – Part 1

Various talks I watched from their YouTube playlist.

Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling (KEDA) – Tom Kerkhove

I’ve been using Keda a little bit at work. Good way to scale on random stuff. At work I’m scaling pods against the length of AWS SQS queues and as a cron. Lots of other options. This talk is a 9 minute intro. The small font on the screen is a bit hard to read in this talk.
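As a sketch of the SQS-plus-cron setup mentioned above (the deployment name, queue URL and schedule are made up, not from the talk), a KEDA ScaledObject combining both triggers looks roughly like this:

```shell
# Hypothetical example: scale the "worker" deployment on SQS queue
# length, plus a cron trigger for a known busy window. Assumes KEDA is
# already installed in the cluster.
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs
        queueLength: "50"       # target messages per replica
        awsRegion: us-east-1
    - type: cron
      metadata:
        timezone: Etc/UTC
        start: 0 8 * * *
        end: 0 18 * * *
        desiredReplicas: "5"
EOF
```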

Autoscaling at Scale: How We Manage Capacity @ Zalando – Mikkel Larsen, Zalando SE

  • These guys have their own HPA replacement for scaling: kube-metrics-adapter.
  • Outlines some new stuff in scaling in 1.18 and 1.19.
  • They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
  • Have up to 1000 nodes in some of their clusters. Have to play with address space per node; they also scale their control plane nodes vertically (control plane autoscaler).
  • Use the Vertical Pod Autoscaler, especially for things like Prometheus that vary with the size of the cluster. Have had problems with it scaling down too fast; they have some of their own custom changes in a fork.

Keynote: Observing Kubernetes Without Losing Your Mind – Vicki Cheung

  • Lots of metrics don’t cover what you want, and they get complex and hard to maintain
  • Monitor core user workflows (ie just test a pod launch and stop)
  • Tiny tools
    • One tool watches for events on the cluster and logs them -> elastic
    • A second watches container events -> elastic
    • End up with one timeline for a deploy/job covering everything
    • Empowers users to do their own debugging

Autoscaling and Cost Optimization on Kubernetes: From 0 to 100 – Guy Templeton & Jiaxin Shan

  • Intro to HPA and metric types. Plus some of the newer stuff like multiple metrics
  • Vertical Pod Autoscaler. Good for single-pod deployments. Doesn’t work well with JVM-based workloads.
  • Cluster Autoscaler.
    • A few things like using prestop hooks to give pods time to shutdown
    • pod priorities for scaling.
    • --expendable-pods-priority-cutoff to not expand for low-priority jobs
    • Using the priority-expander to try and expand spot instances first and then fall back to more expensive node types
    • Using mixed instance policy with AWS. Lots of instance types (same CPU/RAM though) to choose from.
    • Look at PodDisruptionBudgets
    • Some other CA flags like --scale-down-utilization-threshold to look at.
  • Mention of Keda
  • Best return is probably tuning HPA
  • There is also another similar talk. Note the male speaker talks very slowly, so crank up the playback speed.
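For reference, a minimal sketch of the HPA-with-multiple-metrics setup mentioned above, using the autoscaling/v2beta2 API (the deployment and metric names are invented, and the second metric assumes a custom metrics adapter is installed):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 30
  metrics:
    - type: Resource             # built-in CPU utilisation metric
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods                 # custom per-pod metric via an adapter
      pods:
        metric:
          name: requests_per_second
        target:
          type: AverageValue
          averageValue: "100"
```

The HPA scales on whichever metric demands the most replicas, which is why mixing a resource metric with a workload metric is a common pattern.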

Keynote: Building a Service Mesh From Scratch – The Pinterest Story – Derek Argueta

  • Changed to Envoy as a http proxy for incoming
  • Wrote own extension to make feature complete
  • Also another project migrating to mTLS
    • Huge amount of work for Java.
    • Lots of work to repeat for other languages
    • Looked at getting Envoy to do the work
    • Ingress LB -> Inbound Proxy -> App
  • Used j2 to build the Static config (with checking, tests, validation)
  • Rolled out to put envoy in front of other services with good TLS termination default settings
  • Extra Mesh Use Cases
    • Infrastructure-specific routing
    • SLI Monitoring
    • http cookie monitoring
  • Became a platform that people wanted to use.
  • Solving one problem first and incrementally using other things. Many groups had similar problems. “Just a node in a mesh”.

Improving the Performance of Your Kubernetes Cluster – Priya Wadhwa, Google

  • Tools – Mostly tested locally with Minikube (she is a Minikube maintainer)
  • Minikube pause – Pause the Kubernetes system processes but leave the app running; good if the cluster isn’t changing.
  • Looked at some articles from Brendan Gregg
  • Ran USE Method against Minikube
  • eBPF BCC tools against Minikube
  • biosnoop – noticed lots of writes from etcd
  • KVM Flamegraph – Lots of calls from ioctl
  • Theory that etcd writes might be a big contributor
  • How to tune etcd writes (updated the --snapshot-count flag to various numbers but it didn’t seem to help)
  • Noticed CPU spikes every few seconds
  • “pidstat 1 60”. Noticed a kubectl command running often: “kubectl apply” for the addons was being run regularly
  • Suspected addon manager running often
  • Could increase the addon manager poll time, but then addons would take a while to show up.
  • But in Minikube that’s not a problem, because Minikube knows when new addons are added and can run the addon manager directly rather than having it poll.
  • 32% reduction in overhead from turning off addon polling
  • Also reduced coredns number to one.
  • pprof – go tool
  • kube-apiserver pprof data
  • Spending lots of time dealing with incoming requests
  • Lots of requests from kube-controller-manager and kube-scheduler around leader-election
  • But Minikube is only running one of each. No need to elect a leader!
  • Flag to turn both off: --leader-elect=false
  • 18% reduction from reducing coredns to 1 and turning leader election off.
  • Back to looking at etcd overhead with pprof
  • writeFrameAsync in http calls
  • Theory: could increase --proxy-refresh-interval from 30s up to 120s. 70s seemed a good value, but unsure what the behaviour change was; asked and it didn’t appear to be a big problem.
  • 4% reduction in overhead



Planet Debian: Russ Allbery: PGP::Sign 1.03

Part of the continuing saga to clean up CPAN testing failures with this module. Test reports uncovered a tighter version dependency for the flags I was using for GnuPG v2 (2.1.23) and a version dependency for GnuPG v1 (1.4.20). As with the previous release, I'm now skipping tests if the version is too old, since this makes the CPAN test results more useful to me.

I also took advantage of this release to push the Debian packaging to Salsa (and the upstream branch as well since it's easier) and update the package metadata, as well as add an upstream metadata file since interest in that in Debian appears to have picked up again.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Planet Debian: Ryan Kavanagh: Configuring OpenIKED VPNs for StrongSwan Clients

A few weeks ago I configured a road warrior VPN setup. The remote end is on a VPS running OpenBSD and OpenIKED, the VPN is an IKEv2 VPN using x509 authentication, and the local end is StrongSwan. I also configured an IKEv2 VPN between my VPSs. Here are the notes for how to do so.

In all cases, to use x509 authentication, you will need to generate a bunch of certificates and keys:

  • a CA certificate
  • a key/certificate pair for each client

Fortunately, OpenIKED provides the ikectl utility to help you do so. Before going any further, you might find it useful to edit /etc/ssl/ikeca.cnf to set some reasonable defaults for your certificates.

Begin by creating and installing a CA certificate:

# ikectl ca vpn create
# ikectl ca vpn install

For simplicity, I am going to assume that you are managing your CA on the same host as one of the hosts that you want to configure for the VPN. If not, see the bit about exporting certificates at the beginning of the section on persistent host-host VPNs.

Create and install a key/certificate pair for your server. Suppose for example your first server is called

# ikectl ca vpn certificate create
# ikectl ca vpn certificate install

Persistent host-host VPNs

For each other server that you want to use, you need to also create a key/certificate pair on the same host as the CA certificate, and then copy them over to the other server. Assuming the other server is called

# ikectl ca vpn certificate create
# ikectl ca vpn certificate export

This last command will produce a tarball. Copy it over to the other server and install it:

# tar -C /etc/iked -xzpvf
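The flags matter here: -C extracts relative to /etc/iked and -p preserves the key’s restrictive permissions. A self-contained sketch of the same extraction on throwaway paths (all paths and file names below are invented; the real tarball comes from ikectl):

```shell
set -e
demo=/tmp/iked-demo
rm -rf "$demo"
mkdir -p "$demo/src/certs" "$demo/etc/iked"

# Stand-in for the certificate bundle exported by ikectl
echo "dummy cert" > "$demo/src/certs/server2.crt"
chmod 600 "$demo/src/certs/server2.crt"
tar -C "$demo/src" -czf "$demo/bundle.tgz" certs

# -C: unpack under our stand-in /etc/iked; -p: keep the 0600 mode
tar -C "$demo/etc/iked" -xzpvf "$demo/bundle.tgz"
ls -l "$demo/etc/iked/certs/server2.crt"
```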

Next, it is time to configure iked. To do so, you will need to find some information about the certificates you just generated. On the host with the CA, run

$ cat /etc/ssl/vpn/index.txt
V       210825142056Z           01      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/
V       210825142208Z           02      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/

Pick one of the two hosts to play the “active” role (in this case, Using the information you gleaned from index.txt, add the following to /etc/iked.conf, filling in the srcid and dstid fields appropriately.

ikev2 'server1_server2_active' active esp from to \
	local peer \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/'

On the other host, add the following to /etc/iked.conf

ikev2 'server2_server1_passive' passive esp from to \
	local peer \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/'

Note that the names 'server1_server2_active' and 'server2_server1_passive' in the two stanzas do not matter and can be omitted. Reload iked on both hosts:

# ikectl reload

If everything worked out, you should see the negotiated security associations (SAs) in the output of

# ikectl show sa

On OpenBSD, you should also see some output on success or errors in the file /var/log/daemon.

For a road warrior

Add the following to /etc/iked.conf on the remote end:

ikev2 'responder_x509' passive esp \
	from to \
	local peer any \
	srcid \
	config address \
	config name-server \
	tag "ROADW"

Configure or omit the address range and the name-server configurations to suit your needs. See iked.conf(5) for details. Reload iked:

# ikectl reload

If you are on OpenBSD and want the remote end to have an IP address, add the following to /etc/hostname.vether0, again configuring the address to suit your needs:
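As a hypothetical example (the address and netmask below are invented; substitute your own VPN subnet), the file could contain:

```
inet 10.0.1.1 255.255.255.0
```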


Put the interface up:

# ifconfig vether0 up

Now create a client certificate for authentication. In my case, my road-warrior client was

# ikectl ca vpn certificate create
# ikectl ca vpn certificate export

Copy to client and run

# tar -C /etc/ipsec.d/ -xzf -- \
	./private/ \
	./certs/ ./ca/ca.crt

Install StrongSwan and add the following to /etc/ipsec.conf, configuring appropriately:


conn server1

Add the following to /etc/ipsec.secrets:

# space is important : RSA

Restart StrongSwan, put the connection up, and check its status:

# ipsec restart
# ipsec up server1
# ipsec status
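For orientation, here is a hypothetical shape such a StrongSwan configuration can take (every name, identity and file name below is invented; match them to your own certificates and server):

```
# /etc/ipsec.conf (hypothetical values)
conn server1
        keyexchange=ikev2
        right=vpn.example.com
        rightid=/C=US/ST=Pennsylvania/L=Pittsburgh/CN=vpn.example.com
        rightsubnet=0.0.0.0/0
        leftsourceip=%config
        leftcert=client.crt
        auto=add

# /etc/ipsec.secrets (the space before the colon matters)
: RSA client.key
```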

That should be it.




Kevin Rudd: Australian Jewish News: Rudd Responds To Critics

With the late Shimon Peres

Published in The Australian Jewish News on 12 September 2020

Since writing in these pages last month, I’ve been grateful to receive many kind messages from readers reflecting on my record in office. Other letters, including those published in the AJN, deserve a response.

First to Michael Gawenda, whose letter repeated the false assertion that I believe in some nefarious connection between Mark Leibler’s lobbying of Julia Gillard and the factional machinations that brought her to power. Challenged to provide evidence for this fantasy, Gawenda pointed to this passage in my book, The PM Years: “The meticulous work of moving Gillard from the left to the right on foreign policy has already begun in earnest more than a year before the coup”.

Hold the phone! Of course Gillard was on the move – on the US, on Israel, and even on marriage equality. On all these, unlike me, she had a socialist-left background and wanted to appeal to right-wing factional bosses. Even Gawenda must comprehend that a strategy on Gillard’s part does not equate to Leibler plotting my demise.

My complaint against Leibler is entirely different: his swaggering arrogance in insisting Labor back virtually every move by Benjamin Netanyahu, good or bad; that we remain silent over the Mossad’s theft of Australian passports to carry out an assassination, when they’d already been formally warned under Howard never to do it again; and his bad behaviour as a dinner guest at the Lodge when, having angrily disagreed with me over the passports affair, leaned over the table and said menacingly to my face: “Julia is looking very good in the public eye these days, prime minister.”

There is a difference between bad manners and active conspiracy. Gawenda knows this. If he’d bothered to read my book, rather than flipping straight to the index, he would have found on page 293 the list of those actually responsible for plotting the coup. They were: Wayne Swan, Mark Arbib, David Feeney, Don Farrell and Stephen Conroy (with Bill Ludwig, Paul Howes, Bill Shorten, Tony Sheldon and Karl Bitar in supporting roles). All Labor Party members.

So what was Leibler up to? Unsurprisingly, he was cultivating Gillard as an influential contact in the government. Isn’t that, after all, what lobbyists do? They lobby. And Leibler, by his own account and Gawenda’s obsequious reporting, was very good at it.

Finally, I am disappointed by Gawenda’s lack of contrition for not contacting me before publication. This was his basic professional duty as a journalist. If he’d bothered, I could have dispelled his conspiracy theory inside 60 seconds. But, then again, it would have ruined his yarn.

A second letter, by Leon Poddebsky, asserted I engaged in “bombastic opposition to Israel’s blockade against the importation by Hamas of military and dual-purpose materials for its genocidal war against the Jewish state”.

I draw Mr Poddebsky to my actual remarks in 2010: “When it comes to a blockade against Gaza, preventing the supply of humanitarian aid, such a blockade should be removed. We believe that the people of Gaza … should be provided with humanitarian assistance.” It is clear my remarks were limited to curbs on humanitarian aid. The AJN has apologised for publishing this misrepresentation, and I accept that apology with no hard feeling.

Regarding David Singer’s letter, who denies Netanyahu’s stalled West Bank plan constitutes “annexation”, I decline to debate him on ancient history. I only note the term “annexation” is used by the governments of Australia and Britain, the European Union, the United Nations, the Israeli press and even the AJN.

Mr Singer disputes the illegality of annexation, citing documents from 1922. I direct him to the superseding Oslo II Accord of 1995 where Israel agreed “neither side shall initiate or take any step that will change the status of the West Bank and Gaza Strip pending the outcome of permanent status negotiations”.

Further, Mr Singer challenges my assertion that Britain’s Prime Minister agrees that annexation would violate international law. I direct Mr Singer to Boris Johnson’s article for Yedioth Ahronoth in July: “Annexation would represent a violation of International law. It would also be a gift to those who want to perpetuate old stories about Israel… If it does (go ahead), the UK will not recognise any changes to the 1967 lines, except those agreed by both parties.”

Finally, on Leibler’s continuing protestations about his behaviour at dinner, I will let others who know him better pass judgment on his character. More importantly, the net impact of Leibler’s lobbying has been to undermine Australia’s once-strong bipartisan support for Israel. His cardinal error, together with others, has been to equate support for Israel with support for Netanyahu’s policies. This may be tactically smart for their Likud friends, but it’s strategically dumb given the challenges Israel will face in the decades ahead.


Sam Varghese: When will Michael Hayden explain why the NSA did not predict 9/11?

As America marks the 19th anniversary of the destruction of the World Trade Centre towers by terrorists, it is a good time to ask when General Michael Hayden, head of the NSA at the time of 9/11, will come forward and explain why the agency was unable to detect the chatter among those who had banded together to wreak havoc in the US.

Before I continue, let me point out that nothing of what appears below is new; it was all reported some four years ago, but mainstream media have conspicuously avoided pursuing the topic because it would probably trouble some people in power.

The tale of how Hayden came to throw out a system known as ThinThread, devised by probably the most brilliant metadata analyst, William Binney, at that time the technical director of the NSA, has been told in a searing documentary titled A Good American.

Binney devised the system which would look at links between all people around the globe and added in measures to prevent violations of Americans’ privacy. Cryptologist Ed Loomis, analyst Kirk Wiebe, software engineer Tom Drake and Diane Roark, senior staffer at the House Intelligence Committee, worked along with him.

A skunkworks project, Binney’s ThinThread system handled data as it was ingested, unlike the existing method at the NSA which was to just collect all the data and analyse it later.

But when Hayden took the top job in 1999, he wanted to spread the NSA’s work around and also boost its budget – a bit of empire building, no less. Binney was asked what he could do with a billion dollars or more for the ThinThread system but said he could not use more than US$300 million.

What followed was surprising. Though ThinThread had been demonstrated to be able to sort data fast and accurately and track patterns within it, Hayden banned its use and put in place plans to develop a system known as Trailblazer – which incidentally had to be trashed many years later as an unmitigated disaster after many billions had gone down the drain.

Trailblazer was being used when September 11 came around.

Drake points out in the film that after 9/11, they set up ThinThread again and analysed all the data flowing into the NSA in the lead-up to the attack and found clear indications of the terrorists’ plans.

At the beginning of the film, Binney’s voice is heard saying, “It was just revolting… and disgusting that we allowed it to happen,” as footage of people leaping from the burning Trade Centre is shown.

After the tragedy, ThinThread, sans its privacy protections, was used to conduct blanket surveillance on Americans. Binney and his colleagues left the NSA shortly thereafter.

Hayden is often referred to as a great American patriot by American talk-show host Bill Maher. He was offered an interview by the makers of A Good American but did not take it up. Perhaps he would like to break his silence now.

Planet Debian: Markus Koschany: My Free Software Activities in August 2020

Here is my monthly report (plus the first week in September) covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I packaged a new upstream release of teeworlds, the well-known 2D multiplayer shooter with cute characters called tees to resolve a Python 2 bug (although teeworlds is actually a C++ game). The update also fixed a severe remote denial-of-service security vulnerability, CVE-2020-12066. I prepared a patch for Buster and will send it to the security team later today.
  • I sponsored updates of mgba, a Game Boy Advance emulator, for Ryan Tandy, and osmose-emulator for Carlos Donizete Froes.
  • I worked around a RC GCC 10 bug in megaglest by compiling with -fcommon.
  • Thanks to Gerardo Ballabio who packaged a new upstream version of galois which I uploaded for him.
  • Also thanks to Reiner Herrmann and Judit Foglszinger who fixed a regression (crash) in monsterz due to the earlier port to Python 3. Reiner also made fans of supertuxkart happy by packaging the latest upstream release version 1.2.

Debian Java


  • I was contacted by the upstream maintainer of privacybadger, a privacy addon for Firefox and Chromium, who dislikes the idea of having a stable and unchanging version in Debian stable releases. Obviously I can’t really do much about it although I believe the release team would be open-minded for regular point updates of browser addons though. However I don’t intend to do regular updates for all of my packages in stable unless there is a really good reason to do so. At the moment I’m willing to make an exception for ublock-origin and https-everywhere because I feel these addons should be core browser functionality anyway. I talked about this on our Debian Mozilla Extension Maintainers mailinglist and it seems someone is interested to take over privacybadger and prepare regular stable point updates. Let’s see how it turns out.
  • Finally this month saw the release of ublock-origin 1.29.0 and the creation of two different browser-specific binary packages for Firefox and Chromium. I have talked about it before and I believe two separate packages for ublock-origin are more aligned to upstream development and make the whole addon easier to maintain which benefits users, upstream and maintainers.
  • imlib2, an image library, and binaryen also got updated this month.

Debian LTS

This was my 54th month as a paid contributor and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2303-1. Issued a security update for libssh fixing 1 CVE.
  • DLA-2327-1. Issued a security update for lucene-solr fixing 1 CVE.
  • DLA-2369-1. Issued a security update for libxml2 fixing 8 CVEs.
  • Triaged CVE-2020-14340, jboss-xnio as not-affected for Stretch.
  • Triaged CVE-2020-13941, lucene-solr as no-dsa because the security impact was minor.
  • Triaged CVE-2019-17638, jetty9 as not-affected for Stretch and Buster.
  • squid3: I backported the patches for CVE-2020-15049, CVE-2020-15810, CVE-2020-15811 and CVE-2020-24606 from squid 4 to squid 3.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 „Jessie“. This was my 27th month and I have been paid to work 14.25 hours on ELTS.

  • ELA-271-1. Issued a security update for squid3 fixing 19 CVEs. Most of the work was already done before ELTS started; only the patch for CVE-2019-12529 had to be adjusted for the nettle version in Jessie.
  • ELA-273-1. Issued a security update for nss fixing 1 CVE.
  • ELA-276-1. Issued a security update for libjpeg-turbo fixing 2 CVEs.
  • ELA-277-1. Issued a security update for graphicsmagick fixing 1 CVE.
  • ELA-279-1. Issued a security update for imagemagick fixing 3 CVEs.
  • ELA-280-1. Issued a security update for libxml2 fixing 4 CVEs.

Thanks for reading and see you next time.

Planet Debian: Jelmer Vernooij: Debian Janitor: All Packages Processed with Lintian-Brush

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

On 12 July 2019, the Janitor started fixing lintian issues in packages in the Debian archive. Now, a year and a half later, it has processed every one of the almost 28,000 packages at least once.

Graph with Lintian Fixes Burndown

As discussed two weeks ago, this has resulted in roughly 65,000 total changes. These 65,000 changes were made to a total of almost 17,000 packages. Of the remaining packages, for about 4,500 of them lintian-brush could not make any improvements. The rest (about 6,500) failed to be processed for one of many reasons – they are e.g. not yet migrated off alioth, use uncommon formatting that can’t be preserved, or failed to build for one reason or another.

Graph with runs by status (success, failed, nothing-to-do)

Now that the entire archive has been processed, packages are prioritized based on the likelihood of a change being made to them successfully.

Over the course of its existence, the Janitor has slowly gained support for a wider variety of packaging methods. For example, it can now edit the templates for some of the generated control files. Many of the packages that the janitor was unable to propose changes for the first time around are expected to be correctly handled when they are reprocessed.

If you’re a Debian developer, you can find the list of improvements made by the janitor in your packages by going to

For more information about the Janitor's lintian-fixes efforts, see the landing page.


Sam Varghese: Serena Williams, please go before people start complaining

The US Open 2020 represented the best chance for an aging Serena Williams to win that elusive 24th Grand Slam title and equal the record of Australian Margaret Court. Seeds Bianca Andreescu (6), Ashleigh Barty (1), Simona Halep (2), Kiki Bertens (7) and Elina Svitolina (5) are all not taking part.

But Williams, now 39, could not get past Victoria Azarenka in the semi-finals, losing 1-6, 6-3, 6-3.

Prior to this, Williams had lost four Grand Slam finals in pursuit of Court’s record: Andreescu defeated her at the US Open in 2019, Angelique Kerber beat her at Wimbledon in 2018, Naomi Osaka took care of her in the 2018 US Open and Halep accounted for Williams at Wimbledon in 2019. In all those finals, Williams was unable to win more than four games in any set.

Williams took a break to have a child some years ago and, after returning, her only final win was at the Australian Open in 2017, when she beat her sister, Venus, 6-4, 6-4.

It looks like it is time to bow out, if not gracefully, then clumsily. One, unfortunately, cannot associate grace with Williams.

But, no, Williams has already signed up to play in the French Open which has been pushed back to September 27 from its normal time of May due to the coronavirus pandemic. Only Barty has said she would not be participating.

Sportspeople are remembered for more than mere victories. One remembers the late Arthur Ashe not merely because he was the first black man to win Wimbledon, the US Open and the Australian Open, but for the way he carried himself.

He was a great ambassador for his country and the soul of professionalism. Sadly, he died at the age of 49 after contracting AIDS from a blood transfusion.

Another lesser known person who was in the Ashe mould is Larry Gomes, the West Indies cricketer who was part of the world-beating sides that never lost a Test series for 15 years from 1980 to 1995.

Gomes quit after playing 60 Tests, and the letter he sent to the Board when he did so was a wonderful piece of writing, reflecting both his humility and his professionalism.

In an era of dashing stroke-players, he was the steadying influence on the team many a time, and was thoroughly deserving of his place among the likes of the much better-known superstars like Viv Richards, Gordon Greenidge, Desmond Haynes and Clive Lloyd.

In Williams’ case, she has shown herself to be a poor sportsperson, far too focused on herself and unable to see the bigger picture.

She is at the other end of the spectrum compared to players like Chris Evert and Steffi Graf, both great champions, and also models of good fair-minded competitors.

It would be good for tennis if Williams leaves the scene after the French Open, no matter if she wins there or loses. There is more to a good sportsperson than mere statistics.

Cryptogram Friday Squid Blogging: Calamari vs. Squid

St. Louis Magazine answers the important question: “Is there a difference between calamari and squid?” Short answer: no.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin Rudd: WPR Trend Lines: China Under Xi Jinping


Topics: China under Xi Jinping

World Politics Review: Mr. Rudd, thank you so much for joining us on Trend Lines.

Kevin Rudd: Happy to be on the program.

WPR: I want to start with a look at this bipartisan consensus that’s emerged in the United States, this reappraisal of China in the U.S., but also in Europe and increasingly in Australia. This idea that the problems of China’s trade policies and the level playing fields within the domestic market, aren’t going away. If anything, they’re getting worse. Same thing with regard to human rights and political liberalization under Xi Jinping. And that the West really needs to start preparing for a period of strategic rivalry with China, including increased friction and maybe some bumping of elbows. Are you surprised by how rapidly this new consensus has emerged and how widespread it is?

Mr. Rudd: I’m not surprised by the emergence of a form of consensus across the democracies, whether they happen to be Western or Asian democracies, or democracies elsewhere in the world. And there’s a reason for that, and that is that the principle dynamic here has been China’s changing course itself.

And secondly, quite apart from a change in management under Xi Jinping and the change in direction we’ve seen under him, China is now increasingly powerful. It doesn’t matter what matrix of power we’re looking at—economic power, trade, investment; whether it’s capital markets, whether it’s technology or whether it’s the classic determinants of international power and various forms of military leverage.

You put all those things together, we have a new guy in charge who has decided to be more assertive about China’s interests and values in the world beyond China’s borders. And secondly, a more powerful China capable of giving that effect.

So as a result of that, many countries for the first time have had this experience rub up against them. In the past, it’s only been China’s near neighbors who have had this experience. Now it’s a much broader and shared experience around the region and around the world.

So there are the structural factors at work in terms of shaping an emerging consensus on the part of the democracies, and those who believe in an open, free trading system in the world, and those who are committed to the retention of the liberal international order—that these countries are beginning to find a common cause in dealing with their collective challenges with the Middle Kingdom.

WPR: Now you mentioned the new guy in charge, obviously that’s Xi Jinping who’s been central to all of the recent shifts in China, whether it’s domestic policy or foreign policy. At the same time there’s some suggestion that as transformational as Xi is, that there’s quite a bit of continuity in terms of some of the more assertive policies of his predecessor Hu Jintao. Is it a mistake to focus so much on Xi? Does he reflect a break in China’s approach to the world, or is it continuity or more of a transition that responds to the context of perhaps waning American power as perceived by China? So is it a mistake to focus so much on Xi, as opposed to trying to understand the Chinese leadership writ large, their perception of their own national interest?

Mr. Rudd: I think it’s a danger to see policy continuity and policy change in China as some binary alternative, because with Xi Jinping, yes, there are elements of continuity, but there are also profound elements of change. So on the continuity front, yes, in the period of Hu Jintao, we began to see a greater Chinese experimental behavior in the South China Sea, a more robust approach to the assertion of China’s territorial claims there, for example.

But with Xi Jinping, this was taken to several extra degrees in intensity when we saw a full-blown island reclamation exercise, where you saw rocky atolls suddenly being transformed into sand-filled islands, and which were then militarized. So is that continuity or change? That becomes, I think, more of a definitional question.

If I was trying to sum up what is new, however, about Xi Jinping in contrast to his predecessors, it would probably be in these terms.

In politics, Chinese domestic politics, he has changed the discourse by taking it further to the left—by which I mean a greater role for the party and ideology, and the personal control of the leader, compared with what existed before.

On the economy we see a partial shift to the left with a resuscitation of state-owned enterprises and some disincentives emerging for the further and continued growth of China’s own hitherto successful private-sector entrepreneurial champions.

On nationalism we have seen a further push under Xi Jinping further to the right than his predecessors.

And in terms of degrees of international assertiveness, whether it’s over Hong Kong, the South China Sea, or over Taiwan, in relation to Japan and the territorial claims in the East China Sea, or with India, or in the big bilateral relationship with the United States as well as with other American allies—the Canadians, the Australians and the Europeans, et cetera—as well as big, new large-canvas foreign policy initiatives like the Belt and Road Initiative, what we’ve seen is an infinitely more assertive China.

So, yes, there are elements of continuity, but it would be foolish for us to underestimate the degree of change as a consequence of the agency of Xi Jinping’s leadership.

WPR: You mentioned the concentration of power in the leader’s hands, Xi Jinping’s hands. He’s clearly the most powerful Chinese leader since Mao Zedong. At the same time and especially in the immediate aftermath of the coronavirus pandemic, but over the course of the last year or so, there’s been some reporting about rumbling within the Chinese Communist Party about his leadership. What do you make of those reports? Is he a leader who’s in firm control in China? Is that something that’s always somewhat at question, given the kind of internal politics of the Chinese Communist Party? Or are there potential challenges to his leadership?

Mr. Rudd: Well, in our analysis of Chinese politics, it hasn’t really changed a lot since I was a young diplomat working in Beijing in the mid-1980s, and that is: The name of the game is opacity. With Chinese domestic politics we are staring through a glass dimly. And that’s because the nature of a Marxist-Leninist party, in particular a Leninist party, is deeply secretive about its own internal operations. So we are left to speculate.

But from the external record we know that Xi Jinping, as you indicated in your question, has become China’s most powerful leader at least since Deng and probably since Mao, against most metrics of power. He’s become what is described as the “chairman of everything.” Every single leading policy group of the politburo is now chaired by him.

And on top of that, the further instruments of power consolidation have been reflected in his utilization of the anti-corruption campaign, and now the unleashing of a Party Rectification campaign, which is designed to reinforce compliance by party members with central leadership diktat. So that’s the sort of individual that we have, and that’s the journey that he has traveled in the last six to seven years, during which most of his principal opponents have been arrested or incarcerated, or have committed suicide.

So where does that leave us in terms of the prospects for any organized political opposition? Under the party constitution, Xi Jinping is up for a reelect at the 20th Party Congress, and that is his reappointment as general secretary of the party and as chairman of the Central Military Commission. Separately, he’s up for a reelect by the National People’s Congress for a further term as China’s president.

For him to go a further term would be to break all the post-Deng Xiaoping conventions built around the principles of shared or collective leadership. But Xi Jinping, in my judgment, is determined to remain China’s paramount leader through the 2020s and into the 2030s. And how would he do that? He would perhaps see himself at the next Party Congress appointed as party chairman, a position last occupied by Chairman Mao and Mao’s immediate successor, Chairman Hua Guofeng.

He would probably retain the position of president of the country, given the constitutional changes he brought about three years or so ago to remove the two-term limit for the presidency. And he would, I think, most certainly retain the chairmanship of the Central Military Commission.

These predispositions, however, of themselves combined with the emerging cult of personality around Xi, combined with fundamental disagreements on elements of economic policy and international policy, have created considerable walls of opposition to Xi Jinping within the Chinese Communist Party.

And the existence of that opposition is best proven by the fact that Xi Jinping, in August of 2020, decided to launch this new Party Rectification campaign in order to reestablish, from his perspective, proper party discipline—that is, obedience to Xi Jinping. So the $64,000 question is, Can these different sources of dissent within the Chinese Communist Party coalesce? There’s no obvious candidate to do the coalescing, just as Deng Xiaoping in the past was a candidate for coalescing political and policy views different from those of Mao Zedong, which is why Deng Xiaoping was purged at least twice in the latter stages of his career before his final return and rehabilitation.

So we can’t see a ready candidate, but the dynamics of Chinese politics tend to have been that if there is, however, a catastrophic event—an economic implosion, a major foreign policy or international policy mishap, misstep, or crisis or conflict which goes wrong—then these tend to generate their own dynamics. So Xi Jinping’s watchword between now and the 20th Party Congress to be held in October/November of 2022 will be to prevent any such crises from emerging.

WPR: It’s a natural transition to my next question, because another point of disagreement among China watchers is whether a lot of these moves that we’ve seen over the past five years, accelerating over the past five years, but this reassertion of the party centrality within or across all spheres of Chinese society, but also some of the international moves, whether they’re signs of strength in the face of what they see as a waning United States, or whether they’re signs of weakness and a sense of having to move faster than perhaps was previously anticipated? A lot of the challenges that China faces, whether it can become a rich country before it becomes an old country, some of the environmental degradation that its development has caused, none of them are going away and none of them have gone away. So what’s your view on that in terms of China’s future prospects? Do you see a continued rise? A leveling off? The danger of a failing China and everything that that might imply?

Mr. Rudd: Well, there are a bunch of questions embedded in what you’re saying, but I’d begin by saying that Americans shouldn’t talk themselves out of global leadership in the future. America remains a powerful country in economic terms, in technological terms and in military terms, and against all three measures still today more powerful than China. In the case of the military, significantly more powerful than China.

Of course, the gap begins to narrow, but this narrowing process does not happen overnight, it takes a long period of time, and there are a number of potential mishaps for China. Consider a recent anecdote from the Chinese commentariat: in a debate about America’s preparedness to go into armed conflict or war with China in the South China Sea or over Taiwan, the Chinese nationalist constituency was basically chanting, “Bring it on.” If people chant, “USA,” at gatherings in the United States, similar gatherings in Beijing would probably chant, “PRC.”

But it was interesting what a Chinese scholar had to say as this debate unfolded, when one of the commentators said, “Well, at the end of the day, America is a paper tiger.” Of course, that was a phrase used by Mao back in the 1950s, ‘60s. The response from the Chinese scholar in China’s online discourse was, “No, America is not a paper tiger. America is a tiger with real teeth.”

So it’s important for Americans to understand that they are still in an extraordinarily powerful position in relation to both China and the rest of the world. So the question becomes one ultimately of the future leadership direction in the United States.

The second point I’d make in response to your question is that, when we seek to understand China’s international behavior, the beginning of wisdom is to understand its domestic politics, to the extent that we can. So we are asked questions about strength or weakness: why is China, in the COVID world, doubling down on its posture towards Hong Kong, the South China Sea, Taiwan, the East China Sea and Japan, as well as other countries like India, Canada, Australia, some of the Europeans and the United States?

Well, I think the determination on the part of Xi Jinping’s leadership was to make it plain to all that China had not been weakened by COVID-19. Is that a sign of weakness or of strength? I think that again becomes a definitional question.

In terms of Chinese domestic politics, however, if Xi Jinping is under attack in terms of his political posture at home and the marginalization of others within the Chinese Communist Party, if he is under attack in terms of its direction on economic policy, with the slowing of growth even in the pre-COVID period, then of course from Xi Jinping’s perspective, the best way to deal with any such dissension on the home front is to become more nationalist on the foreign front. And we’ve seen evidence of that. Is that strength, or is it weakness? Again, it’s a definitional question, but it’s the reality of what we’re dealing with.

For the future, I think ultimately what Xi Jinping’s administration will be waiting for is what sort of president emerges from November of 2020. If Trump is victorious, I think Xi Jinping will privately be pretty happy about that, because he sees the Trump administration on the one hand being hard-line towards China, but utterly chaotic in its management of America’s national China strategy—strong on some things, weak on others, vacillating between A and B above—and ultimately a divisive figure in terms of the solidarity of American alliances around the world.

If Biden wins, I think the judgment in China will be that America could seriously get its act back together again, run a coherent, hard-line, comprehensive national China strategy. Secondly, do so—unlike the Trump administration—with the friends and allies around the world in full and cooperative harness. So it really does depend on what the United States chooses to do in terms of its own national leadership, and therefore foreign policy direction for the future.

WPR: You mentioned the question of whether the U.S. is a paper tiger or not. Clearly in terms of military assets and capabilities, it’s not, but there’s been some strategic debate in Australia over the past five years or so—and I’m thinking particularly of Hugh White and his line of analysis—the idea that regardless of what America can do, that there really won’t be, when push comes to shove, the will to engage in the kind of military conflict that would be required to respond, for instance, to Chinese aggression or attempt to reunify militarily with Taiwan, let alone something like the Senkaku islands, the territorial dispute with Japan. And the recently released Australian Defense White Paper suggests that there’s a sense that the U.S. commitment to mutual defense in the Asia-Pacific might be waning. So what are your thoughts about the future of American primacy in Asia? And you mentioned it’s a question of leadership, but are you optimistic about the public support in America for that historical project? And if not, what are the implications for Australia, for other countries in Asia and the world?

Mr. Rudd: Well, it’s been a while since I’ve been in rural Pennsylvania. So it’s hard for me to answer your question. [Laughter] I am obviously both a diplomat by training and a politician, so I kind of understand the policy world, but I also understand rural politics.

It does depend on which way the American people go. When I look at the most recent series of findings from the Pew Research Project on American attitudes to the world, what I find interesting is that, contrary to many of the assumptions of the American political class, nearly three-quarters of Americans are strongly supportive of trade and strongly supportive of variations of free trade. If you listen to the populist commentary in the United States, you would think that notions of free trade had become a bit like the Antichrist.

So therefore I think it will be a challenge for the American political class to understand that the American public, at least reflected in these polling numbers on trade, are still fundamentally internationally engaged, and therefore are not in support of America incrementally withdrawing from global economic leadership. On the question of political leadership and America’s global national security role and its regional national security role in the Asia Pacific, again it’s a question of the proper harnessing of American resources.

I think one of the great catastrophes of the last 20 years was America’s decision to invade Iraq. You threw away a whole lot of political capital and foreign policy capital around the world. You expended so much in blood and treasure that I think the political wash-through in America itself was a deep set of reservations among American voters about, let’s call it, extreme foreign policy folly.

Remember, that enterprise was about eliminating weapons of mass destruction, which the Bush administration alleged to exist in Saddam Hussein’s bathroom locker, and they never did. So we’re partly experiencing the wash up of all of that in the American body politic, where there is a degree of, shall we say, exhaustion about unnecessary wars and about unnecessary foreign policy engagement. So the wisdom for the future will be what is necessary, as opposed to that which is marginal.

Now, I would think, on that basis, given the galvanizing force of American public sentiment—on COVID questions, on trade and investment questions, on national security questions, and on human rights questions—that the No. 1 foreign policy priority for the United States, for the period ahead, will be China. And so what I therefore see is that the centrality of future U.S. administrations, Republican and Democrat, is likely to be galvanized by a finding from the American people that, contrary to any of the urban myths, that the American people still want to see their country become more prosperous through trade. They want their country to become bigger through continued immigration—another finding in terms of Pew research. And they want their country to deal with the greatest challenge to America’s future, which is China’s rise.

So that is different from becoming deeply engaged and involved in every nuance of national security policy in either the eastern reaches of Libya or the southern slopes of Lebanon. There has been something of a long-term Middle Eastern quagmire, and I think the wake-up call of the last several years has been for America to say that if there’s to be a second century of American global leadership, another Pax Americana for the 21st century, then it will ultimately be resolved on whether America rises to the global economic challenge, the global technology leadership challenge and the global China challenge.

And I think the American people in their own wonderfully, shall we say unorchestrated way, but subject to the bully pulpit of American presidential leadership and politics, can harness themselves for that purpose.

WPR: So much of the West’s engagement with China over the past 30 years has been this calculated gamble on what kind of power China would be when it did achieve great power status, and this idea that trade and engagement would pull China more toward being the kind of power that’s compatible with the international order, the postwar American-led international order. Do you think we have an answer to that question yet, and has the gamble of engaging with China paid off?

Mr. Rudd: The bottom line is that if you look at the history of America’s engagement strategy with China and the engagement strategy of many of America’s allies over the decades—really since the emergence of Deng in the late ‘70s and recommenced in the aftermath of Tiananmen, probably from about 1992, 1993—it has always been engagement, but hedge. In other words, there was engagement with China across all the instrumentalities of the global economy and global political and economic governance, and across the wide panoply of the multilateral system.

But there was always a conditionality and there was always a condition, which is, countries like the United States were not about to deplete their military, even after it won the Cold War against the Soviet Union. And America did not deplete its military. America’s military remained formidable. There are ebbs and flows in the debate, but the capabilities remained world class and dominant. And so in fairness to those who prosecuted that strategy of engagement, it always was engagement plus hedge.

And so what’s happened as China’s leadership direction has itself changed over the last six or seven years under Xi Jinping, is the hedge component of engagement has reared its head and said, “Well, I’m glad we did so.” Because we still, as the United States, have the world’s most powerful military. We still, as the United States, are the world’s technology leaders—including in artificial intelligence, despite all the hype coming out of Beijing. We are the world’s leaders when it comes to the core driver of the future of artificial intelligence, which is the production of computer chips, of semiconductors and other fundamental advances in computing. And you still are the world’s leaders in the military. And so therefore America is not in a hugely disadvantageous position. And on top of all the above, you still retain the global reserve currency called the U.S. dollar.

Each of these is under challenge by China. China is pursuing a highly systematic strategy to close the gap in each of these domains. But I think there is a degree of excessive pessimism when you look at America through the Washington lens, that nothing is happening. Well, a lot is. You go to Silicon Valley, you go to the Pacific Fleet, you go to the New York Stock Exchange, and you look at the seminal debates in Hong Kong at the moment, about whether Hong Kong will remain linked between the Hong Kong dollar and the U.S. dollar.

All this speaks still to the continued strength of America’s position, but it really does depend on whether you choose to husband these strengths for the future, and whether you choose to direct them in the future towards continued American global leadership of what we call the rules-based liberal international order.

WPR: Mr. Rudd, thank you so much for being so generous with your time and your insights.

Kevin Rudd: Good to be with you.

The post WPR Trend Lines: China Under Xi Jinping appeared first on Kevin Rudd.

Cryptogram The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Cryptogram Ranking National Cyber Power

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.” The rankings: 1. US, 2. China, 3. UK, 4. Russia, 5. Netherlands, 6. France, 7. Germany, 8. Canada, 9. Japan, 10. Australia, 11. Israel. More countries are in the document.

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

LongNowStunning New Universe Fly-Through Really Puts Things Into Perspective

A stunning new video lets viewers tour the universe at superluminal speed. Miguel Aragon of Johns Hopkins, Mark Subbarao of the Adler Planetarium, and Alex Szalay of Johns Hopkins reconstructed the layout of 400,000 galaxies based on information from the Sloan Digital Sky Survey (SDSS) Data Release 7:

“Vast as this slice of the universe seems, its most distant reach is to redshift 0.1, corresponding to roughly 1.3 billion light years from Earth. SDSS Data Release 9 from the Baryon Oscillation Spectroscopic Survey (BOSS), led by Berkeley Lab scientists, includes spectroscopic data for well over half a million galaxies at redshifts up to 0.8 — roughly 7 billion light years distant — and over a hundred thousand quasars to redshift 3.0 and beyond.”

Planet DebianPetter Reinholdtsen: Buster update of Norwegian Bokmål edition of Debian Administrator's Handbook almost done

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost complete. After many months of proofreading, I consider it complete enough for us to move to the next step, and have asked for the print version to be prepared and sent off to the print on demand service. It is still not too late to fix any incorrect translations you find on the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than FailureError'd: This Could Break the Bank!

"Sure, free for the first six months is great, but what exactly does happen when I hit month seven?" Stuart L. wrote.


"In order to add an app on the App Store Connect dashboard, you need to 'Register a new bundle ID in Certificates, Identifiers & Profiles'," writes Quentin, "Open the link, you have a nice 'register undefined' and cannot type anything in the identifier input field!"


"I was taught to keep money amounts as pennies rather than fractional dollars, but I guess I'm an old-fashioned guy!" writes Paul F.


Anthony C. wrote, "I was looking for headphones on and well, I guess they figured I'd like to look at something else for a change?"


"Build an office chair using only a spork, a napkin, and a coffee stirrer? Sounds like a job for McGuyver!"


"Translation from Swedish, 'We assume that most people who watch Just Chatting probably also like Just Chatting.' Yes, I bet it's true!," Bill W. writes.



Planet DebianLouis-Philippe Véronneau: Hire me!

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet [1], as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas!

It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it...

If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)

  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here. 

Planet DebianReproducible Builds (diffoscope): diffoscope 160 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 160. This version includes the following changes:

* Check that pgpdump is actually installed before attempting to run it.
  Thanks to Gianfranco Costamagna (locutusofborg). (Closes: #969753)
* Add some documentation for the EXTERNAL_TOOLS dictionary.
* Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump
  against all files that are recognised by file(1) as "data".
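The first entry above amounts to checking that an external tool is on the PATH before shelling out to it. A minimal sketch of that kind of guard (hypothetical helper, not diffoscope's actual code) could look like:

```python
import shutil
import subprocess


def run_if_installed(tool, args):
    """Run an external tool only if it is installed (the kind of guard
    described above for pgpdump); return None when the tool is missing."""
    if shutil.which(tool) is None:
        return None
    result = subprocess.run([tool] + args, capture_output=True, text=True)
    return result.stdout


# A missing tool degrades gracefully instead of raising FileNotFoundError:
print(run_if_installed("pgpdump-definitely-missing", ["somefile"]))
```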

You find out more by visiting the project homepage.


Planet Linux AustraliaDavid Rowe: Playing with PAPR

The average power of a FreeDV signal is surprisingly hard to measure, as the parallel carriers produce a waveform that has many peaks and troughs as the various carriers come in and out of phase with each other. Peter, VK3RV, has been working on some interesting experiments to measure FreeDV power using calorimeters. His work got me thinking about FreeDV power and in particular ways to improve the Peak to Average Power Ratio (PAPR).

I’ve messed with a simple clipper for FreeDV 700C in the past, but decided to take a more scientific approach and use some simulations to measure the effect of clipping on FreeDV PAPR and BER. As usual, asking a few questions blew up into a several-week-long project: there were the usual bugs and strange, too-good-to-be-true initial results until I started to get results that felt sensible. I’ve tested some of the ideas over the air (blowing up an attenuator along the way), and learnt a lot about PAPR and related subjects like Peak Envelope Power (PEP).

The goal of this work is to explore the effect of a clipper on the average power and ultimately the BER of a received FreeDV signal, given a transmitter with a fixed peak output power.

Clipping to reduce PAPR

In normal operation we adjust our Tx drive so the peaks just trigger the ALC. This sets the average power at Ppeak – PAPR (working in dB), for example Pav = 100W PEP – 10dB = 10W average.

The idea of the clipper is to chop the tops off the FreeDV waveform so the PAPR is decreased. We can then increase the Tx drive, and get a higher average power. For example if the PAPR is reduced from 10 to 4dB, we get Pav = 100W PEP – 4dB = 40W. That’s 4x the average power output of the 10dB PAPR case – Woohoo!
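The dB bookkeeping above is easy to sanity-check with a few lines of Python (a sketch; the 100 W PEP figure is just the example from the text):

```python
def average_power_watts(pep_watts, papr_db):
    """Average power given peak envelope power (W) and PAPR (dB):
    Pav = Ppeak / 10^(PAPR/10)."""
    return pep_watts / (10 ** (papr_db / 10.0))


# 100 W PEP transmitter:
print(average_power_watts(100, 10))  # 10 dB PAPR -> 10.0 W average
print(average_power_watts(100, 4))   # 4 dB PAPR  -> ~39.8 W average
```

So knocking 6 dB off the PAPR buys roughly a 4x increase in average power for the same peak-limited transmitter.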

In the example below the 16 carrier waveform was clipped and the PAPR reduced from 10.5 to 4.5dB. The filtering applied after the clipper smooths out the transitions (and limits the bandwidth to something reasonable).

However it gets complicated. Clipping actually reduces the average power, as we’ve removed the high energy parts of the waveform. It also distorts the signal. Here is a scatter diagram of the signal before and after clipping:

The effect looks like additive noise. Hmmm, and what happens on multipath channels, does the modem perform the same as for AWGN with clipped signals? Another question – how much clipping should we apply?

So I set about writing a simulation (papr_test.m) and doing some experiments to increase my understanding of clippers, PAPR, and OFDM modem performance using typical FreeDV waveforms. I started out trying a few different compression methods such as different compander curves, but found that clipping plus a bandpass filter gives about the same result. So for simplicity I settled on clipping. Throughout this post many graphs are presented in terms of Eb/No – for the purpose of comparison just consider this the same thing as SNR. If the Eb/No goes up by 1dB, so does the SNR.

Here’s a plot of PAPR versus the number of carriers, showing PAPR getting worse with the number of carriers used:

Random data was used for each symbol. As the number of carriers increases, you start to get phases in carriers cancelling due to random alignment, reducing the big peaks. Behaviour with real world data may be different; if there are instances where the phases of all carriers are aligned there may be larger peaks.
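The worst case, where all carrier phases align, can be sketched with a toy sum-of-carriers model (illustrative only, not the papr_test.m simulation):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a waveform, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def multicarrier(phases, n=4096):
    """Sum of equal-amplitude carriers (an integer number of cycles each)."""
    t = np.arange(n) / n
    return sum(np.exp(1j * (2 * np.pi * (k + 1) * t + ph))
               for k, ph in enumerate(phases))

# Worst case: all carrier phases aligned gives PAPR = 10*log10(N)
for nc in (2, 8, 16):
    print(nc, round(float(papr_db(multicarrier(np.zeros(nc)))), 1))

# Random phases (one random symbol): peaks partially cancel, PAPR is lower
rng = np.random.default_rng(42)
print(float(papr_db(multicarrier(rng.uniform(0, 2 * np.pi, 16)))) < 12.0)  # True
```

With aligned phases the 16 carrier case hits 10·log10(16) ≈ 12dB, matching the trend in the plot above.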

To define the amount of clipping I used an estimate of the PDF and CDF:

The PDF (or histogram) shows how likely a certain level is, and the CDF shows the cumulative PDF. High level samples are quite unlikely. The CDF shows us what proportion of samples are above and below a certain level. This CDF shows us that 80% of the samples have a level of less than 4, so only 20% of the samples are above 4. So a clip level of 0.8 means the clipper hard limits at a level of 4, which would affect the top 20% of the samples. A clip value of 0.6 would mean samples with a level of 2.7 and above are clipped.
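The CDF-based threshold selection can be sketched like this (a minimal sketch using Gaussian samples as a stand-in for the OFDM waveform; `clip_threshold` and `hard_clip` are hypothetical names):

```python
import numpy as np

def clip_threshold(x, clip_level):
    """Amplitude below which a fraction `clip_level` of samples fall
    (e.g. 0.8 -> the 80th percentile of the magnitude distribution)."""
    return np.quantile(np.abs(x), clip_level)

def hard_clip(x, threshold):
    """Limit sample magnitudes to `threshold`, preserving phase."""
    mag = np.abs(x)
    return x * np.minimum(mag, threshold) / np.maximum(mag, 1e-12)

rng = np.random.default_rng(0)
x = rng.normal(size=10000) + 1j * rng.normal(size=10000)  # stand-in for OFDM samples
thr = clip_threshold(x, 0.8)  # the top 20% of samples will be clipped
y = hard_clip(x, thr)
print(bool(np.abs(y).max() <= thr + 1e-9))  # True
```

Picking the threshold from the CDF rather than as an absolute level means the same clip setting discards the same proportion of (rare, high-energy) samples regardless of signal scaling.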

Effect of clipping on BER

Here are a bunch of curves that show the effect of clipping on an AWGN and a multipath channel (roughly CCIR poor). A 16 carrier signal was used – typical of FreeDV waveforms. The clipping level and resulting PAPR is shown in the legend. I also threw in a Tx diversity curve – sending each symbol twice on double the carriers. This is the approach used on FreeDV 700C and tends to help a lot on multipath channels.

As we clip the signal more and more, the BER performance gets worse (Eb/No x-axis) – but the PAPR is reduced so we can increase the average power, which improves the BER. I’ve tried to show the combined effect on the (peak Eb/No x-axis) curves, which scale each curve according to its PAPR requirements. This shows the peak power required for a given BER. Lower is better.

Take aways:

  1. The 0.8 and 0.6 clip levels work best on the peak Eb/No scale, ie when we combine effect of the hit on BER performance (bad) and PAPR improvement (good).
  2. There is about 4dB improvement across a range of operating points. This is pretty significant – similar to gains we get from Tx diversity or a good FEC code.
  3. AWGN and Multipath improvements are similar – good. Sometimes you get an algorithm that works well on AWGN but falls in a heap on multipath channels, which are typically much tougher to push bits through.
  4. I also tried 8 carrier waveforms, which produced results about 1dB better, as I guess fewer carriers have a lower PAPR to start with.
  5. Non-linear techniques like clipping spread the energy in frequency.
  6. Filtering to constrain the frequency spread brings the PAPR up again. We can trade off PAPR with bandwidth: lower PAPR, more bandwidth.
  7. Non-linear technqiques will mess with QAM more. So we may hit a wall at high data rates.

Testing on a Real PA

All these simulations are great, but how do they compare with operation on a real HF radio? I designed an experiment to find out.

First, some definitions.

The same FreeDV OFDM signal is represented in different ways as it winds its way through the FreeDV system:

  1. Complex valued samples are used for much of the internal signal processing.
  2. Real valued samples at the interfaces, e.g. for getting samples in and out of a sound card and standard HF radio.
  3. Analog baseband signals, e.g. voltage inside your radio.
  4. Analog RF signals, e.g. at the output of your PA, and input to your receiver terminals.
  5. An electromagnetic wave.

It’s the same signal, as we can convert freely between the representations with no loss of fidelity, but its representation can change the way measures like PAPR work. This caused me some confusion – for example the PAPR of the real signal is about 3dB higher than the complex valued version! I’m still a bit fuzzy on this one, but have satisfied myself that the PAPR of the complex signal is the same as the PAPR of the RF signal – which is what we really care about.
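That 3dB difference between representations can be checked with a toy two-tone signal (illustrative frequencies; the complex envelope has a 3dB PAPR, and its real passband version measures about 3dB more):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

t = np.arange(100000)
# Complex baseband: two equal tones, a 3dB PAPR envelope
bb = np.exp(2j * np.pi * 0.0010 * t) + np.exp(2j * np.pi * 0.0013 * t)
# Real passband: the same signal mixed up to a carrier
rf = np.real(bb * np.exp(2j * np.pi * 0.25 * t))

print(round(float(papr_db(bb)), 1))  # 3.0
print(round(float(papr_db(rf)), 1))  # 6.0, about 3dB higher
```

Intuitively, taking the real part halves the average power (the cos² of the carrier averages to 1/2) while the instantaneous peak can still coincide with the envelope peak, so the measured PAPR of the real signal comes out about 3dB higher.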

Another definition that I had to (re)study was Peak Envelope Power (PEP) – which is the peak power averaged over one or more carrier cycles. This is the RF equivalent of our “peak” in PAPR. When driven by any baseband input signal, it’s the maximum RF power of the radio, averaged over one or more carrier cycles. Signals such as speech and FreeDV waveforms will have occasional peaks that hit the PEP. A baseband sine wave driving the radio would generate an RF signal that sits at the PEP power continuously.

Here is the experimental setup:

The idea is to play canned files through the radio, and measure the average Tx power. It took me several attempts before my experiment gave sensible results. A key improvement was to make the peak power of each sampled signal the same. This means I don’t have to keep messing with the audio drive levels to ensure I have the same peak power. The samples are 16 bits, so I normalised each file such that the peak was at +/- 10000.
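The peak normalisation step can be sketched as (a hypothetical helper, not the actual tooling used):

```python
import numpy as np

def normalise_peak(samples, peak=10000):
    """Scale a 16-bit sample buffer so its largest magnitude hits `peak`."""
    x = np.asarray(samples, dtype=float)
    return np.round(x * (peak / np.max(np.abs(x)))).astype(np.int16)

x = np.array([123, -4567, 2001], dtype=np.int16)
y = normalise_peak(x)
print(int(np.max(np.abs(y))))  # 10000
```

With every file normalised to the same peak, the drive level (and hence the PEP) stays constant across test signals, so any change in measured average power reflects PAPR alone.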

Here is the RF power sampler:

It works pretty well on signals from my FT-817 and IC-7200, and will help prevent any more damage to RF test equipment. I used my RF sampler after my first attempt using a SMA barrel attenuator resulted in its destruction when I accidentally put 5W into it! Suddenly it went from 30dB to 42dB attenuation. Oops.

For all the experiments I am tuned to 7.175 MHz and have the FT-817 on its lowest power level of 0.5W.

For my first experiment I played a 1000 Hz sine wave into the system, and measured the average power. I like to start with simple signals, something known that lets me check all the fiddly RF kit is actually working. After a few hours of messing about – I did indeed see 27dBm (0.5W) on my spec-an. So, for a signal with 0dB PAPR, we measure average power = PEP. Check.

In my next experiment, I measured the effect of ALC on Tx power. With the FT-817 on its lowest power setting (0.5W), I increased the drive until just before the ALC bars came on. Here is the relationship I found with output power:

Bars  Tx Power (dBm)
0     26.7
1     26.4
2     26.7
3     27.0

So the ALC really does clamp the power at the peak value.

On to more complex FreeDV signals.

Measuring the average power of OFDM/parallel tone signals proved much harder on the spec-an. The power bounces around over a period of several seconds as the OFDM waveform evolves, which can derail many power measurement techniques. The time constant, or measurement window, is important – we want to capture the total power over a few seconds and average the value.

After several attempts and lots of head scratching I settled on the following spec-an settings:

  1. 10s sweep time so the RBW filter is averaging a lot of time varying power at each point in the sweep.
  2. 100kHz span.
  3. RBW/VBW of 10 kHz so we capture all of the 1kHz wide OFDM signal in the RBW filter peak when averaging.
  4. Power averaging over 5 samples.

The two-tone signal was included to help me debug my spec-an settings, as it has a known (3dB) PAPR.

Here is a table showing the results for several test signals, all of which have the same peak power:

Sample Description PAPR Theory/Sim (dB) PAPR Measured (dB)
sine1000 sine wave at 1000 Hz 0 0
sine_800_1200 two tones at 800 and 1200Hz 3 4
vanilla 700D test frames unclipped 7.1 7
clip0.8 700D test frames clipped at 0.8 3.4 4
ve9qrp 700D with real speech payload data 11 10.5

Click on the file name to listen to a 5 second sample of each signal. The lower PAPR (higher average power) signals sound louder – I guess our ears work on average power too! I kept the drive constant and the PEP/peak just happened to hit 26dBm. It’s not critical, as long as the drive (and hence peak level) is the same across all waveforms tested.

Note the two tone “control” is 1dB off (4dB measured on a known 3dB PAPR signal); I’m not happy about that. This suggests a spec-an set-up issue or a limitation of my spec-an (e.g. the way it averages power).

However the other signals line up OK to the simulated values, within about +/- 0.5dB, which suggests I’m on the right track with my simulations.

The modulated 700D test frame signals were generated by the Octave ofdm_tx.m script, which reports the PAPR of the complex signal. The same test frame repeats continuously, which makes BER measurements convenient, but is slightly unrealistic. The PAPR was lower than the ve9qrp signal, which has real speech payload data – perhaps because the more random, real world payload data leads to occasional frames where the phases of the carriers align, leading to large peaks.

Another source of discrepancy is the non flat frequency filtering in the baseband audio/crystal filter path the signal has to flow through before it emerges as RF.

The zero-span spec-an setting plots power over time, and is very useful for visualing PAPR. The first plot shows the power of our 1000 Hz sine signal (yellow), and the two tone test signal (purple):

You can see how mixing just two signals modulates the power over time, the effect on PAPR, and how the average power is reduced. Next we have the ve9qrp signal (yellow), and our clip 0.8 signal (purple):

It’s clear the clipped signal has a much higher average power. Note the random way the waveform power peaks and dips, as the various carriers come into phase. Also note there are very few high power peaks in the ve9qrp signal – in this sample none hit +26dBm, as they are fairly rare.

I found eye-balling the zero-span plots gave me similar values to non-zero span results in the table above, a good cross check.

Take aways:

  1. Clipping is indeed improving our measured average power, but there are some discrepancies between the measured PAPR values and those estimated from theory/simulation.
  2. Using a SDR to receive the signal and measure PAPR using my own maths might be easier than fiddling with the spec-an and guessing at its internal algorithms.
  3. PAPR is worse for real world signals (e.g. ve9qrp) than my canned test frames due to relatively rare alignments of the carrier phases. This might only happen once every few seconds, but significantly raises the PAPR, and hurts our average power. These occasional peaks might be triggering the ALC, pushing the average power down every time they occur. As they are rare, these peaks can be clipped with no impact on perceived speech quality. This is why I like the CDF/PDF method of setting thresholds, it lets us discard rare (low probability) outliers that might be hurting our average power.

Conclusions and Further work

The simulations suggest we can improve FreeDV by 4dB using the right clipper/filter combination. Initial tests over a real PA show we can indeed reduce PAPR in line with our simulations.

This project has lead me down an interesting rabbit hole that has kept me busy for a few weeks! Just in case I haven’t had enough, some ideas for further work:

  1. Align these clipping levels and filtering to FreeDV 700D (and possibly 2020). There is existing clipper and filter code but the thresholds were set by educated guess several years ago for 700C.
  2. Currently each FreeDV waveform is scaled to have the same average power. This is the signal fed via the sound card to your Tx. Should the levels of each FreeDV waveform be adjusted to be the same peak value instead?
  3. Design an experiment to prove BER performance at a given SNR is improved by 4dB as suggested by these simulations. Currently all we have measured is the average power and PAPR – we haven’t actually verified the expected 4dB increase in performance (suggested by the BER simulations above) which is the real goal.
  4. Try the experiments on a SDR Tx – they tend to get results closer to theory due to no crystal filters/baseband audio filtering.
  5. Try the experiments on a 100WPEP Tx – I have ordered a dummy load to do that relatively safely.
  6. Explore the effect of ALC on FreeDV signals and why we set the signals to “just tickle” the ALC. This is something I don’t really understand, but have just assumed is good practice based on other people's experiences with parallel tone/OFDM modems and on-air FreeDV use. I can see how ALC would compress the amplitude of the OFDM waveform – which this blog post suggests might be a good thing! Perhaps it does so in an uncontrolled manner – as the curves above show the amount of compression is pretty important. “Just tickling the ALC” guarantees us a linear PA – so we can handle any needed compression/clipping carefully in the DSP.
  7. Explore other ways of reducing PAPR.

To peel away the layers of a complex problem is very satisfying. It always takes me several goes; improvements come as the bugs fall out one by one. Writing these blog posts often makes me sit back and say “huh?”, as I discover things that don’t make sense when I write them up. I guess that’s the review process in action.


Design for a RF Sampler I built, mine has a 46dB loss.

Peak to Average Power Ratio for OFDM – Nice discussion of PAPR for OFDM signals from DSPlog.

Planet DebianDaniel Silverstone: Broccoli Sync Conversation

Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), myself, Lars, Mark, and Vince discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use available bandwidth (i.e. not be CPU throttled) but also gain the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that the gain justifies putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically, if the data is already compressed then there's little hope Brotli will gain much.

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code has unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage and that while they make it clear in the article that encryption, client identification etc are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

Worse Than FailureCodeSOD: Put a Dent in Your Logfiles

Valencia made a few contributions to a large C++ project run by Harvey. Specifically, there were some pass-by-value uses of a large data structure, and changing those to pass-by-reference fixed a number of performance problems, especially on certain compilers.

“It’s a simple typo,” Valencia thought. “Anyone could have done that.” But they kept digging…

The original code-base was indented with spaces, but Harvey just used tabs. That was a mild annoyance, but Harvey used a lot of tabs, as his code style was “nest as many blocks as deeply as possible”. In addition to loads of magic numbers that should be enums, Harvey also had a stance that “never use an int type when you can store your number as a double”.

Then, for example, what if you have a char and you want to turn the char into a string? Do you just use the std::string() constructor that accepts a char parameter? Not if you’re Harvey!

std::string ToString(char c)
{
    std::stringstream ss;
    std::string out = "";
    ss << c;
    ss >> out;
    return out;
}

What if you wanted to cache some data in memory? A map would be a good place to start. How many times do you want to access a single key while updating a cache entry? How does “four times” work for you? It works for Harvey!

void WriteCache(std::string key, std::string value)
{
    Setting setting = mvCache["cache_"+key];
    if (!setting.initialized)
    {
        setting.initialized = true;
        mvCache["cache_"+key].initialized = true;
        mvCache["cache_"+key].value = "";
    }
    mvCache["cache_"+key].value = value;
}

And I don’t know exactly what they are trying to communicate with the mv prefix, but people have invented all sorts of horrible ways to abuse Hungarian notation. Fortunately, Valencia clarifies: “Harvey used an incorrect Hungarian notation prefix while they were at it.”

That’s the easy stuff. Ugly, bad code, sure, but nothing that leaves you staring, stunned into speechlessness.

Let’s say you added a lot of logging messages, and you wanted to control how many logging messages appeared. You’ve heard of “logging levels”, and that gives you an inspiration for how to solve this problem:

bool LogLess(int iMaxLevel)
{
     int verboseLevel = rand() % 1000;
     if (verboseLevel < iMaxLevel) return true;
     return false;
}

//how it's used:
if (LogLess(500))
   log.debug("I appear half of the time");

Normally, I’d point out something about how they don’t need to return true or return false when they could just return the boolean expression, but what’d be the point? They’ve created probabilistic log levels. It’s certainly one way to solve the “too many log messages” problem: just randomly throw some of them away.

Valencia gives us a happy ending:

Needless to say, this has since been rewritten… the end result builds faster, uses less memory and is several orders of magnitude faster.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.


Kevin RuddNew Israel Fund: Australia and the Middle East


The post New Israel Fund: Australia and the Middle East appeared first on Kevin Rudd.

Rondam RamblingsThis is what the apocalypse looks like

This is a photo of our house taken at noon today:                 This is not the raw image.  I took this with an iPhone, whose auto-exposure made the image look much brighter than it actually is.  I've adjusted the brightness and color balance to match the actual appearance as much as I can.  Even so, this image doesn't do justice to the reality.  For one thing, the sky is much too blue.  The

Cryptogram How the FIN7 Cybercrime Gang Operates

The Grugq has written an excellent essay on how the Russian cybercriminal gang FIN7 operates. An excerpt:

The secret of FIN7’s success is their operational art of cyber crime. They managed their resources and operations effectively, allowing them to successfully attack and exploit hundreds of victim organizations. FIN7 was not the most elite hacker group, but they developed a number of fascinating innovations. Looking at the process triangle (people, process, technology), their technology wasn’t sophisticated, but their people management and business processes were.

Their business… is crime! And every business needs business goals, so I wrote a mock FIN7 mission statement:

Our mission is to proactively leverage existing long-term, high-impact growth strategies so that we may deliver the kind of results on the bottom line that our investors expect and deserve.

How does FIN7 actualize this vision? This is CrimeOps:

  • Repeatable business process
  • CrimeBosses manage workers, projects, data and money.
  • CrimeBosses don’t manage technical innovation. They use incremental improvement to TTP to remain effective, but no more
  • Frontline workers don’t need to innovate (because the process is repeatable)

Cryptogram Privacy Analysis of Ambient Light Sensors

Interesting privacy analysis of the Ambient Light Sensor API. And a blog post. Especially note the “Lessons Learned” section.

Cryptogram Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated, but it’s basically a man-in-the-middle attack that involves two smartphones. The first phone reads the actual smartcard, and then forwards the required information to a second phone. That second phone actually conducts the transaction on the POS terminal. That second phone is able to convince the POS terminal to conduct the transaction without requiring the normally required PIN.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the vulnerability in practice, although it is a fairly complex process. They first developed an Android app and installed it on two NFC-enabled mobile phones. This allowed the two devices to read data from the credit card chip and exchange information with payment terminals. Incidentally, the researchers did not have to bypass any special security features in the Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first mobile phone is used to scan the necessary data from the credit card and transfer it to the second phone. The second phone is then used to simultaneously debit the amount at the checkout, as many cardholders do nowadays. As the app declares that the customer is the authorized user of the credit card, the vendor does not realize that the transaction is fraudulent. The crucial factor is that the app outsmarts the card’s security system. Although the amount is over the limit and requires PIN verification, no code is requested.

The paper: “The EMV Standard: Break, Fix, Verify.”

Abstract: EMV is the international protocol standard for smartcard payment and is used in over 9 billion cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a state-of-the-art protocol verifier. Our model is the first that supports a fine-grained analysis of all relevant security guarantees that EMV is intended to offer. We use our model to automatically identify flaws that lead to two critical attacks: one that defrauds the cardholder and another that defrauds the merchant. First, criminals can use a victim’s Visa contact-less card for high-value purchases, without knowledge of the card’s PIN. We built a proof-of-concept Android application and successfully demonstrated this attack on real-world payment terminals. Second, criminals can trick the terminal into accepting an unauthentic offline transaction, which the issuing bank should later decline, after the criminal has walked away with the goods. This attack is possible for implementations following the standard, although we did not test it on actual terminals for ethical reasons. Finally, we propose and verify improvements to the standard that prevent these attacks, as well as any other attacks that violate the considered security properties.The proposed improvements can be easily implemented in the terminals and do not affect the cards in circulation.

Planet DebianReproducible Builds: Reproducible Builds in August 2020

Welcome to the August 2020 report from the Reproducible Builds project.

In our monthly reports, we summarise the things that we have been up to over the past month. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. If you’re interested in contributing to the project, please visit our main website.

This month, Jennifer Helsby launched a new website to address the lack of reproducibility of Python wheels.

To quote Jennifer’s accompanying explanatory blog post:

One hiccup we’ve encountered in SecureDrop development is that not all Python wheels can be built reproducibly. We ship multiple (Python) projects in Debian packages, with Python dependencies included in those packages as wheels. In order for our Debian packages to be reproducible, we need that wheel build process to also be reproducible.

Parallel to this, was also launched, a service that verifies the contents of URLs against a publicly recorded cryptographic log. It keeps an append-only log of the cryptographic digests of all URLs it has seen. (GitHub repo)

On 18th September, Bernhard M. Wiedemann will give a presentation in German, titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference.

Reproducible builds at DebConf20

There were a number of talks at the recent online-only DebConf20 conference on the topic of reproducible builds.

Holger gave a talk titled “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from are made from their claimed sources. It also served as a general update on the status of reproducible builds within Debian. The video (145 MB) and slides are available.

There were also a number of other talks that involved Reproducible Builds too. For example, the Malayalam language mini-conference had a talk titled എനിയ്ക്കും ഡെബിയനില്‍ വരണം, ഞാന്‍ എന്തു് ചെയ്യണം? (“I want to join Debian, what should I do?”) presented by Praveen Arimbrathodiyil, the Clojure Packaging Team BoF session led by Elana Hashman, as well as Where is Salsa CI right now? that was on the topic of Salsa, the collaborative development server that Debian uses to provide the necessary tools for package maintainers, packaging teams and so on.

Jonathan Bustillos (Jathan) also gave a talk in Spanish titled Un camino verificable desde el origen hasta el binario (“A verifiable path from source to binary”). (Video, 88MB)

Development work

After many years of development work, the compiler for the Rust programming language now generates reproducible binary code. This generated some general discussion on Reddit on the topic of reproducibility in general.

Paul Spooren posted a ‘request for comments’ to OpenWrt’s openwrt-devel mailing list asking for clarification on when to raise the PKG_RELEASE identifier of a package. This is needed in order to successfully perform rebuilds in a reproducible builds context.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Chris Lamb provided some comments and pointers on an upstream issue regarding the reproducibility of a Snap / SquashFS archive file. []


Holger Levsen identified that a large number of Debian .buildinfo build certificates have been “tainted” on the official Debian build servers, as these environments have files underneath the /usr/local/sbin directory []. He also filed a bug against debrebuild after spotting that it can fail to download packages from [].

This month, several issues were uncovered (or assisted) due to the efforts of reproducible builds.

For instance, Debian bug #968710 was filed by Simon McVittie, which describes a problem with detached debug symbol files (required to generate a traceback) that is unlikely to have been discovered without reproducible builds. In addition, Jelmer Vernooij called attention to the fact that the new Debian Janitor tool is using the property of reproducibility (as well as diffoscope) when applying archive-wide changes to Debian:

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds. []

56 reviews of Debian packages were added, 38 were updated and 24 were removed this month adding to our knowledge about identified issues. Specifically, Chris Lamb added and categorised the nondeterministic_version_generated_by_python_param and the lessc_nondeterministic_keys toolchain issues. [][]

Holger Levsen sponsored Lukas Puehringer’s upload of the python-securesystemslib package, which is a dependency of in-toto, a framework to secure the integrity of software supply chains. []

Lastly, Chris Lamb further refined his merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]) in order that we can test the reproducibility of the installer images on the Reproducible Builds own testing infrastructure and sent a ping to the team that maintains that code.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:


diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds. In August, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data of PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. []
  • Bug fixes:

    • Don’t raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. []
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) merely returns “data”. (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. []
    • Don’t use Python’s repr(object) output in “Calling external command” messages. []
    • Include the filename in the “… not identified by any comparator” message. []
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. []
    • Drop some unused imports [], an unnecessary dictionary comprehension [] and some unnecessary control flow [].
    • Correct typo of “output” in a comment. []
  • Release process:

    • Move generation of debian/tests/control to an external script. []
    • Add some URLs for the site that will appear on []
    • Update “author” and “author email” in for and similar. []
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124)
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. []
    • Add an assert_diff helper that loads and compares a fixture output. [][][][]
In addition, Mattia Rizzolo documented that diffoscope works with Python version 3.8 [] and Frazer Clews applied some Pylint suggestions [] and removed some deprecated methods [].
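The DTD/entity fix listed above guards against XML that a parser cannot safely expand. A minimal sketch of that defensive pattern (not diffoscope’s actual code; the helper name is invented):

```python
import xml.etree.ElementTree as ET

def parsed_text_or_none(xml_string):
    """Return the root element's text, or None if the document cannot
    be parsed, e.g. because it references an entity we cannot resolve."""
    try:
        return ET.fromstring(xml_string).text
    except ET.ParseError:
        # A comparator should fall back to a byte-level comparison here
        # instead of crashing with an unhandled exception.
        return None

# An entity declared in the internal DTD subset expands normally:
ok = parsed_text_or_none(
    '<?xml version="1.0"?>'
    '<!DOCTYPE note [<!ENTITY who "world">]>'
    '<note>hello &who;</note>')

# An undefined entity raises ParseError; here it degrades gracefully:
bad = parsed_text_or_none('<note>&undefined;</note>')
```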


This month, Chris Lamb updated the main Reproducible Builds website and documentation to:

  • Clarify & fix a few entries on the “who” page [][] and ensure that images do not get too large on some viewports [].
  • Clarify use of a pronoun re. Conservancy. []
  • Use “View all our monthly reports” over “View all monthly reports”. []
  • Move an “is a” suffix out of the link target on the SOURCE_DATE_EPOCH page. []

In addition, Javier Jardón added the freedesktop-sdk project [] and Kushal Das added the SecureDrop project [] to our projects page. Lastly, Michael Pöhn added internationalisation and translation support with help from Hans-Christoph Steiner [].

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • System health checks:

    • Improve the explanation of how the status and scores are calculated. [][]
    • Update and condense view of detected issues. [][]
    • Query the canonical configuration file to determine whether a job is disabled instead of duplicating/hardcoding this. []
    • Detect several problems when updating the status of reporting-oriented ‘metapackage’ sets. []
    • Detect when diffoscope is not installable [] and failures in DNS resolution [].
  • Debian:

    • Update the URL to the Debian security team bug tracker’s Git repository. []
    • Reschedule the unstable and bullseye distributions often for the arm64 architecture. []
    • Schedule buster less often for armhf. [][][]
    • Force the build of certain packages in the work-in-progress package rebuilder. [][]
    • Only update the stretch and buster base build images when necessary. []
  • Other distributions:

    • For F-Droid, trigger jobs by commits, not by a timer. []
    • Disable the Archlinux HTML page generation job as it has never worked. []
    • Disable the alternative OpenWrt rebuilder jobs. []

Many other changes were made too.

Finally, build node maintenance was performed by Holger Levsen [], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].

Mailing list

On our mailing list this month, Leo Wandersleb sent a message to the list wondering how to expand his project (which aims to improve the security of Bitcoin wallets) from Android wallets to monitoring Linux wallets as well:

If you think you know how to spread the word about reproducibility in the context of Bitcoin wallets through WalletScrutiny, your contributions are highly welcome on this PR []

Julien Lepiller posted to the list linking to a blog post by Tavis Ormandy titled You don’t need reproducible builds. Morten Linderud (foxboron) responded with a clear rebuttal that Tavis was only considering the narrow use-case of proprietary vendors and closed-source software. He additionally noted that the criticism that reproducible builds cannot protect against backdoors deliberately introduced into the upstream source (“bugdoors”) misses the point, as such attacks are decidedly (and deliberately) outside the scope of reproducible builds to begin with.

Chris Lamb included the Reproducible Builds mailing list in a wider discussion regarding a tentative proposal to include .buildinfo files in .deb packages, adding his remarks regarding requiring a custom tool in order to determine whether generated build artifacts are ‘identical’ in a reproducible context. []

Jonathan Bustillos (Jathan) posted a quick email to the list asking whether there was a list of “to do” tasks in Reproducible Builds.

Lastly, Chris Lamb responded at length to a query regarding the status of reproducible builds for Debian ISO or installation images. He noted that most of the technical work has been performed but “there are at least four issues until they can be generally advertised as such”. He pointed out that the privacy-oriented Tails operating system, which is based directly on Debian, has had reproducible builds for a number of years now. []

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Kevin RuddABC: Journalists in China and Rio Tinto


Topics: Mike Smith, Bill Birtles, Cheng Lei, Rio Tinto and Queensland’s health border

Patricia Karvelas
Kevin Rudd, welcome.

Kevin Rudd
Good to be on the program, Patricia.

Patricia Karvelas
Does the treatment of the ABC and the AFR China correspondents suggest Australia and China and the relationship between the two nations has entered a new, more dangerous phase now?

Kevin Rudd
I believe it’s been trending that way for quite some time. And without being in the business of, you know, apportioning responsibility for that, the bottom line is the trajectory is all negative. So this is just one further step in that direction and deeply disturbing if your fundamental interest here is press freedom, and your interest is the proper protection of foreign correspondents in an authoritarian country like China.

Patricia Karvelas
Is there any doubt in your mind that these two correspondents were caught up in politics?

Kevin Rudd
No, listen, I actually know these two guys reasonably well. They’re highly professional journalists and when I’ve been in China myself, I’ve spent time with them. And these are journalists just doing the professional duties of their task. To infer that these folks were somehow involved in some, you know, nefarious plot against the Chinese government is just laughable. Absolutely laughable. If that was the case then, you know, you would have every foreign correspondent in Beijing up on the witness stand at the moment. It’s just nonsensical.

Patricia Karvelas
Okay. But these are Australians that have been targeted. What do you read into that? Should we be particularly concerned about the fact that they are Australians — obviously we are concerned about that — but does that demonstrate that the relationship with Australia is particularly troubled?

Kevin Rudd
Look at the end of the day, at this stage, we don’t know what has actually driven this particular, you know, roundup of these two Australian journalists. We know that it’s directly related to the Cheng Lei case and we don’t know what underpins that. And she’s an Australian citizen, and she needs to have her rights properly protected by the Australian Government, and I’m confident that DFAT is doing everything it can on that score. But in the broader bilateral relationship, I think you’re right. When Beijing looks at the world at the moment, it sees its number one problem as being the United States and the collapse in the US-China relationship which is now in its worst state in 50 years. And secondly, in terms of its most adverse relationships abroad, it would then place Australia, and I simply base that on what I read in each day’s official commentary in the Chinese media.

Patricia Karvelas
So is this retaliation? Is that how we should see this?

Kevin Rudd
As I said, we don’t know the full details here, other than that the Chinese themselves have confirmed that it’s related to the Cheng Lei case. But what underpins that in terms of her Australian citizenship, in terms of any other matters that she was engaged in, we don’t know. But certainly the overall frame of the Australia-China relationship is a factor here, including the accusations contained in today’s Global Times alleging that Australian intelligence officials engaged with Chinese correspondents in their homes here in Australia on the 26th of June.

Patricia Karvelas
Bill Birtles says he was questioned about the case of the detained Chinese-Australian broadcaster you’ve mentioned, Cheng Lei. Do you see these incidents as related? What does that demonstrate? And how concerned are you about Ms Cheng Lei?

Kevin Rudd
Well, she’s an Australian citizen and I’ve met her as well, in fact, at a conference I was attending in Beijing, from memory, last November. And as with any Australian, whoever they work for, and despite the fact that she worked for China’s international media arm CGTN, we’ve got a responsibility to do everything we can for her wellbeing as well. But I think if we stand back from the individual cases, there are two big facts we should focus on. One we’ve touched on, which is the spiralling nature of the Australia-China relationship; I’ve not seen anything like it in my 35-plus years of working on Australia-China relations. The second is the changing nature of Chinese domestic politics. As China increasingly cracks down on all forms of dissent, both real and imagined, what we’ve seen in Chinese domestic politics is the assertion of greater and greater powers by China’s police, security and intelligence apparatus. And the traditional constraints exercised, for example, by the Chinese Foreign Ministry, not just in this case but in others, are being pushed to one side.

Patricia Karvelas
Chinese authorities have confirmed that Ms Cheng is suspected of endangering national security. What does that suggest about how she’ll be dealt with?

Kevin Rudd
Well, the Chinese domestic national security law, quite apart from the one recently legislated for Hong Kong, is draconian; the terms used in it are wide-reaching and provide, you know, maximum leverage for prosecutors to go after an individual. Regrettably, the precedent in Chinese judicial practice is that once a formal investigation has been launched, as I understand has happened in Cheng Lei’s case, the prospects of securing a good outcome are radically reduced. I’ve seen this in so many cases over the years. That should not prevent the Australian Government from doing everything it possibly can, but I am concerned about where this case has already reached given that she was only arrested in the middle of August.

Patricia Karvelas
Do you see her case as an example of what’s being called hostage diplomacy?

Kevin Rudd
I don’t believe so, but I obviously am constrained by what we don’t know. What I have been concerned about and have been intimately — engaged is the wrong term — but in discussions with various people about is the case of the two Canadians who have now been incarcerated in China for well over a year in what was plainly a retaliatory action by the Chinese government over the incarceration of Madam Meng by the Canadian government at the request of the United States, Madam Meng being the daughter of the owner and chief executive of Huawei, China’s major 5G telecommunications company. So China has acted in this way already in relation to Canada. I presume that was the basis upon which the Australian government issued its own travel advisory in early July about the risks faced by Australians traveling in China. But on the circumstances concerning Cheng Lei, I’d be speculating rather than providing hard analysis, Patricia.

Patricia Karvelas
You say in 35 years of watching you haven’t seen relations this low or deteriorate quite like they have. You’ve talked about responsibility here, who is responsible? Is it ultimately China and the Chinese regime that’s making this relationship untenable?

Kevin Rudd
Well, after the last Australian election, I went and saw Mr Morrison and was very blunt with him: this is a difficult relationship. It was difficult when I was in office, and it’s difficult for him. So I’m not about to pretend that it’s easy managing a relationship with an authoritarian state which is now the world’s second-largest economy, likely to become the largest over the course of the next 10 years, and whose foreign policy is becoming increasingly assertive. There’s a high degree of difficulty here. What I have been critical of in the Australian public media is the fact that every time an issue arises in the Australia-China relationship, I’ve seen a knee-jerk reaction by some in the Australian Government and various ministers to pull out the megaphone within 30 minutes and take what is a problem and, frankly, play it into Australian domestic politics. So China is difficult to deal with, but I think the current Australian government from time to time has been less than skilled in its handling. I’ve got to say, however, that the professionals in DFAT, including, you know, Ambassador Graham Fletcher in Beijing and the Consul-General in Shanghai, have handled this most recent case in a first-class way, and they should be congratulated for their efforts.

Patricia Karvelas
And how should the Morrison government deal with these issues? Clearly, they’ve been pretty key here too, as have the media companies involved, both the ABC, our own Managing Director, and the Australian Financial Review. How do you think the Morrison government should be behaving now?

Kevin Rudd
Look, on this question of Australian media access to China, which is important as a public policy objective in its own right, and of broader Western media access to China, bear in mind we’ve had something like 17 American journalists booted out in one form or another over the last year or two. What I would suggest is that the Australian Government and the media companies keep their powder dry for a bit. What China has done, in my judgment, is unacceptable in its treatment of these two Australian journalists, but there’s a structural interest here, which is to try and find an opportunity for the re-establishment of an Australian media presence in China. So until we have greater clarity in terms of the direction of Chinese overall policy on these questions, my counsel, if I was in government at the time, would be to keep our powder dry. If possible, rebuild this element of the relationship; if not, take whatever actions are then necessary. But it’s far better to be cautious in your response to these questions than to automatically jump into the domestic political trenches and try and score domestic political points in Australia by beating your chest and showing how hairy-chested you are at the same time.

Patricia Karvelas
How will the fact that there are no correspondents from Australian media left in China affect how this incredibly important subject is covered?

Kevin Rudd
Well, Patricia, to state the bleeding obvious it’s not going to help. Both Birtles and the correspondent from the Australian Financial Review, Mike Smith, are first-class journalists and they write well, and so their coverage has been important in terms of the continuing awareness of Australians not just of the Australia-China relationship, but frankly, its fundamentals which is: what’s going on in China domestically? Ultimately, what we see at this end is driven by Chinese domestic politics and the Chinese domestic economy. China’s foreign policy is ultimately a product of those factors. And the more you have effective analysis of those things, the better we are in Australia and around the world in framing our own response to this phenomenally complex country which is now becoming increasingly assertive, requiring an increasingly sophisticated response from governments like Australia.

Patricia Karvelas
Just a couple of other issues. It would be odd for me not to ask is it reasonable for Queensland to continue keeping its border with New South Wales closed given how well authorities there have kept the virus under control?

Kevin Rudd
Look, I think Annastacia Palaszczuk has played this pretty well from day one, and she, like the other state premiers, has been highly attentive to the professional advice of the chief medical officer. I mean, pardon me for being old-fashioned about these things, Patricia, but I always think you should listen to the experts. We do believe, at least on our side of politics, in something called objective science. And if the experts are saying ‘be cautious about this’ then so you should be. And the other thing to say is, what would I do if I was in her position right now? Well, you could have listened to the leader of the LNP in Queensland, Ms Frecklington, and opened the borders to the south many, many months ago, and run all sorts of risks including people coming in from Victoria as well. Annastacia Palaszczuk resisted those demands. I think she’s been proven right as a result. So if I was in her shoes today, I think I’d be doing exactly what she’s doing, which is listening to what the chief medical officer is advising her to do.

Patricia Karvelas
Just finally, I spoke to Noel Pearson a little while ago about, of course, this issue around Rio Tinto and its conduct around the destruction of the Juukan Gorge caves. And of course, you know, you as prime minister delivered the apology and have, I know, a long interest in Indigenous affairs. What’s your assessment of the way the company has behaved here? And what is the lasting legacy here that you’re concerned about?

Kevin Rudd
For Rio Tinto — which will soon be known in Australia as Rio TNT, I think — for Rio Tinto it has blown up its own reputation as anything approximating a responsible corporate citizen in Australia. What we know already from the parliamentary inquiry is that far from this being some sort of shock and surprise or accident, in fact senior management within Rio Tinto were already taking legal advice and PR advice about how to handle the reaction once the detonation occurred. So the company should be hauled over the coals. If there’s a fining regime in place, it should be deployed. The executives responsible for this decision should no longer be executives; if I was a shareholder of Rio Tinto, I’d be demanding as much. And this is just appalling for Indigenous people. Look, you know, something as old as 40,000 years. This is as old as the ancient caves of Altamira and Lascaux in Europe. And to simply allow someone to walk in with a stick of jelly and bang, you’re gone? I mean, this will be devastating for the Indigenous people who in this generation are responsible for the custodianship of the land, which includes these ancient sites. So for our Indigenous brothers and sisters, this has been an appalling development. For the company, I think their reputation now is mud.

Patricia Karvelas
Kevin Rudd, many thanks for joining us this afternoon.

Kevin Rudd
Good to be on the program.

The post ABC: Journalists in China and Rio Tinto appeared first on Kevin Rudd.

Planet DebianGunnar Wolf: RPi 4 + 8GB, Finally, USB-functional!

So… Finally, kernel 5.8 entered the Debian Unstable repositories. This means that I got my Raspberry image from their usual location and was able to type the following, using only my old trusty USB keyboard:
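The console transcript that followed in the original post did not survive this copy; a hypothetical session along these lines would confirm the result (the commands and expected values are illustrative, not the author’s):

```shell
# Hypothetical verification: check the running kernel and architecture.
uname -r   # a 5.8-series version string on the image described in the post
uname -m   # aarch64 on a Raspberry Pi 4
```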

So finally, the greatest and meanest Raspberry is fully supported with a pure Debian image! (Only tarnished by the nonfree raspi-firmware package.)

Oh, in case someone was still wondering — the images generated follow the stable release. Only the kernel and firmware are installed from unstable. If/when kernel 5.8 enters backports, I will reduce the noise by adding a different suite to the sources.list.
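The backports switch anticipated above would be a sources.list tweak along these lines (a hypothetical sketch; suite names assume the then-current stable, buster):

```
# Hypothetical /etc/apt/sources.list: follow stable, and once linux 5.8
# reaches backports, pull only the kernel from there instead of unstable.
deb http://deb.debian.org/debian buster main
deb http://deb.debian.org/debian buster-backports main
```

The kernel would then be installed with something like `apt install -t buster-backports linux-image-arm64` (package name assumed for the arm64 metapackage).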

Worse Than FailureWeb Server Installation

Connect the dots puzzle

Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support is this referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.



Cryptogram US Space Cybersecurity Directive

The Trump Administration just published “Space Policy Directive – 5”: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Krebs on SecurityMicrosoft Patch Tuesday, Sept. 2020 Edition

Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users.

The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server.

“That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” said Dustin Childs, of Trend Micro’s Zero Day Initiative. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon. This should be your top priority.”

Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019.

Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.

Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates even have known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Cory DoctorowMy first-ever Kickstarter: the audiobook for Attack Surface, the third Little Brother book

I have a favor to ask of you. I don’t often ask readers for stuff, but this is maybe the most important ask of my career. It’s a Kickstarter – I know, ‘another crowdfunder?’ – but it’s:

a) Really cool;

b) Potentially transformative for publishing;

c) Anti-monopolistic.

Here’s the tldr: Attack Surface – AKA Little Brother 3 – is coming out in 5 weeks. I retained audio rights and produced an amazing edition that Audible refuses to carry. You can pre-order the audiobook, ebook (and previous volumes), DRM- and EULA-free.

That’s the summary, but the details matter. First: the book itself. ATTACK SURFACE is a standalone Little Brother book about Masha, the young woman from the start and end of the other two books; unlike Marcus, who fights surveillance tech, Masha builds it.

Attack Surface is the story of how Masha has a long-overdue moral reckoning with the way that her work has hurt people, something she finally grapples with when she comes home to San Francisco.

Masha learns her childhood best friend is leading a BLM-style uprising – and she’s being targeted by the same cyberweapons that Masha built to hunt Iraqi insurgents and post-Soviet democracy movements.

I wrote Little Brother in 2006, it came out in 2008, and people tell me it’s “prescient” because the digital human rights issues it grapples with – high-tech authoritarianism and high-tech resistance – are so present in our current world.

But it’s not so much prescient as observant. I wrote Little Brother during the Bush administration’s vicious, relentless, tech-driven war on human rights. Little Brother was a bet that these would not get better on their own.

And it was a bet that tales of seizing the means of computation would inspire people to take up digital arms of their own. It worked. Hundreds of cryptographers, security experts, cyberlawyers, etc have told me that Little Brother started them on their paths.

ATTACK SURFACE – a technothriller about racial injustice, police brutality, high-tech turnkey totalitarianism, mass protests and mass surveillance – was written between May 2016 and Nov 2018, before the current uprisings and the tech worker walkouts.

But just as with Little Brother, the seeds of the current situation were all around us in 2016, and if Little Brother inspired a cohort of digital activists, I hope Attack Surface will give a much-needed push to a group of techies (currently) on the wrong side of history.

As I learned from Little Brother, there is something powerful about technologically rigorous thrillers about struggles for justice – stories that marry excitement, praxis and ethics. Of all my career achievements, the people I’ve reached this way matter the most.

Speaking of careers and ethics. As you probably know, I hate DRM with the heat of 10000 suns: it is a security/privacy nightmare, a monopolist’s best friend, and a gross insult to human rights. As you may also know, Audible will not carry any audiobooks unless they have DRM.

Audible is Amazon’s audiobook division, a monopolist with a total stranglehold on the audiobook market. Audiobooks currently account for almost as much revenue as hardcovers, and if you don’t sell on Audible, you sacrifice about 95% of that income.

That’s a decision I’ve made, and it means that publishers are no longer willing to pay for my audiobook rights (who can blame them?). According to my agent, living my principles this way has cost me enough to have paid off my mortgage and maybe funded my retirement.

I’ve tried a lot of tactics to get around Audible: selling through the indies, through Google Play, and through my own shop.

I appreciate the support there but it’s a tiny fraction of what I’m giving up – both in terms of dollars and reach – by refusing to lock my books (and my readers) (that’s you) to Amazon’s platform for all eternity with Audible DRM.

Which brings me to this audiobook.

Look, this is a great audiobook. I hired Amber Benson (a brilliant writer and actor who played Tara on Buffy), Skyboat Media and director Cassandra de Cuir, and Wryneck Studios, and we produced a 15-hour-long, unabridged masterpiece.

It’s done. It’s wild. I can’t stop listening to it. It drops on Oct 13, with the print/ebook edition.

It’ll be on sale in all audiobook stores (except Audible) on the 13th, for $24.95.

But! You can get it for a mere $20 via my first Kickstarter.

What’s more, you can pre-order the ebook – and also buy the previous ebooks and audiobooks (read by Wil Wheaton and Kirby Heyborne) – all DRM free, all free of license “agreements.”

The deal is: “You bought it, you own it, don’t violate copyright law and we’re good.”

And here’s the groundbreaking part. For this Kickstarter, I’m the retailer. If you pre-order the ebook from my KS, I get the 30% that would otherwise go to Jeff Bezos – and I get the 25% that is the standard ebook royalty.

This is a first-of-its-kind experiment in letting authors, agents, readers and a major publisher deal directly with one another in a transaction that completely sidesteps the monopolists who have profited so handsomely during this crisis.

Which is where you come in: if you help me pre-sell a ton of ebooks and audiobooks through this crowdfunder, it will show publishing that readers are willing to buy their ebooks and audiobooks without enriching a monopolist, even if it means an extra click or two.

So, to recap:

Attack Surface is the third Little Brother book

It aims to radicalize a generation of tech workers while entertaining its audience as a cracking, technologically rigorous thriller

The audiobook is amazing, read by the fantastic Amber Benson

If you pre-order through the Kickstarter:

You get a cheaper price than you’ll get anywhere else

You get a DRM- and EULA-free purchase

You’ll fight monopolies and support authorship

If you’ve ever enjoyed my work and wondered how you could pay me back: this is it. This is the thing. Do this, and you will help me artistically, professionally, politically, and (ofc) financially.

Thank you!

PS: Tell your friends!

Cory DoctorowAttack Surface Kickstarter Promo Excerpt!

This week’s podcast is a generous excerpt – 3 hours! – of the audiobook for Attack Surface, the third Little Brother book, which is available for pre-order today on my very first Kickstarter.

This Kickstarter is one of the most important moments in my professional career, an experiment to see if I can viably publish audiobooks without caving into Amazon’s monopolistic requirement that all Audible books be sold with DRM that locks it to Amazon’s corporate platform…forever. If you’ve ever wanted to thank me for this podcast or my other work, there has never been a better way than to order the audiobook (or ebook) (or both!).

Attack Surface is a standalone novel, meaning you can enjoy it without reading Little Brother or its sequel, Homeland. Please give this extended preview a listen and, if you enjoy it, back the Kickstarter and (this is very important): TELL YOUR FRIENDS.

Thank you, sincerely.


Planet DebianGunnar Wolf: Welcome to the family

Need I say more? OK, I will…

Still wanting some more details? Well…

I have had many cats through my life. When I was about eight years old, my parents tried to have a dog… but the experiment didn’t work, and besides those few months, I never had one.

But as my aging cats spent the final months of their very long lives, it was clear to us that, after them, we would be adopting a dog.

Last Saturday was the big day. We had seen some photos of the mother and the nine (!) pups. My children decided on her name almost right away; they were all brownish, so the name would be corteza (tree bark). They didn’t know, of course, that dogs also have a bark! 😉

Anyway, welcome little one!

Kevin RuddSMH: Scott Morrison is Yearning for a Donald Trump victory

Published in The Sydney Morning Herald on 08 September 2020

The PM will be praying for a Republican win in the US to back up his inaction on climate and the Paris Agreement.

A year out from Barack Obama’s election in 2008, John Howard made a stunning admission that he thought Americans should be praying for a Republican victory. Ideologically this was unremarkable. But the fact Howard said so publicly was because he knew just how uncomfortable an Obama victory would be for him given his refusal to withdraw our troops from Iraq.

Fast forward more than a decade, and Scott Morrison – even in the era of Donald Trump – will also be yearning desperately for a Republican victory come November. But this time it is the conservative recalcitrance on a very different issue that risks Australia being isolated on the world stage: climate change.

And as the next summer approaches, Australians will be reminded afresh of how climate change, and its impact on our country and economy, has not gone away.

Former vice-president Joe Biden has put at the centre of his campaign a historic plan to fight climate change both at home and abroad. On his first day in office, he has promised to return the US to the Paris Agreement. And he recently unveiled an unprecedented $2 trillion green investment plan, including the complete decarbonisation of the domestic electricity system by 2035.

By contrast, Morrison remains hell-bent on Australia doing its best to disrupt global momentum to tackle the climate crisis and burying our head in the sand when it comes to embracing the new economic opportunities that come with effective climate change action.

As a result, if Biden is elected this November, we will be on track for a collision course with our American ally in a number of areas.

First, Morrison remains recklessly determined to carry over so-called “credits” from the overachievement of our 2020 Kyoto target to help Australia meet its already lacklustre 2030 target under the new Paris regime.

No other government in the world is digging their heels in like this. None. It is nothing more than an accounting trick to allow Australia to do less. Perhaps the greatest irony is that this “overachievement” was also in large part because of the mitigation actions of our government.

That aside, these carbon credits also do nothing for the atmosphere. At worst, using them beyond 2020 could be considered illegal and only opens the back door for other countries to also do less by following Morrison’s lead.

This will come to a head at the next UN climate talks in Glasgow next year. While Australia has thus far been able to dig in against objections by most of the rest of the world, a Biden victory would only strengthen the hand of the UK hosts to simply ride over the top of any further Australian intransigence. Morrison would be foolhardy to believe that Boris Johnson’s government will burn its political capital at home and abroad to defend the indefensible Australian position.

Second, unlike 114 countries around the world, Morrison remains hell-bent on ignoring the central promise of Paris: that all governments increase their 2030 targets by the time they get to Glasgow. That’s because even if all those commitments were fully implemented, it would only give the planet one-third of what is necessary to keep average temperature increases within 1.5 degrees by 2100, as the Paris Agreement requires. This is why governments agreed to increase their ambition every five years as technologies improved, costs lowered and political momentum built.

In 2014, the Liberal government explained our existing Paris target on the basis that it was the same as what the Americans were doing. In reality, the Obama administration planned to achieve the same cut of 26 to 28 per cent on 2005 emissions by 2025 – not 2030 as we pledged and sought to disguise.

So based on the logic that what America does is this government’s benchmark for its global climate change commitments, if the US is prepared to increase its Paris target (as it will under Biden), so too should we. Biden himself has not just committed the US to the goal of net zero emissions by 2050, but has undertaken to embed it in legislation as countries such as Britain and New Zealand have done, and rally others to do the same.

Unsurprisingly, despite the decisions of 121 countries around the world, Morrison also refuses to even identify a timeline for achieving the Paris Agreement’s long-term goal to reach net zero emissions. As the science tells us, this needs to be by 2050 to have any shot of protecting the world’s most vulnerable populations – including in the Pacific – and saving Australia from a rolling apocalypse of weather-related disasters that will wreak havoc on our economy.

For our part, the government insists that it won’t “set a target without a plan”. But governments exist to do the hard work. And politically, it goes against the breadth of domestic support for a net zero by 2050 goal, including from the peak business, industry and union groups, the top bodies for energy and agriculture (two sectors that together account for almost half of our emissions), as well as our national airline, our two largest mining companies, every state and territory government, and even a majority of conservative voters.

As Tuvalu’s recent prime minister Enele Sopoaga reminded us recently, the fact that Morrison himself looked Pacific island leaders in the eye last year and promised to develop such a long-term plan – a promise he reiterated at the G20 – also shows we risk being a country that does not do what we say. For those in the Pacific, this just rubs salt into the wound of Morrison’s decision to blindly follow Trump’s lead in halting payments to the Green Climate Fund (something Biden would also reverse), requiring them to navigate a bureaucratic maze of individual aid programs as a result.

Finally, Biden has undertaken to also align trade and climate policy by imposing carbon tariffs against those countries that fail to do their fair share in global greenhouse gas reductions. The EU is in the process of embracing the same approach. So if Morrison doesn’t act, he’s going to put our entire export sector at risk of punitive tariffs because the Liberals have so consistently failed to take climate change seriously.

Under Trump, Morrison has been able to get one giant leave pass for doing nothing on climate. But under Biden, he’ll be seen as nothing more than the climate change free-loader that he is. As he will by the rest of the world. And our economy will be punished as a result.

The post SMH: Scott Morrison is Yearning for a Donald Trump victory appeared first on Kevin Rudd.

LongNowTime-Binding and The Music History Survey

Musicologist Phil Ford, co-host of the Weird Studies podcast, makes an eloquent argument for the preservation of the “Chants to Minimalism” Western Music History survey—the standard academic curriculum for musicology students, akin to the “fish, frogs, lizards, birds” evolutionary spiral taught in bio classes—in an age of exponential change and an increased emphasis on “relevance” over the remembrance of canonical works:

Perhaps paradoxically, the rate of cultural change increases in proportional measure to the increase in cultural memory. Writing and its successor media of prosthetic memory enact a contradiction: the easy preservation of cultural memory enables us to break with the past, to unbind time. At its furthest extremes, this is manifested in the familiar and dismal spectacle of fascist and communist regimes, impelled by intellectual notions permitted by the intensified time-binding of literacy, imagining utopias that will ‘wipe the slate clean’ and trying to force people to live in a world entirely divorced from the bound time of social/cultural tradition.

See, for instance, Mao Zedong’s crusade to destroy Tibetan Buddhism by the erasure of context that held the Dalai Lama’s social role in place for fourteen generations. How is the culture to find a fifteenth Dalai Lama if no child can identify the relics of a prior Dalai Lama? Ironically this speaks to larger questions of the agency of landscapes and materials, and how it isn’t just our records as we understand them that help scaffold our identity; but that we are in some sense colonial organisms inextricable from, made by, our environments — whether built or wild. As recounted in countless change-of-station stories, monarchs and other leaders, like whirlpools or sea anemones, dissolve when pulled out of the currents that support them.

That said, the current isn’t just stability but change. Both novelty and structure are required to bind time. By pushing to extremes, modernity self-undermines, imperils its own basis for existence; and those cultures that slam on the brakes and dig into conservative tradition risk self-suffocation, or being ripped asunder by the friction of collision with the moving edge of history:

Modernity is (among other things) the condition in which time-binding is threatened by its own exponential expansion, and yet where it’s not clear exactly how we are to slow its growth.  Very modern people are reflexively opposed to anything that would slow down the acceleration: for them, the essence of the human is change. Reactionaries are reflexively opposed to anything that will speed up acceleration: for them, the essence of the human is continuity. Both are right!  Each side, given the opportunity to realize its imagined utopia of change or continuity, would make a world no sensible person would be caught dead in.

Ultimately, therefore, a conservative-yet-innovative balance must be found in the embrace of both new information technologies and their use for preservation of historic repertoires. When on a rocket into space, a look back at the whole Earth is essential to remember where we come from:

The best argument for keeping Sederunt in the classroom is that it is one of the nearly-infinite forms of music that the human mind has contrived, and the memory of those forms — time-binding — is crucial not only to the craft of musicians but to our continued sense of what it is to be a human being.

This isn’t just future-shocked reactionary work but a necessary integrative practice that enables us to reach beyond:

To tell the story of who we are is to engage in the scholar’s highest mission. It is the gift that shamans give their tribe.

Worse Than FailureCodeSOD: Sleep on It

If you're fetching data from a remote source, "retry until a timeout is hit" is a pretty standard pattern. And with that in mind, this C++ code from Auburus doesn't look like much of a WTF.

bool receiveData(uint8_t** data, std::chrono::milliseconds timeToWait) {
    start = now();
    while ((now() - start) < timeToWait) {
        if (/* successfully receive data */) {
            return true;
        }
        std::this_thread::sleep_for(100ms);
    }
    return false;
}

Track the start time. While the difference between the current time and the start is less than our timeout, try and get the data. If you don't, sleep for 100ms, then retry.

This all seems pretty reasonable, at first glance. We could come up with better ways, certainly, but that code isn't quite a WTF.

This code is:

// The ONLY call to that function
receiveData(&dataPtr, 100ms);

By calling this with a 100ms timeout, and because we hard-coded in a 100ms sleep, we've guaranteed that we will never retry. That may or may not be intentional, and that's what really bugs me about this code. Maybe they meant to do that (because they originally retried, and found it caused other bugs?). Maybe they didn't. But they didn't document it, either in the code or as a commit comment, so we'll never know.
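A safer shape for this pattern (sketched here in Python rather than the article's C++; the function and parameter names are mine) decouples the poll interval from the timeout, guarantees at least one attempt, and never sleeps past the deadline:

```python
import time

def receive_with_retry(try_receive, timeout_s, poll_s=0.1):
    """Poll try_receive() until it succeeds or the deadline passes.

    Always makes at least one attempt, and never sleeps longer than the
    time remaining, so timeout == poll interval still yields a retry.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if try_receive():
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        # Sleep for the poll interval, capped at the time left.
        time.sleep(min(poll_s, remaining))
```

Checking the clock after the attempt, not before, is what removes the hidden coupling between the two 100ms constants.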



LongNowStudy Group for Progress Launches with Discount for Long Now Members

Long Now Member Jason Crawford, founder of The Roots of Progress, is starting up a weekly learning group on progress with a steep discount for Long Now Members:

The Study Group for Progress is a weekly discussion + Q&A on the history, economics and philosophy of progress. Long Now members can get 50% off registration using the link below.

Each week will feature a special guest for Q&A. Confirmed speakers so far include top economists and historians such as Robert J. Gordon (Northwestern, The Rise & Fall of American Growth), Margaret Jacob (UCLA), Richard Nelson (Columbia), Patrick Collison, and Anton Howes. Readings from each author will be given out ahead of time. Participants will also receive a set of readings originally created for the online learning program Progress Studies for Young Scholars: a summary of the history of technology, including advances in materials and manufacturing, agriculture, energy, transportation, communication, and disease.

The group will meet weekly on Sundays at 4:00–6:30pm Pacific, from September 13 through December 13 (recordings available privately afterwards). See the full announcement here and register for 50% off with this link.

Worse Than FailureCodeSOD: Classic WTF: Covering All Cases… And Then Some

It's Labor Day in the US, where we celebrate the labor movement and people who, y'know, do actual work. So let's flip back to an old story, which does a lot of extra work. Original -- Remy

Ben Murphy found a developer who liked to cover all of his bases ... then cover the dug-out ... then the bench. If you think this method to convert input (from 33 to 0.33) is a bit superfluous, you should see data validation.

Static Function ConvertPercent(v_value As Double)
  If v_value > 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value < 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = -1 Then
    ConvertPercent = v_value / 100
    ConvertPercent = v_value
  End If 
End Function
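Since greater-than, equal-to and less-than 1 already cover every value, the -1 branch can never run, and all three live branches perform the same division; a Python sketch (mine, not from the article) of what the function actually computes:

```python
def convert_percent(v):
    # The original's three reachable branches (>1, ==1, <1) all divide
    # by 100; the == -1 branch is shadowed by the < 1 test and is dead code.
    return v / 100
```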

The original article- from 2004!- featured Alex asking for a logo. Instead, let me remind you to submit your WTF. Our stories come from our readers. If nothing else, it's a great chance to anonymously vent about work.


Cryptogram is Moving

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.


Cryptogram Friday Squid Blogging: Morning Squid

Asa ika means “morning squid” in Japanese.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: We Must Explore the Null!

"Beyond the realm of Null, past the Black Stump, lies the mythical FILE_NOT_FOUND," writes Chris A.


"I know, Zwift, I should have started paying you 50 years ago," Jeff wrote, "But hey, thanks for still giving leeches like me a free ride!"


Drake C. writes, "!"


"I'm having a hard time picking between these 'Exclusives'. It's a shame they're both scheduled for the same time," wrote Rutger.


"Wait, so is this beer zero raised to hex FF? At that price I'll take 0x02!" wrote Tony B.


Kevin F. writes, "Some weed killers say they're powerful. This one backs up that claim!"



Cryptogram Hacking AI-Graded Tests

The company Edgenuity sells AI systems for grading tests. Turns out that they just search for keywords without doing any actual semantic analysis.


Worse Than FailureCodeSOD: Learning the Hard Way

If you want millions in VC funding, mumble the words “machine learning” and “disruption” and they’ll blunder out of the woods to just throw money at your startup.

At its core, ML is really about brute-forcing a statistical model. And today’s code from Norine could have possibly been avoided by applying a little more brute force to the programmer responsible.

This particular ML environment, like many, uses Python to wrap around lower-level objects. The ease of Python coupled with the speed of native/GPU-accelerated code. It has a collection of Model datatypes, and at runtime, it needs to decide which concrete Model type it should instantiate. If you come from an OO background in pretty much any other language, you’re thinking about factory patterns and abstract classes, but that’s not terribly Pythonic. Not that this developer’s solution is Pythonic either.

def choose_model(data, env):
  ModelBase = getattr(import_module(env.modelpath), env.modelname)
  class Model(ModelBase):
    def __init__(self, data, env):
      if env.data_save is None:
        if env.counter == 0:
          self.data = data
        else:
          raise ValueError("data unavailable with counter > 0")
      else:
        with open(env.data_save, "r") as df:
          self.data = json.load(df)
      ModelBase.__init__(self, **self.data)
  return Model(data, env)

This is an example of metaprogramming. We use import_module to dynamically load a module at runtime – potentially smart, because modules may take some time to load, so we shouldn't load a module we don't know that we're going to use. Then, with getattr, we extract the definition of a class with whatever name is stored in env.modelname.

This is the model class we want to instantiate. But instead of actually instantiating it, we instead create a new derived class, and slap a bunch of logic and file loading into it.

Then we instantiate and return an instance of this dynamically defined derived class.

There are so many things that make me cringe. First, I hate putting file access in the constructor. That’s maybe more personal preference, but I hate constructors which can possibly throw exceptions. See also the raise ValueError, where we explicitly throw exceptions. That’s just me being picky, though, and it’s not like this constructor will ever get called from anywhere else.

More concretely bad, these kinds of dynamically defined classes can have some… unusual effects in Python. For example, in Python2 (which this is), each call to choose_model will tag the returned instance with the same type, regardless of which base class it used. Since this method might potentially be using a different base class depending on the env passed in, that’s asking for confusion. You can route around these problems, but they’re not doing that here.
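The name collision is easy to reproduce. A minimal sketch (with hypothetical base classes) shows that every call to such a factory produces a class that reports the same name, "Model", even though each is a distinct type with a different base:

```python
def make_model(base):
    # Each call defines a brand-new class, but every one of them
    # reports the same name: "Model".
    class Model(base):
        pass
    return Model

class BaseA: pass
class BaseB: pass

ModelA = make_model(BaseA)
ModelB = make_model(BaseB)

assert ModelA.__name__ == ModelB.__name__ == "Model"  # same reported name
assert ModelA is not ModelB                           # yet distinct classes
assert not issubclass(ModelA, BaseB)                  # with different bases
```

Anything that keys off the class name (logging, serialization, debugging) sees indistinguishable "Model" objects, even though their behavior depends entirely on which env was passed in.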

But far, far more annoying is that the super-class constructor, ModelBase.__init__, isn’t called until the end.

You’ll note that our child class manipulates self.data, and while it’s not pictured here, our base model classes? They also use a property called data, but for a different purpose. So our child class inits a self.data property, specifically to build a dictionary of key/value pairs, which it then passes as kwargs, or keyword arguments (the ** operator) to the base class constructor… which then overwrites the self.data our child class was using.

So why do any of that?

Norine changed the code to this simpler, more reliable version, which doesn’t need any metaprogramming or dynamically defined classes:

def choose_model(data, env):
  Model = getattr(import_module(env.modelpath), env.modelname)
  if env.data_save is not None:
    with open(env.data_save, "r") as df:
      data = json.load(df)
  elif env.counter != 0:
    raise ValueError('if env.counter > 0 then must use data_save parameter')

  return Model(**data)

Norine adds:

I’m thinking of holding on to the original, and showing it to interviewees like a Rorschach test. What do you see in this? The fragility of a plugin system? The perils of metaprogramming? The hollowness of an overwritten argument? Do you see someone with more cleverness than sense? Or someone intelligent but bored? Or perhaps you see, in the way the superclass init is called, TRWTF: a Python 2 library written within the last 3 years.


Cryptogram 2017 Tesla Hack

Interesting story of a class break against the entire Tesla fleet.

Krebs on Security The Joys of Owning an ‘OG’ Email Account

When you own a short email address at a popular email provider, you are bound to get gobs of spam, and more than a few alerts about random people trying to seize control over the account. If your account name is short and desirable enough, this kind of activity can make the account less reliable for day-to-day communications because it tends to bury emails you do want to receive. But there is also a puzzling side to all this noise: Random people tend to use your account as if it were theirs, and often for some fairly sensitive services online.

About 16 years ago — back when you actually had to be invited by an existing Google Mail user in order to open a new Gmail account — I was able to get hold of a very short email address on the service that hadn’t yet been reserved. Naming the address here would only invite more spam and account hijack attempts, but let’s just say the account name has something to do with computer hacking.

Because it’s a relatively short username, it is what’s known as an “OG” or “original gangster” account. These account names tend to be highly prized among certain communities, who busy themselves with trying to hack them for personal use or resale. Hence, the constant account takeover requests.

What is endlessly fascinating is how many people think it’s a good idea to sign up for important accounts online using my email address. Naturally, my account has been signed up involuntarily for nearly every dating and porn website there is. That is to be expected, I suppose.

But what still blows me away is the number of financial and other sensitive accounts I could access if I were of a devious mind. This particular email address has accounts that I never asked for at H&R Block, Turbotax, TaxAct, iTunes, LastPass, Dashlane, MyPCBackup, and Credit Karma, to name just a few. I’ve lost count of the number of active bank, ISP and web hosting accounts I can tap into.

I’m perpetually amazed by how many other Gmail users and people on similarly-sized webmail providers have opted to pick my account as a backup address if they should ever lose access to their inbox. Almost certainly, these users just lazily picked my account name at random when asked for a backup email — apparently without fully realizing the potential ramifications of doing so. At last check, my account is listed as the backup for more than three dozen Yahoo, Microsoft and other Gmail accounts and their associated file-sharing services.

If for some reason I ever needed to order pet food or medications online, my phantom accounts at Chewy, Coupaw and Petco have me covered. If any of my Weber grill parts ever fail, I’m set for life on that front. The Weber emails I periodically receive remind me of a piece I wrote many years ago for The Washington Post, about companies sending email from [companynamehere], without considering that someone might own that domain. Someone did, and the results were often hilarious.

It’s probably a good thing I’m not massively into computer games, because the online gaming (and gambling) profiles tied to my old Gmail account are innumerable.

For several years until recently, I was receiving the monthly statements intended for an older gentleman in India who had the bright idea of using my Gmail account to manage his substantial retirement holdings. Thankfully, after reaching out to him he finally removed my address from his profile, although he never responded to questions about how this might have happened.

On balance, I’ve learned it’s better just not to ask. On multiple occasions, I’d spend a few minutes trying to figure out if the email addresses using my Gmail as a backup were created by real people or just spam bots of some sort. And then I’d send a polite note to those that fell into the former camp, explaining why this was a bad idea and ask what motivated them to do so.

Perhaps because my Gmail account name includes a hacking term, the few responses I’ve received have been less than cheerful. Despite my including detailed instructions on how to undo what she’d done, one woman in Florida screamed in an ALL CAPS reply that I was trying to phish her and that her husband was a police officer who would soon hunt me down. Alas, I still get notifications anytime she logs into her Yahoo account.

Probably for the same reason the Florida lady assumed I was a malicious hacker, my account constantly gets requests from random people who wish to hire me to hack into someone else’s account. I never respond to those either, although I’ll admit that sometimes when I’m procrastinating over something the temptation arises.

Losing access to your inbox can open you up to a cascading nightmare of other problems. Having a backup email address tied to your inbox is a good idea, but obviously only if you also control that backup address.

More importantly, make sure you’re availing yourself of the most secure form of multi-factor authentication offered by the provider. These may range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

Maybe you’ve put off enabling multi-factor authentication for your important accounts, and if that describes you, please take a moment to visit and see whether you can harden your various accounts.

As I noted in June’s story, Turn on MFA Before Crooks Do It For You, people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Are you in possession of an OG email account? Feel free to sound off in the comments below about some of the more gonzo stuff that winds up in your inbox.


Cryptogram Insider Attack on the Carnegie Library

Greg Priore, the person in charge of the rare book room at the Carnegie Library, stole from it for almost two decades before getting caught.

It’s a perennial problem: trusted insiders have to be trusted.

Worse Than Failure Bidirectional


Trung worked for a Microsoft and .NET framework shop that used AutoMapper to simplify object mapping between tiers. Their application's mapping configuration was performed at startup, as in the following C# snippet:

public void Configure(ConfigurationContext context)
{
    Mapper.CreateMap<X, Y>().AfterMap(Map);
}

where the Map delegate passed to AfterMap() handled the discrepancies that AutoMapper couldn’t map on its own.

One day, a senior dev named Van approached Trung for help. He was repeatedly getting AutoMapper's "Missing type map configuration or unsupported mapping. Mapping types Y -> X ..." error.

Trung frowned a little, wondering what was mysterious about this problem. "You're ... probably missing mapping configuration for Y to X," he said.

"No, I'm not!" Van pointed to his monitor, at the same code snippet above.

Trung shook his head. "That mapping is one-way, from X to Y only. You can create the reverse mapping by using the Bidirectional() extension method. Here ..." He leaned over to type in the addition:

Mapper.CreateMap<X, Y>().AfterMap(Map).Bidirectional();
This resolved Van's error. Both men returned to their usual business.

A few weeks later, Van approached Trung again, this time needing help with refactoring due to a base library change. While they huddled over Van's computer and dug through compilation errors, Trung kept seeing strange code within multiple AfterMap() delegates:

void Map(X src, Y desc)
{
  desc.QueueId = src.Queue.Id;
  src.Queue = Queue.GetById(desc.QueueId);
}

"Wait a minute!" Trung reached for the mouse to highlight two such lines and asked, "Why is this here?"

"The mapping is supposed to be bidirectional! Remember?" Van replied. "I’m copying from X to Y, then from Y to X."

Trung resisted the urge to clap a hand to his forehead or mutter something about CS101 and variable-swapping—not that this "swap" was necessary. "You realize you'd have nothing but X after doing that?"

The quizzical look on the senior developer's face assured Trung that Van hadn't realized any such thing.

Trung could only sigh and help Van trudge through the delegates he'd "fixed," working out a better mapping procedure for each.
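The overwrite Trung spotted is language-independent. A minimal Python sketch (with hypothetical queue_id fields standing in for the QueueId/Queue properties) shows why copying X to Y and then "back" leaves only X's data:

```python
# The two-way "copy" bug: the second assignment reads the value the first
# assignment just wrote, so X's data wins in both directions.
src = {"queue_id": 1}   # X
dest = {"queue_id": 2}  # Y

dest["queue_id"] = src["queue_id"]  # X -> Y
src["queue_id"] = dest["queue_id"]  # "Y -> X" ... but Y already holds X's value

assert src["queue_id"] == dest["queue_id"] == 1  # Y's original 2 is gone
```

A real bidirectional mapping needs two separate delegates, one per direction, each run only when mapping that way.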



Cory Doctorow Get Radicalized for a mere $2.99

The ebook of my 2019 book RADICALIZED — finalist for the Canada Reads award, LA Library book of the year, etc — is on sale today for $2.99 on all major platforms!


There are a lot of ways to get radicalized in 2020, but this is arguably the cheapest.

ME BBB vs Jitsi

I previously wrote about how I installed the Jitsi video-conferencing system on Debian [1]. We used that for a few unofficial meetings of LUV to test it out. Then we installed Big Blue Button (BBB) [2]. The main benefit of Jitsi over BBB is that it supports live streaming to YouTube. The benefits of BBB are a better text chat system and a “whiteboard” that allows conference participants to draw shared diagrams. So if you have the ability to run both systems then it’s best to use Jitsi when you have so many viewers that a YouTube live stream is needed and to use BBB in all other situations.

One problem is with the ability to run both systems. Jitsi isn’t too hard to install if you are installing it on a VM that is not used for anything else. BBB is a major pain no matter what you do. The latest version of BBB is 2.2 which was released in March 2020 and requires Ubuntu 16.04 (which was released in 2016 and has “standard support” until April next year) and doesn’t support Ubuntu 18.04 (released in 2018 and has “standard support” until 2023). The install script doesn’t check for correct apt repositories and breaks badly with no explanation if you don’t have Ubuntu Multiverse enabled.

I expect that they rushed a release because of the significant increase in demand for video conferencing this year. But that’s no reason for demanding the 2016 version of Ubuntu; why couldn’t they have developed on version 18.04 for the last 2 years? Since that release they have had 6 months in which they could have released a 2.2.1 version supporting Ubuntu 18.04 or even 20.04.

The dependency list for BBB is significant, among other things it uses LibreOffice for the whiteboard. This adds to the pain of installing and maintaining it. It wouldn’t surprise me if some of the interactions between all the different components have security issues.


If you want something that’s not really painful to install and run then use Jitsi.

If you need YouTube live streaming use Jitsi.

If you need whiteboards and a good text chat system or if you generally need to run things like a classroom then BBB is a good option. But only if you can manage it, know someone who can manage it for you, or are happy to pay for a managed service provider to do it for you.

Kevin Rudd 2GB: Morrison’s Retirement Rip-Off


Topics: Superannuation; Cheng Lei consular matter

Ben Fordham
Now, superannuation to increase or not? Right now 9.5% of your wage goes towards super, and it sits there until you retire. From July 1 next year, the compulsory rate is going up. It will climb by half a per cent every year until it hits 12% in 2025. So it’s slowly going from 9.5% to 12%. Now that was legislated long before Coronavirus. Now we are in recession, and the government is hinting strongly that it’s ready to dump or delay the policy to increase super contributions. Now I reckon this is a genuine barbecue stopper. It’s not a question of Labor versus Liberal or left versus right. Some want their money now to help them out of hardship. Others say no, we have super for a reason and that is to save for the future. Former Prime Minister Kevin Rudd has got a strong view on this and he joins us on the line. Kevin Rudd, good morning to you.

Kevin Rudd
Morning, Ben. Thanks for having me on the program.

Ben Fordham
No problem. You want Scott Morrison and Josh Frydenberg to leave super alone.

Kevin Rudd
That’s right. And Mr Morrison did promise to maintain this policy which we brought in when he went to the people at the last election. And remember, Ben, back in 2014, they already deferred this for five years. Otherwise, this thing would be done and dusted and it’d be all the way up to 12 by now. I’m just worried we’re going to find one excuse after another to kick this into the Never Never Land. And the result is that working families, those people listening to your program this morning, are not going to have a decent nest egg for their retirement.

Ben Fordham
All right, most of those hard-working Aussies are telling me that they would like the option of having the money right now.

Kevin Rudd
Well, the problem with super is that if you open the floodgates and allow people what Morrison calls as ‘early access’, then what happens is they hollow out and then if you take out $10,000 now as a 35-year-old, by the time you retire you’re going to be $65,000 to $130,000 worse off. That’s how it builds up. So I’m really worried about that. And also, you know Ben, then we’re living longer. Once upon a time, we used to retire at 65 and we’d all be dead by 70. Guess what, that’s not the case anymore. People are living to 80, 90 and the young people listen to your program, or a large number of them, are going to be around until they’re 100. So what we have for retirement income is really important, otherwise you’re back on the age pension which, despite changes I made in office, is not hugely generous.

Ben Fordham
I’m sure you respect the view of the Reserve Bank governor Philip Lowe. Philip Lowe says lifting the super guarantee would reduce wages, cut consumer spending and cost jobs. So he’s got a very different view to you.

Kevin Rudd
Well, I’ve actually had a look at what Governor Lowe had to say. I’ve been reading his submission in the last 24 hours or so. On the question of the impact on wages, yes, he says it would be a potential deferral of wages, but he doesn’t express a view one way or the other to whether that is good or bad. But on employment and the argument used by the government that this is somehow some negative effect on employment, it just doesn’t stack up. By the way, Ben, remember, if this logic held that somehow if we don’t have the superannuation guarantee levy going up, that wages would increase; well, after the government deferred this for five years, starting from 2014, guess what, working people got no increase in their super, but also their wages have flatlined as well. I’m just worried about how this all lands at the end for working people wanting to have a decent retirement.

Ben Fordham
Okay, but don’t we need to be aware of the times that we’re living in? You said earlier, you’re concerned that the government’s looking for excuses to put this thing off or kill this thing off. Well, we do have a global health pandemic at the moment. Isn’t that the ultimate reason why we should be adjusting our position?

Kevin Rudd
There’s always a crisis. I took the country through the global financial crisis, which threw every economy in the world, every major one, into recession. We managed to avoid it here in Australia through a combination of good policy and some other factors as well. It didn’t cross our mind to kill super during that period of time, or superannuation increases. It was simply not in our view the right approach, because we were concerned about keeping the economy going in the here and now, but also making proper preparations for the future. But then here’s the rub. If 9% is good enough for everybody, or 9.5% where it is at the moment, then why are the politicians and their staffers currently on 15.4%? Very generous for them. Not so generous for working families. That’s what worries me.

Ben Fordham
We know that you keep a keen eye on China. We wake up this morning to the news that Chinese authorities have detained an Australian journalist, Cheng Lei, without charge. Is the timing of this at all suspicious?

Kevin Rudd
You know, Ben, I don’t know enough of the individual circumstances surrounding this case. I don’t want to say anything which jeopardizes the individual concerned. All I’d say is, the Australian Government has a responsibility to look after any Australian — Chinese Australian, Anglo Saxon Australian, whoever Australian — if they get into strife abroad. And I’m sure, knowing the professionalism of the Australian Foreign Service, that they’re doing everything physically possible at present to try and look after this person.

Ben Fordham
Yeah, we know that Marise Payne’s doing that this morning. We appreciate you jumping on the phone and talking to us.

Kevin Rudd
Thanks, Ben. Appreciate it.

Ben Fordham
Former Prime Minister Kevin Rudd, I reckon this is one of these issues where you can’t just put a line down the middle of the page and say, ‘okay, Labor supporters are going to think this way and Liberal supporters are going to think that way’. I think there are two schools of thought and it depends on your age, it depends on your circumstance, it depends on your attitude. Some say ‘give me the money now, it’s my money, not yours’. Others say ‘no, we have super for a reason, it’s there for our retirement’. Where do you stand? It’s 7.52 am.

The post 2GB: Morrison’s Retirement Rip-Off appeared first on Kevin Rudd.

Kevin Rudd Sunrise: Protecting Australian Retirees


Now two former Labor prime ministers have taken aim at the government, demanding it go ahead with next year’s planned increase to compulsory super. Paul Keating introduced the scheme back in 1992 and says workers should not miss out.

Paul Keating
[Recording] They want to gyp ordinary people by two and a half per cent of their income for the rest of their life. I mean, the gall of it. I mean, the heartlessness of it.

Kevin Rudd, who moved to increase super contributions as well, says the rise to 12% in the years ahead should not be stalled.

Kevin Rudd
[Recording] This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

The government is yet to make an official decision. Joining me now is the former prime minister Kevin Rudd. Kevin Rudd, good morning to you. Rather than being a cruel assault by the federal government, is it an acknowledgment that we’re going into the worst recession since the Depression?

Kevin Rudd
Well, you know, Kochie, there’s always been an excuse not to do super and not to continue with super. And what we’ve seen in the past is the Liberal Party at various stages just trying to kill this scheme which Paul Keating got going for the benefit of working Australians all those years ago. They had no real excuse for deferring this move from nine to 12% when they did it back in 2014, and this would be a further deferral. Look, what’s really at stake here, Kochie, is just working families watching your program this morning, having a decent retirement. That’s why Paul brought it in.


Kevin Rudd
That’s why we both decided to come and speak out.

I absolutely agree with it, but it’s a matter of timing. What do you say to all the small business owners out there who are just trying to keep afloat? To say, hey gang, you’re gonna have to pay an extra half a per cent in super, that you’re going to have to pay on a quarterly basis, to add to your bills again, to try and survive this.

Kevin Rudd
Well, what Mr Morrison is saying to those small business folks is the reason we don’t want to do this super increase is because it’s going to get in the road of a wage increase. And you can’t have this both ways, mate. Either you’ve got an employer adding 0.5 by way of a wage increase, or by super. That’s the bottom line here.


Kevin Rudd
You can’t simply argue that this is all going to disappear into some magic pudding. The bottom line is: the reason we did it this way, and Paul before me, was a small increment each year.


Kevin Rudd
But it builds up as you know, you’re a finance guy Kochie, into a ginormous nest egg for people.


Kevin Rudd
And for the country.

I do not disagree with the overall theory of it. It’s just in the timing. So what you’re saying to Australian bosses around the country is to go to your staff and say, ‘no, you’re not going to get a pay increase, because I’m going to put more into your super, and you’ve got to like it or lump it’.

Kevin Rudd
Well, Kochie, if that was the case, why is it that we’ve had no super increase in the guarantee levy over the last five or six years, and wages growth has been absolutely doodly-squat over that period of time? In other words, the argument for the last five years is we couldn’t do an SGL increase from nine to 12 because it would impact on wages. Guess what, we got no increase in super and no increase in real wages. And it just doesn’t hold, mate.

The Reserve Bank is saying don’t do it. Social services group are saying don’t do it.

Kevin Rudd
Well, mate, if you look carefully at what the governor of the RBA says, he says on the impact on wages, yes, it is, in his language, a wage deferral, on which he does not express an opinion. And as for the employment, the jobs impact, he says he does not have a view. I think we need to be very careful in reading the detail of what governor Lowe has had to say. Our argument is just, what’s decent for working families? And why are the pollies and their staffers getting 15.4% and yet working families, who Paul tried to look after with this massive reform 30 years ago, stuck at nine? I don’t think that’s fair. It’s a double standard.

Yep. I absolutely agree with you on that as well.

The post Sunrise: Protecting Australian Retirees appeared first on Kevin Rudd.

Worse Than Failure CodeSOD: Unknown Purpose

Networks are complex beasts, and as they grow, they get more complicated. Diagnosing and understanding problems on networks rapidly gets hard. “Fortunately” for the world, IniTech ships one of those tools.

Leonore works on IniTech’s protocol analyzer. As you might imagine, a protocol analyzer gathers a lot of data. In the case of IniTech’s product, the lowest level of data acquisition is frequently sampled voltage measurements over time. And it’s a lot of samples- depending on the protocol in question, it might need samples on the order of nanoseconds.

In Leonore’s case, those raw voltage samples are the “primary data”. Now, there are all sorts of cool things that you can do with that primary data, but those computations become expensive. If your goal is to be able to provide realtime updates to the UI, you can’t do most of those computations- you do those outside of the UI update loop.

But you can do some of them. Things like level crossings and timing information can be built quickly enough for the UI. These values are “secondary data”.

As data is collected, there are a number of other sections of the application which need to be notified: the UI and the various high-level analysis components. Architecturally, Leonore’s team took an event-driven approach to doing this. As data is collected, a DataUpdatedEvent fires. The DataUpdatedEvent fires twice: once for the “primary data” and once for the “secondary data”. These two events always happen in lockstep, and they happen so closely together that, for all other modules in the application, they can safely be considered simultaneous, and no components in the application ever only care about one- they always want to see both the primary and the secondary data.

So, to review: the data collection module outputs a pair of data updated events, one containing primary data, one containing secondary data, and can never do anything else, and these two events could basically be viewed as the same event by everything else in the application.

Which raises a question about this C++/COM enum, used to tag the different events:

  enum DataUpdatedEventType
  {
    [helpstring("Unknown data type.")] UnknownDataType = 0,
    [helpstring("Primary data.")] PrimaryData = 1,
    [helpstring("Secondary data.")] SecondaryData = 2,
  };
As stated, the distinction between primary/secondary events is unnecessary. In fact, sending two events makes all the consuming code more complicated, because in many cases, they can’t start working until they’ve received the secondary data, and thus have to cache the primary data until the next event arrives.
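That caching burden shows up in every consumer. A minimal sketch (hypothetical event names and payloads, not IniTech's actual code) of what each one is forced to do:

```python
# Every consumer must stash the primary payload and wait for the matching
# secondary payload before it can do any work.
class Consumer:
    def __init__(self):
        self._pending_primary = None

    def on_event(self, kind, payload):
        if kind == "primary":
            self._pending_primary = payload  # can't work yet; cache it
            return None
        # secondary: both halves are finally available
        primary, self._pending_primary = self._pending_primary, None
        return (primary, payload)

c = Consumer()
assert c.on_event("primary", [1, 2]) is None           # nothing to do yet
assert c.on_event("secondary", {"crossings": 3}) == ([1, 2], {"crossings": 3})
```

A single event carrying both payloads would delete this boilerplate from every consumer at once.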

But that’s minor. The UnknownDataType is never used. It can never be used. There is no case in which the data collection module will ever output that. There’s no reason why it would ever need to output that. None of the consumers are prepared to handle that- sending an UnknownDataType would almost certainly cause a crash in most configurations.

So why is it there? I’ll let Leonore explain:

The only answer I can give is this: When this was written, half of us didn’t know what we were doing most of the time, and most of us didn’t know what we were doing half of the time. So now there’s an enum in the code base that has never been used and, I would submit, CAN never be used. Or maybe I ought to say SHOULD never be used. I would just delete it, but I’ve never quite been able to bring myself to do so.


Cryptogram North Korea ATM Hack

The US Cybersecurity and Infrastructure Security Agency (CISA) published a long and technical alert describing a North Korea hacking scheme against ATMs in a bunch of countries worldwide:

This joint advisory is the result of analytic efforts among the Cybersecurity and Infrastructure Security Agency (CISA), the Department of the Treasury (Treasury), the Federal Bureau of Investigation (FBI) and U.S. Cyber Command (USCYBERCOM). Working with U.S. government partners, CISA, Treasury, FBI, and USCYBERCOM identified malware and indicators of compromise (IOCs) used by the North Korean government in an automated teller machine (ATM) cash-out scheme — referred to by the U.S. Government as “FASTCash 2.0: North Korea’s BeagleBoyz Robbing Banks.”

The level of detail is impressive, as seems to be common in CISA’s alerts and analysis reports.


Long Now Michael McElligott, A Staple of San Francisco Art and Culture, Dies at 50

It is with great sadness that we share the news that Michael McElligott, an event producer, thespian, writer, long-time Long Now staff member, and relentless promoter of the San Francisco avant-garde, has died. He was 50 years old.

Michael battled an aggressive form of brain cancer over the past year. He kept his legendary sense of humor throughout his challenging treatment. He died surrounded by family, friends, and his long-time partner, Danielle Engelman, Long Now’s Director of Programs.

Most of the Long Now community knew Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

An event producer since his college days, Michael was active in San Francisco’s art and theater scene as a performer and instigator of unusual events for more than 20 years. From 01999 to 02003 Michael hosted and co-produced The Tentacle Sessions, a monthly series spotlighting accomplished individuals in the San Francisco Bay Area scene—including writers, artists, and scientists. He has produced and performed in numerous alternative theater venues over the years including Popcorn Anti-Theater, The EXIT, 21 Grand, Stage Werx, and the late, great Dark Room Theater. He also produced events for Speechless LIVE and The Battery SF.

Michael was a long-time blogger (usually under his nom de kunst mikl-em) for publications including celebrated arts magazine Hi Fructose and award-winning internet culture site Laughing Squid. His writing can be found in print in the Hi Fructose Collected Edition books and Tales of The San Francisco Cacophony Society in which he recounted some of his adventures with that noted countercultural group.

Beginning in the late 01990s as an employee at HotWired, Michael worked in technology in various marketing, technical and product roles. He worked at both startups and tech giants; helped launch products for both consumers and enterprise; and worked with some of the best designers and programmers in the industry.

Originally from Richmond, Virginia, he co-founded a college radio station and an underground art space before moving to San Francisco. In the Bay Area, he was involved with myriad artistic projects and creative ventures, including helping start the online radio station Radio Valencia.

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds, run social media, and design and produce the Conversations at The Interval lecture series.

To honor Michael’s role in helping to fundraise for and build The Interval, we have finally completed the design of the donor wall, which will be dedicated to him. The wall should be finished in the next few weeks and will bear a plaque remembering Michael. You can watch a playlist of all of Michael’s Interval talks here. Below, you can find a compilation of moments from the more than a hundred talks that Michael hosted over the years.

“This moment is really both an end and a beginning. And like the name The Interval, which is a measure of time, and a place out of time, this is that interval.”

Michael McElligott

Kevin RuddPress Conference: Morrison’s Assault on Superannuation

31 AUGUST 2020

Kevin Rudd
The reason I’m speaking to you this afternoon, here in Brisbane, is that Paul Keating, former Prime Minister of Australia, and myself, have a deep passion for the future of superannuation, retirement income adequacy for working families for the future, the future of our national savings and the national economy. So former prime minister Paul Keating is speaking to the media now in Sydney, and I’m speaking to national media now in Brisbane. And I don’t think Paul and I have ever done a joint press conference before, albeit socially distanced between Brisbane and Sydney. But the reason we’re doing it today is because this is a major matter of public importance for the country.

Let me tell you why. Keating is the architect of our national superannuation policy. This was some 30 years ago. And as a result of his efforts, we now have the real possibility of decent retirement income policy for working families for the first time in this country’s history. And on top of that, we’ve accumulated something like $3 trillion worth of national savings. If you ask the question today, why is it that Australia still has a triple-A credit rating around the world, it’s because we have a bucketload of national savings. And so Paul Keating should be thanked for that, not just for the macroeconomy though, but also for delivering this enormous dividend to working families and giving them retirement dignity. Of course, what we did in government was announce that we would move the superannuation guarantee level from 9% to 12%. And we legislated to that effect. And prior to last election, Mr Morrison said that that was also Liberal and National Party policy as well. What Mr Keating and I are deeply concerned about is whether, in fact, this core undertaking to Australian working families is now in the process of being junked.

There are two arguments, which I think we need to bear in mind. The first is already we’ve had the Morrison Government rip out $40 billion-plus from people’s existing superannuation accounts. And the reason why they’ve done that is because they haven’t had an economic policy alternative other than to say to working families, if you’re doing it tough as a result of the COVID crisis, then you can go and raid your super. Well, that’s all very fine and dandy, but when those working people then go to retire in the decades ahead, they will have gutted their retirement income. And that’s because this government has allowed them to do that, and in fact forced them to do that, in the absence of an economic policy alternative. Therefore, we’ve had this slug taken to the existing national superannuation pile. But furthermore, the second big slug is this indication increasingly from both Mr Morrison and Mr Frydenberg that they’re now going to betray the Australian people, betray working families, by repudiating their last pre-election commitment by abandoning the increase from 9.5% where it is now to 12%. This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

The argument which the Australian Government seems to be advancing to justify this most recent assault on retirement income policy is that they say that if we go ahead with increasing the superannuation guarantee level from 9.5% to 10, to 10.5, to 11, to 11.5, to 12 in the years to come, then that will somehow depress natural wages growth in the Australian economy. Pigs might fly. That is the biggest bullshit argument I have ever heard against going ahead with decent provision for people’s superannuation savings for the future. There is no statistical foundation for it. There is no logical foundation for it. There is no data-based argument to sustain it. This is an increment of half-a-percent a year out for the next several years until we get to 12%. What is magic about 12%? It’s fundamental in terms of the calculations that have been done to provide people with decent superannuation adequacy, retirement income adequacy, when they stop working. That’s why we’re doing it. But the argument that somehow by not proceeding with the increase from 9.5 to 12%, we’re going to deny people a proper increase in wages in the period ahead is an absolute nonsense. There is no basis to that argument whatsoever.

And what does it mean for an average working family? If you’re currently on $70,000 a year and superannuation is frozen at 9.5%, and not increased to 12, by the time you retire, you’re going to be at least $70,000 worse off than would otherwise be the case. Why have we, in successive Labor governments, been so passionate about superannuation policy? Because we believe that every Australian, every working family should have the opportunity for some decency, dignity and independence in their retirement. And guess what: as we live longer, we’re going to spend longer in retirement and this is going to mean more and more for the generations to come. Of course, what’s the alternative if we don’t have superannuation adequacy, and if this raid on super continues under cover of COVID again? Well, it means that Mr Morrison and Mr Frydenberg in the future are going to be forcing more and more people onto the age pension and my challenge to Australians is simply this: do you really trust your future and your retirement to Mr Morrison’s generosity in years to come on the age pension? It’s a bit like saying that you trust Mr Morrison in terms of his custodianship of the aged care system in this country. Successive conservative governments have never supported effective increases to the age pension, and they’ve never properly supported the aged care sector either.

But the bottom line is, if you deny people dignity and independence through the superannuation system, and these measures which the current conservative government are undertaking and foreshadowing take us further in that direction, then there’s only one course left for people when they retire and that’s to go onto the age pension. One of the things I’m proudest of in our period in government was that we brought about the biggest single adjustment to the age pension in its history. It was huge, something like $65. And we made that as a one-off adjustment which was indexed to the future. But let me tell you, that would never happen under a conservative government. And therefore entrusting people’s future retirement to the future generosity of whichever conservative government might be around at the time is frankly folly. The whole logic of us having a superannuation system is that every working Australian can have their own independent dignity in their own retirement. That’s what it’s about.

So my appeal to Mr Morrison and Mr Frydenberg today is: Scotty, Joshy, think about it again. This is a really bad idea. My appeal to them as human beings as they look to the retirement of people who are near and dear to them in the future is: don’t take a further meataxe to the retirement income of working families for the future. It’s just un-Australian. Thank you.

Well, what do you think of the argument that delaying the superannuation guarantee increase would actually give people more money in their take home pay? I know you’ve used fairly strong language.

Kevin Rudd
Well, it is a fraudulent argument. There’s nothing in the data to suggest that that would happen. Let me give you one small example. In the last seven or eight years, we’ve had significant productivity growth in the Australian economy, in part because of some of the reforms we brought about in the economy during our own period in government. These things flow through. But if you look at productivity growth on the one hand, and look at the negligible growth in real wage levels over that same period of time, there is no historical argument to suggest that somehow by sacrificing superannuation increases that you’re going to generate an increase in average wages and average income. There’s simply nothing in the argument whatsoever.

So therefore, I can only conclude that this is a made-up argument by Mr Morrison using COVID cover, when in fact, what is their motivation? The Liberal Party have never liked the compulsory superannuation scheme, ever. They’ve opposed it all the way through. And I can only think that the reason for that is because Mr Keating came up with the idea in the first place. And on top of it, that because we now have such large industry superannuation funds around Australia, and $3 trillion therefore worth of muscle in the superannuation industry, that somehow represents a threat to their side of politics. But the argument that this somehow is going to effect wage increases for future average Australians is simply without logical foundation.

Sure, but you’re comparing historical data with not exactly like-for-like given we’re now in a recession and the immediate future will be deeply in recession. So, in terms of the argument that delaying [inaudible] will end up increasing take-home pay packets. Do you admit that, you know, by looking at historical data and looking at the current trajectory it’s not like for like?

Kevin Rudd
The bottom line is we’ve had relatively flat growth in the economy in the last several years, and I have seen so many times in recent decades conservative parties [inaudible] that somehow, by increasing superannuation, we’re going to depress average income levels. Remember, the conservatives have already delayed the implementation of this increase of 2.5% since they came to power in 2013-14, whatever excuses they managed to marshal at the time in so doing. But the bottom line is, as this data indicates, that hasn’t resulted in some significant increase in wages. In fact, the data suggests the reverse.

So what I’m suggesting to you is: for them to argue that a 0.5% a year increase in the superannuation guarantee level is going to send a torpedo amidships into the prospects of wage increases for working Australians makes no sense. What does make sense is the accumulation of those savings over a lifetime. If Paul Keating hadn’t done what he did back then, there’d be no $3 trillion worth of Australian national savings. Paul had the vision to do it. Good on him. We tried to complete that vision by going from 9 to 12. And this mob have tried to stop it. But the real people who miss out are your parents and, I’m sorry to tell you both, you’ll both get older, so you too, in terms of the adequacy of your retirement income when the day comes.

So if it’s so important then, why did you only increase it by 0.5% during your six years in government, sharing that period of course with Julia Gillard?

Kevin Rudd
Well, the bottom line is: we decided to increase it gradually, so that we would not present any one-off assault to the ability of employers and employees to enjoy reasonable wage increases. It was a small increase every year and, guess what: it continues to be a very small increase every year until we get to 12. The other thing I’d say, which I haven’t raised so far in our discussion today, is that for most of the last five years, I’ve been in the United States. I run an American think tank. When I’ve traveled around the world and people know of my background in Australian politics, I am always asked this question: how did you guys come up with such a brilliant national savings policy? Very few, if any, other countries in the world have this. But what we have done is a marvelous piece of long-term planning for generations of Australians. And with great macroeconomic benefit for the Australian economy in terms of this pool of national savings. We’re the envy of the world.

And yet what are we doing? Turning around and trashing it. So the reason we were gradual about it was to be responsible, not give people a sudden 3% hit, to tailor it over time, and we did so, just like Paul did with the original move from zero, first to 3, then 6, then 9. It happened gradually. But the cumulative effect of this over time for people retiring in 10, 20, 30, 40 years’ time is enormous. And that’s why these changes are so important to the future. As you know, I rarely call a press conference. Paul doesn’t call many press conferences either, but he and I are angry as hell that this mob have decided it seems to take a meataxe to this important part of our national economic future and our social wellbeing. That’s what it’s about.

So we know that [inaudible] super accounts have been wiped completely. What damage do you think that would do if it’s extended? So that people can continue to access their super?

Kevin Rudd
The damage it does for individual working Australians, as I said before, is it throws them back onto the age pension. And the age pension is simply the absolute basic backbone, the absolute basic provision, for people’s retirement for the future, if no other options exist. And as I said, in office, we undertook a fundamental reform to take it from below poverty level to above poverty level. But if you want for the future, for folks who are retiring to look at that as their option, well, if you continue to destroy this nation’s superannuation nest egg, that’s exactly where you’re going to end up. I can’t understand the logic of this. I thought conservatives were supposed to favour thrift. I thought conservatives were supposed to favour saving. Their accusation against those of us who come from the Labor side of politics apparently is that we love to spend; actually, we like to save, and we do it through a national savings policy. Good for working families and good for the national economy.

And I think it’s just wrong that people have as their only option there for the future to be thrown back on the age pension and on that point, apart from the wellbeing of individual families, think about the impact in the future on the national budget. Most countries say to me that they envy our national savings policy because it takes pressure off the national budget in the future. Why do you think so many of the ratings agencies are marking economies down around the world? Because they haven’t made adequate future provision for retirement. They haven’t made adequate provision for the future superannuation entitlements of government employees as well. So what we have with the Future Fund, which I concede readily was an initiative of the conservative government, but supported by us on a bipartisan basis, is dealing with that liability in terms of the retirement income needs of federal public servants. But in terms of the rest of the nation, that’s what our national superannuation policy was about. Two arms to it. So I can’t understand why a conservative government would want to take the meataxe to [inaudible].

Following on from your comments in 2018 when you said national Labor should look at distancing themselves from the CFMEU, do you think that’s something Queensland Labor should do given the events of last week?

Kevin Rudd
Who are you from by the way?

The Courier-Mail.

Kevin Rudd
Well, when the Murdoch media ask me a question, I’m always skeptical in terms of why it’s been asked. So I don’t know the context of this particular question. I simply stand by my historical comments.

In light of what happened last week, Michael Ravbar came out quite strongly against Queensland Labor, saying they have no economic plan and that the left faction is a bit out of touch with what everyone is thinking. So I just wanted to know whether that’s something you think should happen at the state level?

Kevin Rudd
What I know about the Murdoch media is that you have no interest in the future of the Labor government and no interest in the future of the Labor Party. What you’re interested in is a headline in tomorrow’s Courier-Mail which attacks the Palaszczuk government. I don’t intend to provide that for you. I kind of know what the agenda is here. I’ve been around for a long time and I know what instructions you’re going to get.

But let me say this about the Palaszczuk government: the Palaszczuk government has a strong economic record. The Palaszczuk government has handled the COVID crisis well. The Palaszczuk government is up against an LNP opposition led by Frecklington which has repeatedly called for Queensland’s borders to be opened. For those reasons, the state opposition has no credibility. And for those reasons, Annastacia Palaszczuk has bucketloads of credibility. As for the internal debates, I will leave them to you and all the journalists who will follow them from the Curious Mail.

Do you think Labor will do well at the election, Mr Rudd?

Kevin Rudd
That’s a matter for the Queensland people but Annastacia Palaszczuk, given all the challenges that state premiers are facing right now, is doing a first-class job in very difficult circumstances. I used to work for state government. I was Wayne Goss’s chief of staff. I used to be director-general of the Cabinet Office. And I do know something about how state governments operate. And I think she should be commended given the difficult choices which are available to her at this time for running a steady ship. [inaudible] Thanks very much.

The post Press Conference: Morrison’s Assault on Superannuation appeared first on Kevin Rudd.

Cory DoctorowHow to Destroy Surveillance Capitalism

For this week’s podcast, I read an excerpt from “How to Destroy Surveillance Capitalism,” a free short book (or long pamphlet, or “nonfiction novella”) I published with Medium’s Onezero last week. HTDSC is a long critical response to Shoshana Zuboff’s book and paper on the subject, which re-centers the critique on monopolism and the abusive behavior it abets, while expressing skepticism that surveillance capitalists are really as good at manipulating our behavior as they claim to be. It is a gorgeous online package, and there’s a print/ebook edition following.


Worse Than FailureThoroughly Tested

Zak S worked for a retailer which, as so often happens, got swallowed up by Initech's retail division. Zak's employer had a big, ugly ERP system. Initech had a bigger, uglier ERP and once the acquisition happened, they all needed to play nicely together.

These kinds of marriages are always problematic, but this particular one was made more challenging: Zak's company ran their entire ERP system from a cluster of Solaris servers- running on SPARC CPUs. Since upgrading that ERP system to run in any other environment was too expensive to seriously consider, the existing services were kept on life-support (with hardware replacements scrounged from the Vintage Computing section of eBay), while Zak's team was tasked with rebuilding everything- point-of-sale, reporting, finance, inventory and supply chain- atop Initech's ERP system.

The project was launched with the code name "Cold Stone", with Glenn as new CTO. At the project launch, Glenn stressed that, "This is a high impact project, with high visibility throughout the organization, so it's on us to ensure that the deliverables are completed on time, on budget, to provide maximum value to the business and to that end, I'll be starting a series of meetings to plan the meetings and checkpoints we'll use to ensure that we have an action-plan that streamlines our…"

"Cold Stone" launched with a well defined project scope, but about 15 seconds after launch, that scope exploded. New "business critical" systems were discovered under every rock, and every department in the company had a moment of, "Why weren't we consulted on this plan? Our vital business process isn't included in your plan!" Or, "You shouldn't have included us in this plan, because our team isn't interested in a software upgrade, we're going to continue using the existing system until the end of time, thank you very much."

The expanding scope required expanding resources. Anyone with any programming experience more complex than "wrote a cool formula in Excel" was press-ganged into the project. You know how to script sending marketing emails? Get on board. You wrote a shell script to purge old user accounts? Great, you're going to write a plugin to track inventory at retail stores.

The project burned through half a dozen business analysts and three project managers, and that's before the COVID-19 outbreak forced the company to quickly downsize, and squish together several project management roles into one person.

"Fortunately" for Initech, that one person was Edyth, who was one of those employees who has given their entire life over to the company, and refuses to stop working until the work is done. She was the sort of manager who would schedule meetings at 12:30PM, because she knew no one else would be scheduling meetings during the lunch hour. Or, schedule a half hour meeting at 4:30PM, when the workday ends at 5PM, then let it run long, "Since we're all here anyway, let's keep going." She especially liked to abuse video conferencing for this.

As the big ball of mud grew, the project slowly, slowly eased its way towards completion. And as that deadline approached, Edyth started holding meetings which focused on testing. Which is where Edyth started to raise some concerns.

"Lucy," Edyth said, "I noticed that you've marked the test for integration between the e-commerce site and the IniRewards™ site as not-applicable?"

"Well, yeah," Lucy said. "It says to test IniRewards™ signups on the e-commerce site, but our site doesn't do that. Signups entirely happen on the IniRewards™ site. There isn't really any integration."

"Oh," Edyth said. "So that sounds like it's a Zak thing?"

Zak stared at his screen for a moment. He was responsible for the IniRewards™ site, a port of their pre-acquisition customer rewards system to work with Initech's rewards system. He hadn't written it, but somewhere along the way, he became the owner of it, for reasons which remain murky. "Uh… it's a static link."

Edyth nodded, as if she understood what that meant. "So how long will that take to test? A day? Do you need any special setup for this test?"

"It's… a link. I'll click it."

"Great, yes," Edyth said. "Why don't you write up the test plan document for this user story, and then we'll schedule the test for… next week? Can you do it any earlier?"

"I can do it right now," Zak said.

"No, no," Edyth said. "We need to schedule these tests in advance so you're not interacting with anyone else using the test environment. I'll set up a followup meeting to review your test plan."

Test plans, of course, had a template which needed to be filled out. It was a long document, loaded with boilerplate, for the test to be, "Click the 'Rewards Signup' link in the e-commerce site footer. Expected behavior: the browser navigates to the IniRewards™ home page."

Zak added the document to the project document site, labelled "IniRewards Hyper-Link Test", and waited for the next meeting with Edyth to discuss schedule. This time, Glenn, the CTO was in the meeting.

"This 'Hyper-Link' test sounds very important," Glenn said. He enunciated "hyper-link" like it was a word in a foreign language. "Can we move that up in the schedule? I'd like that done tomorrow."

"I… can do it right now," Zak said. "It won't interact with other tests-"

"No, we shouldn't rush things." Glenn's eyes shifted towards another window as he reviewed the testing schedule. "It looks like there's nothing scheduled for testing between 10AM and 2PM tomorrow. Do you think four hours is enough time? Yes? Great, I'll block that off for you."

Suffice to say, the test passed, and was verified quite thoroughly.


Cryptogram Seny Kamara on "Crypto for the People"

Seny Kamara gave an excellent keynote talk this year at the (online) CRYPTO Conference. He talked about solving real-world crypto problems for marginalized communities around the world, instead of crypto problems for governments and corporations. Well worth watching and listening to.


Planet Linux AustraliaSimon Lyall: Audiobooks – August 2020

Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster by Allan J. McDonald

The author was a senior manager in the booster team who cooperated more fully with the investigation than NASA or his company’s bosses would have preferred. Mostly accounts of meetings, hearings & coverups with plenty of technical details. 3/5

The Ascent of Money: A Financial History of the World by Niall Ferguson

A quick tour though the rise of various financial concepts like insurance, bonds, stock markets, bubbles, etc. Nice quick intro and some well told stories. 4/5

The Other Side of the Coin: The Queen, the Dresser and the Wardrobe by Angela Kelly

An authorized book from the Queen’s dresser. Some interesting stories. Behind-the-scenes on typical days and regular events. Okay even without photos. 3/5

Second Wind: A Sunfish Sailor, an Island, and the Voyage That Brought a Family Together by Nathaniel Philbrick

A writer takes up competitive sailing after a gap of 15 years, training on winter ponds in prep for the Nationals. A nice read. 3/5

Spitfire Pilot by Flight-Lieutenant David M. Crook, DFC

An account of the Author’s experiences as a pilot during the Battle of Britain. Covering air-combat, missions, loss of friends/colleagues and off-duty life. 4/5

Wild City: A Brief History of New York City in 40 Animals by Thomas Hynes

A chapter on each species. Usually information about incidents they were involved in (see “Tigers”) or the growth, decline, comeback of their population & habitat. 3/5

Fire in the Sky: Cosmic Collisions, Killer Asteroids, and the Race to Defend Earth by Gordon L. Dillow

A history of the field and some of the characters. Covers space missions, searchers, discovery, movies and the like. Interesting throughout. 4/5

The Long Winter: Little House Series, Book 6 by Laura Ingalls Wilder

The family move into their store building in town for the winter. Blizzard after blizzard sweeps through the town over the next few months and starvation or freezing threatens. 3/5

The Time Traveller’s Almanac Part 1: Experiments edited by Anne and Jeff VanderMeer

First of 4 volumes of short stories. 14 stories, many by well known names (ie Silverberg, Le Guin). A good collection. 3/5

A Long Time Ago in a Cutting Room Far, Far Away: My Fifty Years Editing Hollywood Hits—Star Wars, Carrie, Ferris Bueller’s Day Off, Mission: Impossible, and More by Paul Hirsch

Details of the editing profession & technology. Lots of great stories. 4/5

My Scoring System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all


Planet Linux AustraliaSimon Lyall: Linkedin putting pressure on users to enable location tracking

I got this email from Linkedin this morning. It is telling me that they are going to change my location from “Auckland, New Zealand” to “Auckland, Auckland, New Zealand“.

Email from Linkedin on 30 August 2020

Since “Auckland, Auckland, New Zealand” sounds stupid to New Zealanders (Auckland is pretty much a big city with a single job market and is not a state or similar) I clicked on the link and opened the application to stick with what I currently have.

Except the problem is that the pulldown doesn’t offer me any other locations.

The only way to change the location is to click “use Current Location” and then allow Linkedin to access my device’s location.

According to the help page:

By default, the location on your profile will be suggested based on the postal code you provided in the past, either when you set up your profile or last edited your location. However, you can manually update the location on your LinkedIn profile to display a different location.

but it appears the manual method is disabled. I am guessing they have a fixed list of locations in my postcode and this can’t be changed.

So it appears that my options are to accept Linkedin’s crappy name for my location (Other NZers have posted problems with their location naming) or to allow Linkedin to spy on my location and it’ll probably still assign the same dumb name.

This basically appears to be a way for Linkedin to push users to enable location tracking, while at the same time forcing their own ideas of how New Zealand locations work on users.



Planet Linux AustraliaChris Smart: How to create bridges on bonds (with and without VLANs) using NetworkManager

Some production systems you face might make use of bonded network connections that you need to bridge in order to get VMs onto them. That bond may or may not have a native VLAN (in which case you bridge the bond), or it might have VLANs on top (in which case you want to bridge the VLANs), or perhaps you need to do both.

Let’s walk through an example where we have a bond that has a native VLAN, that also has the tagged VLAN 123 on top (and maybe a second VLAN 456), all of which need to be separately bridged. This means we will have the bond (bond0) with a matching bridge (br-bond0), plus a VLAN on the bond (bond0.123) with its matching bridge (br-vlan123). It should look something like this.

+------+   +---------+                           +---------------+
| eth0 |---|         |          +------------+   |  Network one  |
+------+   |         |----------|  br-bond0  |---| (native VLAN) |
           |  bond0  |          +------------+   +---------------+
+------+   |         |                                            
| eth1 |---|         |                                            
+------+   +---------+                           +---------------+
            | |   +---------+   +------------+   |  Network two  |
            | +---| vlan123 |---| br-vlan123 |---| (tagged VLAN) |
            |     +---------+   +------------+   +---------------+
            |     +---------+   +------------+   +---------------+
            +-----| vlan456 |---| br-vlan456 |---| Network three |
                  +---------+   +------------+   | (tagged VLAN) |

To make it more complicated, let’s say that the native VLAN on the bond needs a static IP and to operate at an MTU of 1500 while the other uses DHCP and needs MTU of 9000.

OK, so how do we do that?

Start by creating the bridge, then later we create the interface that attaches to that bridge. When creating VLANs, they are created on the bond, but then attached as a slave to the bridge.

Create the bridge for the bond

First, let’s create the bridge for our bond. We’ll export some variables to make scripting easier, including the name, the value for spanning tree protocol (STP) and the MTU. Note that in this example the bridge will have an MTU of 1500 (but the bond itself will be 9000, to support other VLANs at that MTU size).
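For example, something like this (the names match the nmcli commands below; the STP value is an assumption, adjust to suit):

```shell
# Illustrative values -- adjust to suit your environment
export BRIDGE="br-bond0"
export BRIDGE_STP="no"     # assumption: STP off for a simple VM bridge
export BRIDGE_MTU="1500"
```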


OK so let’s create the bridge for the native VLAN on the bond (which doesn’t exist yet).

nmcli con add ifname "${BRIDGE}" type bridge con-name "${BRIDGE}"
nmcli con modify "${BRIDGE}" bridge.stp "${BRIDGE_STP}"
nmcli con modify "${BRIDGE}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

By default this will look for an address with DHCP. If you don’t want that you can either set it manually:

nmcli con modify "${BRIDGE}" ipv4.method manual ipv4.addresses "<address/prefix>" ipv6.method ignore

Or disable IP addressing:

nmcli con modify "${BRIDGE}" ipv4.method disabled ipv6.method ignore

Finally, bring up the bridge. Yes, we don’t have anything attached to it yet, but that’s OK.

nmcli con up "${BRIDGE}"

You should be able to see it with nmcli and brctl tools (if available on your distro), although note that there is no device attached to this bridge yet.

nmcli con
brctl show

Next, we create the bond to attach to the bridge.

Create the bond and attach to the bridge

Let’s create the bond. In my example I’m using active-backup (mode 1) but your bond may use balance-rr (round robin, mode 0) or, depending on your switching, perhaps something like link aggregation control protocol (LACP) which is 802.3ad (mode 4).

Let’s say that your bond (we’re going to call bond0) has two interfaces, which are eth0 and eth1 respectively. Note that in this example, although the native interface on this bond wants an MTU of 1500, the VLANs which sit on top of the bond need a higher MTU of 9000. Thus, we set the bridge to 1500 in the previous step, but we need to set the bond and its interfaces to 9000. Let’s export those now to make scripting easier.
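For example, matching the interfaces and mode described above (values illustrative):

```shell
# Illustrative values from the example above
export BOND="bond0"
export BOND_MODE="active-backup"  # mode 1; use balance-rr or 802.3ad as needed
export BOND_MTU="9000"
export BOND_SLAVE0="eth0"
export BOND_SLAVE1="eth1"
```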


Now we can go ahead and create the bond, setting the options and the slave devices.

nmcli con add type bond ifname "${BOND}" con-name "${BOND}"
nmcli con modify "${BOND}" bond.options mode="${BOND_MODE}"
nmcli con modify "${BOND}" 802-3-ethernet.mtu "${BOND_MTU}"
nmcli con add type ethernet con-name "${BOND}-slave-${BOND_SLAVE0}" ifname "${BOND_SLAVE0}" master "${BOND}"
nmcli con add type ethernet con-name "${BOND}-slave-${BOND_SLAVE1}" ifname "${BOND_SLAVE1}" master "${BOND}"
nmcli con modify "${BOND}-slave-${BOND_SLAVE0}" 802-3-ethernet.mtu "${BOND_MTU}"
nmcli con modify "${BOND}-slave-${BOND_SLAVE1}" 802-3-ethernet.mtu "${BOND_MTU}"

OK at this point you have a bond specified, great! But now we need to attach it to the bridge, which is what will make the bridge actually work.

nmcli con modify "${BOND}" master "${BRIDGE}" slave-type bridge

Note that before we bring up the bond (or afterwards) we need to disable or delete any existing network connections for the individual interfaces. Check this with nmcli con and delete or disable those connections. Note that this may disconnect you, so make sure you have a console to the machine.

Now, we can bring the bond up which will also activate our interfaces.

nmcli con up "${BOND}"

We can check that the bond came up OK.

cat /proc/net/bonding/bond0

And this bond should also now be on the network, via the bridge which has an IP set.

Now if you look at the bridge you can see there is an interface (bond0) attached to it (your distro might not have brctl).

nmcli con
ls /sys/class/net/br-bond0/brif/
brctl show

Bridging a VLAN on a bond

Now that we have our bond, we can create the bridges for our tagged VLANs (remember that the bridge connected to the bond carries the native VLAN, so it didn’t need a VLAN interface).

Create the bridge for the VLAN on the bond

Create the new bridge, which for our example is going to use VLAN 123 which will use MTU of 9000.
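For example (again, values illustrative and the STP setting an assumption):

```shell
# Illustrative values for the tagged VLAN bridge
export BRIDGE="br-vlan123"
export BRIDGE_STP="no"   # assumption, as before
export BRIDGE_MTU="9000"
export VLAN="123"
```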


OK let’s go! (This is the same as the first bridge we created.)

nmcli con add ifname "${BRIDGE}" type bridge con-name "${BRIDGE}"
nmcli con modify "${BRIDGE}" bridge.stp "${BRIDGE_STP}"
nmcli con modify "${BRIDGE}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

Again, this will look for an address with DHCP, so if you don’t want that, then disable it or set an address manually (as per first example). Then you can bring the device up.

nmcli con up "${BRIDGE}"

Create the VLAN on the bond and attach to bridge

OK, now we have the bridge, we create the VLAN on top of bond0 and then attach it to the bridge we just created.

nmcli con add type vlan con-name "${BOND}.${VLAN}" ifname "${BOND}.${VLAN}" dev "${BOND}" id "${VLAN}"
nmcli con modify "${BOND}.${VLAN}" master "${BRIDGE}" slave-type bridge
nmcli con modify "${BOND}.${VLAN}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

If you look at bridges now, you should see the one you just created, attached to a VLAN device (note, your distro might not have brctl).

nmcli con
brctl show

And that’s about it! Now you can attach VMs to those bridges and have them on those networks. Repeat the process for any other VLANs you need on top of the bond.


Sam VargheseManaging a relationship is hard work

For many years, Australia has been trading with China, apparently in the belief that one can do business with a country for yonks without expecting the development of some sense of obligation. The attitude has been that China needs Australian resources and the relationship needs to go no further than the transfer of sand dug out of Australia and sent to China.

Those in Beijing, obviously, haven’t seen the exchange this way. There has been an expectation that there would be some obligation for the relationship to go further than just the impersonal exchange of goods for money. Australia, in true colonial fashion, has expected China to know its place and keep its distance.

This is similar to the attitude the Americans took when they pushed for China’s admission to the World Trade Organisation: all they wanted was a means of getting rid of their manufacturing so their industries could grow richer and an understanding that China would agree to go along with the American diktat to change as needed to keep the US on top of the trading world.

But then you cannot invite a man into your house for a dinner party and insist that he eat only bread. Once inside, he is free to choose what he wants to consume. It appears that the Americans do not understand this simple rule.

Both Australia and the US have forgotten they are dealing with the oldest civilisation in the world. A culture that plays the long waiting game. The Americans read the situation completely wrong for the last 70 years, assuming initially that the Kuomintang would come out on top and that the Communists would be vanquished. In the interim, the Americans obtained most of the money used for the early development of their country by selling opium to the Chinese.

China has not forgotten that humiliation.

There was never a thought given to the very likely event that China would one day want to assert itself and ask to be treated as an equal. Which is what is happening now. Both Australia and the US are feigning surprise and acting as though they are completely innocent in this exercise.

Fast forward to 2020 when the Americans and the Australians are both on the warpath, asserting that China is acting aggressively and trying to intimidate Australia while refusing to bow to American demands that it behave as it is expected to. There are complaints about Chinese demands for technology transfers, completely ignoring the fact that a developing country can ask for such transfers under WTO rules.

There are allegations of IP theft by the Americans, completely forgetting that they stole IP from Britain in the early days of the colonies; the name Samuel Slater should ring a bell in this context. Many educated Americans have themselves written about Slater.

Racism is one trait that defines the Australian approach to China. The Asian nation has been expected to confine itself to trade and never ask for more. And Australia, in condescending fashion, has lauded its approach, never understanding that it is seen as an American lapdog and no more. China has been waiting for the day when it can level scores.

It is difficult to comprehend why Australia genuflects before the US. There has been an attitude of veneration going back to the time of Harold Holt who is well known for his “All the way with LBJ” line, referring to the fact that Australian soldiers would be sent to Vietnam to serve as cannon fodder for the Americans and would, in short, do anything as long as the US decided so. Exactly what fight Australia had with Vietnam is not clear.

At that stage, there was no seminal action by the US that had put the fear of God into Australia; this came later, in 1975, when the CIA manipulated Australian politics and influenced the sacking of prime minister Gough Whitlam by the governor-general, Sir John Kerr. There is still resistance from Australian officialdom and its toadies to this version of events, but the evidence is incontrovertible; Australian journalist Guy Rundle has written two wonderful accounts of how the toppling took place.

Whitlam’s sins? Well, he had cracked down on the Australian Security Intelligence Organisation, an agency that spied on Australians and conveyed information to the CIA, when he discovered that it was keeping tabs on politicians. His attorney-general, Lionel Murphy, even ordered the Australian Federal Police to raid the ASIO, a major affront to the Americans who did not like their client being treated this way.

Whitlam also hinted that he would not renew a treaty for the Americans to continue using a base at Pine Gap as a surveillance centre. This centre was offered to the US, with the rent being one peppercorn for 99 years.

Of course, this was pure insolence coming from a country which the Americans — as they have with many other nations — treated as a vassal state and one only existing to do their bidding. So Whitlam was thrown out.

On China, too, Australia has served the role of American lapdog. In recent days, the Australian Prime Minister Scott Morrison has made statements attacking China soon after he has been in touch with the American leadership. In other words, the Americans are using Australia to provoke China. It’s shameful to be used in this manner, but then once a bootlicker, always a bootlicker.

Australia’s subservience to the US is so great that it even co-opted an American official, former US Secretary of Homeland Security Kirstjen Nielsen, to play a role in developing a cyber security strategy. There are a large number of better qualified people in the country who could do a much better job than Nielsen, who is a politician and not a technically qualified individual. But the slave mentality has always been there and will remain.

Cryptogram Friday Squid Blogging: How Squid Survive Freezing, Oxygen-Deprived Waters

Lots of interesting genetic details.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecuritySendgrid Under Siege from Hacked Accounts

Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid’s parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.

Image: Wikipedia

Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid’s systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.

But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid’s systems to sail through their spam-filtering systems.

To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.

Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid’s servers.

Rob McEwen is CEO of an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that’s been emanating from Sendgrid accounts lately.

“As far as the nasty criminal phishes and viruses, I think there’s not even a close second in terms of how bad it’s been with Sendgrid over the past few months,” he said.

Trying to filter out bad emails coming from a major email provider that so many legitimate companies rely upon to reach their customers can be a dicey business. If you filter the emails too aggressively you end up with an unacceptable number of “false positives,” i.e., benign or even desirable emails that get flagged as spam and sent to the junk folder or blocked altogether.

But McEwen said the incidence of malicious spam coming from Sendgrid has gotten so bad that he recently launched a new anti-spam block list specifically to filter out email from Sendgrid accounts that have been known to be blasting large volumes of junk or malicious email.

“Before I implemented this in my own filtering system a week ago, I was getting three to four phone calls or stern emails a week from angry customers wondering why these malicious emails were getting through to their inboxes,” McEwen said. “And I just am not seeing anything this egregious in terms of viruses and spams from the other email service providers.”

In an interview with KrebsOnSecurity, Sendgrid parent firm Twilio acknowledged the company had recently seen an increase in compromised customer accounts being abused for spam. While Sendgrid does allow customers to use multi-factor authentication (also known as two-factor authentication or 2FA), this protection is not mandatory.

But Twilio Chief Security Officer Steve Pugh said the company is working on changes that would require customers to use some form of 2FA in addition to usernames and passwords.

“Twilio believes that requiring 2FA for customer accounts is the right thing to do, and we’re working towards that end,” Pugh said. “2FA has proven to be a powerful tool in securing communications channels. This is part of the reason we acquired Authy and created a line of account security products and services. Twilio, like other platforms, is forming a plan on how to better secure our customers’ accounts through native technologies such as Authy and additional account level controls to mitigate known attack vectors.”

Requiring customers to use some form of 2FA would go a long way toward neutralizing the underground market for compromised Sendgrid accounts, which are sold by a variety of cybercriminals who specialize in gaining access to accounts by targeting users who re-use the same passwords across multiple websites.

One such individual, who goes by the handle “Kromatix” on several forums, is currently selling access to more than 400 compromised Sendgrid user accounts. The pricing attached to each account is based on volume of email it can send in a given month. Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400.

“I have a large supply of cracked Sendgrid accounts that can be used to generate an API key which you can then plug into your mailer of choice and send massive amounts of emails with ensured delivery,” Kromatix wrote in an Aug. 23 sales thread. “Sendgrid servers maintain a very good reputation with [email service providers] so your content becomes much more likely to get into the inbox so long as your setup is correct.”

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid’s 2FA plans are long overdue, noting that the company bought Authy back in 2015.

“Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we’re seeing,” Schwartzman said.

“I understand that it’s a task to invoke 2FA, and given the volume of customers Sendgrid has that’s something to consider because there’s going to be a lot of customer overhead involved,” he continued. “But it’s not like your bank, social media account, email and plenty of other places online don’t already insist on it.”

Schwartzman said if Twilio doesn’t act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

“There is a tipping point after which receiving firms start to lose patience and start to more aggressively filter this stuff,” he said. “If seeing a Sendgrid email according to machine learning becomes a sign of abuse, trust me the machines will make the decisions even if the people don’t.”

Worse Than FailureError'd: Don't Leave This Page

"My Kindle showed me this for the entire time I read this book. Luckily, page 31 is really exciting!" writes Hans H.


Tim wrote, "Thanks JustPark, I'd love to verify my account! How about that button?"


"I almost managed to uninstall Viber, or did I?" writes Simon T.


Marco wrote, "All I wanted to do was to post a one-time payment on a reputable cloud provider. Now I'm just confused."


Brinio H. wrote, "Somehow I expected my muscles to feel more sore after walking over 382 light-years on one day."


"Here we have PowerBI failing to dispel the perception that 'Business Intelligence' is an oxymoron," writes Craig.


[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.


Krebs on SecurityConfessions of an ID Theft Kingpin, Part II

Yesterday’s piece told the tale of Hieu Minh Ngo, a hacker the U.S. Secret Service described as someone who caused more material financial harm to more Americans than any other convicted cybercriminal. Ngo was recently deported back to his home country after serving more than seven years in prison for running multiple identity theft services. He now says he wants to use his experience to convince other cybercriminals to use their skills for good. Here’s a look at what happened after he got busted.

Hieu Minh Ngo, 29, in a recent photo.

Part I of this series ended with Ngo in handcuffs after disembarking a flight from his native Vietnam to Guam, where he believed he was going to meet another cybercriminal who’d promised to hook him up with the mother of all consumer data caches.

Ngo had been making more than $125,000 a month reselling ill-gotten access to some of the biggest data brokers on the planet. But the Secret Service discovered his various accounts at these data brokers and had them shut down one by one. Ngo became obsessed with restarting his business and maintaining his previous income. By this time, his ID theft services had earned roughly USD $3 million.

As this was going on, Secret Service agents used an intermediary to trick Ngo into thinking he’d trodden on the turf of another cybercriminal. From Part I:

The Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” the Secret Service’s Matt O’Neill recalled.

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

In an interview with KrebsOnSecurity, Ngo said he spent about two months in a Guam jail awaiting transfer to the United States. A month passed before he was allowed a 10-minute phone call to his family to explain what he’d gotten himself into.

“This was a very tough time,” Ngo said. “They were so sad and they were crying a lot.”

First stop on his prosecution tour was New Jersey, where he ultimately pleaded guilty to hacking into MicroBilt, the first of several data brokers whose consumer databases would power different iterations of his identity theft service over the years.

Next came New Hampshire, where another guilty plea forced him to testify in three different trials against identity thieves who had used his services for years. Among them was Lance Ealy, a serial ID thief from Dayton, Ohio who used Ngo’s service to purchase more than 350 “fullz” — a term used to describe a package of everything one would need to steal someone’s identity, including their Social Security number, mother’s maiden name, birth date, address, phone number, email address, bank account information and passwords.

Ealy used Ngo’s service primarily to conduct tax refund fraud with the U.S. Internal Revenue Service (IRS), claiming huge refunds in the names of ID theft victims who first learned of the fraud when they went to file their taxes and found someone else had beat them to it.

Ngo’s cooperation with the government ultimately led to 20 arrests, with a dozen of those defendants lured into the open by O’Neill and other Secret Service agents posing as Ngo.

The Secret Service had difficulty pinning down the exact amount of financial damage inflicted by Ngo’s various ID theft services over the years, primarily because those services only kept records of what customers searched for — not which records they purchased.

But based on the records they did have, the government estimated that Ngo’s service enabled approximately $1.1 billion in new account fraud at banks and retailers throughout the United States, and roughly $64 million in tax refund fraud with the states and the IRS.

“We interviewed a number of Ngo’s customers, who were pretty open about why they were using his services,” O’Neill said. “Many of them told us the same thing: Buying identities was so much better for them than stolen payment card data, because card data could be used once or twice before it was no good to them anymore. But identities could be used over and over again for years.”

O’Neill said he still marvels at the fact that Ngo’s name is practically unknown when compared to the world’s most infamous credit card thieves, some of whom were responsible for stealing hundreds of millions of cards from big box retail merchants.

“I don’t know of anyone who has come close to causing more material harm than Ngo did to the average American,” O’Neill said. “But most people have probably never heard of him.”

Ngo said he wasn’t surprised that his services were responsible for so much financial damage. But he was utterly unprepared to hear about the human toll. Throughout the court proceedings, Ngo sat through story after dreadful story of how his work had ruined the financial lives of people harmed by his services.

“When I was running the service, I didn’t really care because I didn’t know my customers and I didn’t know much about what they were doing with it,” Ngo said. “But during my case, the federal court received like 13,000 letters from victims who complained they lost their houses, jobs, or could no longer afford to buy a home or maintain their financial life because of me. That made me feel really bad, and I realized I’d been a terrible person.”

Even as he bounced from one federal detention facility to the next, Ngo always seemed to encounter ID theft victims wherever he went, including prison guards, healthcare workers and counselors.

“When I was in jail at Beaumont, Texas I talked to one of the correctional officers there who shared with me a story about her friend who lost her identity and then lost everything after that,” Ngo recalled. “Her whole life fell apart. I don’t know if that lady was one of my victims, but that story made me feel sick. I know now that what I was doing was just evil.”

Ngo’s former ID theft service usearching[.]info.

The Vietnamese hacker was released from prison a few months ago, and is now finishing up a mandatory three-week COVID-19 quarantine in a government-run facility near Ho Chi Minh city. In the final months of his detention, Ngo started reading everything he could get his hands on about computer and Internet security, and even authored a lengthy guide written for the average Internet user with advice about how to avoid getting hacked or becoming the victim of identity theft.

Ngo said while he would like to one day get a job working in some cybersecurity role, he’s in no hurry to do so. He’s already had at least one job offer in Vietnam, but he turned it down. He says he’s not ready to work yet, but is looking forward to spending time with his family — and specifically with his dad, who was recently diagnosed with Stage 4 cancer.

Longer term, Ngo says, he wants to mentor young people and help guide them on the right path, and away from cybercrime. He’s been brutally honest about his crimes and the destruction he’s caused. His LinkedIn profile states up front that he’s a convicted cybercriminal.

“I hope my work can help to change the minds of somebody, and if at least one person can change and turn to do good, I’m happy,” Ngo said. “It’s time for me to do something right, to give back to the world, because I know I can do something like this.”

Still, the recidivism rate among cybercriminals tends to be extremely high, and it would be easy for him to slip back into his old ways. After all, few people know as well as he does how best to exploit access to identity data.

O’Neill said he believes Ngo probably will keep his nose clean. But he added that Ngo’s service if it existed today probably would be even more successful and lucrative given the sheer number of scammers involved in using stolen identity data to defraud states and the federal government out of pandemic assistance loans and unemployment insurance benefits.

“It doesn’t appear he’s looking to get back into that life of crime,” O’Neill said. “But I firmly believe the people doing fraudulent small business loans and unemployment claims cut their teeth on his website. He was definitely the new coin of the realm.”

Ngo maintains he has zero interest in doing anything that might send him back to prison.

“Prison is a difficult place, but it gave me time to think about my life and my choices,” he said. “I am committing myself to do good and be better every day. I now know that money is just a part of life. It’s not everything and it can’t bring you true happiness. I hope those cybercriminals out there can learn from my experience. I hope they stop what they are doing and instead use their skills to help make the world better.”

Worse Than FailureCodeSOD: Win By Being Last

I’m going to open with just one line, just one line from Megan D, before we dig into the story:

public static boolean comparePasswords(char[] password1, char[] password2)

A long time ago, someone wrote a Java 1.4 application. It’s all about getting data out of data files, like CSVs and Excel and XML, and getting it into a database, where it can then be turned into plots and reports. Currently, it has two customers, but boy, there’s a lot of technology invested in it, so the pointy-hairs decided that it needed to be updated so they could sell it to new customers.

The developers played a game of “Not It!” and Megan lost. It wasn’t hard to see why no one wanted to touch this code. The UI section was implemented in code generated by an Eclipse plugin that no longer exists. There was UI code which wasn’t implemented that way, but there were no code paths that actually showed it. The project didn’t have one “do everything” class of utilities- it had many of them.

The real magic was in the database layer. All the data got converted into strings before going into the database, and data got pulled back out as lists of strings: one string per row, prepended with the number of columns in that row. The string would get split up and converted back into the actual datatypes.

Getting back to our sample line above, Megan adds:

No restrictions on any data in the database, or even input cleaning - little Bobby Tables would have a field day. There are so many issues that the fact that passwords are plaintext barely even registers as a problem.

A common convention used in the database layer is “loop and compare”. Want to check if a username exists in the database? SELECT username FROM users WHERE username = 'someuser', loop across the results, and if the username in the result set matches 'someuser', set a flag to true (set it to false otherwise). Return the flag. And if you're wondering why they need to look at each row instead of just seeing a non-zero number of matches, so am I.
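A hypothetical sketch of that pattern (invented names, with a plain list standing in for the JDBC result set) — note how the flag gets set to true on a match and back to false otherwise, so only the last row actually counts:

```java
import java.util.List;

public class LoopAndCompare {
    // "Loop and compare" as described above: for each row, set the flag
    // to true on a match and false otherwise -- which means only the
    // LAST row's comparison survives.
    public static boolean userExists(List<String> resultRows, String someuser) {
        boolean found = false;
        for (String row : resultRows) {
            found = row.equals(someuser);
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(userExists(List.of("other", "someuser"), "someuser")); // prints "true"
        System.out.println(userExists(List.of("someuser", "other"), "someuser")); // prints "false"
    }
}
```

Since the WHERE clause has already filtered to matching rows, checking whether the result set is non-empty (or a simple SELECT COUNT(*)) would do the same job without the loop.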

Usernames are not unique, but the username/group combination should be.

Similarly, if you’re logging in, it uses a “loop and compare”. Find all the rows for users with that username. Then, find all the groups for that username. Loop across all the groups and check if any of them match the user trying to log in. Then loop across all the stored (plaintext!) passwords and see if they match.

But that raises the question: how do you tell if two strings match? Just use an equality comparison? Or a .equals? Of course not.

We use “loop and compare” on sequences of rows, so we should also use “loop and compare” on sequences of characters. What could be wrong with that?

  /**
   * Compares two given char arrays for equality.
   * @param password1
   *          The first password to compare.
   * @param password2
   *          The second password to compare.
   * @return True if the passwords are equal, false otherwise.
   */
  public static boolean comparePasswords(char[] password1, char[] password2) {
    // assume false until prove otherwise
    boolean aSameFlag = false;
    if (password1 != null && password2 != null) {
      if (password1.length == password2.length) {
        for (int aIndex = 0; aIndex < password1.length; aIndex++) {
          aSameFlag = password1[aIndex] == password2[aIndex];
        }
      }
    }
    return aSameFlag;
  }

If the passwords are both non-null, if they’re both the same length, compare them one character at a time. For each character, set the aSameFlag to true if they match, false if they don’t.

Return the aSameFlag.

The end result of this is that only the last letter matters, so from the perspective of this code, there’s no difference between the word “ship” and a more accurate way to describe this code.
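For the record, a corrected version would bail out on the first mismatch, so every character matters. This is just a sketch, not the application's code — in real Java, java.util.Arrays.equals (or, for passwords, the constant-time java.security.MessageDigest.isEqual on bytes) already does the job:

```java
public class PasswordCompare {
    // Corrected comparison: return as soon as any check fails, instead
    // of letting later characters overwrite the result flag.
    public static boolean comparePasswords(char[] password1, char[] password2) {
        if (password1 == null || password2 == null) {
            return false;
        }
        if (password1.length != password2.length) {
            return false;
        }
        for (int aIndex = 0; aIndex < password1.length; aIndex++) {
            if (password1[aIndex] != password2[aIndex]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // The buggy original would call these equal, because only the
        // final 'p' characters are compared when its loop ends.
        System.out.println(comparePasswords("ship".toCharArray(), "shxp".toCharArray())); // prints "false"
        System.out.println(comparePasswords("ship".toCharArray(), "ship".toCharArray())); // prints "true"
    }
}
```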

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Planet Linux AustraliaDavid Rowe: Open IP over VHF/UHF 2

The goal of this project is to develop a “100 kbit/s IP link” for VHF/UHF using just a Pi and RTLSDR hardware, and open source modem software. Since the first post in this series there’s been quite a bit of progress:

  1. I have created a GitHub repo with build scripts, a project plan and command lines on how to run some basic tests.
  2. I’ve built some integrated applications, e.g. rtl_fsk.c – that combines rtl_sdr, a CSDR decimator, and the Codec 2 fsk_demod in one handy application.
  3. Developed a neat little GUI system so we can see what’s going on. I’ve found real time GUIs invaluable for physical layer experimentation. That’s one thing you don’t get with a chipset.

Spectral Purity

Bill and I have built and tested (on a spec-an) Tx filters for our Pis, that ensure the transmitted signal meets Australian Ham spurious requirements (which are aligned with international ITU requirements). I also checked the phase noise at 1MHz offset and measured -90dBm/Hz, similar to figures I have located online for my FT-817 at a 5W output level (e.g. DF9IC website quotes +37dBm-130dBc=-93dBm/Hz).

While there are no regulations for Ham phase noise in Australia, my Tx does appear to be compliant with ETSI EN 300 220-1 which deals with short range ISM band devices (maximum of -36dBm in a 100kHz bandwidth at 1MHz offset). Over an ideal 10km link, a -90dBm/Hz signal would be attenuated down to -180dBm/Hz, beneath the thermal noise floor of -174dBm/Hz.
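The -174dBm/Hz floor quoted above is just thermal noise, kT, at the standard reference temperature of 290K. A quick sanity check:

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T = 290                 # standard reference temperature, K
noise_w_per_hz = k * T  # thermal noise power density, W/Hz

# Convert W to mW before taking dBm
noise_dbm_per_hz = 10 * math.log10(noise_w_per_hz * 1000)
print(round(noise_dbm_per_hz, 1))  # -174.0
```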

I set up an experiment to pulse the Pi Tx signal off and on at 1 second intervals. Listening on an SSB Rx at +/-50kHz and +/-1MHz about 50m away, in a direct line of sight to the roof mounted Pi Tx antenna, I can hear no evidence of interference from phase noise.

I am satisfied my Pi based Tx won’t be interfering with anyone.

However, as Mark VK5QI suggested, I would not recommend amplifying the Tx level to the several watt level. If greater powers are required there are some other Tx options. For example, the FSK transmitters built into radio chipsets work quite well, and some have better phase noise specs.

Over the Air Tests

Bill and I spent a few afternoons attempting to send packets at various bit rates. We measured our path loss at 135dB over a 10km, non-line of sight suburban path. Using our FT817s, this path is a noisy copy on SSB using a few watts.

Before I start any tests I ask myself “what would we expect to see?”. Well, with 12dBm Tx power that’s +12 – 135 = -123dBm into the receiver. Re-arranging for Eb/No, and using Rb=1000 bits/s and a RTL-SDR noise figure of 7dB:

Rx    = Eb/No + 10log10(Rb) + NF - 174
Eb/No = Rx - 10log10(Rb) - NF + 174
      = -123 - 10*log10(1000) - 7 + 174
      = 14 dB
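The same arithmetic as a small script, using the values from the link budget above:

```python
import math

def ebno_db(rx_dbm, rb_bits_per_s, nf_db):
    """Rearranged link budget: Eb/No = Rx - 10log10(Rb) - NF + 174."""
    return rx_dbm - 10 * math.log10(rb_bits_per_s) - nf_db + 174

# 12dBm Tx power minus 135dB path loss = -123dBm at the receiver
print(ebno_db(-123, 1000, 7))  # 14.0
```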

Here is a plot of Eb/No versus BER generated by the mfsk.m script:

Looking up our 2FSK Bit Error Rate (BER) for Eb/No of 14dB, we should get better than 1E-4 (it’s off the graph – but actually about 2E-6). So at 1000 bit/s, we expect practically no errors.

I was disappointed with the real world OTA results: I received packets at 1000 bit/s with 8% BER (equivalent to an Eb/No of 5.5dB) which suggests we are losing 8.5dB somewhere. Our receiver seems a bit deaf.
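Assuming the textbook error rate for noncoherent 2FSK, BER = 0.5·exp(-(Eb/No)/2), both figures check out: an Eb/No of 14dB predicts a BER of about 2E-6, and inverting a measured 8% BER lands near the quoted 5.5dB:

```python
import math

def ber_2fsk(ebno_db):
    """Noncoherent 2FSK bit error rate for a given Eb/No in dB."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.exp(-ebno / 2)

def ebno_from_ber(ber):
    """Invert the same expression: estimate Eb/No (dB) from a measured BER."""
    return 10 * math.log10(-2 * math.log(2 * ber))

print(f"{ber_2fsk(14):.1e}")          # 1.8e-06
print(round(ebno_from_ber(0.08), 1))  # 5.6
```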

Here’s a screenshot of the “dashboard” with a 4FSK signal sent by Bill (we experimented with 4FSK as well as 2FSK):

You can see a bunch of other signals (perhaps local EMI) towering over our little 4FSK signal. The red crosses show the frequency estimates of the demodulator – they should lock onto each of the FSK tones (4 in this case).

One risk with low cost SDRs (in a city) is strong signal interference causing the receiver to block and fall over entirely. When I connected my RTL-SDR to my antenna, I had to back the gain off about 3dB, not too bad. However Bill needed to back his gain off 20dB. So that’s one real-world factor we need to deal with.

Still we did it – sending packets 10km across town, through 135dB worth of trees and buildings with just a 12mW Pi and a RTLSDR! That’s a start, and no one said this would be easy! Time to dig in and find some missing dBs.

Over the Bench Testing

I decided to drill down into the MDS performance and system noise figure. After half a day’s wrangling with command line utilities I had a HackRF rigged up to play FSK signals. The HackRF has the benefit of a built in attenuator, so the output level can be quite low (-68dBm). This makes it much easier to reliably set levels in combination with an external switched attenuator, compared to using the relatively high (+14dBm) output of the Pi, which gets into everything and requires lots of shielding. Low levels out of your Tx greatly simplify “over the bench” testing.

After a few days of RF and DSP fiddling I tracked down a problem with the sample rate. I was running the RTL-SDR at a sample rate of 240 kHz, using its internal hardware to handle the sample rate conversion. Sampling at 1.8MHz and reducing the sample rate externally improved the performance by 3dB. I’m guessing this is due to the internal fixed point precision of the RTL-SDR, which may have significant quantisation noise with weak signals.

OK, so now I was getting system noise figures between 7.5 and 9dB when tested “over the bench”. Close, but a few dB above what I would expect. I eventually traced that to a very subtle measurement bug. In the photo you can see a splitter at the output of the switched attenuator. One side feeds to the RTL-SDR, the other to my spec-an. To calibrate the system I play a single freq carrier from the HackRF, as this makes measurement easier on the spec-an using power averaging.

Turns out, the RTL-SDR input port is only terminated when RTL-SDR software is running, i.e. the dongle is active. I often had the software off when measuring levels, so the levels were high by about 1dB, as one port of the splitter was un-terminated!

The following table maps the system noise figure for various gain settings:

 g   Rx (dBm)   BER     Eb/No   Est NF
 0   -128.0     0.01     9.0     7.0
49   -128.0     0.015    8.5     7.5
46   -128       0.06     6      10.0
40   -126.7     0.018    8      12
35   -123       0.068    6      15
30   -119       0.048    7      18
When repeating the low level measurements with -g 0 I obtained 8, 6.5, 7.0, 7.7, so there is some spread. The automatic gain (-g 0) seems about 0.5dB ahead of maximum manual gain (-g 49).

These results are consistent with those reported in this fine report, which measured the NF of the SDRs directly. I have also previously measured RTLSDR noise figures at around 7dB, although on an earlier model.

This helps us understand the effect of receiver gain. Lowering it is bad, especially lowering it a lot. However we may need to run at a lower gain setting, especially if the receiver is being overloaded by strong signals. At least this lets us engineer the link, and understand the effect of gain on system performance.

For fun I hooked up a LNA and using 4FSK I managed 2% BER at -136dBm, which works out to a 2.5dB system noise figure. This is a little higher than I would expect, however I could see some evidence of EMI on the dashboard. Such low levels are difficult on the bench without a Faraday cage of some sort.

Engineering the Physical Layer

With this project the physical layer (modem and radio) is wide open for us to explore. With chipset based approaches you get a link or you don’t, and perhaps a little diagnostic information like a received signal strength. Then again they “just work” most of the time so that’s probably OK! I like looking inside the black boxes and pushing up against the laws of physics, rather than the laws of business.

It gets interesting when you can measure the path loss. You have a variety of options to improve your bit error rate or increase your bit rate:

  1. Add transmit power
  2. Reposition your antenna above the terrain, or out of multipath nulls
  3. Lower loss coax run to your antenna, or mount your Pi and antenna together on top of the mast
  4. Use an antenna with a higher gain like Bill’s Yagi
  5. Add a low noise amplifier to reduce your noise figure
  6. Adjust your symbol/bit rate to spread that energy over fewer (or more) bits/s
  7. Use a more efficient modulation scheme, e.g. 4FSK performs 3dB better than 2FSK at the same bit rate
  8. Move to the country where the ambient RF and EMI is hopefully lower

Related Projects

There are some other cool projects in the “200kHz channel/several 100 kbit/s/IP” UHF Ham data space:

  1. New packet radio NPR70 project – very cool GFSK/TDMA system using a chipset modem approach. I’ve been having a nice discussion with the author Guillaume F4HDK around modem sensitivity and FEC. This project is very well documented.
  2. HNAP – a sophisticated OFDM/QAM/TDMA system that is being developed on Pluto SDR hardware as part of a masters thesis.

I’m a noob when it comes to protocols, so have a lot to learn from these projects. However I am pretty focussed on modem/FEC performance which is often sub-optimal (or delegated to a chipset) in the Ham Radio space. There are many dB to be gained from good modem design, which I prefer over adding a PA.

Next Steps

OK, so we have worked through a few bugs and can now get results consistent with the estimated NF of the receiver. It doesn’t explain the entire 8.5dB loss we experienced over the air, but it’s a step in the right direction. The bugs tend to reveal themselves one at a time…

One possible reason for reduced sensitivity is EMI or ambient RF noise. There are some signs of this in the dashboard plot above. This is more subtle than strong signal overload, but could be increasing the effective noise figure on our link. All our calculations above assume no additional noise being fed into the antenna.

I feel it’s time for another go at Over The Air (OTA) tests. My goal is to get a solid 1000 bit/s link in both directions over our path, and understand any impact on performance such as strong signals or a raised noise floor. We can then proceed to the next steps in the project plan.

Reading Further

GitHub repo for this project with build scripts, a project plan and command lines on how to run some basic tests.
Open IP over VHF/UHF – first post in this series
Bill and I are documenting our OTA tests in this Pull Request
4FSK on 25 Microwatts low bit rate packets with sophisticated FEC at very low power levels. We’ll use the same FEC on this project.
Measuring SDR Noise Figure
Measuring SDR Noise Figure in Real Time
Evaluation of SDR Boards V1.0 – A fantastic report on the performance of several SDRs
NPR70 – New Packet Radio Web Site
HNAP Web site

Krebs on SecurityConfessions of an ID Theft Kingpin, Part I

At the height of his cybercriminal career, the hacker known as “Hieupc” was earning $125,000 a month running a bustling identity theft service that siphoned consumer dossiers from some of the world’s top data brokers. That is, until his greed and ambition played straight into an elaborate snare set by the U.S. Secret Service. Now, after more than seven years in prison, Hieupc is back in his home country and hoping to convince other would-be cybercrooks to use their computer skills for good.

Hieu Minh Ngo, in his teens.

For several years beginning around 2010, a lone teenager in Vietnam named Hieu Minh Ngo ran one of the Internet’s most profitable and popular services for selling “fullz,” stolen identity records that included a consumer’s name, date of birth, Social Security number and email and physical address.

Ngo got his treasure trove of consumer data by hacking and social engineering his way into a string of major data brokers. By the time the Secret Service caught up with him in 2013, he’d made over $3 million selling fullz data to identity thieves and organized crime rings operating throughout the United States.

Matt O’Neill is the Secret Service agent who in February 2013 successfully executed a scheme to lure Ngo out of Vietnam and into Guam, where the young hacker was arrested and sent to the mainland U.S. to face prosecution. O’Neill now heads the agency’s Global Investigative Operations Center, which supports investigations into transnational organized criminal groups.

O’Neill said he opened the investigation into Ngo’s identity theft business after reading about it in a 2011 KrebsOnSecurity story, “How Much is Your Identity Worth?” According to O’Neill, what’s remarkable about Ngo is that to this day his name is virtually unknown among the pantheon of infamous convicted cybercriminals, the majority of whom were busted for trafficking in huge quantities of stolen credit cards.

Ngo’s businesses enabled an entire generation of cybercriminals to commit an estimated $1 billion worth of new account fraud, and to sully the credit histories of countless Americans in the process.

“I don’t know of any other cybercriminal who has caused more material financial harm to more Americans than Ngo,” O’Neill told KrebsOnSecurity. “He was selling the personal information on more than 200 million Americans and allowing anyone to buy it for pennies apiece.”

Freshly released from the U.S. prison system and deported back to Vietnam, Ngo is currently finishing up a mandatory three-week COVID-19 quarantine at a government-run facility. He contacted KrebsOnSecurity from inside this facility with the stated aim of telling his little-known story, and to warn others away from following in his footsteps.


Ten years ago, then 19-year-old hacker Ngo was a regular on the Vietnamese-language computer hacking forums. Ngo says he came from a middle-class family that owned an electronics store, and that his parents bought him a computer when he was around 12 years old. From then on out, he was hooked.

In his late teens, he traveled to New Zealand to study English at a university there. By that time, he was already an administrator of several dark web hacker forums, and between his studies he discovered a vulnerability in the school’s network that exposed payment card data.

“I did contact the IT technician there to fix it, but nobody cared so I hacked the whole system,” Ngo recalled. “Then I used the same vulnerability to hack other websites. I was stealing lots of credit cards.”

Ngo said he decided to use the card data to buy concert and event tickets from Ticketmaster, and then sell the tickets at a New Zealand auction site called TradeMe. The university later learned of the intrusion and Ngo’s role in it, and the Auckland police got involved. Ngo’s travel visa was not renewed after his first semester ended, and in retribution he attacked the university’s site, shutting it down for at least two days.

Ngo said he started taking classes again back in Vietnam, but soon found he was spending most of his time on cybercrime forums.

“I went from hacking for fun to hacking for profits when I saw how easy it was to make money stealing customer databases,” Ngo said. “I was hanging out with some of my friends from the underground forums and we talked about planning a new criminal activity.”

“My friends said doing credit cards and bank information is very dangerous, so I started thinking about selling identities,” Ngo continued. “At first I thought well, it’s just information, maybe it’s not that bad because it’s not related to bank accounts directly. But I was wrong, and the money I started making very fast just blinded me to a lot of things.”


His first big target was a consumer credit reporting company in New Jersey called MicroBilt.

“I was hacking into their platform and stealing their customer database so I could use their customer logins to access their [consumer] databases,” Ngo said. “I was in their systems for almost a year without them knowing.”

Very soon after gaining access to MicroBilt, Ngo says, he stood up Superget[.]info, a website that advertised the sale of individual consumer records. Ngo said initially his service was quite manual, requiring customers to request specific states or consumers they wanted information on, and he would conduct the lookups by hand.

Ngo’s former identity theft service, superget[.]info

“I was trying to get more records at once, but the speed of our Internet in Vietnam then was very slow,” Ngo recalled. “I couldn’t download it because the database was so huge. So I just manually search for whoever need identities.”

But Ngo would soon work out how to use more powerful servers in the United States to automate the collection of larger amounts of consumer data from MicroBilt’s systems, and from other data brokers. As I wrote of Ngo’s service back in November 2011:

“Superget lets users search for specific individuals by name, city, and state. Each “credit” costs USD$1, and a successful hit on a Social Security number or date of birth costs 3 credits each. The more credits you buy, the cheaper the searches are per credit: Six credits cost $4.99; 35 credits cost $20.99, and $100.99 buys you 230 credits. Customers with special needs can avail themselves of the “reseller plan,” which promises 1,500 credits for $500.99, and 3,500 credits for $1000.99.

“Our Databases are updated EVERY DAY,” the site’s owner enthuses. “About 99% nearly 100% US people could be found, more than any sites on the internet now.”

Ngo’s intrusion into MicroBilt eventually was detected, and the company kicked him out of their systems. But he says he got back in using another vulnerability.

“I was hacking them and it was back and forth for months,” Ngo said. “They would discover [my accounts] and fix it, and I would discover a new vulnerability and hack them again.”


This game of cat and mouse continued until Ngo found a much more reliable and stable source of consumer data: A U.S. based company called Court Ventures, which aggregated public records from court documents. Ngo wasn’t interested in the data collected by Court Ventures, but rather in its data sharing agreement with a third-party data broker called U.S. Info Search, which had access to far more sensitive consumer records.

Using forged documents and more than a few lies, Ngo was able to convince Court Ventures that he was a private investigator based in the United States.

“At first [when] I sign up they asked for some documents to verify,” Ngo said. “So I just used some skill about social engineering and went through the security check.”

Then, in March 2012, something even more remarkable happened: Court Ventures was purchased by Experian, one of the big three major consumer credit bureaus in the United States. And for nine months after the acquisition, Ngo was able to maintain his access.

“After that, the database was under control by Experian,” he said. “I was paying Experian good money, thousands of dollars a month.”

Whether anyone at Experian ever performed due diligence on the accounts grandfathered in from Court Ventures is unclear. But it wouldn’t have taken a rocket surgeon to figure out that this particular customer was up to something fishy.

For one thing, Ngo paid the monthly invoices for his customers’ data requests using wire transfers from a multitude of banks around the world, but mostly from new accounts at financial institutions in China, Malaysia and Singapore.

O’Neill said Ngo’s identity theft website generated tens of thousands of queries each month. For example, the first invoice Court Ventures sent Ngo in December 2010 was for 60,000 queries. By the time Experian acquired the company, Ngo’s service had attracted more than 1,400 regular customers, and was averaging 160,000 monthly queries.

More importantly, Ngo’s profit margins were enormous.

“His service was quite the racket,” he said. “Court Ventures charged him 14 cents per lookup, but he charged his customers about $1 for each query.”

By this time, O’Neill and his fellow Secret Service agents had served dozens of subpoenas tied to Ngo’s identity theft service, including one that granted them access to the email account he used to communicate with customers and administer his site. The agents discovered several emails from Ngo instructing an accomplice to pay Experian using wire transfers from different Asian banks.


Working with the Secret Service, Experian quickly zeroed in on Ngo’s accounts and shut them down. Aware of an opportunity here, the Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” O’Neill recalled.

The U.K. cybercriminal, acting at the behest of the Secret Service and U.K. authorities, told Ngo that if he wanted to maintain his access, he could agree to meet up in person. But Ngo didn’t immediately bite on the offer.

Instead, he weaseled his way into another huge data store. In much the same way he’d gained access to Court Ventures, Ngo got an account at a company called TLO, another data broker that sells access to extremely detailed and sensitive information on most Americans.

TLO’s service is accessible to law enforcement agencies and to a limited number of vetted professionals who can demonstrate they have a lawful reason to access such information. In 2014, TLO was acquired by Trans Union, another of the big three U.S. consumer credit reporting bureaus.

And for a short time, Ngo used his access to TLO to power a new iteration of his business — an identity theft service rebranded as usearching[.]info. This site also pulled consumer data from a payday loan company that Ngo hacked into, as documented in my Sept. 2012 story, ID Theft Service Tied to Payday Loan Sites. Ngo said the hacked payday loans site gave him instant access to roughly 1,000 new fullz records each day.

Ngo’s former ID theft service usearching[.]info.


By this time, Ngo was a multi-millionaire: His various sites and reselling agreements with three Russian-language cybercriminal stores online had earned him more than USD $3 million. He told his parents his money came from helping companies develop websites, and even used some of his ill-gotten gains to pay off the family’s debts (its electronics business had gone belly up, and a family member had borrowed but never paid back a significant sum of money).

But mostly, Ngo said, he spent his money on frivolous things, although he says he’s never touched drugs or alcohol.

“I spent it on vacations and cars and a lot of other stupid stuff,” he said.

When TLO locked Ngo out of his account there, the Secret Service used it as another opportunity for their cybercriminal mouthpiece in the U.K. to turn the screws on Ngo yet again.

“He told Ngo he’d locked him out again, and that he could do this all day long,” O’Neill said. “And if he truly wanted lasting access to all of these places he used to have access to, he would agree to meet and form a more secure partnership.”

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. Ngo says he understood at the time that Guam is an unincorporated territory of the United States, but that he discounted the chances that this was all some kind of elaborate law enforcement sting operation.

“I was so desperate to have a stable database, and I got blinded by greed and started acting crazy without thinking,” Ngo said. “Lots of people told me ‘Don’t go!,’ but I told them I have to try and see what’s going on.”

But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

This is Part I of a multi-part series. Part II in this series is available at this link.

Worse Than FailureCodeSOD: Where to Insert This

If you run a business of any size, you need some sort of resource-management/planning software. Really small businesses use Excel. Medium businesses use Excel. Enterprises use Excel. But in addition to that, the large businesses also pay through the nose for a gigantic ERP system, like Oracle or SAP, that they can wire up to Excel.

Small and medium businesses can’t afford an ERP, but they might want to purchase a management package in the niche realm of “SMB software”- small and medium business software. Much like their larger cousins, these SMB tools have… a different idea of code quality.

Cassandra’s company had deployed such a product, and with it came a slew of tickets. The performance was bad. There were bugs everywhere. While the company provided support, Cassandra’s IT team was expected to also do some diagnosing.

While digging around in one nasty performance problem, Cassandra found that one button in the application would generate and execute this block of SQL code using a SQLCommand object in C#.

DECLARE @tmp TABLE (Id uniqueidentifier)

--{ Dynamic single insert statements, may be in the hundreds. }

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)
    INSERT INTO SomeTable (PK, SomeDate) SELECT Id, getdate() as SomeDate FROM @tmp 
    UPDATE st
        SET SomeDate = getdate()
        FROM @tmp t
        LEFT JOIN SomeTable AS st ON t.Id = st.PK AND SomeDate = NULL

At its core, the purpose of this is to take a temp-table full of rows and perform an “upsert” for all of them: insert if a record with that key doesn’t exist, update if a record with that key does. Now, this code is clearly SQL Server code, so a MERGE handles that.

But okay, maybe they’re trying to be as database agnostic as possible, and don’t want to use something that, while widely supported, has some dialect differences across databases. Fine, but there’s another problem here.

Whoever built this understood that in SQL Server land, cursors are frowned upon, so they didn’t want to iterate across every row. But here’s their problem: some of the records may exist, some of them may not, so they need to check that.

As you saw, this was their approach:

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)

This is wrong. This will be true only if none of the rows in the dynamically generated INSERT statements exist in the base table. If some of the rows exist and some don’t, you aren’t going to get the results you were expecting, because this code only goes down one branch: it either inserts or updates.
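The failure mode is easy to reproduce. Here’s a minimal sketch in Python, with an in-memory SQLite database standing in for SQL Server (table and key names invented, and the dynamic inserts reduced to a two-key batch): when the batch mixes an existing key with a new one, the new key is silently never inserted.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE SomeTable (PK TEXT PRIMARY KEY, SomeDate TEXT)")
cur.execute("INSERT INTO SomeTable VALUES ('a', NULL)")  # 'a' exists already
batch = ["a", "b"]  # 'b' is new and should be inserted

# The vendor's logic: ONE existence check decides the branch for the WHOLE batch.
overlap = cur.execute(
    "SELECT 1 FROM SomeTable WHERE PK IN (?, ?) LIMIT 1", batch).fetchone()
if overlap is None:
    # insert branch: only taken if NONE of the batch keys already exist
    cur.executemany("INSERT INTO SomeTable VALUES (?, date('now'))",
                    [(k,) for k in batch])
else:
    # update branch: 'b' has no row to update, so it is silently dropped
    cur.executemany("UPDATE SomeTable SET SomeDate = date('now') WHERE PK = ?",
                    [(k,) for k in batch])

print(sorted(r[0] for r in cur.execute("SELECT PK FROM SomeTable")))  # ['a']
```

Run it and `'b'` never makes it into the table, exactly the partial-overlap case the single `IF NOT EXISTS` can’t handle.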

There are other things wrong with this code. For example, SomeDate = NULL is going to have different behavior based on whether the ANSI_NULLS database flag is OFF (in which case it works), or ON (in which case it doesn’t). There’s a whole lot of caveats about whether you set it at the database level, on the connection string, during your session, but in Cassandra’s example, ANSI_NULLS was ON at the time this ran, so that also didn’t work.

There are other weird choices and performance problems with this code, but the important thing is that this code doesn’t work. This is in a shipped product, installed by over 4,000 businesses (the vendor is quite happy to cite that number in their marketing materials). And it ships with code that can’t work.



Kevin RuddAFR: How Mitch Hooke Axed the Mining Tax and Climate Action

Published by The Australian Financial Review on 25 August 2020

The Australian political arena is full of reinventions.

Tony Abbott has gone from pushing emissions cuts under the Paris climate agreement to demanding Australia withdraw from the treaty altogether. And Scott Morrison, who accused Labor of presiding over “crippling” debt, now binges on wasteful debt-fuelled spending that makes our government’s stimulus look like a rounding error.

However, neither of these metamorphoses comes close to the transformation of Mitch Hooke, the former Minerals Council chief and conservative political operative, who now pretends he is a lifelong evangelist of carbon pricing.

Writing in The Australian Financial Review, (Ken Henry got it wrong on climate wars, mining tax on August 11) Hooke said he supported emissions trading throughout the mid-2000s until my government came to power in 2007.

I then supposedly “trashed that consensus” by using the proceeds of a carbon price to compensate motorists, low-income households and trade-exposed industries.

How dreadful to help those most impacted by a carbon price! The very point of an emissions trading scheme is that it can change consumers’ behaviour without making people on low to middle incomes worse off. That’s why you increase the price of emissions-intensive goods and services (relative to less polluting alternatives) then give that money back to people through the tax or benefits system so they’re no worse off. But they are then able to choose a more climate-friendly product.

The alternative is the government just pockets the cash – thereby defeating the entire purpose of a market-based scheme. Obviously this is pure rocket science for Mitch.

Hooke also seems to have forgotten that such compensation was not only appropriate, but it was exactly what Malcolm Turnbull was demanding in exchange for Liberal support for our proposal in the Senate. Without it, any emissions trading scheme would be a non-starter.

When that deal was tested in the Liberal party room, it was defeated by a single vote. Even so, enough Liberal senators crossed the floor to give the Green political party the balance of power.

Showing their true colours, Bob Brown’s senators sided with Tony Abbott and Barnaby Joyce to kill the legislation. The Green party has, to this day, been unable to adequately explain its decision to voters. If they hadn’t, Australia would now be 10 years down the path of steady decarbonisation.

For Hooke, the reality is that he never wanted an emissions trading scheme if he could avoid one. But rather than state this outright, he just insists on impossible preconditions. As for Hooke’s most beloved Howard government, John Winston would in all probability have gone even further than Labor in compensating people affected by his own proposed emissions trading scheme, given Howard’s legendary ability to bake middle-class welfare into any national budget. Just ask Peter Costello.

Hooke has, like Abbott, been one of the most destructive voices in Australian national climate change action. He also expresses zero remorse for his deceptive campaign of misinformation, in partnership with those wonderful corporate citizens at Rio, targeting my government’s efforts to introduce a profits-based tax for minerals, mirroring the petroleum resource rent tax implemented by the Hawke government in the 1980s.

Our Resource Super Profits Tax would have funded new infrastructure to address looming capacity constraints affecting the sector as well as an across-the-board company tax cut to 28 per cent. Most importantly it sought to fairly spread the proceeds of mining profits when they vastly exceeded the industry norms – such as during commodity price booms – with the broader Australian public. Lest we forget, they actually own those resources. Rio just rents them.

In response, Hooke and his mates at Rio and BHP accumulated a $90 million war chest and $22.2 million of shareholders’ funds were poured into a political advertising campaign over six weeks.

Another $1.9 million was tipped into Liberal and National party coffers to keep conservative politicians on side. All to keep Rio and BHP happy, while ignoring the deep structural interests of the rest of our mining sector, many of whom supported our proposal.

At their height, Hooke’s television ads were screening around 33 times per day on free-to-air channels. Claims the tax would be a “hand grenade” to retirement savings were blasted by the Australian Institute of Superannuation Trustees which referred the “irresponsible” and “scaremongering” campaign to regulators.

This was not an exercise in public debate to refine aspects of the tax’s design; it was a systematic effort to use the wealth of two multinational mining companies to bludgeon the government into submission.

And when Gillard and Swan capitulated as the first act of their new government, they essentially turned over the drafting pen to Hooke to write a new rent tax that collected almost zero revenue.

The industry, however, was far from unified. Fortescue Metals Group chairman Andrew “Twiggy” Forrest understood what we were trying to achieve, having circumvented Hooke’s spin machine to deal directly with my resources minister Martin Ferguson.

We ultimately agreed that Forrest would stand alongside me and pledge to support the tax. The next day, Gillard and Swan struck. And Hooke has been a happy man ever since, even though Australia is the poorer for it.

It doesn’t matter where you sit on the political spectrum: everyone involved in public debate should hope that they’ve helped to improve the lives of ordinary people.

That is not Hooke’s legacy. Nor his interest. However much he may now seek to rationalise his conduct, Hooke’s stock-in-trade was brutal, destructive politics in direct service of BHP, Rio and the carbon lobby.

He was paid handsomely to thwart climate change action and ensure wealthy multinationals didn’t pay a dollar more in tax than was absolutely necessary. He succeeded. But I’m not sure his grandchildren will be all that proud of his destructive record.

Congratulations, Mitch.

The post AFR: How Mitch Hooke Axed the Mining Tax and Climate Action appeared first on Kevin Rudd.

Long Now: The Alchemical Brothers: Brian Eno & Roger Eno Interviewed

Long Now co-founder Brian Eno on time, music, and contextuality in a recent interview, rhyming on Gregory Bateson’s definition of information as “a difference that makes a difference”:

If a Martian came to Earth and you played her a late Beethoven String Quartet and then another written by a first-year music student, it is unlikely that she would a) understand what the point of listening to them was at all, and b) be able to distinguish between them.

What this makes clear is that most of the listening experience is constructed in our heads. The ‘beauty’ we hear in a piece of music isn’t something intrinsic and immutable – like, say, the atomic weight of a metal is intrinsic – but is a product of our perception interacting with that group of sounds in a particular historical context. You hear the music in relation to all the other experiences you’ve had of listening to music, not in a vacuum. This piece you are listening to right now is the latest sentence in a lifelong conversation you’ve been having. What you are hearing is the way it differs from, or conforms to, the rest of that experience. The magic is in our alertness to novelty, our attraction to familiarity, and the alchemy between the two.

The idea that music is somehow eternal, outside of our interaction with it, is easily disproven. When I lived for a few months in Bangkok I went to the Chinese Opera, just because it was such a mystery to me. I had no idea what the other people in the audience were getting excited by. Sometimes they’d all leap up from their chairs and cheer and clap at a point that, to me, was effectively identical to every other point in the performance. I didn’t understand the language, and didn’t know what the conversation had been up to that point. There could be no magic other than the cheap thrill of exoticism.

So those poor deluded missionaries who dragged gramophones into darkest Africa because they thought the experience of listening to Bach would somehow ‘civilise the natives’ were wrong in just about every way possible: in thinking that ‘the natives’ were uncivilised, in not recognising that they had their own music, and in assuming that our Western music was culturally detachable and transplantable – that it somehow carried within it the seeds of civilisation. This cultural arrogance has been attached to classical music ever since it lost its primacy as the popular centre of the Western musical universe, as though the soundtrack of the Austro-Hungarian Empire in the 19th Century was somehow automatically universal and superior.