Planet Russell


Cory Doctorow: How To Destroy Surveillance Capitalism (Part 04)

This week on my podcast, part four of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Planet Debian: Vishal Gupta: Ramblings // On Sikkim and Backpacking

What I loved the most about Sikkim can’t be captured on cameras. It can’t be taped since it would be intrusive and it can’t be replicated because it’s unique and impromptu. It could be described, as I attempt to, but more importantly, it’s something that you simply have to experience to know.

Now I first heard about this from a friend who claimed he’d been offered free rides and Tropicanas by locals after finishing the Ladakh Marathon. And then I found Ronnie’s song, whose chorus goes: “Dil hai pahadi, thoda anadi. Par duniya ke maya mein phasta nahi” (My heart belongs to the mountains. Although a little childish, it doesn’t get hindered by materialism). While the song refers to his life in Manali, I think this holds true for most Himalayan states.

Maybe it’s the pleasant weather, the proximity to nature, the sense of safety from the Indian Army being round the corner, the independence from material pleasures that aren’t available in remote areas, or the absence of the pollution, commercialisation, & cutthroat-ness of cities. I don’t know; there’s just something that makes people in the mountains a lot kinder, more generous, more open and just more alive.

Sikkimese people are honestly some of the nicest people I’ve ever met. The blend of Lepchas and Bhutias, and the humility and truthfulness that Buddhism ingrains in its disciples, is one that’ll make you fall in love with Sikkim (assuming the views, the snow, the fab weather and the food merely leave you pining for more).

As a product of Indian parenting, I’ve always been taught to be wary of the unknown and to stick to the safer, more-travelled path but to be honest, I enjoy bonding with strangers. To me, each person is a storybook waiting to be flipped open with the right questions and the further I get from home, the wilder the stories get. Besides there’s something oddly magical about two arbitrary curvilinear lines briefly running parallel until they diverge to move on to their respective paths. And I think our society has been so busy drawing lines and spreading hate that we forget that in the end, we’re all just lines on the universe’s infinite canvas. So the next time you travel, and you’re in a taxi, a hostel, a bar, a supermarket, or on a long walk to a monastery (that you’re secretly wishing is open despite a lockdown), strike up a conversation with a stranger. Small-talk can go a long way.


Header icon made by Freepik from www.flaticon.com

Worse Than Failure: CodeSOD: Documentation on Your Contract

Josh's company hired a contracting firm to develop an application. This project was initially specced for just a few months of effort, but requirements changed, scope changed, members of the development team changed jobs, new ones needed to be on-boarded. It stretched on for years.

Even through all those changes, though, each new developer on the project followed the same coding standards and architectural principles as the original developers. Unfortunately, those standards were "meh, whatever, it compiled, right?"

So, no, there weren't any tests. No, the code was not particularly readable or maintainable. No, there definitely weren't any comments, at least if you ignore the lines of code that were commented out in big blocks because someone didn't trust source control.

But once the project was finished, the code was given back to Josh's team. "There you are," management said. "You support this now."

Josh and the rest of his team had objections to this. Nothing about the code met their own internal standards for quality, and certainly it wasn't up to the standards specified in the contract.

"Well, yes," management replied, "but we've exhausted the budget."

"Right, but they didn't deliver what the contract was for," the IT team replied.

"Well, yes, but it's a little late to bring that up."

"That's literally your job. We'd fire a developer who handed us this code."

Eventually, management caved on documentation. Things like "code quality" and "robust testing" weren't clearly specified in the contract, and there was too much wiggle room to say, "We robustly tested it, you didn't say automated tests." But documentation was listed in the deliverables, and was quite clearly absent. So management pushed back: "We need documentation." The contractor pushed back in turn: "We need money."

Eventually, Josh's company had to spend more money to get the documentation added to the final product. It was not a trivial sum, as it was a large piece of software, and would take a large number of billable hours to fully document.

This was the result:

/**
 * Program represents a Program and its attributes.
 */

or

/**
 * Customer represents a Customer and its attributes.
 */


Planet Debian: Junichi Uekawa: wake on lan.

wake on lan. I have not been able to get wake on lan working. I wonder if the poweroff command is powering the system down too completely and cutting power to the ethernet port as well. Do I need to suspend instead?
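For reference, this is roughly how Wake-on-LAN is usually checked and enabled from Linux (a sketch; the interface name enp3s0 is only an example, and the feature typically also has to be enabled in the BIOS and made persistent across reboots):

# show whether the NIC advertises Wake-on-LAN ("Supports Wake-on" / "Wake-on")
ethtool enp3s0 | grep -i wake-on
# enable waking on a magic packet, then power off and test from another host
ethtool -s enp3s0 wol g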

Planet Debian: Junichi Uekawa: Got a new machine Lenovo ThinkCentre M75s gen2, and installed Debian on it.

Got a new machine Lenovo ThinkCentre M75s gen2, and installed Debian on it. I wanted to try out the Ryzen CPU. I haven't used a physical x86-64 Debian desktop machine since I threw away my Athlon 64 machine (dx5150), so that's like 15 years? Since then my main devices were macbooks and virtual machines (on GCE and Sakura) and raspberry pi. I got buster installed just fine. Finding the right keystrokes after boot was challenging because the graphical UI doesn't say anything: F1 enters the BIOS setup, where I disabled secure boot (I wanted to play with the kernel), and F10 brings up the boot-device dialog, where I needed to choose the entry labeled USB CD drive although I had put in a USB SD card reader with the installer image. The installation went fine for the console, but getting X up was tricky: support for the graphics (Renoir) part of the chip was only added in kernel 5.5, Buster was 4.19, and I wasn't too comfortable with just updating the kernel. I ended up going for a dist-upgrade to Bullseye. With Bullseye's default kernel 5.10, X started without problems. So far I have only tried out Emacs.
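For what it's worth, the buster-to-bullseye jump mentioned above is roughly the following (a sketch; note that the security suite is renamed in bullseye, so a blind search-and-replace on sources.list is not quite enough):

sed -i 's/buster/bullseye/g' /etc/apt/sources.list
# the security entry needs to become: deb http://security.debian.org/debian-security bullseye-security main
apt update
apt full-upgrade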

Planet Debian: Dominique Dumont: An improved GUI for cme and Config::Model

I’ve finally found the time to improve the GUI of my pet project: cme (aka Config::Model).

Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of configuration parameters, which means that the GUI displays all these parameters, so finding a specific parameter might be challenging:

To work around this problem, I added a Filter widget in 2018 which more or less did the job, but it suffered from several bugs which made its behavior confusing.

This is now fixed. The Filter widget is now working in a more consistent way.

In the example below, I’ve typed “IdentityFile” (1) in the Filter widget to show the IdentityFile used for various hosts (2):

Which is quite good, but some hosts use the default identity file, so no value shows up in the GUI for them. You can then click on the “hide empty value” checkbox to show only the hosts that use a specific identity file:

I hope that this new behavior of the Filter box will make this project more useful.
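For anyone who wants to try it, the GUI is typically launched through cme (a sketch; the OpenSSH example assumes the Config::Model::OpenSsh model and the Tk UI are installed):

# open ~/.ssh/config in the Tk GUI, where the Filter widget lives
cme edit ssh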

The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and in Debian/experimental. It will be uploaded to Debian/unstable once the next Debian version is out.

All the best

Planet Debian: Steinar H. Gunderson: JavaScript madness

Yesterday, I had the problem that while socket.io from the browser would work just fine against a given server endpoint (which I do not control), talking to the same server from Node.js would just give hangs and/or inscrutable “7:::1” messages (which I later learned meant “handshake missing”).

To skip six hours of debugging: the server set a cookie in the initial HTTP handshake, and expected to get it back when opening a WebSocket, presumably to steer the connection to the same backend that got the handshake. (Chrome didn't show the cookie in the WS debugging, but Firefox did.) So we need to keep track of those cookies, while still remaining on socket.io 0.9.5 (for stupid reasons). No fear, we add this incredibly elegant bit of code:

var io = require('socket.io-client');
// Hook into XHR to pick out the cookie when we receive it.
var my_cookie;
io.util.request = function() {
        var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
        var xhr = new XMLHttpRequest();
        xhr.setDisableHeaderCheck(true);
        const old_send = xhr.send;
        xhr.send = function() {
                // Add our own readyStateChange hook in front, to get the cookie if we don't have it.
                xhr.old_onreadystatechange = xhr.onreadystatechange;
                xhr.onreadystatechange = function() {
                        if (xhr.readyState == xhr.HEADERS_RECEIVED) {
                                const cookie = xhr.getResponseHeader('set-cookie');
                                if (cookie) {
                                        my_cookie = cookie[0].split(';')[0];
                                }
                        }
                        xhr.old_onreadystatechange.apply(xhr, arguments);
                };
                // Set the cookie if we have it.
                if (my_cookie) {
                        xhr.setRequestHeader("Cookie", my_cookie);
                }
                return old_send.apply(this, arguments);
        };
        return xhr;
};
// Now override the socket.io WebSockets transport to include our header.
io.Transport['websocket'].prototype.open = function() {
        const query = io.util.query(this.socket.options.query);
        const WebSocket = require('ws');
        // Include our cookie.
        let options = {};
        if (my_cookie) {
                options['headers'] = { 'Cookie': my_cookie };
        }
        this.websocket = new WebSocket(this.prepareUrl() + query, options);
        // The rest is just repeated from the existing function.
        const self = this;
        this.websocket.onopen = function () {
                self.onOpen();
                self.socket.setBuffer(false);
        };
        this.websocket.onmessage = function (ev) {
                self.onData(ev.data);
        };
        this.websocket.onclose = function () {
                self.onClose();
                self.socket.setBuffer(true);
        };
        this.websocket.onerror = function (e) {
                self.onError(e);
        };
        return this;
};
// And now, finally!
var socket = io.connect('https://example.com', { transports: ['websocket'] });

It's a reminder that talking HTTP and executing JavaScript does not make you into a (headless) browser. And that you shouldn't let me write JavaScript. :-)

(Apologies for the lack of blank lines; evidently, they confuse Markdown.)

Planet Debian: Russell Coker: Scanning with a MFC-9120CN on Bullseye

I previously wrote about getting a Brother MFC-9120CN multifunction printer/scanner to print on Linux [1]. I had also got it scanning which I didn’t blog about.

found USB scanner (vendor=0x04f9, product=0x021d) at libusb:003:002

I recently upgraded that Linux system to Debian/Testing (which will soon be released as Debian/Bullseye) and scanning broke. The command sane-find-scanner would find the USB connected scanner (with the above output), but “scanimage -L” didn’t.

It turned out that I had to edit /etc/sane.d/dll.d/hplip, which had a single uncommented line, “hpaio”, and replace that with “brother3” to make SANE load the driver /usr/lib64/sane/libsane-brother3.so from the brscan3 package (which Brother provided from their web site years ago).
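In other words, something along these lines (a sketch; same path as above, and the brscan3 package must already be installed):

# make SANE load the Brother backend instead of hpaio
sed -i 's/^hpaio$/brother3/' /etc/sane.d/dll.d/hplip
scanimage -L   # should now list the scanner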

I have the following script to do the scanning (which can run as non-root):

#!/bin/bash
set -e
if [ "$1" == "" ]; then
  echo "specify output filename"
  exit 1
fi

TMP=$(mktemp)

scanimage > "$TMP"
convert "$TMP" "$1"
rm "$TMP"

Final Note

This blog post doesn’t describe everything that needs to be done to set up a scanner; I already had part of it set up from 10 years ago. But for anyone who finds this after having trouble, /etc/sane.d/dll.d is one place you should look for important configuration (especially if sane-find-scanner works and “scanimage -L” fails). Also the Brother drivers are handy to have, although I apparently had it working in the past with the hpaio driver from HP (the Brother device emulates an HP device).

Planet Debian: Russ Allbery: Review: Learning React

Review: Learning React, by Alex Banks & Eve Porcello

Publisher: O'Reilly
Copyright: June 2020
ISBN: 1-4920-5172-1
Format: Trade paperback
Pages: 287

My first JavaScript project was a React frontend to a REST service. As part of that project, I read two books: JavaScript: The Definitive Guide to learn the language foundation and this book to learn the framework on top of it. This was an unintentional experiment in the ways programming books can approach the topic.

I commented in my review of JavaScript: the Definitive Guide that it takes the reference manual approach to the language. Learning React is the exact opposite. It's goal-driven, example-heavy, and has a problem and solution structure. The authors present a sample application, describe some desired new feature or deficiency in it, and then introduce the specific React technique that solves that problem. There is some rewriting of previous examples using more sophisticated techniques, but most chapters introduce new toy applications along with new parts of the React framework.

The best part of this book is its narrative momentum, so I think the authors were successful at their primary goal. The first eight chapters of the book (more on the rest of the book in a moment) feel like a whirlwind tour where one concept flows naturally into the next and one's questions while reading about one technique are often answered in the next section. I thought the authors tried too hard in places and overdid the enthusiasm, but it's very readable in a way that I think may appeal to people who normally find programming books dry. Learning React is also firm and definitive about the correct way to use React, which may appeal to readers who only want to learn the preferred way of using the framework. (For example, React class components are mentioned briefly, mostly to tell the reader not to use them, and the rest of the book only uses functional components.)

I had two major problems with this book, however. The first is that this breezy, narrative style turns out to be awful when one tries to use it as a reference. I read through most of this book with both enjoyment and curiosity, sat down to write a React component, and immediately struggled to locate the information I needed. Everything felt logically connected when I was focusing on the problems the authors introduced, but as soon as I started from my own problem, the structure of the book fell apart. I had to page through chapters to locate some nugget buried in the text, or re-read sections of the book to piece together which motivating problem my code was most similar to. It was a frustrating experience.

This may be a matter of learning style, since this is why I prefer programming books with a reference structure. But be warned that I can't recommend this book as a reference while you're programming, nor does it prepare you to use the official React documentation as a reference.

The second problem is less explicable and less defensible. I don't know what happened with O'Reilly's copy-editing for this book, but the code snippets are a train wreck. The Amazon reviews are full of people complaining about typos, syntax errors, omitted code, and glaring logical flaws, and they are entirely correct. It's so bad that I was left wondering if a very early, untested draft of the examples was somehow substituted into the book at the last minute by mistake.

I'm not the sort of person who normally types code in from a book, so I don't care about a few typos or obvious misprints as long as the general shape is correct. The general shape was not correct. In a few places, the code is so completely wrong and incomplete that even combined with the surrounding text I was unable to figure out what it was supposed to be. It's possible this is fixed in a later printing (I read the June 2020 printing of the second edition), but otherwise beware. The authors do include a link to a GitHub repository of the code samples, which are significantly different than what's printed in the book, but that repository is incomplete; many of the later chapter examples are only links to JavaScript web sandboxes, which bodes poorly for the longevity of the example code.

And then there's chapter nine of this book, which I found entirely baffling. This is a direct quote from the start of the chapter:

This is the least important chapter in this book. At least, that's what we've been told by the React team. They didn't specifically say, "this is the least important chapter, don't write it." They've only issued a series of tweets warning educators and evangelists that much of their work in this area will very soon be outdated. All of this will change.

This chapter is on suspense and error boundaries, with a brief mention of Fiber. I have no idea what I'm supposed to do with this material as a reader who is new to React (and thus presumably the target audience). Should I use this feature? When? Why is this material in the book at all when it's so laden with weird stream-of-consciousness disclaimers? It's a thoroughly odd editorial choice.

The testing chapter was similarly disappointing in that it didn't answer any of my concrete questions about testing. My instinct with testing UIs is to break out Selenium and do integration testing with its backend, but the authors are huge fans of unit testing of React applications. Great, I thought, this should be interesting; unit testing seems like a poor fit for UI code because of how artificial the test construction is, but maybe I'm missing some subtlety. Convince me! And then the authors... didn't even attempt to convince me. They just asserted unit testing is great and explained how to write trivial unit tests that serve no useful purpose in a real application. End of chapter. Sigh.

I'm not sure what to say about this book. I feel like it has so many serious problems that I should warn everyone away from it, and yet the narrative introduction to React was truly fun to read and got me excited about writing React code. Even though the book largely fell apart as a reference, I still managed to write a working application using it as my primary reference, so it's not all bad. If you like the problem and solution style and want a highly conversational and informal tone (that errs on the side of weird breeziness), this may still be the book for you. Just be aware that the code examples are a trash fire, so if you learn from examples, you're going to have to chase them down via the GitHub repository and hope that they still exist (or get a later edition of the book where this problem has hopefully been corrected).

Rating: 6 out of 10

Planet Debian: Antoine Beaupré: Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.

This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because it didn't seem worth it, or my editors rejected it, or I just moved on:

  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brain child of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles, they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just an empty commit, with the commitlog being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me to write about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)

Stalled

Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) of Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would show the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, are paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)


Planet Debian: Gunnar Wolf: FLISOL • Talking about Jitsi

Every year since 2005 there has been a very good, big and interesting Latin American gathering of free-software-minded people. Of course, Latin America is a big, big, big place, and it’s not like we are the most economically buoyant region, so meeting in something comparable to FOSDEM is not really an option.

What we have is a distributed free software conference — originally, a distributed Linux install-fest (which I never liked, I am against install-fests), but gradually it morphed into a proper conference: Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Installation Festival)

This FLISOL was hosted by the always great and always interesting Rancho Electrónico, our favorite local hacklab, and featured many other interesting talks.

I like talking about projects where I am involved as a developer… but this time I decided to do otherwise: I presented a talk on the Jitsi videoconferencing server. Why? Because of the relevance videoconferences have had over the last year.

So, without further ado — Here is a video I recorded locally from the talk I gave (MKV), as well as the slides (PDF).

Sam Varghese: All the news (apart from the Middle East issue) that’s fit to print

The Saturday Paper — as its name implies — is a weekend newspaper published from Melbourne, Australia. Given this, it rarely has any real news, but some of the features are well-written.

There is a column called Gadfly (again the name would indicate what it is about) which is extremely well-written and is one of the articles that I read every week. It was written for some years by one Richard Ackland, a lawyer with very good writing skills, and is now penned by one Sami Shah, an Indian, who is, again, a good writer. Gadfly is funny and, like most of the opinion content in the paper, is left-oriented.

The same cannot be said of some of the other writers. Karen Middleton and Rick Morton fall into the category of poor writers, though the latter sometimes does provide a story that has not been run anywhere else. Middleton can only be described as a hack.

Mike Seccombe is another of the good writers and, when he figures on the day’s menu, one can be assured that the content will be good. Another good writer, David Marr, has now gone missing; indeed, he is not writing for any newspaper at the moment.

But the one fault line that The Saturday Paper has is that it will never cover the Middle East. The owner, Morry Schwartz [seen below in an image used courtesy of Fairfax], leans towards supporting the right-wing Israeli leader Benjamin Netanyahu, and thus no matter what atrocities are being perpetrated on the Palestinians, you can be assured that not even a word will appear in this newspaper.

Critics of the paper avoid mentioning this, in keeping with the habit prevalent in the West, of never saying anything that could be construed as being critical of Israel.

This proclivity of Schwartz was noticed early on and mentioned by a couple of Australian writers. One, Tim Robertson, had this to say when the paper had just started out: “…the Saturday Paper’s coverage of Israel’s assault on Gaza has been conspicuously, well, non-existent. As the death toll rises and more atrocities are committed, the Saturday Paper’s pages remain, to date, devoid of any comment.”

Explaining this, John van Tiggelen, a former editor of The Monthly (another Schwartz publication) said: “I mean, it’s seen as a Left-wing publication, but the publisher is very Right-wing on Israel […] And he’s very much to the, you know, Benjamin Netanyahu end of politics. So, you can’t touch it; just don’t touch it. It’s a glass wall.”

Australian media are very touchy about Israel. One of the country’s better writers, Mike Carlton, lost a plum job with the former Fairfax Media — now absorbed into the publishing and broadcasting firm, Nine Entertainment — when he criticised Israel over one of its attacks on Gaza.

And some supporters of Israel in Melbourne are quite powerful. Fairfax had – and still has – a rather juvenile columnist named Julie Szego. When one of her columns was rejected by the then editor, Paul Ramadge (the staff used to say of him, “Ramadge rhymes with damage”), she ran to Fairfax board member Mark Leibler and requested him to intervene. Hey presto, the column was published.

Of course, it is the prerogative of an editor or owner to keep out what he/she does not want published. But if one is given to describing one’s publication as a newspaper and then ignores one of the world’s major issues, then one’s credibility does tend to suffer.

Planet Debian: Antoine Beaupré: A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over 4 releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me thanks. If I switch to anything else than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - a really old Ruby project, the name clash that forced the Gameclock rename
  • Ghronos - also really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

Planet Debian: Joey Hess: here's your shot

The nurse releases my shoulder and drops the needle in a sharps bin, slaps on a smiley bandaid. "And we're done!" Her cheeriness seems genuine but a little strained. There was a long line. "You're all boosted, and here's your vaccine card."

Waiting out the 15 minutes in observation, I look at the card.

Moderna COVID-19/22 vaccine booster
3/21/2025              lot #5829126

  🇺🇸 NOT A VACCINE PASSPORT 🇺🇸

(Tear at perforated line.)
- - - - - - - - - - - - - - - - - -

Here's your shot at
$$ ONE HUNDRED MILLION $$

       Scratch
       and win

I bite my nails, when I'm not wearing this mask. So I scrub ineffectively at the grainy silver box. Not like the woman across from me, three kids in tow, who's zipping through her sheaf of scratchers.

The message on mine becomes clear: 1 month free Amazon Prime

Ah well.


Planet Debian: Thomas Goirand: Puppet and OS detection

As you may know, Puppet uses “facter” to get facts about the machine it is about to configure. That’s fine, and a nice concept. One can later use variables in a Puppet manifest to do different things depending on what facter reports. For example, the operating system name … oh no! This thing is really stupid … Here’s the code one has to write to be compatible with Puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
  # This works around differences between facter versions
  if $facts['os']['lsb'] != undef {
    $distro_codename = $facts['os']['lsb']['distcodename']
  } else {
    $distro_codename = $facts['os']['distro']['codename']
  }
} else {
  $distro_codename = downcase($::lsbdistcodename)
}

Indeed, the global variable $::lsbdistcodename still existed up to Stretch (and is gone in Buster). The global $::facts hash wasn’t available in older Puppet versions, so on Jessie it breaks with the error message “facts is not a hash or array when accessing it with os”. So one needs the full code above to make this work.
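A quick way to see which of these facts a given agent actually provides is to query facter directly (a sketch; the dotted query syntax needs facter 3 or later):

facter lsbdistcodename
facter os.distro.codename
facter os.lsb.distcodename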

It’s ok to improve things. It is NOT OK to break OS detection. To me it is a very bad practice from upstream Puppet authors. I’m publishing this in the hope of helping others avoid falling into the same trap as I did.

Planet Debian: Matthew Garrett: An accidental bootsplash

Back in 2005 we had Debconf in Helsinki. Earlier in the year I'd ended up invited to Canonical's Ubuntu Down Under event in Sydney, and one of the things we'd tried to design was a reasonable graphical boot environment that could also display status messages. The design constraints were awkward - we wanted it to be entirely in userland (so we didn't need to carry kernel patches), and we didn't want to rely on vesafb[1] (because at the time we needed to reinitialise graphics hardware from userland on suspend/resume[2], and vesa was not super compatible with that). Nothing currently met our requirements, but by the time we'd got to Helsinki there was a general understanding that Paul Sladen was going to implement this.

The Helsinki Debconf ended up being an extremely strange event, involving me having to explain to Mark Shuttleworth what the physics of a bomb exploding on a bus were, many people being traumatised by the whole sauna situation, and the whole unfortunate water balloon incident, but it also involved Sladen spending a bunch of time trying to produce an SVG of a London bus as a D-Bus logo and not really writing our hypothetical userland bootsplash program, so on the last night, fueled by Koff that we'd bought by just collecting all the discarded empty bottles and returning them for the deposits, I started writing one.

I knew that Debian was already using graphics mode for installation despite having a textual installer, because they needed to deal with more complex fonts than VGA could manage. Digging into the code, I found that it used BOGL - a graphics library that made use of the VGA framebuffer to draw things. VGA had a pre-allocated memory range for the framebuffer[3], which meant the firmware probably wouldn't map anything else there and hitting those addresses probably wouldn't break anything. This seemed safe.

A few hours later, I had some code that could use BOGL to print status messages to the screen of a machine booted with vga16fb. I woke up some time later, somehow found myself in an airport, and while sitting at the departure gate[4] I spent a while staring at VGA documentation and worked out which magical calls I needed to make to have it behave roughly like a linear framebuffer. Shortly before I got on my flight back to the UK, I had something that could also draw a graphical picture.

Usplash shipped shortly afterwards. We hit various issues - vga16fb produced a 640x480 mode, and some laptops were not inclined to do that without a BIOS call first. 640x400 worked basically everywhere, but meant we had to redraw the art because circles don't work the same way if you change the resolution. My brief "UBUNTU BETA" artwork that was me literally writing "UBUNTU BETA" on an HP TC1100 shortly after I'd got the Wacom screen working did not go down well, and thankfully we had better artwork before release.

But 16 colours is somewhat limiting. SVGALib offered a way to get more colours and better resolution in userland, retaining our prerequisites. Unfortunately it relied on VM86, which doesn't exist in 64-bit mode on Intel systems. I ended up hacking the X.org x86emu into a thunk library that exposed the same API as LRMI, so we could run it without needing VM86. Shockingly, it worked - we had support for 256 colour bootsplashes in any supported resolution on 64 bit systems as well as 32 bit ones.

But by now it was obvious that the future was having the kernel manage graphics support, both in terms of native programming and in supporting suspend/resume. Plymouth is much more fully featured than Usplash ever was, but relies on functionality that simply didn't exist when we started this adventure. There's certainly an argument that we'd have been better off making reasonable kernel modesetting support happen faster, but at this point I had literally no idea how to write decent kernel code and everyone should be happy I kept this to userland.

Anyway. The moral of all of this is that sometimes history works out such that you write some software that a huge number of people run without any idea of who you are, and also that this can happen without you having any fucking idea what you're doing.

Write code. Do crimes.

[1] vesafb relied on either the bootloader or the early stage kernel performing a VBE call to set a mode, and then just drawing directly into that framebuffer. When we were doing GPU reinitialisation in userland we couldn't guarantee that we'd run before the kernel tried to draw stuff into that framebuffer, and there was a risk that that was mapped to something dangerous if the GPU hadn't been reprogrammed into the same state. It turns out that having GPU modesetting in the kernel is a Good Thing.

[2] ACPI didn't guarantee that the firmware would reinitialise the graphics hardware, and as a result most machines didn't. At this point Linux didn't have native support for initialising most graphics hardware, so we fell back to doing it from userland. VBEtool was a terrible hack I wrote to try to re-execute the system's graphics hardware through a range of mechanisms, and it worked in a surprising number of cases.

[3] As long as you were willing to deal with 640x480 in 16 colours

[4] Helsinki-Vantaan had astonishingly comfortable seating for time


Kevin Rudd: ABC NewsRadio: Earth Day Summit

E&OE TRANSCRIPT
RADIO INTERVIEW
ABC NEWSRADIO
23 APRIL 2021

Topics: US climate summit; Murdoch Royal Commission

Thomas Oriti
Leaders of more than 40 countries have held a global summit overnight on the world’s response to climate change. They spoke of the urgent need to save the planet from global warming and talked of a jobs boom in the coming years from clean energy technologies. It was hosted by the US President Joe Biden. The US made a commitment to reduce carbon emissions by 50% by the year 2030. The UK says it will cut emissions by 75% by 2035. But let’s look at the Australian perspective. Before the summit began, Australia announced it would not be changing its commitment to a 26-28% reduction by the turn of the next decade. Now Kevin Rudd is a former Australian Prime Minister and president of the Asia Society in New York who joins us live now. Mr Rudd, good morning.

Kevin Rudd
Good morning.

Thomas Oriti
Thank you for your time. You have attended similar high-level climate summits in the past. What kind of standing does Australia have with no new commitments overnight?

Kevin Rudd
A deeply diminished standing is the honest response to that, and that’s a reflection of the views of governments around the Western world and frankly in the emerging world as well. Australia can and should do more. And it’s not simply a question of political atmospherics here; there’s basic science involved in this. If we are to keep temperature increases globally, on average, to around 1.5 degrees centigrade by the end of this century, then what it means is we have to move to carbon neutrality by mid-century. To get to carbon neutrality by mid-century, we’ve got to radically reduce our carbon emissions before 2030 with new targets. Other countries have done that. Australia has not.

Thomas Oriti
But Scott Morrison would argue that he is doing something. I mean, over the last two days, we’ve seen a combined $1 billion investment in clean technology. And he said to the summit that his government’s focus is a technology-driven approach to mitigating emissions, saying reaching net-zero is based on the how and not the when. I mean, what do you make of that sort of approach, focusing on technology?

Kevin Rudd
Well, that’s Mr Morrison catering to his own domestic political constituency rather than an act of appropriate international leadership by the Prime Minister of Australia at a major global summit to bring about real carbon reductions. The bottom line is the planet doesn’t wait for Mr Morrison to say ‘well, hydrogen will come on-stream in X year and coal reduction targets will come on-stream in Y year’. The reason why the international community, led by the United States in what has been a remarkably successful summit minus Australia, is talking about mid-century carbon neutrality and new nationally determined contributions between now and 2030, is to make the mathematics and the science stack up to keep temperature increases within 1.5 degrees. What we’ve heard from Mr Morrison instead is frankly just a bunch of politically driven posturing which doesn’t add up and which, I think, the international community treats with contempt, which is why he was heard making his contribution so far down the batting order.

Thomas Oriti
OK, well, let’s look at the international community. American officials are reportedly dissatisfied with Australia’s approach. The Biden administration has said it will try to pressure other countries to do more. I mean, how much of an impact do you think that could have on the Morrison government?

Kevin Rudd
Well so far, if Morrison was to work out that the United States as our principal ally, who we need in multiple areas of our international policy interest, is making this demand clear of the Australian Government, he really does need to begin to adjust now – in fact, if not yesterday. But if that persuasion doesn’t work, there’s something else rolling down the railway tracks towards Australia, which is so-called border adjustment tariffs, now being actively debated, deliberated on and decided both in Brussels and considered also in Washington to effectively impose a tax on those countries which refuse to take their share of the global burden in bringing down carbon emissions. So if it’s not going to be, as it were, inducement from the US through our alliance relationship with Washington, then there is the threat of punitive financial action which would affect the entire Australian economy. But you know something? Australia as a responsible middle power in the world, and as the driest continent on Earth, for God’s sake, we should be acting as the global leaders here, not the global wooden-spooners.

Thomas Oriti
You wrote an opinion piece in The Guardian this week, Mr Rudd with another former prime minister, Malcolm Turnbull, and how Australia’s ambition on climate change is held back by what you’re saying is a toxic mix of right-wing politics, media, and vested interests. I want to pick up on that last bit. Who are these vested interests and what’s their role?

Kevin Rudd
Well, this has become a matter of political raw red meat for the Liberal Party and the National Party to go and chant the coal mantra. That’s one element of it; it’s part of the internal dynamics of the Liberal and National parties. Secondly, I didn’t say the media, I said the Murdoch media, and the Murdoch media has run — and Malcolm Turnbull agrees with me — a vicious campaign against effective climate change action in this country for more than a decade now. And because of their power in the print media in this country, where they have 70% of the print ownership, they have shaped and influenced significantly the terms of our national debate. And the third element in all this, of course, is our own big hydrocarbon companies, led by companies like BHP, which have been dragging the chain on this for a very long time. Put those three together, plus the hydrocarbon lobby’s trade union, which is the Minerals Council of Australia, and this represents a very powerful, potent force in Australian politics, which I had to contend with as prime minister and they ultimately prevailed against me; which Malcolm Turnbull had to work against when he was prime minister, and they prevailed against him. Frankly, what is being lost as a consequence of this is effective, clear Australian international leadership on something which matters for our environment and economy for the future.

Thomas Oriti
Just to pick up on something you said a moment ago about the Murdoch media, Mr Rudd: the former US Director of National Intelligence, James Clapper, has backed your call for a royal commission into Rupert Murdoch’s media empire here in Australia. What do you make of that support, and where are you at with your petition at the moment?

Kevin Rudd
Well, the result of our petition, which attracted more than half a million signatures within 28 days across Australia — because the system collapsed, we suspect hundreds of thousands of petitioners in addition to that — was that the Senate decided to commission its own investigation into the future of media diversity. It continues to take evidence from myself, Malcolm Turnbull and others, including the media proprietors, on what we do on the future of this extraordinary monopoly which the Murdoch media has in Australia. It is the highest concentration of print media ownership anywhere in the Western world. Now, when Jim Clapper intervenes as the former director of national intelligence in the United States, what Clapper is saying is that the impact of Murdoch there in America, where he is not a majority player but is an aggressive player through Fox News, is that, untrammelled, this Fox media beast has significantly derailed the potential for consensus in American politics, not just on climate change, but across a whole range of pressing challenges facing the United States. So he’s sending a clarion-clear message that if we’re going to have Fox News exercise that sort of influence in Australia, through Sky News, which is now having a huge impact across social media platforms and YouTube, then our country prospectively becomes ungovernable, as the United States has in large part become in recent years.

Thomas Oriti
Kevin Rudd, thanks very much for joining us this morning.

Kevin Rudd
Good to be with you.

Thomas Oriti
Former Australian Prime Minister Kevin Rudd, who is the president of the Asia Society in New York.


Worse Than Failure: Error'd: When Words Collide

Waiting for his wave to collapse, surfer Mike I. harmonizes "Nice try Power BI, but you're not doing quantum computing quite right." Or, does he?

[screenshot: schrodinger]

Finn Antti, caught in a hella Catch-22, explains " Apparently I need to install Feedback Hub from Microsoft Store to tell Microsoft that I can't install Feedback Hub from Microsoft Store."

[screenshot: microsoft]

Our old friend Pascal shares "Coupon codes don't work very well when they are URL encoded."

[screenshot: homedepot]

Uninamed pydsigner has a strong meme game. "It's bad enough when your fairly popular meme creation site runs out of storage, but to be unable to serve your pictures as a result? The completely un-obfuscated stacktrace just adds insult to injury."

[screenshot: stack]

But the submission from Brad W. wins this week's prize. Says he: "The vehicle emissions site (linked directly from the state site) isn't handling the increased traffic well, but their error handling is superb. An online code browser allowing for complete examination of the entire stack and surroundings."

[screenshot: emissions]



Planet Debian: Dirk Eddelbuettel: drat 0.2.0: Now with ‘docs/’

[image: drat user]

A new release of drat arrived on CRAN today. This is the first release in a few months (with the last release in July of last year) and it (finally) makes the leap to supporting docs/ in the main branch as we are all so tired of the gh-pages branch. We also have new vignettes, new (and very shiny) documentation and refreshed vignettes!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases is the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Or as we may now add: stay away from semi-random universe snapshots too.

Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by (now) six vignettes and just works.
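For the curious, day-to-day drat usage looks roughly like this (a sketch; the repository name and package files are just examples taken from the drat documentation):

## consumer side: register a drat repo and install from it
install.packages("drat")
drat::addRepo("eddelbuettel")
install.packages("somePackage")

## publisher side: insert a built package into your own drat repo checkout
drat::insertPackage("myPkg_0.1.0.tar.gz", repodir = "~/git/drat")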

The NEWS file summarises the release as follows:

Changes in drat version 0.2.0 (2021-04-21)

  • A documentation website for the package was added at https://eddelbuettel.github.io/drat/ (Dirk)

  • The continuous integration was switched to using ‘r-ci’ (Dirk)

  • The docs/ directory of the main repository branch can now be used instead of gh-pages branch (Dirk in #112)

  • A new repository https://github.com/drat-base/drat can now be used to fork an initial drat repository (Dirk)

  • A new vignette “Drat Step-by-Step” was added (Roman Hornung and Dirk in #117 fixing #115 and #113)

  • The test suite was refactored for docs/ use (Felix Ernst in #118)

  • The minimum R version is now ‘R (>= 3.6)’ (Dirk fixing #119)

  • The vignettes were switched to minidown (Dirk fixing #116)

  • A new test file was added to ensure ‘NEWS.Rd’ is always at the current release version.

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kevin Rudd: AFR: Mining Super-Profits Levy

The University of Western Australia


MEDIA STATEMENT
21 APRIL 2021

Published in the Australian Financial Review on 22 April 2021

“As was the case during the last resources boom, and the one before that, the super-profits earned by a handful of resource majors in this country are a giant rip-off of the Australian people.

“Furthermore, the greed of these three is unbelievable: they haven’t even bothered to establish serious, large-scale charitable foundations to benefit the Australian people at the scale that other serious global firms do. And in Rio’s case, they dynamite indigenous heritage on the way through.

“I fully understand the financial investment needed for long term projects. But nowhere in their long term financial planning did any company forecast prices at this level. That’s why the Australian people, who actually own these resources and merely lease them to these companies, deserve a higher return.

“That’s why I believe these three majors should pay a super-profits levy into a national investment fund to underpin the future of Australian higher education and research, because this is the sector that will need to generate the next tranche of national wealth. We need Australian equity in the global technology revolution now underway, where we are in danger of owning none of the intellectual property and assets that will drive future global growth.”

Ends


Planet Debian: Russell Coker: HP ML350P Gen8

I’m playing with an HP ProLiant ML350P Gen8 server (part num 646676-011). For HP servers “ML” means tower (see the ProLiant Wikipedia page for more details [1]). For HP servers the “generation” indicates how old the server is; Gen8 was announced in 2012 and Gen10 seems to be the current generation.

Debian Packages from HP

wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP RAID" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ buster/current non-free" >> /etc/apt/sources.list

The above commands will set up the APT repository for Debian/Buster. See the HP Downloads FAQ [2] for more information about their repositories.
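Once the repository is in place, the packages discussed in the sections below can be installed in the usual way (a sketch):

apt update
apt install hponcfg ssacli ssaducli storcli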

hponcfg

This package contains the hponcfg program that configures ILO (the HP remote management system) from Linux. One noteworthy command is “hponcfg -r” to reset the ILO, something you should do before selling an old system.
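A couple of other invocations tend to be useful when preparing or migrating a machine (a sketch from memory; the exact options are an assumption here, so check hponcfg's own help output):

# dump the current iLO configuration to an XML file
hponcfg -w /tmp/ilo-config.xml
# apply a previously saved configuration (users, network settings, etc)
hponcfg -f /tmp/ilo-config.xml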

ssacli

This package contains the ssacli program to configure storage arrays, here are some examples of how to use it:

# list controllers and show slot numbers
ssacli controller all show
# list arrays on controller identified by slot and give array IDs
ssacli controller slot=0 array all show
# show details of one array
ssacli controller slot=0 array A show
# show all disks on one controller
ssacli controller slot=0 physicaldrive all show
# show config of a controller, this gives RAID level etc
ssacli controller slot=0 show config
# delete array B (you can immediately pull the disks from it)
ssacli controller slot=0 array B delete
# create an array type RAID0 with specified drives, do this with one drive per array for BTRFS/ZFS
ssacli controller slot=0 create type=arrayr0 drives=1I:1:1

When a disk is used in JBOD mode, just under 33MB at the end of the disk is used for the RAID metadata. If you have existing disks with a DOS partition table you can put them in an HP array as JBOD and they will work with all data intact (a GPT partition table is more complicated). When all disks are removed from the server the cooling fans run at high speed, which would be annoying if you wanted a diskless workstation or server using only external storage.

ssaducli

This package contains the ssaducli diagnostic utility for storage arrays. The SSD “wear gauge report” doesn’t work for the 2 SSDs I tested it on, maybe it only supports SAS SSDs not SATA SSDs. It doesn’t seem to do anything that I need.

storcli

This package contains both 32bit and 64bit versions of the MegaRAID utility and deletes whichever one doesn’t match the installation in the package postinst, so it fails debsums checks etc. The MegaRAID utility is for a different type of RAID controller to the “Smart Storage Array” (AKA SSA) that the other utilities work with. As an aside it seems that there are multiple types of MegaRAID controller, the management program from the storcli package doesn’t work on a Dell server with MegaRAID. They should have made separate 32bit and 64bit versions of this package.

Recommendations

Here is the HP page for downloading firmware updates (including security updates) [3]; you have to log in first and have a warranty. This is legal but poor service. Dell servers have comparable prices (on the second hand market) and comparable features but give free firmware updates to everyone. Dell’s Debian packages for supporting utilities are of lower overall quality, but cover a wider range, so generally Dell support seems better in every way. Dell and HP hardware seem of equal quality, so overall I think it’s best to buy Dell.

Suggestions for HP

Finding which of the signing keys to use is unreasonably difficult. You should get some HP employees to sign the HP keys used for repositories with their personal keys and then go to LUG meetings and get their personal keys well connected to the web of trust. Then upload the HP keys to the public key repositories. You should also use the same keys for signing all versions of the repositories. Having different keys for the different versions of Debian wastes people’s time.

Please provide firmware for all users, even if they buy systems second hand. It is in your best interests to have systems used long-term and have them run securely. It is not in your best interests to have older HP servers perform badly.

Having all the fans run at maximum speed when power is turned on is a standard server feature. Some servers can throttle the fan when the BIOS is running, it would be nice if HP servers did that. Having ridiculously loud fans until just before GRUB starts is annoying.

Worse Than FailureCodeSOD: Saved Changes

When you look at bad code, there's a part of your body that reacts to it. You can just feel it, in your spleen. This is code you don't want to maintain. This is code you don't want to see in your code base.

Sometimes, you get that reaction to code, and then you think about the code, and say: "Well, it's not that bad," but your spleen still throbs, because you know if you had to maintain this code, it'd be constant, low-level pain. Maybe you ignore your spleen, because hey, a quick glance, it doesn't seem that bad.

But your spleen knows. A line that seems bad, but mostly harmless, can suddenly unfurl into something far, far nastier.

This example, from Rikki, demonstrates:

private async void AttemptContextChange(bool saveChanges = true)
{
    if (m_Context != null)
    {
        if (saveChanges && !SaveChanges())
        {
            // error was already displayed to the user, just stop
        }
        else
        {
            dataGrid.ItemSource = null;
            m_Context.Dispose();
        }
    }
}

if (saveChanges && !SaveChanges()) is one of those lines that crawls into your spleen and just sits there. My brain tried to say, "oh, this is fine, SaveChanges() probably is just a validation method, and that's why the UI is already up to date, it's just a bad name, it should be CanSaveChanges()". But if that's true, where does it perform the actual save? Nowhere here. My brain didn't want to see it, but my spleen knew.

If you ignore your spleen and spend a second thinking, it more or less makes sense: saveChanges (the parameter) is a piece of information about this operation- the user would like to save their changes. SaveChanges() the method probably attempts to save the changes, and returns a boolean value if it succeeded.

But wait, returning boolean values isn't how we communicate errors in a language like C#. We can throw exceptions! SaveChanges() should throw an exception if it can't proceed.
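To make that concrete, here is a minimal sketch of the two styles — written in Python for brevity rather than the article's C#, and using a hypothetical persist() helper that stands in for whatever actually writes the data:

class SaveError(Exception):
    """Raised when changes cannot be persisted."""

def persist(record: dict, path: str) -> None:
    # Stand-in for the real write; any OSError here represents a failed save.
    with open(path, "w") as fh:
        fh.write(repr(record))

def save_changes_bool(record: dict, path: str) -> bool:
    # The style under critique: the failure is swallowed here, every caller
    # must remember to check the flag, and the reason for failure is lost.
    try:
        persist(record, path)
        return True
    except OSError:
        return False

def save_changes(record: dict, path: str) -> None:
    # The exception style: failure propagates, with context, to whichever
    # caller is actually in a position to report or handle it.
    try:
        persist(record, path)
    except OSError as exc:
        raise SaveError("could not persist changes") from exc

The boolean version forces every call site into the if (saveChanges && !SaveChanges()) dance; the exception version makes forgetting to handle a failed save loud instead of silent.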

Which, speaking of exceptions, we need to think a little bit about the comment: // error was already displayed to the user, just stop.

This comment contains a lot of information about the structure of this program. SaveChanges() attempts to do the save, it catches any exceptions, and then does the UI updates, completely within its own flow. That simple method call conceals a huge amount of spaghetti code.

Sometimes, code doesn't look terrible to your brain, but when you feel its badness in your spleen, listen to it. Spleen-oriented Programming is where you make sure none of the code you have to touch makes your spleen hurt.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianShirish Agarwal: The Great Train Robbery

I had a Twitter fight a few days back with a gentleman, and this article is a result of that fight. Sadly, I do not know the gentleman’s name as he goes by a pseudonym, and I have not taken his permission to quote him either way, so I will just state the observations I was able to make from the conversations we had. As people who read this blog regularly would know, I am and have been against the railway privatization which is happening in India, and I will be sharing some case studies of how it panned out in other countries.

UK Railways


How Privatization Fails : Railways

The above video is by a gentleman called Shaun, who basically argues that privatization, as far as the UK is concerned, has produced nothing but monopolies. While there are complex reasons for this, the design of railways is such that they will always have a monopoly structure; at most you can have several monopolies, but that is all. Genuine competition just cannot happen. Even the idea that subsidies would be lower and/or trains would run on time is far from fact, and both of these points have been checked and borne out by fullfact.org. It is argued that the UK is small and perhaps doesn’t have the right conditions. That is probably true, but we still deserve a glance at the UK railway map.

UK railway map with operators

The above map is copyrighted to Map Marketing, where you can see it today. If you look at the facts, you will see that UK fares have been higher; an oldish article from Metro (a UK publication) says the same. In fact, the UK effectively nationalized parts of its railways as many large rail operators were running in the red, and even Scotland is set to nationalise its railways in March 2022. Remember, this is a country which hasn’t seen inflation go above 5% in nearly a decade; the only outlier was 2011, when it did breach the 5% mark. So from this, ‘private gains, public losses’ seems to fit. But then maybe we didn’t use the right example. Perhaps Japan would be better: they have bullet trains while the UK is still thinking about them (HS2).

Japanese Railway

Below is the map of Japanese Railway

Railway map of Japan with ‘private ownership’ – courtesy Wikimedia commons

Japan started privatizing its railways in 1987 and to date they have not been fully privatized. On top of that, as much as ¥24 trillion of the long-term JNR debt was shouldered by the government at the expense of Japanese taxpayers, while almost a quarter of the workforce was cut. Moreover, while some parts of the Japanese railways did make profits, many of them did so through large-scale non-railway business, mostly real estate on land adjacent to railway stations; in many cases this seems to have gone all the way up to 60% of revenue. The most profitable has been the Shinkansen, though even it has not been without safety scandals over the years, the biggest in recent memory being the 2005 Amagasaki derailment. What was interesting to me was the aftermath: while the Wikipedia page doesn’t share much, I read at the time how a lot of ordinary people stood up to the companies, in a country where, it is said, many companies are owned by the Yakuza, and where people are loyal to their corporation or company no matter what. It is a culture strange both to the West and to India, where people change jobs at the drop of a hat, although nowadays we have record unemployment. So perhaps Japan too does not meet our standard: the operators do not compete with each other, and each is a set monopoly in its region. How much subsidy is involved is also not really transparent.

U.S. Railways

Last, but not least, I share the U.S. railway map. This was provided by a Mr. Tom Alison on Reddit’s r/MapPorn. As the thread itself is archived and I do not know the gentleman concerned, nor have I taken permission for the map, I am sharing the compressed version –


U.S. Railway lines with the different owners

The U.S. railways are, and have always been, peculiar: unlike the above two, the U.S. has always been more of a freight network. Much of this probably has to do with the fact that in the 1960s, when oil was cheap, the U.S. built zillions of roadways, romanticized the ‘road trip’, and has been doing so ever since. The rise of low-cost airlines also definitely didn’t help the railways carry more passengers; in fact, quite the opposite.

There have also been smaller services and attempts at privatization in both New Zealand and Australia, and both have been failures; see the papers on the subject. My simple point is this: as can be seen above, there have been various attempts at railway privatization, and most of them have been a mixed bag. The only one which comes close to what we would consider good is the Japanese one, but that also relied on a lot of public debt, and we don’t know what will happen with it next. Also, for higher-speed services like a bullet train, you need direct routes with no hairpin bends. A good talk on the topic is the TBD podcast, which, while it is about hyperloop, raises the same questions that would be asked if we were to do this in India. Another thing to keep in mind is that the Japanese have been exceptional builders because they have been forced to be: they live in a seismically active zone, which made the Fukushima disaster a reality, but at the same time their buildings are earthquake-resistant.

Standard disclaimer – the above is a simplified version of things. I could have gone into financial accounts, but there again there is no set pattern: some railways use accrual accounting, some use cash and some use a hybrid. I could also have covered gauge or electrification, but all railways have slightly different standards, although unigauge is something that all railways aspire to, and electrification is again something that all railways want even though in many cases it just isn’t economically feasible.

Indian Railways

Indian Railways itself moved from cash to accrual accounting a couple of years back; for a couple of years in between, it was a hybrid. The sad part is that you can now never measure against past performance in the old way because the system is so different; hence, whether the Railways are making a loss or a profit, we will only come to know much later. Also, most accountants don’t know the new system well, so it is going to take more time — how much, nobody knows. Sadly, what the GOI did a few years back was to merge the Railway budget into the Union Budget. The excuse given was the pressure of too many new trains, but the truth is that by doing this they reduced transparency about the whole thing. For example, for the last few years the only state with significant work being done is U.P. (Uttar Pradesh), and a bit in Goa, although that has been protested time and again. I am from the neighbouring state of Maharashtra and have been there several times; now going to Goa feels like a dream :(.

Covid news

Before I jump to the news, I should mention the movie ‘Virus’ (2019), made by the talented Aashiq Abu. Even though I am not a Malayalee, I have enjoyed many of his movies simply because he is a terrific director, and Malayalam movies, at least most of them, have English subtitles and a lot of original content. The first time I saw it, a couple of years back, I couldn’t sleep a wink for a week; even the next time, it was heavy. I shared the movie with mum, and even she couldn’t watch it in one go. It is that powerful — perhaps even more so now that we are headlong into the pandemic and the madness is all around us. Two terms helped me understand a great deal of what is happening in the movie: the first is ‘altered sensorium’, which has been defined here; the other is saturation, or to be more precise ‘oxygen saturation’. The latter has also entered the Indian Twitter lexicon quite a bit as India has started running out of oxygen — just today the Delhi High Court held an emergency hearing on the subject late at night. Although there is much to share about the mismanagement by the centre, the best piece on the subject has been by Miss Priya Ramani. Yup, the same lady who won against M.J. Akbar, and that was when Mr. Akbar had 100 lawyers on that specific case. It will be interesting to see what happens ahead.

There are, however, a few things even she left out of her piece. For example, reverse migration — migration from urban to rural areas — has started again; two articles from different entities share a similar outlook. Sadly, the right have no empathy or feeling for either the poor or the sick. Even the labour minister Santosh Gangwar claimed that only around 1.04 crore people walked back home. While there is not much data, some of the research done on migration suggests the number could easily be 10 times as much — and that was in last year’s lockdown. This year the same issue has resurfaced, and migrants, having learnt their lesson, have started leaving the cities. I’m ashamed to say I think they are doing the right thing: most state governments have not learned their lessons, nor have they done any work to earn the trust of migrants, and this is true of almost all of them. Last year, just before the lockdown was announced, my friend and I spent almost 30k getting a cab all the way from Chennai to Pune, between what we paid for the cab and what we paid in bribes just so we could cross the state borders to return home to our anxious families. Thankfully, unlike the migrants, we were better off, although we did take a loss. I probably wouldn’t be alive if I were in their situation, as many weren’t — and that number of ‘undocumented deaths’ is still up in the air 😦

Vaccine issues

Currently, though, the issue has been the vaccine and its pricing. A good summation of the issues has been published in The Economist, and another article that goes to the heart of the matter is at Scroll. To buttress the argument, the SII CEO shared this a few weeks back –

Adar Poonawala talking to Vishnu Som on Left, right center, 7th April 2021.

So, a licensee manufacturer wants to make super-profits during the pandemic, and, as shared above, it can now very easily do so. Even the quotes given to nearby countries are smaller than the quotes given to Indian states –

Prices of AstraZeneca among various states and countries.

The situation around beds, vaccines, oxygen — everything — is so dire that people will go to any lengths to save their loved ones, even if they know a particular medicine doesn’t work. Take Remdesivir: WHO trials have concluded that it does not reduce mortality, and even the AIIMS chief has said the same. But the desperation of both doctors and relatives to cling to hope has turned Remdesivir into a black-market drug, with unofficial prices hovering anywhere between INR 14k and INR 30k per vial. An executive of a top firm was arrested in Gujarat, and in Maharashtra an opposition M.P. came to the ‘rescue’ of the officials of Bruick pharms in Mumbai.

Sadly, this strange attachment to the party at the centre also runs in my extended family. On one hand they will heap praise on Mr. Modi; at the same time they can’t wait to get out of India fast. Many of them have settled in — horror of horrors — Dubai, as it is the best place to do business, with international schools for the young ones at decent prices, cheaper or maybe a tad more than what they paid in Delhi or elsewhere. Being an Agarwal or a Gupta makes it easier to compartmentalize both things: ease of doing business, five days flat to get a business registered and up and running. And yet the paranoia is still there — they won’t talk about him on the phone because they are afraid they may say something which comes back to bite them. As for their decision to migrate, I can’t really blame them. If I were 20-25 years younger and my mum were in better shape than she is, we probably would have migrated as well, although I would have preferred Europe over anywhere else.

Internet Freedom and Aarogya Setu App.


Internet Freedom has written about the chilling effects of the Aarogya Setu app. This had also been raised by FSCI in the past, which recently had its handle banned on Twitter. The chilling effect was also apparent in a bail order given by a high court judge. While I won’t go into the merits and demerits of that order, it is astounding for a judge to say that the accused, even while out on bail, must install an app so he can be surveilled — and this is a high court judge; such a sad state of affairs. We seem to be setting new lows every day when it comes to judicial jurisprudence. One interesting aspect of the whole case was shared by Aishwarya Iyer: a story that she and her team worked on at The Quint, which raises questions about the quality of the work done by the Delhi Police. It is of course up to the Delhi Police to ascertain the truth of the matter, because unless they are able to tie in the PMO or POTUS’s office for a leak, it hardly seems possible. For example, the dates when two heads of state can meet would be decided by their respective secretaries. Once a date is known, it would be shared with the press while, at the same time, some sort of security apparatus would kick into place. It is incumbent, especially on the host, to take as much care as possible of the guest. We all remember that World War 1 (the war to end all wars) started with the assassination of Archduke Franz Ferdinand.

As nobody wants that, the best approach is to make sure a political assassination doesn’t happen on your watch. While I won’t speculate on the details, it is safe to assume Z+ security along with heightened readiness, especially for somebody as important as POTUS. It would be quite a reach for the Delhi Police to connect the two dates; they will either have to get creative with the dates or find some other way, because otherwise, with practically no knowledge in the public domain, they can’t work in limbo. In either case, I do hope the case comes up for hearing soon and we see what the Delhi Police says and contends in the High Court. At the very least, it will be awkward for them to talk of the dates unless they allege some mass conspiracy involving the PMO (which would bring into question the constant vetting done by the intelligence department of all those who work in the PMO). And this whole case seems to serve as a kind of cover for the Delhi riots, in which mostly Muslims died, and whose deaths remain unaccounted for till date 😦

Conclusion

In conclusion, I would like to share a bit of humour, because right now the atmosphere is humourless, what with the authoritarian tendencies of the central government and the mass mismanagement of public health, which has now been left to the states to handle as they see fit. The piece I am sharing is from Arre, one of my go-to sites whenever I feel low.

,

Planet DebianEnrico Zini: Python output buffering

Here's a little toy program that displays a message like a split-flap display:

#!/usr/bin/python3

import sys
import time

def display(line: str):
    cur = '0' * len(line)
    while True:
        print(cur, end="\r")
        if cur == line:
            break
        time.sleep(0.09)
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
    print()

message = " ".join(sys.argv[1:])
display(message.upper())

This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation.

What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient.

I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.
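A quick way to see which case applies is to ask whether stdout is attached to a terminal — a small illustrative check of my own, separate from the split-flap script above:

#!/usr/bin/python3
import sys

# When stdout is a terminal, Python keeps it (roughly) line-buffered and the
# animation shows; when it is a pipe (e.g. through cat), it is block-buffered.
if sys.stdout.isatty():
    print("stdout is a terminal: writes reach the screen promptly")
else:
    print("stdout is a pipe or file: writes are collected into larger blocks")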

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file object, like sys.stdout.flush(), to make sure pending buffered output gets sent.

Python's print() function also supports flush=True as an optional argument:

    print(cur, end="\r", flush=True)

If one wants to change the default buffering for a file object, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:

sys.stdout.reconfigure(line_buffering=True)

Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):

import io
import sys
# Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)

If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.

Cryptogram On North Korea’s Cyberattack Capabilities

Excellent New Yorker article on North Korea’s offensive cyber capabilities.

Cryptogram Backdoor Found in Codecov Bash Uploader

Developers have discovered a backdoor in the Codecov bash uploader. It’s been there for four months. We don’t know who put it there.

Codecov said the breach allowed the attackers to export information stored in its users’ continuous integration (CI) environments. “This information was then sent to a third-party server outside of Codecov’s infrastructure,” the company warned.

Codecov’s Bash Uploader is also used in several uploaders — the Codecov-actions uploader for GitHub, the Codecov CircleCI Orb, and the Codecov Bitrise Step — and the company says these uploaders were also impacted by the breach.

According to Codecov, the altered version of the Bash Uploader script could potentially affect:

  • Any credentials, tokens, or keys that our customers were passing through their CI runner that would be accessible when the Bash Uploader script was executed.
  • Any services, datastores, and application code that could be accessed with these credentials, tokens, or keys.
  • The git remote information (URL of the origin repository) of repositories using the Bash Uploaders to upload coverage to Codecov in CI.

Add this to the long list of recent supply-chain attacks.

Planet DebianSven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you

  • use ssl (ok that should be almost everyone)
  • and you would like to execute doveadm as a user, who can not read the ssl cert and keys (quite likely).

I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For me that manifested in the nginx error log, which I use in combination with php-fpm, as:

2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied

You easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and create a dummy one which disables ssl, and has a !include_try on the real one. Maybe best explained by showing the modification:

cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf

Discussed upstream here.

Kevin RuddThe Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests

By Kevin Rudd and Malcolm Turnbull

It was always expected that Joe Biden’s election would be a massive shot in the arm for international climate action, but the scale of that boost has been genuinely surprising.

The new president has now invited 40 world leaders to a virtual climate change summit coinciding with Earth Day this Thursday. China’s Xi Jinping will be there, following productive face-to-face talks last week between Biden’s climate envoy, John Kerry, and his Chinese counterpart, Xie Zhenhua, in Shanghai. Even Vladimir Putin is attending, despite divisions between Washington and the Russian leader over new sanctions.

Japan, South Korea and Canada are all expected to announce new medium-term 2030 emissions reduction plans this week, after earlier refusing to do so. Even China – the world’s largest emitter – last week signalled they may also be prepared to do more this decade above and beyond commitments they made at the end of last year.

Our country, however, continues to bury its head in the sand, despite the fact that Australia remains dangerously at risk of the economic and environmental consequences that will come from the climate crisis barrelling towards us.

Prime minister Scott Morrison’s refusal to adopt both a firm timeline to reach net zero emissions and to increase its own interim 2030 target leaves us effectively isolated in the western world. It also goes against what we signed up to through the Paris agreement – which both our governments worked so hard to secure.

According to our independent Climate Change Authority (CCA) and the Australian Energy Market Operator (Aemo), not only should Australia be doing much more as “our fair share” towards global efforts to reduce emissions, but importantly we also now have the capacity to do more.

The reality is Australia’s current target, set in 2015, to reduce emissions by 26 to 28% on 2005 levels by 2030 is now woefully inadequate – and was always intended to be updated this year. The Obama administration had exactly the same target as Australia, but aimed to achieve it five years earlier than us, which in reality made it much more ambitious than ours. And this week, the Biden administration is expected to announce a new 2030 pledge twice as deep as Australia’s current effort. This will set a new global litmus test for Australia’s own ambition, which as the CCA has said should be at least a 45% cut by 2030.

But, as two former prime ministers representing our nation’s centre-left and centre-right parties, the world shouldn’t give up hope on our country just yet. Thankfully, there is some cause for optimism. Our sun-drenched country has the highest per capita penetration of rooftop solar in the world. And with the right approach, Aemo has said that renewables could go from providing a quarter of electricity market demand on our populous eastern seaboard today to 75% in less than five years. The fact we are in a position to even be able to seize this technological opportunity is in large part due to the introduction in 2009 of a 20% clean renewable energy target for 2020 and the launching of the largest renewable clean energy project in our nation’s history (Snowy Hydro 2.0) by our respective governments.

The national consensus for climate action in Australia has also shifted markedly in recent years. Every state and territory government is now committed to net zero emissions, so too are our peak industry, business and agriculture groups, as well as our national airline, and even our largest mining company.

The main thing holding back Australia’s climate ambition is politics: a toxic coalition of the Murdoch press, the right wing of the Liberal and National parties, and vested interests in the fossil fuel sector.

Sadly, instead of seizing this technological opportunity and embracing this newfound national consensus, the government remains hell-bent on a “gas-fired recovery” from Covid-19. Old coal plants still generate around 75% of Australia’s electricity. But these are being replaced by renewables plus storage because they are a cheaper form of generation than the alternatives on offer.

Gas has a role to play in the transition, but that role is to steadily diminish as renewables continue to grow. To bet big on the future role of gas is to bet against the best engineering and economic advice coming out of Aemo, and to ignore the scientific advice that more gas in the grid will simply lead to more emissions. The only long-term gas-fired future we should be planning is green hydrogen made by electrolysing water with renewable energy.

Australia may be able to get away with showing up empty-handed to this week’s summit, but will find it even more difficult to do as a special guest of the British at the G7 leaders’ summit in June. We would be the only developed country in the room that is not committed to net zero by 2050. And we will find it even harder again to show up empty-handed at the COP26 Climate Conference in Glasgow at the end of the year, given more than 100 countries in the world have pledged to increase their ambition.

There are also consequences for this inaction.

As the rolling apocalypse of fires and floods in our country demonstrates, Australia is on the global frontline of this climate crisis. Last year’s wildfires claimed dozens of lives, destroyed thousands of homes, wiped out billions of animals, and cost billions of dollars.

With more than 70% of Australia’s trade now with countries committed to net zero, the prospect of carbon border taxes being introduced – beginning with the European Union – also leaves us economically exposed. So too does our continued faith in coal as a leading export commodity, especially with many of the 50 proposed new coalmines in Australia already struggling to attract finance. Instead of expanding coal, we should be increasing our support for ground-breaking projects such as the Asian Renewable Energy Hub in the Pilbara region which could allow us to become a green hydrogen supplier for Asia’s clean energy transition. There are also promising new hydrogen projects planned for Queensland centred on Gladstone, a traditional coal port. Building dozens of new coalmines won’t set Australia up for the future; it will lock us into the past.

Australians like to think we “punch above our weight” on the global stage. We certainly do when we come to climate change: we emit more than 40 other countries with larger populations, and our per capita emissions are the highest of any advanced economy. This is not a record we should be proud of at all.

It’s often fatuously claimed that what countries like Australia do makes no difference to the global climate because we account for only about 1.2% of emissions. The reality is that Australia is the third-largest fossil fuel exporter in the world. Our own environment is especially vulnerable to global warming as the recent massive bushfires demonstrated. Our economy is also vulnerable to the transition away from fossil fuels. Denial of the reality of global warming and the need to transition to a prosperous clean energy economy is abandoning our responsibilities as much to Australian workers as it is to the world.

Hopefully, at this week’s summit the prime minister will receive the wake-up call the government needs. In the meantime, the rest of the world should not give up on us yet. If our country’s last decade has demonstrated anything – with five prime ministers in just eight years – it’s that political winds can change very quickly.

Kevin Rudd, from the Australian Labor party, was Australia’s prime minister between 2007 and 2010, and again in 2013. Malcolm Turnbull, from the Liberal party of Australia, was prime minister between 2015 and 2018.

First published in The Guardian


The post The Guardian: Kevin Rudd and Malcolm Turnbull – Australia’s ambition on climate change is held back by a toxic mix of rightwing politics, media and vested interests appeared first on Kevin Rudd.

Worse Than FailureNews Roundup: Single Point of Fun

Let’s quickly recap the past three news roundups:

  1. Flash’s effect on web user experience
  2. Adding every requirement as a feature in a computer*
  3. A terrible UI that cost $900 million

At first glance it appears that poorly thought-through user experience is my sole fascination. But when the Suez Canal blockage story from March kept my full attention for nearly 10 days, I realized that my real fascination is the unintended consequences of poorly thought-through user experiences. Sometimes the poor user experience is minor enough that a new protocol can be developed (in the case of Flash) or an anxiety-inducing technology gets made (in the case of the Expanscape).

But when all risks of the current user experience aren't considered, then there are real financial consequences - just like in the case of the Suez Canal where one ship, the Ever Given, blocked 10% of global trade. The fact so much traffic comes through the canal makes it a very important single point of failure. (In case anyone wasn't paying attention to global shipping news a few weeks ago, a large container ship piloted itself into the side of the canal. The ship is so famous it now has its own Wikipedia page, where it's been reported that the now-unstuck ship has been fined $916 million - $300 million of which is for “loss of reputation”.) So maybe my thesis needs to be amended to: the unintended consequences of poorly thought-through user experiences due to single points of failure. (It's a mouthful, but it feels right.)

There’s the story of Mizuho Bank, whose ATMs started eating customer cards after some routine data migration work caused country-wide system malfunctions. Single point of failure: The IT team’s risk management process.

There’s the story of Ubiquiti, whose data breach in January was a lot more...relatable after a whistleblower complaint. Single point of failure: Password managers. (They’re not as secure when you leave the front door open.)

The anonymous whistleblower alleges that the statement was written in such a way to imply that the vulnerability was on the third party and that Ubiquiti was impacted by that. Among other things, the whistleblower alleges that the hacker(s) were able to target the system by acquiring privileged credentials from a Ubiquiti employee’s LastPass account.

And then there’s the story of Netflix, who is trying to sever the only remaining way I leech off of my parents. Single point of failure: family.

Citi equity analyst Jason Bazinet said that password sharing costs U.S. streaming companies $25 billion annually in lost revenue, and Netflix owns about 25% of that loss.

Perhaps the final example doesn't seem as critical as the first two, but it's not your Netflix access at stake.

Single points of failure are fascinating to me because it gets easy to be complacent about dealing with these vulnerabilities as their value increases and no catastrophes arise. I hope to use this space to keep reacting to, and perhaps even be proactive about, technical and operational single-point-of-failure stories that I find.


Quick hits:

*As an addendum to my story, Nature Magazine published a study that shows that “people are more likely to consider solutions that add features than solutions that remove them, even when removing features is more efficient”.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.11: Several Updates

A new version 0.3.11 of Rblpapi is now arriving at CRAN. It comes two years after the release of Rblpapi 0.3.10 and brings a few updates and extensions.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the eleventh release since the package first appeared on CRAN in 2016. Changes are detailed below. Special thanks to James, Maxime and Michael for sending us pull requests.

Changes in Rblpapi version 0.3.11 (2021-04-20)

  • Support blpAutoAuthenticate and B-PIPE access, refactor and generalise authentication (James Bell in #285)

  • Deprecate excludeterm (John in #306)

  • Correct example in README.md (Maxime Legrand in #314)

  • Correct bds man page (and code) (Michael Kerber, and John, in #320)

  • Add GitHub Actions continuous integration (Dirk in #323)

  • Remove bashisms detected by R CMD check (Dirk #324)

  • Switch vignette to minidown (Dirk in #331)

  • Switch unit tests framework to tinytest (Dirk in #332)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityNote to Self: Create Non-Exhaustive List of Competitors

What was the best news you heard so far this month? Mine was learning that KrebsOnSecurity is listed as a restricted competitor by Gartner Inc. [NYSE:IT] — a $4 billion technology goliath whose analyst reports can move markets and shape the IT industry.

Earlier this month, a reader pointed my attention to the following notice from Gartner to clients who are seeking to promote Gartner reports about technology products and services:

What that notice says is that KrebsOnSecurity is somehow on Gartner’s “non exhaustive list of competitors,” i.e., online venues where technology companies are not allowed to promote Gartner reports about their products and services.

The bulk of Gartner’s revenue comes from subscription-based IT market research. As the largest organization dedicated to the analysis of software, Gartner’s network of analysts are well connected to the technology and software industries. Some have argued that Gartner is a kind of private social network, in that a significant portion of Gartner’s competitive position is based on its interaction with an extensive network of software vendors and buyers.

Either way, the company regularly serves as a virtual kingmaker with their trademark “Magic Quadrant” designations, which rate technology vendors and industries “based on proprietary qualitative data analysis methods to demonstrate market trends, such as direction, maturity and participants.”

The two main subjective criteria upon which Gartner bases those rankings are “the ability to execute” and “completeness of vision.” They also break companies out into categories such as “challengers,” “leaders,” “visionaries” and “niche players.”

Gartner’s 2020 “Magic Quadrant” for companies that provide “contact center as a service” offerings.

So when Gartner issues a public report forecasting that worldwide semiconductor revenue will fall, or that worldwide public cloud revenue will grow, those reports very often move markets.

Being listed by Gartner as a competitor has had no discernable financial impact on KrebsOnSecurity, or on its reporting. But I find this designation both flattering and remarkable given that this site seldom promotes technological solutions.

Nor have I ever offered paid consulting or custom market research (although I did give a paid keynote speech at Gartner’s 2015 conference in Orlando, which is still by far the largest crowd I’ve ever addressed).

Rather, KrebsOnSecurity has sought to spread cybersecurity awareness primarily by highlighting the “who” of cybercrime — stories told from the perspectives of both attackers and victims. What’s more, my research and content is available to everyone at the same time, and for free.

I rarely do market predictions (or prognostications of any kind), but in deference to Gartner allow me to posit a scenario in which major analyst firms start to become a less exclusive and perhaps less relevant voice as both an influencer and social network.

For years I have tried to corrupt more of my journalist colleagues into going it alone, noting that solo blogs and newsletters can not only provide a hefty boost over newsroom income, but they also can produce journalism that is just as timely, relevant and impactful.

Those enticements have mostly fallen on deaf ears. Recently, however, an increasing number of journalists from major publications have struck out on their own, some in reportorial roles, others as professional researchers and analysts in their own right.

If Gartner considers a one-man blogging operation as competition, I wonder what they’ll think of the coming collective output from an entire industry of newly emancipated reporters seeking more remuneration and freedom offered by independent publishing platforms like Substack, Patreon and Medium.

Oh, I doubt any group of independent journalists would seek to promulgate their own Non-Exclusive List of Competitors at Whom Thou Shalt Not Publish. But why should they? One’s ability to execute does not impair another’s completeness of vision, nor vice versa. According to Gartner, it takes all kinds, including visionaries, niche players, leaders and challengers.

Cryptogram When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than us. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated­ — and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors telling it that it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI — human engineers programmed a regular computer to cheat — but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals — known in AI as objective functions — need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback — did you win or lose? — are all precisely specified. And there’s no context to know outside of those things that would muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting are systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more — and more important — decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens — let alone thousands — of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when the computers start doing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com

Worse Than FailureCodeSOD: Universal Problems

Universally Unique Identifiers are a very practical solution to unique IDs. With roughly 5.3×10^36 possible values (a version-4 UUID carries 122 random bits), the odds of having a collision are, well, astronomical. They're fast enough to generate, random enough to be unique, and there are so many of them that- well, they may not be universally unique through all time, but they're certainly unique enough.

Right?

Krysk's predecessor isn't so confident.

key = uuid4()
if(key in self.unloadQueue):
    # it probably couldn't possibly collide twice right?
    # right guys? :D
    key = uuid4()
self.unloadQueue[key] = unloaded

The comments explain the code, but leave me with so many more questions. Did they actually have a collision in the past? Exactly how many entries are they putting in this unloadQueue? The plausible explanation is that the developer responsible was being overly cautious. But… were they?
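For a rough sense of scale — my own back-of-the-envelope sketch, not anything from the submitted code — the birthday bound says that even a billion keys in that queue leaves the odds of any collision somewhere around 10^-19:

import math

def uuid4_collision_probability(n: int) -> float:
    """Birthday-bound estimate of the chance of any collision among n random UUIDv4 values."""
    space = 2 ** 122  # a version-4 UUID has 122 random bits; the other 6 are fixed by the spec
    return -math.expm1(-n * (n - 1) / (2 * space))

print(uuid4_collision_probability(10 ** 9))  # roughly 9e-20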

Krysk writes: "Some code in our production server software. Comments like these are the stuff of nightmares for maintenance programmers."

I don't know about nightmares, but I might lose some sleep puzzling over this.
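And for what it's worth, if the original author genuinely feared collisions, a loop would at least close the gap that a single retry leaves open — a hypothetical rewrite, sketched outside the original class:

from uuid import uuid4

def fresh_key(existing: dict):
    """Keep drawing UUIDv4 keys until one is not already present (hypothetical helper)."""
    key = uuid4()
    while key in existing:  # astronomically unlikely to loop even once
        key = uuid4()
    return key

# usage, roughly mirroring the snippet above:
# self.unloadQueue[fresh_key(self.unloadQueue)] = unloaded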

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram Biden Administration Imposes Sanctions on Russia for SolarWinds

On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks.

I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that “the sanctions will be among what President Biden’s aides say are ‘seen and unseen’ steps in response to the hacking,” which implies that there’s more we don’t know about. Also, that “the new measures are intended to have a noticeable effect on the Russian economy.” Honestly, I don’t know what the US should do. Anything that feels more proportional is also more escalatory. I’m sure that dilemma is part of the Russian calculus in all this.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 03)

This week on my podcast, part three of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Planet DebianRitesh Raj Sarraf: Catching Up Your Sources

I’ve mostly had the preference of controlling my data rather than depend on someone else. That’s one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked by a single entity. If I had the resources, I’d prefer all digital data to be broken down to its simplest form for storage, like email format, and empower the user with it i.e. their data.

Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information, which is shared in return for the free service, is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication

So first, we had email and usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently.

But not everything, back then, was that great either. For example, instant messaging was very closed and centralised then too. Things like: ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs.

Not much has changed in the current day either. We now have domination by: Facebook Messenger, Google (whatever new marketing term they introduce), WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized.

Over all this time, I’m yet to see a product come up with good (business) intentions, to really empower the end user. In this information age, the most invaluable data is user activity. That’s the one piece of data everyone is after. Mind you, if you declined to share that bit of data in exchange for the free services, then free services like Facebook, Google, Instagram, WhatsApp, Truecaller, Twitter; none of them would come to you at all. Try it out.

So the reality is that while you may not be valuing the data you offer in exchange correctly, there’s a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in for these tech giants and give them their personal bit, for free services in return. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data

I’m fond of keeping an archive folder in my mailbox. A folder that holds significant events in the form of an email usually, if documented. Over the years, I chose to resort to the email format because I felt it was more reliable in the longer term than any other formats.

The next best would be plain text.

In my lifetime, I have learnt a lot from the internet; so it is natural that my preference has been with it. Mailing Lists, IRCs, HOWTOs, Guides, Blog posts; all have helped. And over the years, I’ve come across hundreds of such content that I’d always like to preserve.

Now there are multiple ways of preserving data. Like, for example, the big tech giants. In most usual cases, your data should be fine with a tech giant for your lifetime. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc; as long as they abide by their policies.

There’s also the catch of alignment. As the user, you should take care to align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google Feed service too. Maybe Google Photos in the coming decade, just like Google Picasa in the previous (or current) one.

History what is

On the topic of retaining information, let’s take a small drift. I still admire our ancestors. I don’t know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc; but one thing’s for sure, they were able to leave a fine means of communication. They are all gone but a large number of those events are evident through the creations that they left. Some of those events have been strong enough that later rulers/invaders have had tough times trying to wipe them out from history. Remember, history is usually not the truth, but the statement to be believed by the teller. And the teller is usually the survivor, or the winner, as you may call it.

But still, the information retention techniques were better.

I haven’t visited, but admire whosoever built the Kailasa Temple, Ellora, without which we’d be made to believe what not by all invaders and rulers of the region. But the majestic standing of the temple is a fine example of the history and the events that have occurred in the past.

Dominance has the power to rewrite history and unfortunately that’s true and it has done its part. It is just that in a mere human’s defined lifetime, it is not possible to witness the transition from current to history, and say that I was there then and I’m here now, and this is not the reality.

And if not dominance, there’s always the other bit, hearsay. With it, you can always put anything up for dispute. Because there’s no way one can go back in time and produce a fine evidence.

There’s also a part about religion. Religion can be highly sentimental. And religion can be a solid way to get an agenda going. For example, in India - a country which today is constitutionally a secular country, there have been multiple attempts to discard the belief, that never ever did the thing called Ramayana exist. That the Rama Setu, nicely reworded as Adam’s Bridge by who so ever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki, aren’t going to come over and prove that that is true or false. So such subjects serve as a solid base to get an agenda going. And probably we’ve even succeeded in proving and believing that there was never an event like Ramayana or the Mahabharata. Nor was there ever any empire other than the Moghul or the British Empire.

But yes, whosoever made the Ellora Temple or the many many more of such creations, did a fine job of making a dent for the future, to know of what the history possibly could also be.

Enough of the drift

So, in my opinion, having events documented is important. It’d be nice to have skills documented too so that they can be passed over generations, but that’s a debatable topic. But events, I believe, should be documented. And documented in the best possible ways so that their existence is not diminished.

Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests that can make excuses in the name of compliance and conformance.

So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today’s age. If I really had the resources, I’d not rely on the digital age at all. This age has no guarantee of retaining and recording information in any reliable manner. Instead, it is just mostly junk, which is manipulative and changeable, conditionally.

Email and RSS

So for my communication, I like to prefer emails over any other means. That doesn’t mean I don’t use the current trends. I do. But this blog is mostly about penning my desires. And desire be to have communication over email format.

Such is the case that for information useful over the internet, I crave to have it formatted in email for archival.

RSS feeds are my most common mode of keeping track of information I care about. Not all that I care for is available in RSS feeds but hey, such is life. And adaptability is okay.

But my preference is still RSS.

So I use RSS feeds through a fine piece of software called feed2imap. A piece of software that fits my bill fairly well.

feed2imap is:

  • An RSS feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can push the converted email over POP/IMAP
  • Can convert all image content to email MIME attachments

In a gist, it makes the online content available to me offline in the most admired email format.
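To give an idea of what drives it, feed2imap reads a small YAML configuration file, ~/.feed2imaprc. A rough sketch is below; the feed name and the IMAP folder are illustrative placeholders, not my actual setup:

feeds:
  - name: planet-debian
    url: https://planet.debian.org/rss20.xml
    target: imap://user:password@localhost/INBOX.Feeds.PlanetDebian

Each entry simply names a feed, points at its URL, and says which IMAP folder the generated emails should land in.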

In my mail box, in today’s day, my preferred email client is Evolution. It does a good job of dealing with such emails (RSS feed items). An image example of accessing an RSS feed item through it is below.

The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come as a replacement, but my data will still be mine.

Trends have shown that data mobility is a commodity expectation now. As such, I wanted to have something fill in that gap for me, so that I could easily access my information - which I’ve kept in my preferred format - through today’s trendy options.

I tried multiple options on my current preferred platform of choice for mobiles, i.e. Android. Finally I came across Aqua Mail, which fits in most of my requirements.

Aqua Mail does

  • Connect to my laptop over IMAP
  • Can sync the requested folders
  • Can sync requested data for offline accessibility
  • Can present the synchronized data in a quite flexible and customizable manner, to match my taste
  • Has a very extensible User Interface, allowing me to customize it to my taste

Pictures can do a better job of describing my English words.

All of this is done with no dependence on network connectivity, post the sync. And all information is stored in the simplest possible format.

Worse Than FailureCodeSOD: Maximum Max

Imagine you were browsing a C++ codebase and found a signature in a header file like this:

int max (int a, int b, int c, int d);

Now, what do you think this function does? Would your theories change if I told you that this was just dropped in the header for an otherwise unrelated class file that doesn't actually use the max function?

Let's look at the implementation, supplied by Mariette.

int max (int a, int b, int c, int d)
{
    if (c == d)
    {
        // Do nothing..
    }
    if (a >= b)
    {
        return a;
    }
    else
    {
        return b;
    }
}

Now, I have a bit of a reputation of being a ternary hater, but I hate bad ternaries. Every time I write a max function, I write it with a ternary. In that case, it's way more readable, and so while I shouldn't fault the closing if statement in this function, it annoys me. But it's not the WTF anyway.

This max function takes four parameters, but only actually uses two of them. The //Do nothing.. comment is in the code, and that first if statement is there specifically because if it weren't, the compiler would throw warnings about unused parameters.

Those warnings are there for a reason. I suspect someone saw the warning, and contemplated fixing the function, but after seeing the wall of compiler errors generated by changing the function signature, chose this instead. Or maybe they even went so far as to change the behavior, to make it find the max of all four, only to discover that tests failed because there were methods which depended on it only checking the first two parameters.

I'm joking. I assume there weren't any tests. But it did probably crash when someone changed the behavior. Fortunately, no one had used the method expecting it to use all four parameters. Yet.

Mariette confirmed that attempts to fix the function broke many things in the application, so she did the only thing she could do: moved the function into the appropriate implementation files and surrounded it with comments describing its unusual behavior.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianIan Jackson: Otter - a game server for arbitrary board games

One of the things that I found most vexing about lockdown was that I was unable to play some of my favourite board games. There are online systems for many games, but not all. And online systems cannot support games like Mao where the players make up the rules as we go along.

I had an idea for how to solve this problem, and set about implementing it. The result is Otter (the Online Table Top Environment Renderer).

We have played a number of fun games of Penultima with it, and have recently branched out into Mao. The Otter is now ready to be released!

More about Otter

(cribbed shamelessly from the README)

Otter, the Online Table Top Environment Renderer, is an online game system.

But it is not like most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on.

So it’s something like a “tabletop simulator” (but it does not have any 3D, or a physics engine, or anything like that).

This means that with Otter:

  • Supporting a new game, that Otter doesn’t know about yet, would usually not involve writing or modifying any computer programs.

  • If Otter already has the necessary game elements (cards, say) all you need to do is write a spec file saying what should be on the table at the start of the game. For example, most Whist variants that start with a standard pack of 52 cards are already playable.

  • You can play games where the rules change as the game goes along, or are made up by the players, or are too complicated to write as a computer program.

  • House rules are no problem, since the computer isn’t enforcing the rules - you and your friends are.

  • Everyone can interact with different items on the game table, at any time. (Otter doesn’t know about your game’s turn-taking, so doesn’t know whose turn it might be.)

Installation and usage

Otter is fully functional, but the installation and account management arrangements are rather unsophisticated and un-webby. And there is not currently any publicly available instance you can use to try it out.

Users on chiark will find an instance there.

Other people who are interested in hosting games (of Penultima or Mao, or other games we might support) will have to find a Unix host or VM to install Otter on, and will probably want help from a Unix sysadmin.

Otter is distributed via git, and is available on Salsa, Debian's gitlab instance.

There is documentation online.

Future plans

I have a number of ideas for improvement, which go off in many different directions.

Quite high up on my priority list is making it possible for players to upload and share game materials (cards, boards, pieces, and so on), rather than just using the ones which are bundled with Otter itself (or dumping files ad-hoc on the server). This will make it much easier to play new games. One big reason I wrote Otter is that I wanted to liberate boardgame players from the need to implement their game rules as computer code.

The game management and account management is currently done with a command line tool. It would be lovely to improve that, but making a fully-featured management web ui would be a lot of work.

Screenshots!

(Click for the full size images.)




,

Planet DebianRussell Coker: IMA/EVM Certificates

I’ve been experimenting with IMA/EVM. Here is the Sourceforge page for the upstream project [1]. The aim of that project is to check hashes and maybe public key signatures on files before performing read/exec type operations on them. It can be used as the next logical step from booting a signed kernel with TPM. I am a long way from getting that sort of thing going, just getting the kernel to boot and load keys is my current challenge and isn’t helped due to the lack of documentation on error messages. This blog post started as a way of documenting the error messages so future people who google errors can get a useful result. I am not trying to document everything, just help people get through some of the first problems.

I am using Debian for my work, but some of this will apply to other distributions (particularly the kernel error messages). The Debian distribution has the ima-evm-utils package but no other support for IMA/EVM. To get this going in Debian you need to compile your own kernel with IMA support and then boot it with kernel command-line options to enable IMA; in recent kernels that includes “lsm=integrity” as a mandatory requirement to prevent a kernel Oops after mounting the initrd (there is already a patch to fix this).
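As a rough illustration only (the policy you actually want will differ, and I am quoting the parameter names from the kernel documentation rather than from a known-good setup), a kernel command line for experimenting could look like this:

root=/dev/sda1 ro lsm=integrity ima_policy=appraise_tcb ima_appraise=fix

The lsm=integrity part is the bit recent kernels insist on, ima_policy= selects one of the built-in policies, and ima_appraise= selects the appraisal mode.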

If you want to just use IMA (not get involved in development) then a good option would be to use RHEL (here is their documentation) [2] or SUSE (here is their documentation) [3]. Note that both RHEL and SUSE use older kernels so their documentation WILL lead you astray if you try and use the latest kernel.org kernel.

The Debian initrd

I created a script named /etc/initramfs-tools/hooks/keys with the following contents to copy the key(s) from /etc/keys to the initrd where the kernel will load it/them. The kernel configuration determines whether x509_evm.der or x509_ima.der (or maybe both) is loaded. I haven’t yet worked out which key is needed when.

#!/bin/bash

mkdir -p ${DESTDIR}/etc/keys
cp /etc/keys/* ${DESTDIR}/etc/keys

Making the Keys

#!/bin/sh

GENKEY=ima.genkey

cat << __EOF__ >$GENKEY
[ req ]
default_bits = 1024
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = v3_usr

[ req_distinguished_name ]
O = `hostname`
CN = `whoami` signing key
emailAddress = `whoami`@`hostname`

[ v3_usr ]
basicConstraints=critical,CA:FALSE
#basicConstraints=CA:FALSE
keyUsage=digitalSignature
#keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid
#authorityKeyIdentifier=keyid,issuer
__EOF__

openssl req -new -nodes -utf8 -sha1 -days 365 -batch -config $GENKEY \
                -out csr_ima.pem -keyout privkey_ima.pem
openssl x509 -req -in csr_ima.pem -days 365 -extfile $GENKEY -extensions v3_usr \
                -CA ~/kern/linux-5.11.14/certs/signing_key.pem -CAkey ~/kern/linux-5.11.14/certs/signing_key.pem -CAcreateserial \
                -outform DER -out x509_evm.der

To get the below result I used the above script to generate a key; it is the /usr/share/doc/ima-evm-utils/examples/ima-genkey.sh script from the ima-evm-utils package, changed to use the key generated from kernel compilation to sign it. You can copy the files in the certs directory from one kernel build tree to another to have the same certificate and use the same initrd configuration. After generating the key I copied x509_evm.der to /etc/keys on the target host and built the initrd before rebooting.
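For completeness, the copy and the initrd rebuild are just the standard Debian steps, roughly the following (run on the target host, with the hook script above picking the key up):

cp x509_evm.der /etc/keys/
update-initramfs -u -k all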

[    1.050321] integrity: Loading X.509 certificate: /etc/keys/x509_evm.der
[    1.092560] integrity: Loaded X.509 cert 'xev: etbe signing key: 99d4fa9051e2c178017180df5fcc6e5dbd8bb606'

Errors

Here are some of the kernel error messages I received along with my best interpretation of what they mean.

[ 1.062031] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[ 1.063689] integrity: Problem loading X.509 certificate -74

Error -74 means -EBADMSG, which means there’s something wrong with the certificate file. I have got that from /etc/keys/x509_ima.der not being in der format and I have got it from a der file that contained a key pair that wasn’t signed.
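A quick way of checking what is actually in the file (i.e. whether it really is a DER encoded X.509 certificate rather than a PEM file or a bare key pair) is to ask openssl to parse it:

openssl x509 -inform DER -in /etc/keys/x509_ima.der -noout -text

If openssl can't print the certificate then the kernel won't be able to load it either.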

[    1.049170] integrity: Loading X.509 certificate: /etc/keys/x509_ima.der
[    1.093092] integrity: Problem loading X.509 certificate -126

Error -126 means -ENOKEY, so the key wasn’t in the file or the key wasn’t signed by the kernel signing key.

[    1.074759] integrity: Unable to open file: /etc/keys/x509_evm.der (-2)

Error -2 means -ENOENT, so the file wasn’t found on the initrd. Note that it does NOT look at the root filesystem.

References

Planet DebianJunichi Uekawa: Rewrote my pomodoro technique timer.

Rewrote my pomodoro technique timer. I've been iterating on how I operate and focus. Too much focus exhausts me. I'm trying out Focusmate's method of 50 minutes of focus time and 10 minutes of break. Here is a web app that tries to start the timer at the hour and starts the break in the last 10 minutes.

Planet DebianBits from Debian: Debian Project Leader election 2021, Jonathan Carter re-elected.

The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Jonathan Carter!

455 of 1,018 Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2021 page.

Many thanks to Jonathan Carter and Sruthi Chandran for their campaigns, and to our Developers for voting.

,

Planet DebianDirk Eddelbuettel: RcppAPT 0.0.7: Micro Update

A new version of the RcppAPT package interfacing from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN yesterday. This comes a good year after the previous maintenance update for release 0.0.6.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

The maintenance release responds to a call for updates from CRAN desiring that all implicit dependencies on the packages markdown and rmarkdown be made explicit via a Suggests: entry. Two of the many packages I maintain were part of the (large !!) list in the CRAN email, and this is one of them. While making the update, we refreshed two other packaging details.

Changes in version 0.0.7 (2021-04-16)

  • Add rmarkdown to Suggests: as an implicit conditional dependency

  • Switch vignette to minidown and its water framework, add minidown to Suggests as well

  • Update two URLs in the README.md file

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianChris Lamb: Tour d'Orwell: Wallington

Previously in George Orwell travel posts: Sutton Courtenay, Marrakesh, Hampstead, Paris, Southwold & The River Orwell.

§

Wallington is a small village in Hertfordshire, approximately fifty miles north of London and twenty-five miles from the outskirts of Cambridge. George Orwell lived at No. 2 Kits Lane, better known as 'The Stores', on a mostly-permanent basis from 1936 to 1940, but he would continue to journey up from London on occasional weekends until 1947.

His first reference to The Stores can be found in early 1936, where Orwell wrote from Lancashire during research for The Road to Wigan Pier to lament that he would very much like "to do some work again — impossible, of course, in the [current] surroundings":

I am arranging to take a cottage at Wallington near Baldock in Herts, rather a pig in a poke because I have never seen it, but I am trusting the friends who have chosen it for me, and it is very cheap, only 7s. 6d. a week [£20 in 2021].

For those not steeped in English colloquialisms, "a pig in a poke" is an item bought without seeing it in advance. In fact, one general insight that may be drawn from reading Orwell's extant correspondence is just how much he relied on a close network of friends, belying the lazy and hagiographical picture of an independent and solitary figure. (Still, even Orwell cultivated this image at times, such as in a patently autobiographical essay he wrote in 1946. But note the off-hand reference to varicose veins here, for they would shortly re-appear as a symbol of Winston's repressed humanity in Nineteen Eighty-Four.)

Nevertheless, the porcine reference in Orwell's idiom is particularly apt, given that he wrote the bulk of Animal Farm at The Stores — his 1945 novella, of course, portraying a revolution betrayed by allegorical pigs. Orwell even drew inspiration for his 'fairy story' from Wallington itself, principally by naming the novel's farm 'Manor Farm', just as it is in the village. But the allusion to the purchase of goods is just as appropriate, as Orwell returned The Stores to its former status as the village shop, even going so far as to drill peepholes in a door to keep an Orwellian eye on the jars of sweets. (Unfortunately, we cannot complete a tidy circle of references, as whilst it is certainly Napoleon — Animal Farm's substitute for Stalin — who is quoted as describing Britain as "a nation of shopkeepers", it was actually the maraisard Bertrand Barère who first used the phrase).

§

"It isn't what you might call luxurious", he wrote in typical British understatement, but Orwell did warmly emote on his animals. He kept hens in Wallington (perhaps even inspiring the opening line of Animal Farm: "Mr Jones, of the Manor Farm, had locked the hen-houses for the night, but was too drunk to remember to shut the pop-holes.") and a photograph even survives of Orwell feeding his pet goat, Muriel. Orwell's goat was the eponymous inspiration for the white goat in Animal Farm, a decidedly under-analysed character who, to me, serves to represent an intelligentsia that is highly perceptive of the declining political climate but, seemingly content with merely observing it, does not offer any meaningful opposition. Muriel's aesthetic of resistance, particularly in her reporting on the changes made to the Seven Commandments of the farm, thus rehearses the well-meaning (yet functionally ineffective) affinity for 'fact checking' which proliferates today. But I digress.

There is a tendency to "read Orwell backwards", so I must point out that Orwell wrote several other works whilst at The Stores as well. This includes his Homage to Catalonia, his aforementioned The Road to Wigan Pier, not to mention countless indispensable reviews and essays as well. Indeed, another result of focusing exclusively on Orwell's last works is that we only encounter his ideas in their highly-refined forms, whilst in reality, it often took many years for concepts to fully mature — we first see, for instance, the now-infamous idea of "2 + 2 = 5" in an essay written in 1939.

This is important to understand for two reasons. Although the ostentatiously austere Barnhill might have housed the physical labour of its writing, it is refreshing to reflect that the philosophical heavy-lifting of Nineteen Eighty-Four may have been performed in a relatively undistinguished North Hertfordshire village. But perhaps more importantly, it emphasises that Orwell was just a man, and that any of us is fully capable of equally significant insight, with — to quote Christopher Hitchens — "little except a battered typewriter and a certain resilience."

§

The red commemorative plaque not only limits Orwell's tenure to the time he was permanently in the village, it omits all reference to his first wife, Eileen O'Shaughnessy, whom he married in the village church in 1936.
Wallington's Manor Farm, the inspiration for the farm in Animal Farm. The lower sign enjoins the public to inform the police "if you see anyone on the [church] roof acting suspiciously". Non-UK-residents may be surprised to learn about the systematic theft of lead.

Planet DebianSteve Kemp: Having fun with CP/M on a Z80 single-board computer.

In the past, I've talked about building a Z80-based computer. I made some progress towards that goal, in the sense that I took the initial (trivial steps) towards making something:

  • I built a clock-circuit.
  • I wired up a Z80 processor to the clock.
  • I got the thing running an endless stream of NOP instructions.
    • No RAM/ROM connected, tying all the bus-lines low, meaning every attempted memory-read returned 0x00 which is the Z80 NOP instruction.

But then I stalled, repeatedly, at designing an interface to RAM and ROM, so that it could actually do something useful. Over the lockdown I've been in two minds about getting sucked back down the rabbit-hole, so I compromised. I did a bit of searching on tindie, and similar places, and figured I'd buy a Z80-based single board computer. My requirements were minimal:

  • It must run CP/M.
  • The source-code to "everything" must be available.
  • I want it to run standalone, and connect to a host via a serial-port.

With those goals there were a bunch of boards to choose from, rc2014 is the standard choice - a well engineered system which uses a common backplane and lets you build mini-boards to add functionality. So first you build the CPU-card, then the RAM card, then the flash-disk card, etc. Over-engineered in one sense, extensible in another. (There are some single-board variants to cut down on soldering overhead, at a cost of less flexibility.)

After a while I came across https://8bitstack.co.uk/, which describes a simple board called the Z80 playground.

The advantage of this design is that it loads code from a USB stick, making it easy to transfer files to/from it, without the need for a compact flash card, or similar. The downside is that the system has only 64K RAM, meaning it cannot run CP/M 3, only 2.2. (CP/M 3.x requires more RAM, and a banking/paging system setup to swap between pages.)

When the system boots it loads code from an EEPROM, which then fetches the CP/M files from the USB-stick, copies them into RAM and executes them. The memory map can be split so you either have ROM & RAM, or you have just RAM (after the boot the ROM will be switched off). To change the initial stuff you need to reprogram the EEPROM, after that it's just a matter of adding binaries to the stick or transferring them over the serial port.

In only a couple of hours I got the basic stuff working as well as I needed:

  • A z80-assembler on my Linux desktop to build simple binaries.
  • An installation of Turbo Pascal 3.00A on the system itself.
  • An installation of FORTH on the system itself.
    • Which is nice.
  • A couple of simple games compiled from Pascal
    • Snake, Tetris, etc.
  • The Zork trilogy installed, along with Hitchhikers guide.

I had some fun with a CP/M emulator to get my hand back in things before the board arrived, and using that I tested my first "real" assembly language program (cls to clear the screen), as well as got the hang of using the wordstar keyboard shortcuts as used within the turbo pascal environment.

I have some plans for development:

  • Add command-line history (page-up/page-down) for the CP/M command-processor.
  • Add paging to TYPE, and allow terminating with Q.

Nothing major, but fun changes that won't be too difficult to implement.

Since CP/M 2.x has no concept of sub-directories you end up using drives for everything. I implemented a "search-path" so that when you type "FOO" it will attempt to run "A:FOO.COM" if there is no matching file on the current drive. That's a nicer user-experience all round.

I also wrote some Z80 assembly code to search all drives for an executable, if it isn't found on the current drive and isn't already qualified (remember, CP/M doesn't have a concept of sub-directories). That's actually pretty useful:

  B>LOCATE H*.COM
  P:HELLO   COM
  P:HELLO2  COM
  G:HITCH   COM
  E:HYPHEN  COM

I've also written some other trivial assembly language tools, which was surprisingly relaxing. Especially once I got back into the zen mode of optimizing for size.

I forked the upstream repository, mostly to tidy up the contents, rather than because I want to go into my own direction. I'll keep the contents in sync, because there's no point splitting a community even further - I guess there are fewer than 100 of these boards in the wild, probably far far fewer!

,

Cryptogram Details on the Unlocking of the San Bernardino Terrorist’s iPhone

The Washington Post has published a long story on the unlocking of the San Bernardino Terrorist’s iPhone 5C in 2016. We all thought it was an Israeli company called Cellebrite. It was actually an Australian company called Azimuth Security.

Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called “the Mozart of exploit design,” had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone’s lightning port, according to the person.

[…]

Using the flaw Dowd found, Wang, based in Portland, Ore., created an exploit that enabled initial access to the phone – a foot in the door. Then he hitched it to another exploit that permitted greater maneuverability, according to the people. And then he linked that to a final exploit that another Azimuth researcher had already created for iPhones, giving him full control over the phone’s core processor – the brains of the device. From there, he wrote software that rapidly tried all combinations of the passcode, bypassing other features, such as the one that erased data after 10 incorrect tries.

Apple is suing various companies over this sort of thing. The article goes into the details.

Krebs on SecurityDid Someone at the Commerce Dept. Find a SolarWinds Backdoor in Aug. 2020?

On Aug. 13, 2020, someone uploaded a suspected malicious file to VirusTotal, a service that scans submitted files against more than five dozen antivirus and security products. Last month, Microsoft and FireEye identified that file as a newly-discovered fourth malware backdoor used in the sprawling SolarWinds supply chain hack. An analysis of the malicious file and other submissions by the same VirusTotal user suggest the account that initially flagged the backdoor as suspicious belongs to IT personnel at the National Telecommunications and Information Administration (NTIA), a division of the U.S. Commerce Department that handles telecommunications and Internet policy.

Both Microsoft and FireEye published blog posts on Mar. 4 concerning a new backdoor found on high-value targets that were compromised by the SolarWinds attackers. FireEye refers to the backdoor as “Sunshuttle,” whereas Microsoft calls it “GoldMax.” FireEye says the Sunshuttle backdoor was named “Lexicon.exe,” and had the unique file signatures or “hashes” of “9466c865f7498a35e4e1a8f48ef1dffd” (MD5) and b9a2c986b6ad1eb4cfb0303baede906936fe96396f3cf490b0984a4798d741d8 (SHA-256).

“In August 2020, a U.S.-based entity uploaded a new backdoor that we have named SUNSHUTTLE to a public malware repository,” FireEye wrote.

The “Sunshuttle” or “GoldMax” backdoor, as identified by FireEye and Microsoft, respectively. Image: VirusTotal.com.

A search in VirusTotal’s malware repository shows that on Aug. 13, 2020 someone uploaded a file with that same name and file hashes. It’s often not hard to look through VirusTotal and find files submitted by specific users over time, and several of those submitted by the same user over nearly two years include messages and files sent to email addresses for people currently working in NTIA’s information technology department.

An apparently internal email that got uploaded to VirusTotal in Feb. 2020 by the same account that uploaded the Sunshuttle backdoor malware to VirusTotal in August 2020.

The NTIA did not respond to requests for comment. But in December 2020, The Wall Street Journal reported the NTIA was among multiple federal agencies that had email and files plundered by the SolarWinds attackers. “The hackers broke into about three dozen email accounts since June at the NTIA, including accounts belonging to the agency’s senior leadership, according to a U.S. official familiar with the matter,” The Journal wrote.

It’s unclear what, if anything, NTIA’s IT staff did in response to scanning the backdoor file back in Aug. 2020. But the world would not find out about the SolarWinds debacle until early December 2020, when FireEye first disclosed the extent of its own compromise from the SolarWinds malware and published details about the tools and techniques used by the perpetrators.

The SolarWinds attack involved malicious code being surreptitiously inserted into updates shipped by SolarWinds for some 18,000 users of its Orion network management software. Beginning in March 2020, the attackers then used the access afforded by the compromised SolarWinds software to push additional backdoors and tools to targets when they wanted deeper access to email and network communications.

U.S. intelligence agencies have attributed the SolarWinds hack to an arm of the Russian state intelligence known as the SVR, which also was determined to have been involved in the hacking of the Democratic National Committee in 2016. On Thursday, the White House issued long-expected sanctions against Russia in response to the SolarWinds attack and other malicious cyber activity, leveling economic sanctions against 32 entities and individuals for disinformation efforts and for carrying out the Russian government’s interference in the 2020 presidential election.

The U.S. Treasury Department (which also was hit with second-stage malware that let the SolarWinds attackers read Treasury email communications) has posted a full list of those targeted, including six Russian companies for providing support to the cyber activities of the Russian intelligence service.

Also on Thursday, the FBI, National Security Agency (NSA), and the Cybersecurity and Infrastructure Security Agency (CISA) issued a joint advisory on several vulnerabilities in widely-used software products that the same Russian intelligence units have been attacking to further their exploits in the SolarWinds hack. Among those is CVE-2020-4006, a security hole in VMWare Workspace One Access that VMware patched in December 2020 after hearing about it from the NSA.

On December 18, VMWare saw its stock price dip 5.5 percent after KrebsOnSecurity published a report linking the flaw to NSA reports about the Russian cyberspies behind the SolarWinds attack. At the time, VMWare was saying it had received “no notification or indication that CVE-2020-4006 was used in conjunction with the SolarWinds supply chain compromise.” As a result, a number of readers responded that making this connection was tenuous, circumstantial and speculative.

But the joint advisory makes clear the VMWare flaw was in fact used by SolarWinds attackers to further their exploits.

“Recent Russian SVR activities include compromising SolarWinds Orion software updates, targeting COVID-19 research facilities through deploying WellMess malware, and leveraging a VMware vulnerability that was a zero-day at the time for follow-on Security Assertion Markup Language (SAML) authentication abuse,” the NSA’s advisory (PDF) reads. “SVR cyber actors also used authentication abuse tactics following SolarWinds-based breaches.”

Officials within the Biden administration have told media outlets that a portion of the United States’ response to the SolarWinds hack would not be discussed publicly. But some security experts are concerned that Russian intelligence officials may still have access to networks that ran the backdoored SolarWinds software, and that the Russians could use that access to effect a destructive or disruptive network response of their own, The New York Times reports.

“Inside American intelligence agencies, there have been warnings that the SolarWinds attack — which enabled the SVR to place ‘back doors’ in the computer networks — could give Russia a pathway for malicious activity against government agencies and corporations,” The Times observed.

Worse Than FailureError'd: Days of Future Passed

After reading through so many of your submissions these last few weeks, I'm beginning to notice certain patterns emerging. One of these patterns is that despite the fact that dates are literally as old as time, people seem pathologically prone to bungling them. Surely our readers are already familiar with the notable "Falsehoods Programmers Believe" series of blog posts, but if you happen somehow to have been living under an Internet rock (or a cabbage leaf) for the last few decades, you might start your time travails at Infinite Undo. The examples here are not the most egregious ever (there are better coming later or sooner) but they are today's:

Famished Dug S. peckishly pronounces "It's about time!"

 

Far luckier Zachary Palmer appears to have found the perfect solution to poor Dug's delayed dinner: "It took the shipping company a little bit to start moving my package, but they made up for it by shipping it faster than the speed of light," says he.

 

Patient Philip awaits his {ship,prince,processor}: " B&H hitting us with hard truth on when the new line of AMD CPUs will really be available."

 

While an apparent contemporary of the latest royal Eric R. creakily complains " This website for tracking my continuing education hours should be smart enough not to let me enter a date in the year 21 AD"

 

But as for His Lateness Himself, royal servant Steve A. has uncovered a scoop fit for Q:

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram NSA Discloses Vulnerabilities in Microsoft Exchange

Amongst the 100+ vulnerabilities patched in this month’s Patch Tuesday, there are four in Microsoft Exchange that were disclosed by the NSA.

Planet DebianDirk Eddelbuettel: Announcing ‘Introductions to Emacs Speaks Statistics’

A new website containing introductory videos and slide decks is now available for your perusal at ess-intro.github.io. It provides a series of introductions to the excellent Emacs Speaks Statistics (ESS) mode for the Emacs editor.

This effort started following my little tips, tricks, tools and toys series of short videos and slide decks “for the command-line and R, broadly-speaking”. Which I had mentioned to friends curious about Emacs, and on the ess-help mailing list. And lo and behold, over the fall and winter sixteen of us came together in one GitHub org and are now proud to present the initial batch of videos about first steps, installing, using with Spacemacs, customizing, and org-mode with ESS. More may hopefully follow; the group is open and you too can join: see the main repo and its wiki.

This is in fact the initial announcement post, so it is flattering that we have already received over 350 views, four comments and twenty-one likes.

We hope it proves to be a useful starting point for some of you. The Emacs editor is quite uniquely powerful, and coupled with ESS makes for a rather nice environment for programming with data, or analysing, visualising, exploring, … data. But we are not zealots: there are many editors and environments under the sun, and most people are perfectly happy with their choice, which is wonderful. We also like ours, and sometimes someone asks ‘tell me more’ or ‘how do I start’. We hope this series satisfies that initial curiosity and takes it from here.

With that, my thanks to Frédéric, Alex, Tyler and Greg for the initial batch, and for everybody else in the org who chipped in with comments and suggestion. We hope it grows from here, so happy Emacsing with R from us!

,

Planet DebianIan Jackson: Dreamwidth blocking many RSS readers and aggregators

There is a serious problem with Dreamwidth, which is impeding access for many RSS reader tools.

This started at around 0500 UTC on Wednesday morning, according to my own RSS reader cron job. A friend found #43443 in the DW ticket tracker, where a user of a minority web browser found they were blocked.

Local tests demonstrated that Dreamwidth had applied blocking by the HTTP User-Agent header, and were rejecting all user-agents not specifically permitted. Today, this rule has been relaxed and unknown user-agents are permitted. But user-agents for general http client libraries are still blocked.

I'm aware of three unresolved tickets about this: #43444 #43445 #43447

We're told there by a volunteer member of Dreamwidth's support staff that this has been done deliberately for "blocking automated traffic". I'm sure the volunteer is just relaying what they've been told by whoever is struggling to deal with what I suppose is probably a spam problem. But it's still rather unsatisfactory.

I have suggested in my own ticket that a good solution might be to apply the new block only to posting and commenting (eg, maybe, by applying it only to HTTP POST requests). If the problem is indeed spam then that ought to be good enough, and would still let RSS readers work properly.

I'm told that this new blocking has been done by "implementing" (actually, configuring or enabling) "some AWS rules for blocking automated traffic". I don't know what facilities AWS provides. This kind of helplessness is of course precisely the kind of thing that the Free Software movement is against and precisely the kind of thing that proprietary services like AWS produce.

I don't know if this blog entry will appear on planet.debian.org and on other people's readers and aggregators. I think it will at least be seen by other Dreamwidth users. I thought I would post here in the hope that other Dreamwidth users might be able to help get this fixed. At the very least other Dreamwidth blog owners need to know that many of their readers may not be seeing their posts at all.

If this problem is not fixed I will have to move my blog. One of the main points of having a blog is publishing it via RSS. RSS readers are of course based on general http client libraries and many if not most RSS readers have not bothered to customise their user-agent. Those are currently blocked.




Worse Than FailureCodeSOD: Constantly Counting

Steven was working on a temp contract for a government contractor, developing extensions to an ERP system. That ERP system was developed by whatever warm bodies happened to be handy, which meant the last "tech lead" was a junior developer who had no supervision, and before that it was a temp who was only budgeted to spend 2 hours a week on that project.

This meant that it was a great deal of spaghetti code, mashed together with a lot of special-case logic, and attempts to have some sort of organization even if that organization made no sense. Which is why, for example, all of the global constants for the application were required to be in a class Constants.

Of course, when you put a big pile of otherwise unrelated things in one place, you get some surprising results. Like this:

foreach (PurchaseOrder po in poList)
{
    if (String.IsNullOrEmpty(po.PoNumber))
    {
        Constants.NEW_COUNT++;
        CreatePoInOtherSystem(po);
    }
}

Yes, every time this system passes a purchase order off to another system for processing, the "constant" NEW_COUNT gets incremented. And no, this wasn't the only variable "constant", because before long, the Constants class became the "pile of static variables" class.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianMartin Michlmayr: ledger2beancount 2.6 released

I released version 2.6 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.6:

  • Round calculated total if needed for price==cost comparison
  • Add narration_tag config variable to set narration from metadata
  • Retain unconsummated payee/payer metadata
  • Ensure UTF-8 output and assume UTF-8 input
  • Document UTF-8 issue on Windows systems
  • Add option to move posting-level tags to the transaction itself
  • Add support for the alias sub-directive of account declarations
  • Add support for the payee sub-directive of account declarations
  • Support configuration file called .ledger2beancount.yaml
  • Fix uninitialised value warning in hledger mode
  • Print warning if account in assertion has sub-accounts
  • Set commodity for commodity-less balance assertion
  • Expand path name of beancount_header config variable
  • Document handling of buckets
  • Document pre- and post-processing examples
  • Add Dockerfile to create Docker image

Thanks to Alexander Baier, Daniele Nicolodi, and GitHub users bratekarate, faaafo and mefromthepast for various bug reports and other input.

Thanks to Dennis Lee for adding a Dockerfile and to Vinod Kurup for fixing a bug.

Thanks to Stefano Zacchiroli for testing.

You can get ledger2beancount from GitHub.

Cryptogram DNI’s Annual Threat Assessment

The office of the Director of National Intelligence released its “Annual Threat Assessment of the U.S. Intelligence Community.” Cybersecurity is covered on pages 20-21. Nothing surprising:

  • Cyber threats from nation states and their surrogates will remain acute.
  • States’ increasing use of cyber operations as a tool of national power, including increasing use by militaries around the world, raises the prospect of more destructive and disruptive cyber activity.
  • Authoritarian and illiberal regimes around the world will increasingly exploit digital tools to surveil their citizens, control free expression, and censor and manipulate information to maintain control over their populations.
  • During the last decade, state sponsored hackers have compromised software and IT service supply chains, helping them conduct operations — espionage, sabotage, and potentially prepositioning for warfighting.

The supply chain line is new; I hope the government is paying attention.

,

Cryptogram The FBI Is Now Securing Networks Without Their Owners’ Permission

In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.

Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.

On Tuesday, the FBI announced that it successfully received a court order to remove “hundreds” of these web shells from networks in the US.

This is nothing short of extraordinary, and I can think of no real-world parallel. It’s kind of like if a criminal organization infiltrated a door-lock company and surreptitiously added a master passkey feature, and then customers bought and installed those locks. And then if the FBI got a court order to fix all the locks to remove the master passkey capability. And it’s kind of not like that. In any case, it’s not what we normally think of when we think of a warrant. The links above have details, but I would like a legal scholar to weigh in on the implications of this.

Planet DebianRussell Coker: Basics of Linux Kernel Debugging

Firstly a disclaimer, I’m not an expert on this and I’m not trying to instruct anyone who is aiming to become an expert. The aim of this blog post is to help someone who has a single kernel issue they want to debug as part of doing something that’s mostly not kernel coding. I welcome comments about the second step to kernel debugging for the benefit of people who need more than this (which might include me next week). Also suggestions for people who can’t use a kvm/qemu debugger would be good.

Below is a command to run qemu with GDB. It should be run from the Linux kernel source directory. You can add other qemu options for a block device and virtual networking if necessary, but the bug I encountered gave an oops from the initrd so I didn’t need to go further. The “nokaslr” is to avoid address space randomisation which deliberately makes debugging tasks harder (from a certain perspective debugging a kernel and compromising a kernel are fairly similar). Loading the bzImage is fine, gdb can map that to the different file it looks at later on.

qemu-system-x86_64 -kernel arch/x86/boot/bzImage -initrd ../initrd-$KERN_VER -curses -m 2000 -append "root=/dev/vda ro nokaslr" -gdb tcp::1200

The command to run GDB is “gdb vmlinux”. When at the GDB prompt you can run the command “target remote localhost:1200” to connect to the GDB server on port 1200. Note that there is nothing special about port 1200, it was given in an example I saw and is as good as any other port. It is important that you run GDB against the “vmlinux” file in the main directory, not any of the several stripped and packaged files. GDB can’t handle a bzImage file but that’s OK, it ends up much the same in RAM.

When the “target remote” command is processed the kernel will be suspended by the debugger, if you are looking for a bug early in the boot you may need to be quick about this. Using “qemu-system-x86_64” instead of “kvm” slows things down and can help in that regard. The bug I was hunting happened 1.6 seconds after kernel load with KVM and 7.8 seconds after kernel load with qemu. I am not aware of all the implications of the kvm vs qemu decision on debugging. If your bug is a race condition then trying both would be a good strategy.

After the “target remote” command you can debug the kernel just like any other program.

If you put a breakpoint on print_modules() that will catch the operation of printing an Oops which can be handy.
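Putting that together, a session looks something like the following (the breakpoint and backtrace are just examples of ordinary GDB usage, there is nothing kernel specific about them):

gdb vmlinux
(gdb) target remote localhost:1200
(gdb) break print_modules
(gdb) continue
(gdb) bt

From there you can inspect variables, single-step, and so on as you would with any other program.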

Worse Than FailureCodeSOD: The Truth and the Truth

When Andy inherited some C# code from a contracting firm, he gave it a quick skim. He saw a bunch of methods with names like IsAvailable or CanPerform…, but he also saw that it was essentially random as to whether or not these methods returned bool or string.

That didn't seem like a good thing, so he started to take a deeper look, and that's when he found this.

public ActionResult EditGroup(Group group)
{
    string fleetSuccess = string.Empty;
    bool success = false;
    if(action != null)
    {
        fleetSuccess = updateGroup(group);
    }
    else
    {
        fleetSuccess = Boolean.TrueString;
    }

    success = updateExternalGroup(group);
    fleetSuccess += "&&&" + success;

    if (fleetSuccess.ToLower().Equals("true&&&true"))
    {
        GetActivityDataFromService(group, false);
    }

    return Json(fleetSuccess, JsonRequestBehavior.AllowGet);
}

So, updateGroup returns a string containing a boolean (at least, we hope it contains a boolean). updateExternalGroup returns an actual boolean. If both of these things are true, then we want to invoke GetActivityDataFromService.

Clearly, the only way to do this comparison is to force everything into being a string, with a &&& jammed in the middle as a spacer. Uh, for readability, I guess? Maybe? I almost suspect someone thought they were inventing their own "and" operator and didn't want it to conflict with & or &&.

Or maybe, maybe their code was read aloud by Jeff Goldblum. "True, and-and-and true!" It's very clear they didn't think about whether or not they should do this.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.10.4.0.0 on CRAN: New Upstream ‘Plus’

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 852 other packages on CRAN.

This new release brings us the just-released Armadillo 10.4.0. Upstream moves at a speed that is a little faster than the cadence CRAN likes. We released RcppArmadillo 0.10.2.2.0 on March 9; and upstream 10.3.0 came out shortly thereafter. We aim to accommodate CRAN with (roughly) monthly (or less frequent) releases, so by the time we were ready 10.4.0 had just come out.

As it turns out, the full testing had a benefit. Among the (currently) 852 CRAN packages using RcppArmadillo, two were failing tests. This is due to a subtle, but important point. Early on we realized that it would be beneficial if the standard R control over random-number creation and seeding affected Armadillo too, which Conrad accommodated kindly with an optional RNG interface—which RcppArmadillo supplies. With recent changes he made, the R side saw normally-distributed draws (via the Armadillo interface) change, which led to the two failures. All hail unit tests. So I mentioned this to Conrad, and with the usual Chicago-Brisbane time difference late my evening a fix was in my inbox. The CRAN upload was then halted as I had missed that, due to other changes he had made, random draws from a Gamma would now call std::rand(), which CRAN flags. Another email to Brisbane, another late (one-line) fix back, and all was good. We still encountered one package with an error but flagged this as internal to that package’s setup, so Uwe let RcppArmadillo onto CRAN, and I contacted that package’s maintainer—who was very receptive and a change should be forthcoming. So with all that we have 0.10.4.0.0 on CRAN giving us Armadillo 10.4.0.

The full set of changes follows. As Armadillo 10.3.0 was not uploaded to CRAN, its changes are included too.

Changes in RcppArmadillo version 0.10.4.0.0 (2021-04-12)

  • Upgraded to Armadillo release 10.4.0 (Pressure Cooker)

    • faster handling of triangular matrices by log_det()

    • added log_det_sympd() for log determinant of symmetric positive definite matrices

    • added ARMA_WARN_LEVEL configuration option, to control the degree of emitted warning messages

    • reduced the default degree of warning messages, so that failed decompositions, failed saving/loading, etc, no longer emit warnings

  • Applied upstream corrections for arma::randn draws when using an alternative (here, R) generator, and for arma::randg.

Changes in RcppArmadillo version 0.10.3.0.0 (2021-03-10)

  • Upgraded to Armadillo release 10.3 (Sunrise Chaos)

    • faster handling of symmetric positive definite matrices by pinv()

    • expanded .save() / .load() for dense matrices to handle coord_ascii format

    • for out of bounds access, element accessors now throw the more nuanced std::out_of_range exception, instead of only std::logic_error

    • improved quality of random numbers

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Cryptogram More Biden Cybersecurity Nominations

News:

President Biden announced key cybersecurity leadership nominations Monday, proposing Jen Easterly as the next head of the Cybersecurity and Infrastructure Security Agency and John “Chris” Inglis as the first ever national cyber director (NCD).

I know them both, and think they’re both good choices.

More news.

Cryptogram Backdoor Added — But Found — in PHP

Unknown hackers attempted to add a backdoor to the PHP source code. It was two malicious commits, with the subject “fix typo” and the names of known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since 79% of the Internet’s websites use PHP, it’s scary.

Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough — PHP is a juicy target.

Krebs on SecurityMicrosoft Patch Tuesday, April 2021 Edition

Microsoft today released updates to plug at least 110 security holes in its Windows operating systems and other products. The patches include four security fixes for Microsoft Exchange Server — the same systems that have been besieged by attacks on four separate (and zero-day) bugs in the email software over the past month. Redmond also patched a Windows flaw that is actively being exploited in the wild.

Nineteen of the vulnerabilities fixed this month earned Microsoft’s most-dire “Critical” label, meaning they could be used by malware or malcontents to seize remote control over vulnerable Windows systems without any help from users.

Microsoft released updates to fix four more flaws in Exchange Server versions 2013-2019 (CVE-2021-28480, CVE-2021-28481, CVE-2021-28482, CVE-2021-28483). Interestingly, all four were reported by the U.S. National Security Agency, although Microsoft says it also found two of the bugs internally. A Microsoft blog post published along with today’s patches urges Exchange Server users to make patching their systems a top priority.

Satnam Narang, staff research engineer at Tenable, said these vulnerabilities have been rated ‘Exploitation More Likely’ using Microsoft’s Exploitability Index.

“Two of the four vulnerabilities (CVE-2021-28480, CVE-2021-28481) are pre-authentication, meaning an attacker does not need to authenticate to the vulnerable Exchange server to exploit the flaw,” Narang said. “With the intense interest in Exchange Server since last month, it is crucial that organizations apply these Exchange Server patches immediately.”

Also patched today was a vulnerability in Windows (CVE-2021-28310) that’s being exploited in active attacks already. The flaw allows an attacker to elevate their privileges on a target system.

“This does mean that they will either need to log on to a system or trick a legitimate user into running the code on their behalf,” said Dustin Childs of Trend Micro. “Considering who is listed as discovering this bug, it is probably being used in malware. Bugs of this nature are typically combined with other bugs, such as a browser bug or PDF exploit, to take over a system.”

In a technical writeup on what they’ve observed since finding and reporting attacks on CVE-2021-28310, researchers at Kaspersky Lab noted the exploit they saw was likely used together with other browser exploits to escape “sandbox” protections of the browser.

“Unfortunately, we weren’t able to capture a full chain, so we don’t know if the exploit is used with another browser zero-day, or coupled with known, patched vulnerabilities,” Kaspersky’s researchers wrote.

Allan Liska, senior security architect at Recorded Future, notes that there are several remote code execution vulnerabilities in Microsoft Office products released this month as well. CVE-2021-28454 and CVE-2021-28451 involve Excel, while CVE-2021-28453 is in Microsoft Word and CVE-2021-28449 is in Microsoft Office. All four vulnerabilities are labeled by Microsoft as “Important” (not quite as bad as “Critical”). These vulnerabilities impact all versions of their respective products, including Office 365.

Other Microsoft products that got security updates this month include Edge (Chromium-based), Azure and Azure DevOps Server, SharePoint Server, Hyper-V, Team Foundation Server, and Visual Studio.

Separately, Adobe has released security updates for Photoshop, Digital Editions, RoboHelp, and Bridge.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any kinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

LongNowTouching the Future

Aboriginal fish traps.

In search of a new story for the future of artificial intelligence, Long Now speaker Genevieve Bell looks back to its cybernetic origins — and keeps on looking, thousands of years into the past.

From her new essay in Griffith Review:

In this moment, we need to be reminded that stories of the future – about AI, or any kind – are never just about technology; they are about people and they are about the places those people find themselves, the places they might call home and the systems that bind them all together.

Genevieve Bell, “Touching the Future” in Griffith Review.

Cryptogram Cybersecurity Experts to Follow on Twitter

Security Boulevard recently listed the “Top-21 Cybersecurity Experts You Must Follow on Twitter in 2021.” I came in at #7. I thought that was pretty good, especially since I never tweet. My Twitter feed just mirrors my blog. (If you are one of the 134K people who read me from Twitter, “hi.”)

Worse Than FailureCodeSOD: A Form of Reuse

Writing code that is reusable is an important part of software development. In a way, we're not simply solving the problem at hand, but we're building tools we can use to solve similar problems in the future. Now, that's also a risk: premature abstraction is its own source of WTFs.

Daniel's peer wrote some JavaScript which is used for manipulating form inputs on customer contact forms. You know the sorts of forms: give us your full name, phone number, company name, email, and someone from our team will be in touch. This developer wrote the script, and offered it to clients to enhance their forms. Well, there was one problem: this script would get embedded in customer contact forms, but not all customer contact forms use the same conventions for how they name their fields.

There's an easy solution for that, involving parameterizing the code or adding a configuration step. There's a hard solution, where you build a heuristic that works for most forms. Then there's this solution, which… well…. Let me present the logic for handling just one field type, unredacted and unelided.

for(llelementlooper=0; llelementlooper<document.forms[llformlooper2].elements.length; llelementlooper++) { var llelementphone = (document.forms[llformlooper2].elements[llelementlooper].name) if ( llformphone == '' && ((llelementphone=='phone') || (llelementphone=='Phone') || (llelementphone=='phone') || (llelementphone=='mobilephone') || (llelementphone=='PHONE') || (llelementphone=='sPhone') || (llelementphone=='strPhone') || (llelementphone=='Telephone') || (llelementphone=='telephone') || (llelementphone=='tel') || (llelementphone=='si_contact_ex_field6') || (llelementphone=='phonenumber') || (llelementphone=='phone_number') || (llelementphone=='phoneTextBox') || (llelementphone=='PhoneNumber_num_25_1') || (llelementphone=='Telefone') || (llelementphone=='Contact Phone') || (llelementphone=='submitted[row_3][phone]') || (llelementphone=='edit-profile-phone') || (llelementphone=='contactTelephone') || (llelementphone=='f4') || (llelementphone=='Contact-Phone') || (llelementphone=='formItem_239') || (llelementphone=='phone_r') || (llelementphone=='PhoneNo') || (llelementphone=='LeadGen_ContactForm_98494_m0:Phone') || (llelementphone=='telefono') || (llelementphone=='ntelephone') || (llelementphone=='wtelephone') || (llelementphone=='watelephone') || (llelementphone=='form[telefoon]') || (llelementphone=='phone_work') || (llelementphone=='telephone-number') || (llelementphone=='ctl00$HeaderText$ctl00$PhoneText') || (llelementphone=='ctl00$ctl00$cphMain$cphInsideMain$widget1$ctl00$viewBiz$ctl00$phone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolderBase$ContentPlaceHolderSideMenu$TextBoxPhone') || (llelementphone=='ctl00$SPWebPartManager1$g_c8bd31c3_e338_41df_bdbe_021242ca01c8$ctl01$ctl06$txtTextbox') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$MasterContentPlaceHolder$txtPhone') || (llelementphone=='curftelephone') || (llelementphone=='form[Telephone]') || (llelementphone=='tx_pilmailform_pi1[text][phone]') || (llelementphone=='ctl00$ctl00$templateMainContent$homeBanners$HomeBannerList$ctrLeads$txt_5_1') || (llelementphone=='ac_daytimeNumber') || (llelementphone=='daytime_phone') || (llelementphone=='r4') || (llelementphone=='ctl00$ContentPlaceHolderBody$Phone') || (llelementphone=='Fld10_label') || (llelementphone=='field333') || (llelementphone=='txtMobile') || (llelementphone=='form_nominator_phonenumber') || (llelementphone=='submitted[phone_no]') || (llelementphone=='submitted[phone]') || (llelementphone=='submitted[5]') || (llelementphone=='submitted[telephone_no]') || (llelementphone=='fields[Contact Phone]') || (llelementphone=='cf2_field_5') || (llelementphone=='a23786') || (llelementphone=='rpr_phone') || (llelementphone=='phone-number') || (llelementphone=='txt_homePhone') || (llelementphone=='your-number') || (llelementphone=='Contact_Phone') || (llelementphone=='ctl00$CPH_body$txtContactnumber') || (llelementphone=='profile_telephone') || (llelementphone=='item_meta[90]' && llfrmid==11823) || (llelementphone=='item_meta[181]' && llfrmid==26416) || (llelementphone=='input_4' && llfrmid==21452) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField205' && llfrmid==13948) || (llelementphone=='EditableTextField100' && llfrmid==13948) || (llelementphone=='EditableTextField166' && llfrmid==13948) || (llelementphone=='EditableTextField104' && llfrmid==13948) || (llelementphone=='cf2_field_4' && llfrmid==23878) || (llelementphone=='input_4' && llfrmid==24017) || (llelementphone=='cf_field_4' && 
llfrmid==15876) || (llelementphone=='cf5_field_5' && llfrmid==15876) || (llelementphone=='input_9' && llfrmid==17254) || (llelementphone=='input_2' && llfrmid==22954) || (llelementphone=='input_8' && llfrmid==23756) || (llelementphone=='input_3' && llfrmid==18793) || (llelementphone=='input_6' && llfrmid==24811) || (llelementphone=='input_3' && llfrmid==19880) || (llelementphone=='input_6' && llfrmid==19230) || (llelementphone=='input_3' && llfrmid==24747) || (llelementphone=='input_4' && llfrmid==25897) || (llelementphone=='text-481' && llfrmid==14451) || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7168$formField_7673') || (llelementphone=='Form7116$formField_7592') || (llelementphone=='Form7150$formField_7645') || (llelementphone=='Form7153$formField_7655') || (llelementphone=='Form7119$formField_7600') || (llelementphone=='Form7123$formField_7608') || (llelementphone=='Form7161$formField_7665') || (llelementphone=='Form7176$formField_7690') || (llelementphone=='Form7172$formField_7681') || (llelementphone=='Form7113$formField_7584') || (llelementphone=='Form7106$formField_7568') || (llelementphone=='Form7111$formField_7576') || (llelementphone=='Form7136$formField_7628') || (llelementphone=='Form6482$formField_7621') || (llelementphone=='Form6548$formField_6988') || (llelementphone=='submitted[business_phone]') || (llelementphone=='tfa_3' && llfrmid==23388) || (llelementphone=='ContentObjectAttribute_ezsurvey_answer_4455_3633') || (llelementphone=='838ae21c-1f95-488f-a511-135a588a50fb_Phone') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone$txt1st') || (llelementphone=='plc$lt$zoneContent$pageplaceholder$pageplaceholder$lt$zoneRightContent$contentText$BizFormControl1$Bizform1$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentAreaPlaceholderMain$ctl02$ContactForm_3$TextBoxTelephone') || (llelementphone=='plc$lt$Content2$pageplaceholder1$pageplaceholder1$lt$Content$BizForm$viewBiz$ctl00$Phone_Number') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C002$tbTelephone') || (llelementphone=='contact$tbPhoneNumber') || (llelementphone=='crMain$ctl00$txtPhone') || (llelementphone=='ctl00$PrimaryContent$tbPhone') || (llelementphone=='ff_nm_phone[]') || (llelementphone=='q5_phoneNumber5[phone]') || (llelementphone=='TechContactPhone') || (llelementphone=='referral_phone_number') || (llelementphone=='field8418998') || (llelementphone=='ctl00$Content$ctl00$txtPhone') || (llelementphone=='ctl00$PlaceHolderMain$ucContactUs$txtPhone') || (llelementphone=='m_field_id_4' && llfrmid==15091) || (llelementphone=='Field7' && llfrmid==23387) || (llelementphone=='input_4' && llfrmid==22578) || (llelementphone=='input_2' && llfrmid==11241) || (llelementphone=='input_7' && llfrmid==23633) || (llelementphone=='input_7' && llfrmid==22114) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('demo') != -1) && llfrmid==17544) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact') != -1) && llfrmid==17544) || (llelementphone=='field_4' && llfrmid==24654) || (llelementphone=='input_6' && llfrmid==24782) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==16794) || (llelementphone=='input_3' && (llformalyzerURL.indexOf('try-and-buy') != -1) && llfrmid==16794) || (llelementphone=='input_4' && (llformalyzerURL.indexOf('contact-us') != -1) && llfrmid==23842) || 
(llelementphone=='input_4' && llfrmid==25451) || (llelementphone=='input_5' && llfrmid==24911) || (llelementphone=='input_3' && llfrmid==13417) || (llelementphone=='input_4' && llfrmid==23813) || (llelementphone=='input_4' && llfrmid==21483) || (llelementphone=='input_3' && llfrmid==25396) || (llelementphone=='input_3' && llfrmid==16175) || (llelementphone=='input_7' && llfrmid==25797) || (llelementphone=='input_4' && llfrmid==15650) || (llelementphone=='input_3' && llfrmid==22025) || (llelementphone=='input_3' && llfrmid==14534) || (llelementphone=='input_4' && llfrmid==25216) || (llelementphone=='input_5' && llfrmid==22884) || (llelementphone=='input_6' && llfrmid==25783) || (llelementphone=='text-747' && llfrmid==16324) || (llelementphone=='vfb-42' && llfrmid==24468) || (llelementphone=='vfb-33' && llfrmid==24468) || (llelementphone=='item_meta[57]' && llfrmid==25268) || (llelementphone=='item_meta[78]' && llfrmid==25268) || (llelementphone=='item_meta[85]' && llfrmid==25268) || (llelementphone=='item_meta[154]' && llfrmid==25268) || (llelementphone=='item_meta[220]' && llfrmid==25268) || (llelementphone=='item_meta[240]' && llfrmid==25268) || (llelementphone=='item_meta[286]' && llfrmid==25268) || (llelementphone=='fieldname5' && llfrmid==12535) || (llelementphone=='Question12' && llfrmid==24639) || (llelementphone=='ninja_forms_field_4' && llfrmid==19321) || (llelementphone=='EditableTextField' && llfrmid==15064) || (llelementphone=='form_fields[27]' && llfrmid==22688) || (llelementphone=='ctl00$body$phone') || (llelementphone=='ctl00$MainContent$txtPhone') || (llelementphone=='FreeTrialForm$Phone') || (llelementphone=='text-521ada035aa46') || (llelementphone=='C_BusPhone') || (llelementphone=='ctl00$ctl00$templateMainContent$pageContent$ctrLeads$txt_5_1') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1204') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl06$1320') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1242') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1202') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl08$1242') || (llelementphone=='ctl00$MainColumnPlaceHolder$uxPhone') || (llelementphone=='ctl00$MainContent$DropZoneTop$columnDisplay$ctl04$controlcolumn$ctl00$WidgetHost$WidgetHost_widget$IDPhone') || (llelementphone=='ctl00$ctl05$txtPhone') || (llelementphone=='ctl00$Modules$ctl00$rptFields$ctl07$1219') || (llelementphone=='LeadGen_ContactForm_33872_m419365:Phone') || (llelementphone=='F02220803') || (llelementphone=='h2c0f') || (llelementphone=='your_phone_number') || (llelementphone=='Question7') || (llelementphone=='Question51') || (llelementphone=='Question59') || (llelementphone=='Question35') || (llelementphone=='Question67') || (llelementphone=='field9740823') || (llelementphone=='message[phone]') || (llelementphone=='dnn$ctr1266$ViewKamakuraRegister$Phone') || (llelementphone=='phone1') || (llelementphone=='inf_field_Phone1') || (llelementphone=='hscontact_phone') || (llelementphone=='data[Contact][phone]') || (llelementphone=='fields[Phone]') || (llelementphone=='contact[PhoneNumber]') || (llelementphone=='phonename3') || (llelementphone=='UserPhone') || (llelementphone=='ctl00$MainBody$txtPhoneTech') || (llelementphone=='Telephone1') || (llelementphone=='PhoneNumber') || (llelementphone=='work_phone') || (llelementphone=='jform[contact_telephone]') || (llelementphone=='form[phone]') || (llelementphone=='RequestAQuote1$txtPhone') || (llelementphone=='06_Phone') || (llelementphone=='txtPhone') || 
(llelementphone=='field_location[und][0][phone]') || (llelementphone=='your-phone') || (llelementphone=='cmsForms_phone') || (llelementphone=='Txt_phonenumber') || (llelementphone=='businessPhone') || (llelementphone=='boxHomePhone') || (llelementphone=='HomePhone') || (llelementphone=='request-phone') || (llelementphone=='user[phone]') || (llelementphone=='DATA[PHONE]') || (llelementphone=='ctl00$ctl00$ctl00$cphContent$cphContent$cphContent$Phone') || (llelementphone=='ctl00$MainBody$Form1$obj11') || (llelementphone=='LeadGen_ContactForm_90888_m1467651:Phone') || (llelementphone=='Users[work]') || (llelementphone=='Question43') || (llelementphone=='aics_phone') || (llelementphone=='form[workphone]') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$cphMainContent$C006$tbTelephone') || (llelementphone=='cntnt01fbrp__47') || (llelementphone=='submitted[phone_number]') || (llelementphone=='flipform_phone') || (llelementphone=='txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder2$txtPhnno') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder1$ContentPlaceHolder1$mainContentRegion$BizFormControl1$Bizform1$ctl00$Phone') || (llelementphone=='inpPhone') || (llelementphone=='j_phone') || (llelementphone=='m6e81afbrp__53') || (llelementphone=='item_meta[119]') || (llelementphone=='ctl00$ContentPlaceHolder_Content$dataPhone') || (llelementphone=='ctl00$generalContentPlaceHolder$ctrlContactUs$tbPhone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$ContentPlaceHolder1$Contact_6$txtPhone') || (llelementphone=='ctl00$MainContent$tel') || (llelementphone=='dynform_element_3') || (llelementphone=='telephone_1') || (llelementphone=='cf_phone') || (llelementphone=='Lead_PrimaryPhone') || (llelementphone=='p_lt_zoneContent_wP_wP_lt_zonePageWidgets_RevolabsMicrosoftDynamicsCRMContactForm_1_txtBusinessPhone') || (llelementphone=='si_contact_ex_field2') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl06$txtTelephone') || (llelementphone=='dnn$ctr458$XModPro$ctl00$ctl00$ctl00$Telephone') || (llelementphone=='ctl00$ctl00$mainCopy$CPHCenter$ctl00$QuickRegControl_2$TBPhone') || (llelementphone=='LeadGen_ContactForm_38163_m457931:Phone') || (llelementphone=='LeadGen_ContactForm_29909_m371524:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_31530_m388101:Phone') || (llelementphone=='LeadGen_ContactForm_27072_m349818:Phone') || (llelementphone=='LeadGen_ContactForm_28362_m354522:Phone') || (llelementphone=='LeadGen_ContactForm_28759_m358745:Phone') || (llelementphone=='LeadGen_ContactForm_32343_m395611:Phone') || (llelementphone=='LeadGen_ContactForm_33631_m415978:Phone') || (llelementphone=='LeadGen_ContactForm_30695_m380436:Phone') || (llelementphone=='LeadGen_ContactForm_29958_m372138:Phone') || (llelementphone=='LeadGen_ContactForm_31471_m387422:Phone') || (llelementphone=='LeadGen_ContactForm_32514_m397613:Phone') || (llelementphone=='LeadGen_ContactForm_29152_m362772:Phone') || (llelementphone=='LeadGen_ContactForm_32540_m397908:Phone') || (llelementphone=='pNumber') || (llelementphone=='organizer_phone') || (llelementphone=='ctl00$PlaceHolderMain$TrialDownloadForm$Phone') || (llelementphone=='ContactSubmission.Phone.Value') || (llelementphone=='ctl00$body$txtPhone') || (llelementphone=='p$lt$ctl03$pageplaceholder$p$lt$zoneCentre$editabletext$ucEditableText$widget1$ctl00$viewBiz$ctl00$Telephone$textbox') || (llelementphone=='ctl01_ctl00_pbForm1_ctl_phone_61f3') || 
(llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl15$Phone') || (llelementphone=='p$lt$zoneContent$pageplaceholder$p$lt$zoneRightContent$contentText$ucEditableText$BizFormControl1$Bizform1$ctl00$Telephone$textbox') || (llelementphone=='ctl00$ctl00$ContentPlaceHolder$ContentPlaceHolder$ctl00$fPhone') || (llelementphone=='pagecolumns_0$form_B502CC1EC1644B38B722523526D45F36$field_6BCFC01A782747DF8E785B5533850EEB') || (llelementphone=='cf3_field_10') || (llelementphone=='r_phone') || (llelementphone=='c_phone') || (llelementphone=='cf-1[]') || (llelementphone=='frm_phone') || (llelementphone=='Patient_Phone_Number') || (llelementphone=='ctl00$PageContent$ctl00$txtPhone') || (llelementphone=='dnn$ctr398$FormMaster$ctl_6e49bedd138a4684a66b62dcb1a34658') || (llelementphone=='id_tel') || (llelementphone=='field_contact_tel[und][0][value]') || (llelementphone=='Phone:') || (llelementphone=='ContactPhone') || (llelementphone=='submitted[telephone]') || (llelementphone=='ctl00$ContentPlaceHolder1$ctl04$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder_pageContent$contact_phone') || (llelementphone=='264') || (llelementphone=='form_phone_number') || (llelementphone=='field8418998') || (llelementphone=='phoneTBox') || (llelementphone=='pagecontent_1$content_0$contentbottom_0$txtPhone') || (llelementphone=='application_0$PhoneTextBox') || (llelementphone=='submitted[phone_work]') || (llelementphone=='data[Lead][phone]') || (llelementphone=='a4475-telephone') || (llelementphone=='ctl00$Form$txtPhoneNumber') || (llelementphone=='signup_form_data[Phone]') || (llelementphone=='WorkPhone') || (llelementphone=='lldPhone') || (llelementphone=='web_form_1[field_102]value') || (llelementphone=='LeadGen_ContactForm_114694_m1832700:Phone') || (llelementphone=='phoneSalesForm') || (llelementphone=='fund_phone') || (llelementphone=='Phonepi_Phone') || (llelementphone=='field343') || (llelementphone=='cntnt01fbrp__48') || (llelementphone=='contact[phone]') || (llelementphone=='ctl00_ContentPlaceHolder1_ctl01_contactTelephoneBox_text') || (llelementphone=='ctl01$ctl00$ContentPlaceHolder1$ctl29$Phone') || (llelementphone=='plc$lt$content$pageplaceholder$pageplaceholder$lt$bodyColumnZone$LogilityContactUs$txtWorkPhone') || (llelementphone=='ctl00$ctl00$ctl00$cphBody$cphMain$cphMain$FormBuilder1$FormBuilderListView$ctrl4$FieldControl_Telephone') || (llelementphone=='ctl00$ctl00$ctl00$ContentPlaceHolderDefault$cp_content$ctl02$RenderForm_1$rpFieldsets$ctl00$rpFields$ctl04$126d33a3_9f7f_4583_8c94_5820d58fc030') || (llelementphone=='tx_powermail_pi1[uid1266]') || (llelementphone=='si_contact_ex_field3') || (llelementphone=='inc_contact1$txtPhone') || (llelementphone=='item2_tel_1') || (llelementphone=='LeadGen_ContactForm_15766_m0:Phone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtPhone') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='Default$Content$FormViewer$FieldsRepeater$ctl04$ctl00$ViewTextBox') || (llelementphone=='ctl00$SecondaryPageContent$C005$ctl00$ctl00$C002$ctl00$ctl00$textBox_write') || (llelementphone=='_u216318653597056311') || (llelementphone=='_u630018292785751084') || (llelementphone=='data[Contact][office_phone]') || (llelementphone=='ctl00$ctl00$cphMainContent$Content$txtPhone') || (llelementphone=='ctl00$ContentPlaceHolder1$txtTel') || (llelementphone=='item_5') || (llelementphone=='ques_21432') || (llelementphone=='phoneNum') || (llelementphone=='CONTACT_PHONE') || (llelementphone=='ff_nm_cf_phonetext[]') || 
(llelementphone=='WorkPhone') ) ) { llformphone = (document.forms[llformlooper2].elements[llelementlooper].value); if (llfrmid == debugid ) {alert('llformphone:'+llformphone+' llemailfound:'+llemailfound);} }

If the name property of the form element is equal to any one of the many many many items in this list, we can then extract the value and stuff it into a variable. And, since this will almost certainly break all the time, it's got a convenient "set the debugid and I'll spam alerts as I search the form".

Repeat this for every other field. It ends up being almost 2,000 lines of code, just to select the correct fields out of the forms.


Planet DebianFrançois Marier: Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

Planet DebianShirish Agarwal: what to write

First up, I am alive and well. I have been receiving calls from friends for quite some time, but now that I have become deaf it is a pain, and the hearing aids aren’t all that useful. Moreover, with where we find ourselves sinking lower and lower each and every day, it feels absurd to decide what to write and not write about India. Thankfully, I ran across this piece, which tells it in far more detail than I ever could. The only interesting and somewhat positive news I had is from the south of India; otherwise these are sad days, especially for the poor. The saddest story is that Covid has this time reached alarming proportions in India and, surprise, surprise, the villain for many is my state of Maharashtra, even though it hasn’t received its share of GST proceeds for the last two years. And that was Kerala’s perspective: a different state, a different party, a different political ideology altogether.

Kerala Finance Minister Thomas Isaac’s views on GST, October 22, 2020, Indian Express.

I also briefly note the death of India’s somewhat liberal film censorship, unlike Italy, which abolished film censorship altogether. I don’t really want to spend too much on how we have become No. 2 in the world in Covid cases, and perhaps deaths too. Many people still believe in herd immunity but don’t really know what it means. So, without taking too much time and effort, I bid adieu. I may post when I’m hopefully feeling emotionally better and stronger 😦


Planet DebianSteinar H. Gunderson: Squirrel!

“All comments on this article will now be moderated. The bar to pass moderation will be high, it's really time to think about something else. Did you all see that we have an exciting article on spinlocks?” Poor LWN <3

SPINLOCK!

Planet DebianRussell Coker: Yama

I’ve just set up the Yama LSM module on some of my Linux systems. Yama controls ptrace, which is the debugging and tracing API for Unix systems. The aim is to prevent a compromised process from using ptrace to compromise other processes and cause more damage. In most cases a process which can ptrace another process (which usually means having capability SYS_PTRACE, IE being root, or having the same UID as the target process) can already interfere with that process in other ways, such as modifying its configuration and data files. But even so I think it has the potential for making things more difficult for attackers without making the system more difficult to use.

If you put “kernel.yama.ptrace_scope = 1” in sysctl.conf (or write “1” to /proc/sys/kernel/yama/ptrace_scope) then a user process can only trace its child processes. This means that “strace -p” and “gdb -p” will fail when run as non-root but apart from that everything else will work. Generally “strace -p” (tracing the system calls of another process) is of most use to the sysadmin who can do it as root. The command “gdb -p” and variants of it are commonly used by developers so yama wouldn’t be a good thing on a system that is primarily used for software development.

Another option is “kernel.yama.ptrace_scope = 3” which means no-one can trace and it can’t be disabled without a reboot. This could be a good option for production servers that have no need for software development. It wouldn’t work well for a small server where the sysadmin needs to debug everything, but when dozens or hundreds of servers have their configuration rolled out via a provisioning tool this would be a good setting to include.

See Documentation/admin-guide/LSM/Yama.rst in the kernel source for the details.

When running with capability SYS_PTRACE (IE root shell) you can ptrace anything else and if necessary disable Yama by writing “0” to /proc/sys/kernel/yama/ptrace_scope .

I am enabling mode 1 on all my systems because I think it will make things harder for attackers while not making things more difficult for me.

Also note that SE Linux restricts SYS_PTRACE and also restricts cross-domain ptrace access, so the combination with Yama makes things extra difficult for an attacker.

Yama is enabled in the Debian kernels by default so it’s very easy to setup for Debian users, just edit /etc/sysctl.d/whatever.conf and it will be enabled on boot.

Krebs on SecurityParkMobile Breach Exposes License Plate Data, Mobile Numbers of 21M Users

Someone is selling account information for 21 million customers of ParkMobile, a mobile parking app that’s popular in North America. The stolen data includes customer email addresses, dates of birth, phone numbers, license plate numbers, hashed passwords and mailing addresses.

KrebsOnSecurity first heard about the breach from Gemini Advisory, a New York City based threat intelligence firm that keeps a close eye on the cybercrime forums. Gemini shared a new sales thread on a Russian-language crime forum that included my ParkMobile account information in the accompanying screenshot of the stolen data.

Included in the data were my email address and phone number, as well as license plate numbers for four different vehicles we have used over the past decade.

Asked about the sales thread, Atlanta-based ParkMobile said the company published a notification on Mar. 26 about “a cybersecurity incident linked to a vulnerability in a third-party software that we use.”

“In response, we immediately launched an investigation with the assistance of a leading cybersecurity firm to address the incident,” the notice reads. “Out of an abundance of caution, we have also notified the appropriate law enforcement authorities. The investigation is ongoing, and we are limited in the details we can provide at this time.”

The statement continues: “Our investigation indicates that no sensitive data or Payment Card Information, which we encrypt, was affected. Meanwhile, we have taken additional precautionary steps since learning of the incident, including eliminating the third-party vulnerability, maintaining our security, and continuing to monitor our systems.”

Asked for clarification on what the attackers did access, ParkMobile confirmed it included basic account information – license plate numbers, and if provided, email addresses and/or phone numbers, and vehicle nickname.

“In a small percentage of cases, there may be mailing addresses,” spokesman Jeff Perkins said.

ParkMobile doesn’t store user passwords, but rather it stores the output of a fairly robust one-way password hashing algorithm called bcrypt, which is far more resource-intensive and expensive to crack than common alternatives like MD5. The database stolen from ParkMobile and put up for sale includes each user’s bcrypt hash.

“You are correct that bcrypt hashed and salted passwords were obtained,” Perkins said when asked about the screenshot in the database sales thread.

“Note, we do not keep the salt values in our system,” he said. “Additionally, the compromised data does not include parking history, location history, or any other sensitive information. We do not collect social security numbers or driver’s license numbers from our users.”
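For readers wondering what a one-way bcrypt hash actually involves, here is a minimal sketch in C# using the third-party BCrypt.Net-Next package; the package choice and the sample password are assumptions for illustration, and nothing here reflects ParkMobile's actual implementation.

using System;

class BcryptDemo
{
    static void Main()
    {
        // Hashing embeds a randomly generated salt and a work factor in the
        // output string, so the same password hashes differently every time.
        string hash = BCrypt.Net.BCrypt.HashPassword("correct horse battery staple", 12);
        Console.WriteLine(hash); // e.g. $2a$12$...

        // Verification re-hashes the candidate password using the salt stored
        // inside the hash and compares the results; the hash itself cannot be
        // reversed, only guessed at, which is what makes cracking it expensive.
        bool matches = BCrypt.Net.BCrypt.Verify("correct horse battery staple", hash);
        Console.WriteLine(matches); // True
    }
}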

ParkMobile says it is finalizing an update to its support site confirming the conclusion of its investigation. But I wonder how many of its users were even aware of this security incident. The Mar. 26 security notice does not appear to be linked to other portions of the ParkMobile site, and it is absent from the company’s list of recent press releases.

It’s also curious that ParkMobile hasn’t asked or forced its users to change their passwords as a precautionary measure. I used the ParkMobile app to reset my password, but there was no messaging in the app that suggested this was a timely thing to do.

So if you’re a ParkMobile user, changing your account password might be a pro move. If it’s any consolation, whoever is selling this data is doing so for an insanely high starting price ($125,000) that is unlikely to be paid by any cybercriminal to a new user with no reputation on the forum.

More importantly, if you used your ParkMobile password at any other site tied to the same email address, it’s time to change those credentials as well (and stop re-using passwords).

The breach comes at a tricky time for ParkMobile. On March 9, the European parking group EasyPark announced its plans to acquire the company, which operates in more than 450 cities in North America.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 02)

This week on my podcast, part two of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

MERiverdale

I’ve been watching the show Riverdale on Netflix recently. It’s an interesting modern take on the Archie comics. Having watched Josie and the Pussycats in Outer Space when I was younger I was anticipating something aimed towards a similar audience. As solving mysteries and crimes was apparently a major theme of the show I anticipated something along similar lines to Scooby Doo, some suspense and some spooky things, but then a happy ending where criminals get arrested and no-one gets hurt or killed while the vast majority of people are nice. Instead the first episode has a teen being murdered and Ms Grundy being obsessed with 15yo boys and sleeping with Archie (who’s supposed to be 15 but played by a 20yo actor).

Everyone in the show has some dark secret. The filming has a dark theme, the sky is usually overcast and it’s generally gloomy. This is a significant contrast to Veronica Mars which has some similarities in having a young cast, a sassy female sleuth, and some similar plot elements. Veronica Mars has a bright theme and a significant comedy element in spite of dealing with some dark issues (murder, rape, child sex abuse, and more). But Riverdale is just dark. Anyone who watches this with their kids expecting something like Scooby Doo is in for a big surprise.

There are lots of interesting stylistic elements in the show. Lots of clothing and uniform designs that seem to date from the 1940s. It seems like some alternate universe where kids have smartphones and laptops while dressing in the style of the 1940s. One thing that annoyed me was construction workers using tools like sledge-hammers instead of excavators. A society that has smart phones but no earth-moving equipment isn’t plausible.

On the upside there is a racial mix in the show that more accurately reflects American society than the original Archie comics and homophobia is much less common than in most parts of our society. For both race issues and gay/lesbian issues the show treats them in an accurate way (portraying some bigotry) while the main characters aren’t racist or homophobic.

I think it’s generally an OK show and recommend it to people who want a dark show. It’s a good show to watch while doing something on a laptop so you can check Wikipedia for the references to 1940s stuff (like when Bikinis were invented). I’m half way through season 3 which isn’t as good as the first 2, I don’t know if it will get better later in the season or whether I should have stopped after season 2.

I don’t usually review fiction, but the interesting aesthetics of the show made it deserve a review.

MEStorage Trends 2021

The Viability of Small Disks

Less than a year ago I wrote a blog post about storage trends [1]. My main point in that post was that disks smaller than 2TB weren’t viable then and 2TB disks wouldn’t be economically viable in the near future.

Now MSY has 2TB disks for $72 and 2TB SSD for $245, saving $173 if you get a hard drive (compared to saving $240 10 months ago). Given the difference in performance and noise 2TB hard drives won’t be worth using for most applications nowadays.

NVMe vs SSD

Last year NVMe prices were very comparable to SSD prices; I was hoping that trend would continue and SSDs would go away. Now for sizes 1TB and smaller NVMe and SSD prices are very similar, but for 2TB the NVMe prices are twice that of SSD – presumably partly due to poor demand for 2TB NVMe. There are also no NVMe devices larger than 2TB on sale at MSY (a store which caters to home stuff not special server equipment) but SSDs go up to 8TB.

It seems that NVMe is only really suitable for workstation storage and for cache etc on a server. So SATA SSDs will be around for a while.

Small Servers

There are a range of low end servers which support a limited number of disks. Dell has 2 disk servers and 4 disk servers. If one of those had 8TB SSDs you could have 8TB of RAID-1 or 24TB of RAID-Z storage in a low end server. That covers the vast majority of servers (small business or workgroup servers tend to have less than 8TB of storage).

Larger Servers

Anandtech has an article on Seagate’s roadmap to 120TB disks [2]. They currently sell 20TB disks using HAMR technology.

Currently the biggest disks that MSY sells are 10TB for $395, which was also the biggest disk they were selling last year. Last year MSY only sold SSDs up to 2TB in size (larger ones were available from other companies at much higher prices); now they sell 8TB SSDs for $949 (4* capacity increase in less than a year). Seagate is planning 30TB disks for 2023; if SSDs continue to increase in capacity by 4* per year we could have 128TB SSDs in 2023. If you needed a server with 100TB of storage then having 2 or 3 SSDs in a RAID array would be much easier to manage and faster than 4*30TB disks in an array.

When you have a server with many disks you can expect to have more disk failures due to vibration. One time I built a server with 18 disks and took disks from 2 smaller servers that had 4 and 5 disks. The 9 disks which had been working reliably for years started having problems within weeks of running in the bigger server. This is one of the many reasons for paying extra for SSD storage.

Seagate is apparently planning 50TB disks for 2026 and 100TB disks for 2030. If that’s the best they can do then SSD vendors should be able to sell larger products sooner at prices that are competitive. Matching hard drive prices is not required, getting to less than 4* the price should be enough for most customers.

The Anandtech article is worth reading, it mentions some interesting features that Seagate are developing such as having 2 actuators (which they call Mach.2) so the drive can access 2 different tracks at the same time. That can double the performance of a disk, but that doesn’t change things much when SSDs are more than 100* faster. Presumably the Mach.2 disks will be SAS and incredibly expensive while providing significantly less performance than affordable SATA SSDs.

Computer Cases

In my last post I speculated on the appearance of smaller cases designed to not have DVD drives or 3.5″ hard drives. Such cases still haven’t appeared apart from special purpose machines like the NUC that were available last year.

It would be nice if we could get a new industry standard for smaller power supplies. Currently power supplies are expected to be almost 5 inches wide (due to the expectation of a 5.25″ DVD drive mounted horizontally). We need some industry standards for smaller PCs that aren’t like the NUC; the NUC is very nice, but most people who build their own PC need more space than that. I still think that planning on USB DVD drives is the right way to go. I’ve got 4 PCs in my home that are regularly used, and CDs and DVDs are used so rarely that sharing a single DVD drive among all 4 wouldn’t be a problem.

Conclusion

I’m tempted to get a couple of 4TB SSDs for my home server which cost $487 each, it currently has 2*500G SSDs and 3*4TB disks. I would have to remove some unused files but that’s probably not too hard to do as I have lots of old backups etc on there. Another possibility is to use 2*4TB SSDs for most stuff and 2*4TB disks for backups.

I’m recommending that all my clients only use SSDs for their storage. I only have one client with enough storage that disks are the only option (100TB of storage) but they moved all the functions of that server to AWS and use S3 for the storage. Now I don’t have any clients doing anything with storage that can’t be done in a better way on SSD for a price difference that’s easy for them to afford.

Affordable SSD also makes RAID-1 in workstations more viable. 2 disks in a PC is noisy if you have an office full of them and produces enough waste heat to be a reliability issue (most people don’t cool their offices adequately on weekends). 2 SSDs in a PC is no problem at all. As 500G SSDs are available for $73 it’s not a significant cost to install 2 of them in every PC in the office (more cost for my time than hardware). I generally won’t recommend that hard drives be replaced with SSDs in systems that are working well. But if a machine runs out of space then replacing it with SSDs in a RAID-1 is a good choice.

Moore’s law might cover SSDs, but it definitely doesn’t cover hard drives. Hard drives have fallen way behind developments of most other parts of computers over the last 30 years, hopefully they will go away soon.

Kevin RuddBBC Breakfast: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC BREAKFAST
12 APRIL 2021

The post BBC Breakfast: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddThe Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us

Australians should be proud of their success in suppressing and eliminating coronavirus so far. This is largely due to the efforts of state governments – Labor and Liberal – in containing local outbreaks through a combination of mandatory quarantines, temporary lockdowns and effective contact tracing. And the Australian people themselves have played the biggest part by making this strategy of containment, and eventual elimination, work.

The same cannot be said of the federal government’s vaccination strategy where they have politically trumpeted their success. The daily reality of the vaccination rollout strategy reveals a litany of policy and administrative failures.

Thirteen months into the Covid-19 crisis, the states collectively get a strong B-plus on virus containment; whereas the federal government gets a D-minus on its vaccine rollout.

With the states constitutionally responsible for most of the public health response, Scott Morrison’s main role was: to secure in advance sufficient international and domestic vaccine supply; to do so from multiple vaccine developers to mitigate against the risks of individual vaccines failing; and to organise in advance a distribution strategy that would get the vaccine to the people as rapidly as possible.

On this core responsibility, Morrison has failed. His strategy, once again, is a political strategy. It has been to blame others – the states on delivery and the Europeans on supply.

Ultimately, the delivery of an effective vaccine to the people is the only effective long-term guarantee on a return to public health normality – and therefore economic normality, including the opening of our international borders.

We are now in a race against time to immunise our population, overcome this virus, and start the task of rebuilding from the pandemic. However, five months after Morrison announced Australians were “at the front of the queue” for vaccination, our rollout is presently ranked 104th in the world – sandwiched between Lebanon and Bangladesh – based on the latest seven-day average vaccination rate. This is a national disgrace.

Australians understand this is a race. It is a race between our vaccination rollout to eliminate the virus from our shores, and the rolling risk of the virus mutating. We are reminded of this every time the virus leaks out of hotel quarantine, and whenever we read heart-wrenching stories out of India or Brazil. We understand it when we learn about deadlier and more infectious variants emerging overseas that threaten not only those countries, but the roughly 36,000 stranded Australians who are still trickling home months too late. Each extra day they spend waiting for a quarantine place is another day they risk being exposed to a new variant they could bring back to Australia.

At present, we do not know when all Australians will be vaccinated against Covid-19. We don’t even know when all of our frontline doctors, nurses and quarantine workers will be vaccinated.

Early warnings that Australia should diversify its vaccine portfolio and avoid putting too many eggs in the AstraZeneca basket have been proven right.

And despite the prime minister telling us he has “secured” more Pfizer vaccines, to be delivered sometime around Christmas, the truth is no shipment is truly secure until it is arrived and ready for use.

The truth is we now have no vaccine strategy for half the country this year. Many countries will probably finish rolling out their vaccines before millions of us even get our first shot.

The early perceived political “successes” in Australia’s handling of the virus appear to have induced on Morrison’s part a breathtaking level of political complacency on vaccination strategy that borders on professional negligence. Morrison’s inner circle seem to inhabit an alternate reality. The key decision-makers (many of whom, it seems, have already been vaccinated) insist there is no race at all.

Despite earlier doubling down on unrealistic targets, Morrison now tries to gaslight Australians by claiming he didn’t actually say what we all heard him say. That we would be at the “front of the queue”, that we had access to the best vaccines in the world, and that we would have four million vaccinations done by the end of March. All bullshit.

So what could the prime minister now do? First, Morrison should own up to his responsibilities. Doctors can give excellent medical advice, but they aren’t necessarily experienced at public sector management, international diplomacy or working out how and when vaccines will be delivered to surgeries. Morrison’s job is to ensure that his health bureaucracy has a clear, workable communications plan with the nation’s medical workforce on vaccine distribution.

At the same time, Morrison should recognise that his own hyperactive political messaging is actually eroding the public’s confidence rather than boosting it.

One lesson from the pandemic’s first wave was that many Australians felt far more reassured by straight-talkers than evasive ministers and officials. Public confidence in the vaccination program isn’t eroded by people asking reasonable questions, but by the failure of governments to give straight and factual answers. Morrison and his officials could inspire more confidence if they were less shifty, more candid or simply vacated the public communications space entirely to the chief medical officer.

Second, Australia might look to the United States, which is weeks away from producing a surplus of vaccines. After a century of alliance, partnership and camaraderie, Washington may be able to provide a top-up to at least help vaccinate our most vulnerable frontline workers with the best vaccines available.

Third, we should be learning from our friends and allies about their experiences running mass vaccination centres. One of the major challenges associated with the shift to Pfizer from AstraZeneca is that it requires colder storage facilities and, perhaps most significantly, it requires the second shot to be given about three weeks after the first (rather than about three months for AstraZeneca). The government’s plan A – to mass-vaccinate millions through GP clinics and pharmacies – always seemed far-fetched. It seems inevitable that we may now need to pivot to mass vaccination centres like those in the US.

Fourth, the government must overhaul its local production effort. The pharmaceutical industry is reportedly rife with stories of Australian officials not answering correspondence, not returning phone calls and being generally uninterested in discussing vaccine purchases until several months into the pandemic, by which time those companies had promised billions of doses to other countries.

The same attitudes appear to have driven the government’s approach to our own country’s local mRNA experts. As the Guardian reported last week, “Frustrated experts say Australia could already be producing mRNA Covid vaccines if it had acted earlier”. Any sensible government would have been moving heaven and earth to help make this happen months ago, but not Morrison it would seem.

Australians are not fools. They understand just how vulnerable we remain. And we all know that waiting until Christmas isn’t good enough. As the actor David Wenham tweeted after Morrison’s press conference on Friday, “I just rang my local Priceline pharmacy and ordered 100 million doses of Pfizer vaccine. This is great news and puts Australia at the front of the queue again.” And David, as we all know, is a better actor than Scotty from Marketing will ever be.

First published in The Guardian

Image: Mike Bowers/The Guardian

The post The Guardian: Australia’s vaccination rollout strategy has been an epic fail. Now Scott Morrison is trying to gaslight us appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Exceptionally General

Andres noticed this pattern showing up in his company's code base, and at first didn't think much of it:

try { /*code here*/ } catch (Exception ex) { ExceptionManager.HandleException(ex); throw ex; }

It's not uncommon to have some sort of exception handling framework, maybe some standard logging, or the like. And even if you're using that, it may still make sense to re-throw the exception so another layer of the application can also handle it. But there was just something about it that got Andres's attention.

So Andres did what any curious programmer would do: checked the implementation of HandleException.

public static Exception HandleException(Exception ex)
{
    if (ex is ArgumentException)
        return new InvalidOperationException("(ExceptionManager) Ocurrió un error en el argumento.");
    if (ex is ArgumentOutOfRangeException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred because of an out of range value.");
    if (ex is ArgumentNullException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred tried to access a null value.");
    if (ex is InvalidOperationException)
        return new InvalidOperationException("(ExceptionManager) On error ocurred performing an invalid operation.");
    if (ex is SmtpException)
        return new InvalidOperationException("(ExceptionManager)An error ocurred trying to send an email.");
    if (ex is SqlException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accessing data.");
    if (ex is IOException)
        return new InvalidOperationException("(ExceptionManager) An error ocurred accesing files.");
    return new InvalidOperationException("(ExceptionManager) An error ocurred while trying to perform the application.");
}

So, what this code is trying to do is bad: it wants to destroy all the exception information and convert actual meaningful errors into generic InvalidOperationExceptions. If this code did what it intended to do, it'd be destroying the backtrace, concealing the origin of the error, and making the application significantly harder to debug.

Fortunately, this code actually doesn't do anything. It constructs the new objects, and returns them, but that return value isn't consumed, so it just vanishes into the ether. Then our actual exception handler rethrows the original exception.
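
To make the "does nothing" point concrete, here's a rough Python sketch of the same shape (not the original C#; handle_exception and do_work are made-up names for illustration): the "translated" exception is constructed, returned, and immediately discarded, while the original exception keeps propagating with its traceback intact.

def handle_exception(ex: Exception) -> Exception:
    # Builds a generic replacement exception -- but only *returns* it, so if
    # the caller ignores the return value, nothing observable happens.
    return RuntimeError(f"(ExceptionManager) An error occurred: {type(ex).__name__}")

def do_work() -> None:
    raise ValueError("the real, specific error")

try:
    do_work()
except Exception as ex:
    handle_exception(ex)  # return value silently dropped, just like in the snippet above
    raise                 # the original ValueError (and its traceback) propagates unchanged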

The old saying is "no harm, no foul", and while this doesn't do any harm, it's definitely quite foul.


,

Planet DebianJelmer Vernooij: The upstream ontologist

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The upstream ontologist is a project that extracts metadata about upstream projects in a consistent format. It does this with a combination of heuristics and by reading ecosystem-specific metadata files, such as Python’s setup.py and Rust’s Cargo.toml, as well as by e.g. scanning README files.

Supported Data Sources

It will extract information from a wide variety of sources, including:

Supported Fields

Fields that it currently provides include:

  • Homepage: homepage URL
  • Name: name of the upstream project
  • Contact: contact address of some sort of the upstream (e-mail, mailing list URL)
  • Repository: VCS URL
  • Repository-Browse: Web URL for viewing the VCS
  • Bug-Database: Bug database URL (for web viewing, generally)
  • Bug-Submit: URL to use to submit new bugs (either on the web or an e-mail address)
  • Screenshots: List of URLs with screenshots
  • Archive: Archive used - e.g. SourceForge
  • Security-Contact: e-mail or URL with instructions for reporting security issues
  • Documentation: Link to documentation on the web
  • Wiki: Wiki URL
  • Summary: one-line description of the project
  • Description: longer description of the project
  • License: Single line license description (e.g. "GPL 2.0") as declared in the metadata[1]
  • Copyright: List of copyright holders
  • Version: Current upstream version
  • Security-MD: URL to markdown file with security policy

All data fields have a “certainty” associated with them (“certain”, “confident”, “likely” or “possible”), which gets set depending on how the data was derived or where it was found. If multiple possible values were found for a specific field, then the value with the highest certainty is taken.
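
To make that selection rule concrete, here is a small Python sketch; the ordering list and the pick_best helper are purely illustrative and not the upstream ontologist's actual internals.

# Certainty levels, from weakest to strongest, as described above.
CERTAINTY_ORDER = ["possible", "likely", "confident", "certain"]

def pick_best(candidates):
    """Given (value, certainty) pairs found for one field, return the value
    that was found with the highest certainty."""
    return max(candidates, key=lambda vc: CERTAINTY_ORDER.index(vc[1]))[0]

# Example: a Repository URL guessed from two sources with different certainty.
candidates = [
    ("https://example.org/foo.git", "possible"),   # e.g. scraped from a README
    ("https://www.dulwich.io/code/", "certain"),   # e.g. declared in ecosystem metadata
]
print(pick_best(candidates))  # -> https://www.dulwich.io/code/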

Interface

The ontologist provides a high-level Python API as well as two command-line tools that can write output in two different formats:

For example, running guess-upstream-metadata on dulwich:

 % guess-upstream-metadata
 <string>:2: (INFO/1) Duplicate implicit target name: "contributing".
 Name: dulwich
 Repository: https://www.dulwich.io/code/
 X-Security-MD: https://github.com/dulwich/dulwich/tree/HEAD/SECURITY.md
 X-Version: 0.20.21
 Bug-Database: https://github.com/dulwich/dulwich/issues
 X-Summary: Python Git Library
 X-Description: |
   This is the Dulwich project.
   It aims to provide an interface to git repos (both local and remote) that
   doesn't call out to git directly but instead uses pure Python.
 X-License: Apache License, version 2 or GNU General Public License, version 2 or later.
 Bug-Submit: https://github.com/dulwich/dulwich/issues/new

Lintian-Brush

lintian-brush can update DEP-12-style debian/upstream/metadata files that hold information about the upstream project that is packaged as well as the Homepage in the debian/control file based on information provided by the upstream ontologist. By default, it only imports data with the highest certainty - you can override this by specifying the --uncertain command-line flag.

[1] Obviously this won't be able to describe the full licensing situation for many projects. Projects like scancode-toolkit are more appropriate for that.

Planet DebianVishal Gupta: Sikkim 101 for Backpackers

Host to Kanchenjunga, the world’s third-highest mountain peak, and the endangered Red Panda, Sikkim is a state in northeastern India. Nestled between Nepal, Tibet (China), Bhutan and West Bengal (India), the state offers a smorgasbord of cultures and cuisines. It’s hardly surprising, then, that the old spice route meanders through western Sikkim, connecting Lhasa with the ports of Bengal. The latter could also be attributed to cardamom (kali elaichi), a perennial herb native to Sikkim, of which the state is the world’s second-largest producer. Lastly, having travelled across and lived in India all my life, I can confidently say Sikkim is one of the cleanest & safest regions in India, making it ideal for first-time backpackers.

Brief History

  • 17th century: The Kingdom of Sikkim is founded by the Namgyal dynasty and ruled by Buddhist priest-kings known as the Chogyal.
  • 1890: Sikkim becomes a princely state of British India.
  • 1947: Sikkim continues its protectorate status with the Union of India, post-Indian-independence.
  • 1973: Anti-royalist riots take place in front of the Chogyal's palace, by Nepalis seeking greater representation.
  • 1975: Referendum leads to the deposition of the monarchy and Sikkim joins India as its 22nd state.

Languages

  • Official: English, Nepali, Sikkimese/Bhotia and Lepcha
  • Though Hindi and Nepali share the same script (Devanagari), they are not mutually intelligible. Yet, most people in Sikkim can understand and speak Hindi.

Ethnicity

  • Nepalis: Migrated in large numbers (from Nepal) and soon became the dominant community
  • Bhutias: People of Tibetan origin. Major inhabitants in Northern Sikkim.
  • Lepchas: Original inhabitants of Sikkim

Food

  • Tibetan/Nepali dishes (mostly consumed during winter)
    • Thukpa: Noodle soup, rich in spices and vegetables. Usually contains some form of meat. Common variations: Thenthuk and Gyathuk
    • Momos: Steamed or fried dumplings, usually with a meat filling.
    • Saadheko: Spicy marinated chicken salad.
    • Gundruk Soup: A soup made from Gundruk, a fermented leafy green vegetable.
    • Sinki : A fermented radish tap-root product, traditionally consumed as a base for soup and as a pickle. Eerily similar to Kimchi.
  • While pork and beef are pretty common, finding vegetarian dishes is equally easy.
  • Staple: Dal-Bhat with Subzi. Rice is a lot more common than wheat, possibly due to greater carb content and proximity to West Bengal, India’s largest producer of rice.
  • Good places to eat in Gangtok
    • Hamro Bhansa Ghar, Nimtho (Nepali)
    • Taste of Tibet
    • Dragon Wok (Chinese & Japanese)

Buddhism in Sikkim

  • Bayul Demojong (Sikkim), is the most sacred Land in the Himalayas as per the belief of the Northern Buddhists and various religious texts.
  • Sikkim was blessed by Guru Padmasambhava, the great Buddhist saint who visited Sikkim in the 8th century and consecrated the land.
  • However, Buddhism is said to have reached Sikkim only in the 17th century with the arrival of three Tibetan monks viz. Rigdzin Goedki Demthruchen, Mon Kathok Sonam Gyaltshen & Rigdzin Legden Je at Yuksom. Together, they established a Buddhist monastery.
  • In 1642 they crowned Phuntsog Namgyal as the first monarch of Sikkim and gave him the title of Chogyal, or Dharma Raja.
  • The faith became popular through its royal patronage and soon many villages had their own monastery.
  • Today Sikkim has over 200 monasteries.

Major monasteries

  • Rumtek Monastery, 20Km from Gangtok
  • Lingdum/Ranka Monastery, 17Km from Gangtok
  • Phodong Monastery, 28Km from Gangtok
  • Ralang Monastery, 10Km from Ravangla
  • Tsuklakhang Monastery, Royal Palace, Gangtok
  • Enchey Monastery, Gangtok
  • Tashiding Monastery, 35Km from Ravangla


Reaching Sikkim

  • Gangtok, being the capital, is easiest to reach amongst other regions, by public transport and shared cabs.
  • By Air:
    • Pakyong (PYG) :
      • Nearest airport from Gangtok (about 1 hour away)
      • Tabletop airport
      • Reserved cabs cost around INR 1200.
      • As of Apr 2021, the only flights to PYG are from IGI (Delhi) and CCU (Kolkata).
    • Bagdogra (IXB) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Larger airport with flights to most major Indian cities.
      • Reserved cabs cost about INR 3000. Shared cabs cost about INR 350.
  • By Train:
    • New Jalpaiguri (NJP) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Reserved cabs cost about INR 3000. Shared cabs from INR 350.
  • By Road:
    • NH10 connects Siliguri to Gangtok
    • If you can’t find buses plying to Gangtok directly, reach Siliguri and then take a cab to Gangtok.
  • Sikkim Nationalised Transport Div. also runs hourly buses between Siliguri and Gangtok and daily buses on other common routes. They’re cheaper than shared cabs.
  • Wizzride also operates shared cabs between Siliguri/Bagdogra/NJP, Gangtok and Darjeeling. They cost about the same as shared cabs but pack in half as many people in “luxury cars” (Innova, Xylo, etc.) and are hence more comfortable.

Gangtok

  • Time needed: 1D/1N
  • Places to visit:
    • Hanuman Tok
    • Ganesh Tok
    • Tashi View Point [6,800ft]
    • MG Marg
    • Sikkim Zoo
    • Gangtok Ropeway
    • Enchey Monastery
    • Tsuklakhang Palace & Monastery
  • Hostels: Tagalong Backpackers (would strongly recommend), Zostel Gangtok
  • Places to chill: Travel Cafe, Café Live & Loud and Gangtok Groove
  • Places to shop: Lal Market and MG Marg

Getting Around

  • Taxis operate on a reserved or shared basis. In the latter case, you pool with other commuters whom the taxi picks up and drops en route.
  • Naturally, shared taxis only operate on popular routes. The easiest way to get around Gangtok is to catch a shared cab from MG Marg.
  • Reserved taxis for Gangtok sightseeing cost around INR 1000-1500, depending upon the spots you’d like to see
  • Key taxi/bus stands :
    • Deorali stand: For Darjeeling, Siliguri, Kalimpong
    • Vajra stand: For North & East Sikkim (Tsomgo Lake & Nathula)
    • Rumtek taxi: For Ravangla, Pelling, Namchi, Geyzing, Jorethang and Singtam.

Exploring Gangtok on an MTB


North Sikkim

  • The easiest & most economical way to explore North Sikkim is the 3D/2N package offered by shared-cab drivers.
  • This includes food, permits, cab rides and accommodation (1N in Lachen and 1N in Lachung)
  • The accommodation on both nights is at homestays with bare necessities, so keep your expectations low.
  • In the spirit of sustainable tourism, you’ll be asked to discard single-use plastic bottles, so please carry a bottle that you can refill along the way.
  • Zero Point and Gurdongmer Lake are snow-capped throughout the year

3D/2N Shared-cab Package Itinerary

  • Day 1
    • Gangtok (10am) - Chungthang - Lachung (stay)
  • Day 2
    • Pre-lunch : Lachung (6am) - Yumthang Valley [12,139ft] - Zero Point - Lachung [15,300ft]
    • Post-lunch : Lachung - Chungthang - Lachen (stay)
  • Day 3
    • Pre-lunch : Lachen (5am) - Kala Patthar - Gurdongmer Lake [16,910ft] - Lachen
    • Post-lunch : Lachen - Chungthang - Gangtok (7pm)
  • This itinerary is idealistic and depends on the level of snowfall.
  • Some drivers might switch up Day 2 and 3 itineraries by visiting Lachen and then Lachung, depending upon the weather.
  • Areas beyond Lachen & Lachung are heavily militarized since the Indo-China border is only a few miles away.


East Sikkim

Zuluk and Silk Route

  • Time needed: 2D/1N
  • Zuluk [9,400ft] is a small hamlet with an excellent view of the eastern Himalayan range including the Kanchenjunga.
  • Was once a transit point to the historic Silk Route from Tibet (Lhasa) to India (West Bengal).
  • The drive from Gangtok to Zuluk takes at least four hours. Hence, it makes sense to spend the night at a homestay and space out your trip to Zuluk

Tsomgo Lake and Nathula

  • Time Needed : 1D
  • A Protected Area Permit is required to visit these places, due to their proximity to the Chinese border
  • Tsomgo/Chhangu Lake [12,313ft]
    • Glacial lake, 40 km from Gangtok.
    • Remains frozen during the winter season.
    • You can also ride on the back of a Yak for INR 300
  • Baba Mandir
    • An old temple dedicated to Baba Harbhajan Singh, a Sepoy in the 23rd Regiment, who died in 1962 near the Nathu La during Indo – China war.
  • Nathula Pass [14,450ft]
    • Located on the Indo-Tibetan border crossing of the Old Silk Route, it is one of the three open trading posts between India and China.
    • Plays a key role in the Sino-Indian Trade and also serves as an official Border Personnel Meeting(BPM) Point.
    • May get cordoned off by the Indian Army in event of heavy snowfall or for other security reasons.


West Sikkim

  • Time needed: 3D/2N
  • Hostels at Pelling: Mochilerro Ostillo

Itinerary

Day 1: Gangtok - Ravangla - Pelling

  • Leave Gangtok early, for Ravangla through the Temi Tea Estate route.
  • Spend some time at the tea garden and then visit Buddha Park at Ravangla
  • Head to Pelling from Ravangla

Day 2: Pelling sightseeing

  • Hire a cab and visit Skywalk, Pemayangtse Monastery, Rabdentse Ruins, Kecheopalri Lake, Kanchenjunga Falls.

Day 3: Pelling - Gangtok/Siliguri

  • Wake up early to catch a glimpse of Kanchenjunga at the Pelling Helipad around sunrise
  • Head back to Gangtok on a shared-cab
  • You could take a bus/taxi back to Siliguri if Pelling is your last stop.

Darjeeling

  • In my opinion, Darjeeling is lovely for a two-day detour on your way back to Bagdogra/Siliguri and not any longer (unless you’re a Bengali couple on a honeymoon)
  • Once a part of Sikkim, Darjeeling was ceded to the East India Company after a series of wars, with Sikkim briefly receiving a grant from EIC for “gifting” Darjeeling to the latter
  • Post-independence, Darjeeling was merged with the state of West Bengal.

Itinerary

Day 1 :

  • Take a cab from Gangtok to Darjeeling (shared-cabs cost INR 300 per seat)
  • Reach Darjeeling by noon and check in to your Hostel. I stayed at Hideout.
  • Spend the evening visiting either a monastery (or the Batasia Loop), Nehru Road and Mall Road.
  • Grab dinner at Glenary whilst listening to live music.

Day 2:

  • Wake up early to catch the sunrise and a glimpse of Kanchenjunga at Tiger Hill. Since Tiger Hill is 10km from Darjeeling and requires a permit, book your taxi in advance.
  • Alternatively, if you don’t want to get up at 4am or shell out INR1500 on the cab to Tiger Hill, walk to the Kanchenjunga View Point down Mall Road
  • Next, queue up outside Keventers for breakfast with a view in a century-old cafe
  • Get a cab at Gandhi Road and visit a tea garden (Happy Valley is the closest) and the Ropeway. I was lucky to meet 6 other backpackers at my hostel and we ended up pooling the cab at INR 200 per person, with INR 1400 being on the expensive side, but you could bargain.
  • Get lunch, buy some tea at Golden Tips, pack your bags and hop on a shared-cab back to Siliguri. It took us about 4hrs to reach Siliguri, with an hour to spare before my train.
  • If you’ve still got time on your hands, then check out the Peace Pagoda and the Darjeeling Himalayan Railway (Toy Train). At INR 1500, I found the latter to be too expensive and skipped it.


Tips and hacks

  • Download offline maps, especially when you’re exploring Northern Sikkim.
  • Food and booze are the cheapest in Gangtok. Stash up before heading to other regions.
  • Keep your Aadhar/Passport handy since you need permits to travel to North & East Sikkim.
  • In rural areas and some cafes, you may get to try Rhododendron Wine, made from Rhododendron arboreum a.k.a Gurans. Its production is a little hush-hush since the flower is considered holy and is also the National Flower of Nepal.
  • If you don’t want to invest in a new jacket, boots or a pair of gloves, you can always rent them at nominal rates from your hotel or little stores around tourist sites.
  • Check the weather of a region before heading there. Low visibility and precipitation can quite literally dampen your experience.
  • Keep your itinerary flexible to accommodate for rest and impromptu plans.
  • Shops and restaurants close by 8pm in Sikkim and Darjeeling. Plan for the same.

Carry…

  • a couple of extra pairs of socks (woollen, if possible)
  • a pair of slippers to wear indoors
  • a reusable water bottle
  • an umbrella
  • a power bank
  • a couple of tablets of Diamox. Helps deal with altitude sickness
  • extra clothes and wet bags since you may not get a chance to wash/dry your clothes
  • a few passport size photographs

Shared-cab hacks

  • Intercity rides can be exhausting. If you can afford it, pay for an additional seat.
  • Call shotgun on the drives beyond Lachen and Lachung. The views are breathtaking.
  • Return cabs tend to be cheaper (WB cabs travelling from SK and vice-versa)

Cost

  • My median daily expenditure (back when I went to Sikkim in early March 2021) was INR 1350.
  • This includes stay (bunk bed), food, wine and transit (shared cabs)
  • In my defence, I splurged on food, wine and extra seats in shared cabs, but if you’re on a budget, you could easily get by on INR 1 - 1.2k per day.
  • For a 9-day trip, I ended up shelling out nearly INR 15k, including 2AC trains to & from Kolkata
  • Note : Summer (March to May) and Autumn (October to December) are peak seasons, and thereby more expensive to travel around.

Souvenirs and things you should buy

Buddhist souvenirs :

  • Colourful Prayer Flags (great for tying on bikes or behind car windshields)
  • Miniature Prayer/Mani Wheels
  • Lucky Charms, Pendants and Key Chains
  • Cham Dance masks and robes
  • Singing Bowls
  • Common symbols: Om mani padme hum, Ashtamangala, Zodiac signs

Handicrafts & Handlooms

  • Tibetan Yak Wool shawls, scarfs and carpets
  • Sikkimese Ceramic cups
  • Thangka Paintings

Edibles

  • Darjeeling Tea (usually brewed and not boiled)
  • Wine (Arucha Peach & Rhododendron)
  • Dalle Khursani (Chilli) Paste and Pickle

Header Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY

Planet DebianJonathan Dowland: 2020 in short fiction

Cover for *Episodes*

Following on from 2020 in Fiction: In 2020 I read a couple of collections of short fiction from some of my favourite authors.

I started the year with Christopher Priest's Episodes. The stories within are collected from throughout his long career, and vary in style and tone. Priest wrote new little prologues and epilogues for each of the stories, explaining the context in which they were written. I really enjoyed this additional view into their construction.

Cover for *Adam Robots*

By contrast, Adam Roberts’s Adam Robots presents the stories on their own terms. Each of the stories is written in a different mode: one as golden-age SF, another as a kind of Cyberpunk, for example, although they all blend or confound sub-genres to some degree. I'm not clever enough to have decoded all their secrets on a first read, and I would have appreciated some “Cliff's Notes” on any deeper meaning or intent.

Cover for *Exhalation*

Ted Chiang's Exhalation was up to the fantastic standard of his earlier collection and had some extremely thoughtful explorations of philosophical ideas. All the stories are strong but one stuck in my mind the longest: Omphalos…

With my daughter I finished three of Terry Pratchett's short story collections aimed at children: Dragon at Crumbling Castle; The Witch's Vacuum Cleaner and The Time-Travelling Caveman. If you are a Pratchett fan and you've overlooked these because they're aimed at children, take another look. The quality varies, but there are some true gems in these. Several stories take place in common settings, either the town of Blackbury, in Gritshire (and the adjacent Even Moor), or the Welsh border-town of Llandanffwnfafegettupagogo. The sad thing was knowing that once I'd finished them (and the fourth, Father Christmas's Fake Beard) that was it: there would be no more.

Cover for Interzone, issue 277

8/31 of the "books" I read in 2020 were issues of Interzone. Counting them as "books" for my annual reading goal has encouraged me to read full issues, whereas before I would likely have only read a couple of stories from each issue. Reading full issues has rekindled the enjoyment I got out of it when I first discovered the magazine at the turn of the Century. I am starting to recognise stories by authors that have written stories in other issues, as well as common themes from the current era weaving their way into the work (Trump, Brexit, etc.) No doubt the Pandemic will leave its mark on 2021's stories.

Planet DebianJunichi Uekawa: Wrote a timezone checker page.

Wrote a timezone checker page: timezone. It shows the current time as a blue line. I haven't made anything configurable yet, but will think about it later.

,

Planet DebianCharles Plessy: Debian Bullseye: more open

Debian Bullseye will provide the command /usr/bin/open for your greatest comfort at the command line. On a system with a graphical desktop environment, the command should have a similar result as when opening a document from a mouse-and-click file browser.

Technically, /usr/bin/open is a symbolic link managed by update-alternatives to point towards xdg-open if available and otherwise run-mailcap.
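
If you are curious which target the alternatives system picked on a given machine, a small sketch that simply follows the symlink chain described above:

import os

# /usr/bin/open -> /etc/alternatives/open -> the selected tool,
# typically xdg-open on a desktop system, otherwise run-mailcap.
print(os.path.realpath("/usr/bin/open"))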

Planet DebianKentaro Hayashi: Grow your ideas for Debian Project

There may be some "if only it could be ..." ideas for the Debian Project. If an idea is concrete and worth moving forward, it should become a proposal for Project Funding.

salsa.debian.org

But if it is just an idea, or nobody can afford to take on the executor role, that idea will not be achieved.

I thought that it needs an incubator - a complementary project.

salsa.debian.org

I've salvaged an idea from the closed MR "Add proposal about 'Formalize reimbursement process' (!5)" on the Freexian SARL / Project Funding project on GitLab.

I'm not confident whether this mechanism will work, but Debian needs change.

Kevin RuddBBC World: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC WORLD NEWS
9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC World: HRH Prince Philip appeared first on Kevin Rudd.

Kevin RuddBBC Newsnight: HRH Prince Philip

INTERVIEW VIDEO
TV INTERVIEW
BBC ‘NEWSNIGHT’
9 APRIL 2021

Topic: His Royal Highness Prince Philip, the Duke of Edinburgh

The post BBC Newsnight: HRH Prince Philip appeared first on Kevin Rudd.

,

Kevin RuddStatement: HRH The Duke of Edinburgh

Thérèse and I are deeply saddened by the news of the death of His Royal Highness Prince Philip.

We would like to extend our deepest condolences to his lifelong partner Her Majesty The Queen, and other members of the Royal Family.

Prince Philip lived to a venerable age. Both Thérèse and I had the opportunity to meet and converse with both His Royal Highness and Her Majesty on a number of occasions. It was plain from those conversations that Prince Philip had a deep and abiding affection for Australia.

It matters not whether Australians are republicans or monarchists, Prince Philip’s passing is a very sad day for the Royal Family who, like all families, will be grieving deeply the loss of a loving husband, father, grandfather, and great-grandfather.

Our thoughts should all be with Her Majesty The Queen at this time.

Image: ABC / Her Royal Highness Queen Elizabeth II and the Duke of Edinburgh on Royal train at Bathurst, NSW, while on tour, February 1954.

The post Statement: HRH The Duke of Edinburgh appeared first on Kevin Rudd.

Cryptogram Friday Squid Blogging: Squid-Shaped Bike Rack

There’s a new squid-shaped bike rack in Ballard, WA.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Blobs of Squid Eggs Found Near Norway

Divers find three-foot “blobs” — egg sacs of the squid Illex coindetii — off the coast of Norway.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Jurassic Squid and Prey

A 180-million-year-old Vampire squid ancestor was fossilized along with its prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin RuddSubmission: Parliamentary Petitions

In February 2021, I made a written submission to the Australian House of Representatives Inquiry into aspects of its petitioning system.

The Inquiry was launched following technological failures by the Department of Parliamentary Services which resulted in many thousands of Australians being unable to sign the Petition for a Royal Commission to ensure the strength and diversity of Australian news media.

The Inquiry is also examining a malicious cyberattack on that petition by a right-wing activist, inspired by a segment he saw on Murdoch’s Sky News.

Click here to read my submission.

The post Submission: Parliamentary Petitions appeared first on Kevin Rudd.

Planet DebianMichael Prokop: A Ceph war story

It all started with the big bang! We nearly lost 33 of 36 disks on a Proxmox/Ceph Cluster; this is the story of how we recovered them.

At the end of 2020, we eventually had a long outstanding maintenance window for taking care of system upgrades at a customer. During this maintenance window, which involved reboots of server systems, the involved Ceph cluster unexpectedly went into a critical state. What was planned to be a few hours of checklist work in the early evening turned out to be an emergency case; let’s call it a nightmare (not only because it included a big part of the night). Since we have learned a few things from our post mortem and RCA, it’s worth sharing those with others. But first things first, let’s step back and clarify what we had to deal with.

The system and its upgrade

One part of the upgrade included 3 Debian servers (we’re calling them server1, server2 and server3 here), running on Proxmox v5 + Debian/stretch with 12 Ceph OSDs each (65.45TB in total), a so-called Proxmox Hyper-Converged Ceph Cluster.

First, we went for upgrading the Proxmox v5/stretch system to Proxmox v6/buster, before updating Ceph Luminous v12.2.13 to the latest v14.2 release, supported by Proxmox v6/buster. The Proxmox upgrade included updating corosync from v2 to v3. As part of this upgrade, we had to apply some configuration changes, like adjust ring0 + ring1 address settings and add a mon_host configuration to the Ceph configuration.

During the first two servers’ reboots, we noticed configuration glitches. After fixing those, we went for a reboot of the third server as well. Then we noticed that several Ceph OSDs were unexpectedly down. The NTP service wasn’t working as expected after the upgrade. The underlying issue is a race condition of ntp with systemd-timesyncd (see #889290). As a result, we had clock skew problems with Ceph, indicating that the Ceph monitors’ clocks weren’t running in sync (which is essential for proper Ceph operation). We initially assumed that our Ceph OSD failure derived from this clock skew problem, so we took care of it. After yet another round of reboots to ensure the systems were all running with identical and sane configurations and services, we noticed lots of failing OSDs. This time all but three OSDs (19, 21 and 22) were down:

% sudo ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       65.44138 root default
-2       21.81310     host server1
 0   hdd  1.08989         osd.0    down  1.00000 1.00000
 1   hdd  1.08989         osd.1    down  1.00000 1.00000
 2   hdd  1.63539         osd.2    down  1.00000 1.00000
 3   hdd  1.63539         osd.3    down  1.00000 1.00000
 4   hdd  1.63539         osd.4    down  1.00000 1.00000
 5   hdd  1.63539         osd.5    down  1.00000 1.00000
18   hdd  2.18279         osd.18   down  1.00000 1.00000
20   hdd  2.18179         osd.20   down  1.00000 1.00000
28   hdd  2.18179         osd.28   down  1.00000 1.00000
29   hdd  2.18179         osd.29   down  1.00000 1.00000
30   hdd  2.18179         osd.30   down  1.00000 1.00000
31   hdd  2.18179         osd.31   down  1.00000 1.00000
-4       21.81409     host server2
 6   hdd  1.08989         osd.6    down  1.00000 1.00000
 7   hdd  1.08989         osd.7    down  1.00000 1.00000
 8   hdd  1.63539         osd.8    down  1.00000 1.00000
 9   hdd  1.63539         osd.9    down  1.00000 1.00000
10   hdd  1.63539         osd.10   down  1.00000 1.00000
11   hdd  1.63539         osd.11   down  1.00000 1.00000
19   hdd  2.18179         osd.19     up  1.00000 1.00000
21   hdd  2.18279         osd.21     up  1.00000 1.00000
22   hdd  2.18279         osd.22     up  1.00000 1.00000
32   hdd  2.18179         osd.32   down  1.00000 1.00000
33   hdd  2.18179         osd.33   down  1.00000 1.00000
34   hdd  2.18179         osd.34   down  1.00000 1.00000
-3       21.81419     host server3
12   hdd  1.08989         osd.12   down  1.00000 1.00000
13   hdd  1.08989         osd.13   down  1.00000 1.00000
14   hdd  1.63539         osd.14   down  1.00000 1.00000
15   hdd  1.63539         osd.15   down  1.00000 1.00000
16   hdd  1.63539         osd.16   down  1.00000 1.00000
17   hdd  1.63539         osd.17   down  1.00000 1.00000
23   hdd  2.18190         osd.23   down  1.00000 1.00000
24   hdd  2.18279         osd.24   down  1.00000 1.00000
25   hdd  2.18279         osd.25   down  1.00000 1.00000
35   hdd  2.18179         osd.35   down  1.00000 1.00000
36   hdd  2.18179         osd.36   down  1.00000 1.00000
37   hdd  2.18179         osd.37   down  1.00000 1.00000

Our blood pressure increased slightly! Did we just lose all of our cluster? What happened, and how can we get all the other OSDs back?

We stumbled upon this beauty in our logs:

kernel: [   73.697957] XFS (sdl1): SB stripe unit sanity check failed
kernel: [   73.698002] XFS (sdl1): Metadata corruption detected at xfs_sb_read_verify+0x10e/0x180 [xfs], xfs_sb block 0xffffffffffffffff
kernel: [   73.698799] XFS (sdl1): Unmount and run xfs_repair
kernel: [   73.699199] XFS (sdl1): First 128 bytes of corrupted metadata buffer:
kernel: [   73.699677] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
kernel: [   73.700205] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
kernel: [   73.700836] 00000020: 62 44 2b c0 e6 22 40 d7 84 3d e1 cc 65 88 e9 d8  bD+.."@..=..e...
kernel: [   73.701347] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
kernel: [   73.701770] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
ceph-disk[4240]: mount: /var/lib/ceph/tmp/mnt.jw367Y: mount(2) system call failed: Structure needs cleaning.
ceph-disk[4240]: ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', u'xfs', '-o', 'noatime,inode64', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.cdda39ed-5
ceph/tmp/mnt.jw367Y']' returned non-zero exit status 32
kernel: [   73.702162] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
kernel: [   73.702550] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
kernel: [   73.702975] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
kernel: [   73.703373] XFS (sdl1): SB validate failed with error -117.

The same issue was present for the other failing OSDs. We hoped that the data itself was still there, and that only the mounting of the XFS partitions failed. The Ceph cluster was initially installed in 2017 with Ceph jewel/10.2 with the OSDs on filestore (nowadays a legacy approach to storing objects in Ceph). However, we had migrated the disks to bluestore since then (with ceph-disk, and not yet via ceph-volume, which is what’s used nowadays). Using ceph-disk introduces these 100MB XFS partitions containing basic metadata for the OSD.

Given that we had three working OSDs left, we decided to investigate how to rebuild the failing ones. Some folks on #ceph (thanks T1, ormandj + peetaur!) were kind enough to share what working XFS partitions looked like for them. After creating a backup (via dd), we tried to re-create such an XFS partition on server1. We noticed that even mounting a freshly created XFS partition failed:

synpromika@server1 ~ % sudo mkfs.xfs -f -i size=2048 -m uuid="4568c300-ad83-4288-963e-badcd99bf54f" /dev/sdc1
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
synpromika@server1 ~ % sudo mount /dev/sdc1 /mnt/ceph-recovery
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
cache_node_purge: refcount was 1, not zero (node=0x1d3c400)
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x18800/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x18800/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x24c00/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x24c00/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0xc400/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0xc400/0x1000
releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!found dirty buffer (bulk) on free list!bad magic number
bad magic number
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
releasing dirty buffer (bulk) to free list!mount: /mnt/ceph-recovery: wrong fs type, bad option, bad superblock on /dev/sdc1, missing codepage or helper program, or other error.

Ouch. This very much looked related to the actual issue we’re seeing. So we tried to execute mkfs.xfs with a bunch of different sunit/swidth settings. Using ‘-d sunit=512 -d swidth=512‘ at least worked then, so we decided to force its usage in the creation of our OSD XFS partition. This brought us a working XFS partition. Please note, sunit must not be larger than swidth (more on that later!).

Then we reconstructed how to restore all the metadata for the OSD (activate.monmap, active, block_uuid, bluefs, ceph_fsid, fsid, keyring, kv_backend, magic, mkfs_done, ready, require_osd_release, systemd, type, whoami). To identify the UUID, we can read the data from ‘ceph --format json osd dump‘, like this for all our OSDs (Zsh syntax ftw!):

synpromika@server1 ~ % for f in {0..37} ; printf "osd-$f: %s\n" "$(sudo ceph --format json osd dump | jq -r ".osds[] | select(.osd==$f) | .uuid")"
osd-0: 4568c300-ad83-4288-963e-badcd99bf54f
osd-1: e573a17a-ccde-4719-bdf8-eef66903ca4f
osd-2: 0e1b2626-f248-4e7d-9950-f1a46644754e
osd-3: 1ac6a0a2-20ee-4ed8-9f76-d24e900c800c
[...]

Identifying the corresponding raw device for each OSD UUID is possible via:

synpromika@server1 ~ % UUID="4568c300-ad83-4288-963e-badcd99bf54f"
synpromika@server1 ~ % readlink -f /dev/disk/by-partuuid/"${UUID}"
/dev/sdc1

The OSD’s key ID can be retrieved via:

synpromika@server1 ~ % OSD_ID=0
synpromika@server1 ~ % sudo ceph auth get osd."${OSD_ID}" -f json 2>/dev/null | jq -r '.[] | .key'
AQCKFpZdm0We[...]

Now we also need to identify the underlying block device:

synpromika@server1 ~ % OSD_ID=0
synpromika@server1 ~ % sudo ceph osd metadata osd."${OSD_ID}" -f json | jq -r '.bluestore_bdev_partition_path'    
/dev/sdc2

With all of this, we reconstructed the keyring, fsid, whoami, block + block_uuid files. All the other files inside the XFS metadata partition are identical on each OSD. So after placing and adjusting the corresponding metadata on the XFS partition for Ceph usage, we got a working OSD – hurray! Since we had to fix yet another 32 OSDs, we decided to automate this XFS partitioning and metadata recovery procedure.

We had a network share available on /srv/backup for storing backups of existing partition data. On each server, we tested the procedure with one single OSD before iterating over the list of remaining failing OSDs. We started with a shell script on server1, then adjusted the script for server2 and server3. This is the script, as we executed it on the 3rd server.

Thanks to this, we managed to get the Ceph cluster up and running again. We didn’t want to continue with the Ceph upgrade itself during the night though, as we wanted to know exactly what was going on and why the system behaved like that. Time for RCA!

Root Cause Analysis

So every OSD had failed except for three on server2, and the problem seemed to be related to XFS. Therefore, our starting point for the RCA was to identify what was different on server2 compared to server1 + server3. My initial assumption was that this was related to some firmware issue with the involved controller (and as it turned out later, I was right!). The disks were attached as JBOD devices to a ServeRAID M5210 controller (with a stripe size of 512). Firmware state:

synpromika@server1 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.16.0-0092
Firmware Version = 4.660.00-8156

synpromika@server2 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.21.0-0112
Firmware Version = 4.680.00-8489

synpromika@server3 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.16.0-0092
Firmware Version = 4.660.00-8156

This looked very promising, as server2 indeed runs with a different firmware version on the controller. But how so? Well, the motherboard of server2 got replaced by a Lenovo/IBM technician in January 2020, as we had a failing memory slot during a memory upgrade. As part of this procedure, the Lenovo/IBM technician installed the latest firmware versions. According to our documentation, some OSDs were rebuilt (due to the filestore->bluestore migration) in March and April 2020. It turned out that precisely those OSDs were the ones that survived the upgrade. So the surviving drives were created with a different firmware version running on the involved controller. All the other OSDs were created with an older controller firmware. But what difference does this make?

Now let’s check firmware changelogs. For the 24.21.0-0097 release we found this:

- Cannot create or mount xfs filesystem using xfsprogs 4.19.x kernel 4.20(SCGCQ02027889)
- xfs_info command run on an XFS file system created on a VD of strip size 1M shows sunit and swidth as 0(SCGCQ02056038)

Our XFS problem certainly was related to the controller’s firmware. We also recalled that our monitoring system reported different sunit settings for the OSDs that were rebuilt in March and April. For example, OSD 21 was recreated and got different sunit settings:

WARN  server2.example.org  Mount options of /var/lib/ceph/osd/ceph-21      WARN - Missing: sunit=1024, Exceeding: sunit=512

We compared the new OSD 21 with an existing one (OSD 25 on server3):

synpromika@server2 ~ % systemctl show var-lib-ceph-osd-ceph\\x2d21.mount | grep sunit
Options=rw,noatime,attr2,inode64,sunit=512,swidth=512,noquota
synpromika@server3 ~ % systemctl show var-lib-ceph-osd-ceph\\x2d25.mount | grep sunit
Options=rw,noatime,attr2,inode64,sunit=1024,swidth=512,noquota

Thanks to our documentation, we could compare execution logs of their creation:

% diff -u ceph-disk-osd-25.log ceph-disk-osd-21.log
-synpromika@server2 ~ % sudo ceph-disk -v prepare --bluestore /dev/sdj --osd-id 25
+synpromika@server3 ~ % sudo ceph-disk -v prepare --bluestore /dev/sdi --osd-id 21
[...]
-command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdj1
-meta-data=/dev/sdj1              isize=2048   agcount=4, agsize=6272 blks
[...]
+command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdi1
+meta-data=/dev/sdi1              isize=2048   agcount=4, agsize=6336 blks
          =                       sectsz=4096  attr=2, projid32bit=1
          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
-data     =                       bsize=4096   blocks=25088, imaxpct=25
-         =                       sunit=128    swidth=64 blks
+data     =                       bsize=4096   blocks=25344, imaxpct=25
+         =                       sunit=64     swidth=64 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal log           bsize=4096   blocks=1608, version=2
          =                       sectsz=4096  sunit=1 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
[...]

So back then, we even tried to track this down but couldn’t make sense of it yet. But now this sounds very much like it is related to the problem we saw with this Ceph/XFS failure. We follow Occam’s razor, assuming the simplest explanation is usually the right one, so let’s check the disk properties and see what differs:

synpromika@server1 ~ % sudo blockdev --getsz --getsize64 --getss --getpbsz --getiomin --getioopt /dev/sdk
4685545472
2398999281664
512
4096
524288
262144

synpromika@server2 ~ % sudo blockdev --getsz --getsize64 --getss --getpbsz --getiomin --getioopt /dev/sdk
4685545472
2398999281664
512
4096
262144
262144

See the difference between server1 and server2 for identical disks? The getiomin option now reports something different for them:

synpromika@server1 ~ % sudo blockdev --getiomin /dev/sdk            
524288
synpromika@server1 ~ % cat /sys/block/sdk/queue/minimum_io_size
524288

synpromika@server2 ~ % sudo blockdev --getiomin /dev/sdk 
262144
synpromika@server2 ~ % cat /sys/block/sdk/queue/minimum_io_size
262144

It doesn’t make sense that the minimum I/O size (iomin, AKA BLKIOMIN) is bigger than the optimal I/O size (ioopt, AKA BLKIOOPT). This leads us to Bug 202127 – cannot mount or create xfs on a 597T device, which matches our findings here. But why did this XFS partition work in the past but fail now with the newer kernel version?
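
As a quick way to spot affected disks up front, one could compare those two values per device; a small Python sketch that simply reads the same /sys/block/*/queue files we query further below:

from pathlib import Path

# Flag block devices whose minimum I/O size exceeds the optimal I/O size --
# exactly the inconsistency that trips the XFS stripe unit sanity check.
for queue in sorted(Path("/sys/block").glob("sd*/queue")):
    iomin = int((queue / "minimum_io_size").read_text())
    ioopt = int((queue / "optimal_io_size").read_text())
    if ioopt and iomin > ioopt:
        print(f"{queue}: minimum_io_size {iomin} > optimal_io_size {ioopt} -- suspicious")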

The XFS behaviour change

Now, given that we have backups of all the XFS partitions, we wanted to track down a) when this XFS behaviour change was introduced, and b) whether, and if so how, it would be possible to reuse the XFS partition without having to rebuild it from scratch (e.g. if you had no working Ceph OSD or backups left).

Let’s look at such a failing XFS partition with the Grml live system:

root@grml ~ # grml-version
grml64-full 2020.06 Release Codename Ausgehfuahangl [2020-06-24]
root@grml ~ # uname -a
Linux grml 5.6.0-2-amd64 #1 SMP Debian 5.6.14-2 (2020-06-09) x86_64 GNU/Linux
root@grml ~ # grml-hostname grml-2020-06
Setting hostname to grml-2020-06: done
root@grml ~ # exec zsh
root@grml-2020-06 ~ # dpkg -l xfsprogs util-linux
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=========================================
ii  util-linux     2.35.2-4     amd64        miscellaneous system utilities
ii  xfsprogs       5.6.0-1+b2   amd64        Utilities for managing the XFS filesystem

There it’s failing, no matter which mount option we try:

root@grml-2020-06 ~ # mount ./sdd1.dd /mnt
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
root@grml-2020-06 ~ # dmesg | tail -30
[...]
[   64.788640] XFS (loop1): SB stripe unit sanity check failed
[   64.788671] XFS (loop1): Metadata corruption detected at xfs_sb_read_verify+0x102/0x170 [xfs], xfs_sb block 0xffffffffffffffff
[   64.788671] XFS (loop1): Unmount and run xfs_repair
[   64.788672] XFS (loop1): First 128 bytes of corrupted metadata buffer:
[   64.788673] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
[   64.788674] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[   64.788675] 00000020: 32 b6 dc 35 53 b7 44 96 9d 63 30 ab b3 2b 68 36  2..5S.D..c0..+h6
[   64.788675] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
[   64.788675] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
[   64.788676] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
[   64.788677] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
[   64.788677] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
[   64.788679] XFS (loop1): SB validate failed with error -117.
root@grml-2020-06 ~ # mount -t xfs -o rw,relatime,attr2,inode64,sunit=1024,swidth=512,noquota ./sdd1.dd /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/loop1, missing codepage or helper program, or other error.
32 root@grml-2020-06 ~ # dmesg | tail -1
[   66.342976] XFS (loop1): stripe width (512) must be a multiple of the stripe unit (1024)
root@grml-2020-06 ~ # mount -t xfs -o rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota ./sdd1.dd /mnt/
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
32 root@grml-2020-06 ~ # dmesg | tail -14
[   66.342976] XFS (loop1): stripe width (512) must be a multiple of the stripe unit (1024)
[   80.751277] XFS (loop1): SB stripe unit sanity check failed
[   80.751323] XFS (loop1): Metadata corruption detected at xfs_sb_read_verify+0x102/0x170 [xfs], xfs_sb block 0xffffffffffffffff 
[   80.751324] XFS (loop1): Unmount and run xfs_repair
[   80.751325] XFS (loop1): First 128 bytes of corrupted metadata buffer:
[   80.751327] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
[   80.751328] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[   80.751330] 00000020: 32 b6 dc 35 53 b7 44 96 9d 63 30 ab b3 2b 68 36  2..5S.D..c0..+h6
[   80.751331] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
[   80.751331] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
[   80.751332] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
[   80.751333] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
[   80.751334] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
[   80.751338] XFS (loop1): SB validate failed with error -117.

Also xfs_repair doesn’t help either:

root@grml-2020-06 ~ # xfs_info ./sdd1.dd
meta-data=./sdd1.dd              isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@grml-2020-06 ~ # xfs_repair ./sdd1.dd
Phase 1 - find and verify superblock...
bad primary superblock - bad stripe width in superblock !!!

attempting to find secondary superblock...
..............................................................................................Sorry, could not find valid secondary superblock
Exiting now.

With the “SB stripe unit sanity check failed” message, we could easily track this down to the following commit fa4ca9c:

% git show fa4ca9c5574605d1e48b7e617705230a0640b6da | cat
commit fa4ca9c5574605d1e48b7e617705230a0640b6da
Author: Dave Chinner <dchinner@redhat.com>
Date:   Tue Jun 5 10:06:16 2018 -0700
    
    xfs: catch bad stripe alignment configurations
    
    When stripe alignments are invalid, data alignment algorithms in the
    allocator may not work correctly. Ensure we catch superblocks with
    invalid stripe alignment setups at mount time. These data alignment
    mismatches are now detected at mount time like this:
    
    XFS (loop0): SB stripe unit sanity check failed
    XFS (loop0): Metadata corruption detected at xfs_sb_read_verify+0xab/0x110, xfs_sb block 0xffffffffffffffff
    XFS (loop0): Unmount and run xfs_repair
    XFS (loop0): First 128 bytes of corrupted metadata buffer:
    0000000091c2de02: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 10 00  XFSB............
    0000000023bff869: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00000000cdd8c893: 17 32 37 15 ff ca 46 3d 9a 17 d3 33 04 b5 f1 a2  .27...F=...3....
    000000009fd2844f: 00 00 00 00 00 00 00 04 00 00 00 00 00 00 06 d0  ................
    0000000088e9b0bb: 00 00 00 00 00 00 06 d1 00 00 00 00 00 00 06 d2  ................
    00000000ff233a20: 00 00 00 01 00 00 10 00 00 00 00 01 00 00 00 00  ................
    000000009db0ac8b: 00 00 03 60 e1 34 02 00 08 00 00 02 00 00 00 00  ...`.4..........
    00000000f7022460: 00 00 00 00 00 00 00 00 0c 09 0b 01 0c 00 00 19  ................
    XFS (loop0): SB validate failed with error -117.
    
    And the mount fails.
    
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>

diff --git fs/xfs/libxfs/xfs_sb.c fs/xfs/libxfs/xfs_sb.c
index b5dca3c8c84d..c06b6fc92966 100644
--- fs/xfs/libxfs/xfs_sb.c
+++ fs/xfs/libxfs/xfs_sb.c
@@ -278,6 +278,22 @@ xfs_mount_validate_sb(
                return -EFSCORRUPTED;
        }
        
+       if (sbp->sb_unit) {
+               if (!xfs_sb_version_hasdalign(sbp) ||
+                   sbp->sb_unit > sbp->sb_width ||
+                   (sbp->sb_width % sbp->sb_unit) != 0) {
+                       xfs_notice(mp, "SB stripe unit sanity check failed");
+                       return -EFSCORRUPTED;
+               } 
+       } else if (xfs_sb_version_hasdalign(sbp)) { 
+               xfs_notice(mp, "SB stripe alignment sanity check failed");
+               return -EFSCORRUPTED;
+       } else if (sbp->sb_width) {
+               xfs_notice(mp, "SB stripe width sanity check failed");
+               return -EFSCORRUPTED;
+       }
+
+       
        if (xfs_sb_version_hascrc(&mp->m_sb) &&
            sbp->sb_blocksize < XFS_MIN_CRC_BLOCKSIZE) {
                xfs_notice(mp, "v5 SB sanity check failed");

This change is included in kernel versions 4.18-rc1 and newer:

% git describe --contains fa4ca9c5574605d1e48
v4.18-rc1~37^2~14

Now let’s try with an older kernel version (4.9.0), using old Grml 2017.05 release:

root@grml ~ # grml-version
grml64-small 2017.05 Release Codename Freedatensuppe [2017-05-31]
root@grml ~ # uname -a
Linux grml 4.9.0-1-grml-amd64 #1 SMP Debian 4.9.29-1+grml.1 (2017-05-24) x86_64 GNU/Linux
root@grml ~ # lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.0 (stretch)
Release:        9.0
Codename:       stretch
root@grml ~ # grml-hostname grml-2017-05
Setting hostname to grml-2017-05: done
root@grml ~ # exec zsh
root@grml-2017-05 ~ #

root@grml-2017-05 ~ # xfs_info ./sdd1.dd
xfs_info: ./sdd1.dd is not a mounted XFS filesystem
1 root@grml-2017-05 ~ # xfs_repair ./sdd1.dd
Phase 1 - find and verify superblock...
bad primary superblock - bad stripe width in superblock !!!

attempting to find secondary superblock...
..............................................................................................Sorry, could not find valid secondary superblock
Exiting now.
1 root@grml-2017-05 ~ # mount ./sdd1.dd /mnt
root@grml-2017-05 ~ # mount -t xfs
/root/sdd1.dd on /mnt type xfs (rw,relatime,attr2,inode64,sunit=1024,swidth=512,noquota)
root@grml-2017-05 ~ # ls /mnt
activate.monmap  active  block  block_uuid  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  require_osd_release  systemd  type  whoami
root@grml-2017-05 ~ # xfs_info /mnt
meta-data=/dev/loop1             isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mounting there indeed works! Now, if we mount the filesystem with new and proper sunit/swidth settings using the older kernel, it should rewrite them on disk:

root@grml-2017-05 ~ # mount -t xfs -o sunit=512,swidth=512 ./sdd1.dd /mnt/
root@grml-2017-05 ~ # umount /mnt/

And indeed, mounting this rewritten filesystem then also works with newer kernels:

root@grml-2020-06 ~ # mount ./sdd1.rewritten /mnt/
root@grml-2020-06 ~ # xfs_info /root/sdd1.rewritten
meta-data=/dev/loop1             isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=64    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@grml-2020-06 ~ # mount -t xfs                
/root/sdd1.rewritten on /mnt type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota)

FTR: The ‘sunit=512,swidth=512’ from the xfs mount options is identical to xfs_info’s output ‘sunit=64,swidth=64’, because mount.xfs’s sunit value is given in 512-byte units (see man 5 xfs), while the xfs_info output reported here is in filesystem blocks with a block size (bsize) of 4096; both work out to 512 * 512 = 64 * 4096 = 262144 bytes.

mkfs uses the minimum and optimal I/O sizes for stripe unit and stripe width; you can check them e.g. via the following (note that server2, with the fixed firmware version, reports proper values, whereas server3, with the broken controller firmware, reports nonsense):

synpromika@server2 ~ % for i in /sys/block/sd*/queue/ ; do printf "%s: %s %s\n" "$i" "$(cat "$i"/minimum_io_size)" "$(cat "$i"/optimal_io_size)" ; done
[...]
/sys/block/sdc/queue/: 262144 262144
/sys/block/sdd/queue/: 262144 262144
/sys/block/sde/queue/: 262144 262144
/sys/block/sdf/queue/: 262144 262144
/sys/block/sdg/queue/: 262144 262144
/sys/block/sdh/queue/: 262144 262144
/sys/block/sdi/queue/: 262144 262144
/sys/block/sdj/queue/: 262144 262144
/sys/block/sdk/queue/: 262144 262144
/sys/block/sdl/queue/: 262144 262144
/sys/block/sdm/queue/: 262144 262144
/sys/block/sdn/queue/: 262144 262144
[...]

synpromika@server3 ~ % for i in /sys/block/sd*/queue/ ; do printf "%s: %s %s\n" "$i" "$(cat "$i"/minimum_io_size)" "$(cat "$i"/optimal_io_size)" ; done
[...]
/sys/block/sdc/queue/: 524288 262144
/sys/block/sdd/queue/: 524288 262144
/sys/block/sde/queue/: 524288 262144
/sys/block/sdf/queue/: 524288 262144
/sys/block/sdg/queue/: 524288 262144
/sys/block/sdh/queue/: 524288 262144
/sys/block/sdi/queue/: 524288 262144
/sys/block/sdj/queue/: 524288 262144
/sys/block/sdk/queue/: 524288 262144
/sys/block/sdl/queue/: 524288 262144
/sys/block/sdm/queue/: 524288 262144
/sys/block/sdn/queue/: 524288 262144
[...]

This is the underlying reason why the XFS partitions were initially created with incorrect sunit/swidth settings: the broken firmware of server1 and server3 was the cause of the incorrect settings, which were ignored by old(er) xfs/kernel versions but treated as an error by newer ones.

Make sure to also read the XFS FAQ regarding “How to calculate the correct sunit,swidth values for optimal performance”. We also stumbled upon two interesting reads in RedHat’s knowledge base: 5075561 + 2150101 (requires an active subscription, though) and #1835947.

Am I affected? How to work around it?

To check whether your XFS mount points are affected by this issue, the following command line should be useful:

awk '$3 == "xfs"{print $2}' /proc/self/mounts | while read mount ; do echo -n "$mount " ; xfs_info $mount | awk '$0 ~ "swidth"{gsub(/.*=/,"",$2); gsub(/.*=/,"",$3); print $2,$3}' | awk '{ if ($1 > $2) print "impacted"; else print "OK"}' ; done

If you run into the above situation, the only known solution to get your original XFS partition working again is to boot into an older kernel version (4.17 or older), mount the XFS partition with correct sunit/swidth settings, and then boot back into your new system (kernel-version-wise).

Lessons learned

  • document everything and ensure you have all relevant information available (including the actual times of changes and the kernel/package/firmware/… versions in use). Thorough documentation was our most significant asset in this case, because we had all the data and information we needed during the emergency handling as well as for the post mortem/RCA
  • if something changes unexpectedly, dig deeper
  • know who to ask, a network of experts pays off
  • including timestamps in your shell makes reconstruction easier (the more people and documentation involved, the harder it gets to wade through it)
  • keep an eye on changelogs/release notes
  • apply regular updates and don’t forget invisible layers (e.g. BIOS, controller/disk firmware, IPMI/OOB (ILO/RAC/IMM/…) firmware)
  • apply regular reboots, to avoid a possible delta becoming bigger (which makes debugging harder)

Thanks: Darshaka Pathirana, Chris Hofstaedtler and Michael Hanscho.

Looking for help with your IT infrastructure? Let us know!

Worse Than FailureError'd: Punfree Friday

Today's Error'd submissions are not so much WTF as simply "TF?" Please try to explain the thought process in the comments, if you can.

Plaid-hat hacker Mark writes "Just came across this for a Microsoft Security portal. Still trying to figure it out." Me, I just want to know what happens when you click "Audio".

 

Reader Wesley faintly damns the sender "Hey, at least they are being honest!" But is this real, or is it a phishing scam? And if it's real phishing, can it really be honest?

 

Surveyed David misses last week's trivial "None of the above". So do I, David.

 

Diligently searching, keyboard sleuth Paul T suspects his None key might be somewhere near his Any key, but he can't find that one either.

 

Finally, an EU resident who wishes to remain anonymous has warned us "Vodafone doesn't allow IT jokes to kids... And they might be right". Where did we go wrong, Vodafone nannies? Was it the C++?

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Planet DebianSean Whitton: consfigurator-live-build

One of my goals for Consfigurator is to make it capable of installing Debian to my laptop, so that I can stop booting to GRML and manually partitioning and debootstrapping a basic system, only to then turn to configuration management to set everything else up. My configuration management should be able to handle the partitioning and debootstrapping, too.

The first stage was to make Consfigurator capable of debootstrapping a basic system, chrooting into it, and applying other arbitrary configuration, such as installing packages. That’s been in place for some weeks now. It’s sophisticated enough to avoid starting up newly installed services, but I still need to add some bind mounting.

Another significant piece is teaching Consfigurator how to partition block devices. That’s quite tricky to do in a sufficiently general way – I want to cleanly support various combinations of LUKS, LVM and regular partitions, including populating /etc/crypttab and /etc/fstab. I have some ideas about how to do it, but it’ll probably take a few tries to get the abstractions right.

Let’s imagine that code is all in place, such that Consfigurator can be pointed at a block device and it will install a bootable Debian system to it. Then to install Debian to my laptop I’d just need to take my laptop’s disk drive out and plug it into another system, and run Consfigurator on that system, as root, pointed at the block device representing my laptop’s disk drive. For virtual machines, it would be easy to write code which loop-mounts an empty disk image, and then Consfigurator could be pointed at the loop-mounted block device, thereby making the disk image file bootable.

This is adequate for virtual machines, or small single-board computers with tiny storage devices (not that I actually use any of those, but I want Consfigurator to be able to make disk images for them!). But it’s not much good for my laptop. I casually referred to taking out my laptop’s disk drive and connecting it to another computer, but this would void my laptop’s warranty. And Consfigurator would not be able to update my laptop’s NVRAM, as is needed on UEFI systems.

What’s wanted here is a live system which can run Consfigurator directly on the laptop, pointed at the block device representing its physical disk drive. Ideally this live system comes with a chroot with the root filesystem for the new Debian install already built, so that network access is not required, and all Consfigurator has to do is partition the drive and copy in the contents of the chroot. The live system could be set up to automatically start doing that upon boot, but another option is to just make Consfigurator itself available to be used interactively. The user boots the live system, starts up Emacs, starts up Lisp, and executes a Consfigurator deployment, supplying the block device representing the laptop’s disk drive as an argument to the deployment. Consfigurator goes off and partitions that drive, copies in the contents of the chroot, and executes grub-install to make the laptop bootable. This is also much easier to debug than a live system which tries to start partitioning upon boot. It would look something like this:

    ;; melete.silentflame.com is a Consfigurator host object representing the
    ;; laptop, including information about the partitions it should have
    (deploy-these :local ...
      (chroot:partitioned-and-installed
        melete.silentflame.com "/srv/chroot/melete" "/dev/nvme0n1"))

Now, building live systems is a fair bit more involved than installing Debian to a disk drive and making it bootable, it turns out. While I want Consfigurator to be able to completely replace the Debian Installer, I decided that it is not worth trying to reimplement the relevant parts of the Debian Live tool suite, because I do not need to make arbitrary customisations to any live systems. I just need to have some packages installed and some files in place. Nevertheless, it is worth teaching Consfigurator how to invoke Debian Live, so that the customisation of the chroot which isn’t just a matter of passing options to lb_config(1) can be done with Consfigurator. This is what I’ve ended up with – in Consfigurator’s source code:

(defpropspec image-built :lisp (config dir properties)
  "Build an image under DIR using live-build(7), where the resulting live
system has PROPERTIES, which should contain, at a minimum, a property from
CONSFIGURATOR.PROPERTY.OS setting the Debian suite and architecture.  CONFIG
is a list of arguments to pass to lb_config(1), not including the '-a' and
'-d' options, which Consfigurator will supply based on PROPERTIES.

This property runs the lb_config(1), lb_bootstrap(1), lb_chroot(1) and
lb_binary(1) commands to build or rebuild the image.  Rebuilding occurs only
when changes to CONFIG or PROPERTIES mean that the image is potentially
out-of-date; e.g. if you just add some new items to PROPERTIES then in most
cases only lb_chroot(1) and lb_binary(1) will be re-run.

Note that lb_chroot(1) and lb_binary(1) both run after applying PROPERTIES,
and might undo some of their effects.  For example, to configure
/etc/apt/sources.list, you will need to use CONFIG not PROPERTIES."
  (:desc (declare (ignore config properties))
         #?"Debian Live image built in ${dir}")
  (let* (...)
    ;; ...
    `(eseqprops
      ;; ...
      (on-change
          (eseqprops
           (on-change
               (file:has-content ,auto/config ,(auto/config config) :mode #o755)
             (file:does-not-exist ,@clean)
             (%lbconfig ,dir)
             (%lbbootstrap t ,dir))
           (%lbbootstrap nil ,dir)
           (deploys ((:chroot :into ,chroot)) ,host))
        (%lbchroot ,dir)
        (%lbbinary ,dir)))))

Here, %lbconfig is a property running lb_config(1), %lbbootstrap one which runs lb_bootstrap(1), etc. Those properties all just change directory to the right place and run the command, essentially, with a little extra code to handle failed debootstraps and the like.

The ON-CHANGE and ESEQPROPS combinators work together to sequence the interaction of the Debian Live suite and Consfigurator.

  • In the innermost ON-CHANGE expression: create the file auto/config and populate it with the call to lb_config(1) that we need to make, as described in the Debian Live manual, chapter 6.

    • If doing so resulted in a change to the auto/config file – e.g. the user added some more options – ensure that lb_config(1) and lb_bootstrap(1) both get rerun.
  • Now in the inner ESEQPROPS expression, use DEPLOYS to configure the chroot, essentially by forking into the chroot and recursively reinvoking Consfigurator.

  • Finally, if any of the above resulted in a change being made, call lb_chroot(1) and lb_binary(1).

This way, we only rebuild the chroot if the configuration changed, and we only rebuild the image if the chroot changed.

Now over in my personal consfig:

(try-register-data-source
 :git-snapshot :name "consfig" :repo #P"src/cl/consfig/" ...)

(defproplist hybrid-live-iso-built :lisp ()
  "Build a Debian Live system in /srv/live/spw.

Typically this property is not applied in a DEFHOST form, but rather run as
needed at the REPL.  The reason for this is that otherwise the whole image will
get rebuilt each time a commit is made to my dotfiles repo or to my consfig."
  (:desc "Sean's Debian Live system image built")
  (live-build:image-built.
      '("--archive-areas" "main contrib non-free" ...)
      "/srv/live/spw"
    (os:debian-stable "buster" :amd64)
    (basic-props)
    (apt:installed "whatever" "you" "want")

    (git:snapshot-extracted "/etc/skel/src" "dotfiles")
    (file:is-copy-of "/etc/skel/.bashrc" "/etc/skel/src/dotfiles/.bashrc")

    (git:snapshot-extracted "/root/src/cl" "consfig")))

The first argument to LIVE-BUILD:IMAGE-BUILT. is additional arguments to lb_config(1). The third argument onwards are the properties for the live system. The cool thing is GIT:SNAPSHOT-EXTRACTED – the calls to this ensure that a copy of my Emacs configuration and my consfig end up in the live image, ready to be used interactively to install Debian, as described above. I’ll need to add something like (chroot:host-chroot-bootstrapped melete.silentflame.com "/srv/chroot/melete") too.

As with everything Consfigurator-related, Joey Hess’s Propellor is the giant upon whose shoulders I’m standing.

Planet DebianThorsten Alteholz: My Debian Activities in March 2021

FTP master

Things never turn out the way you expect, so this month I was only able to accept 38 packages and rejected none. Due to the freeze, the overall number of packages that got accepted was 88.

Debian LTS

This was my eighty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS and normal security uploads of:

  • [DLA 2606-1] lxml security update for one CVE
  • [DSA 4880-1] lxml security update for one CVE
  • [DLA 2611-1] ldb security update for two CVEs
  • [DLA 2612-1] leptonlib security update for four CVEs

I also prepared debdiffs for unstable and/or buster for leptonlib and libebml, which for one reason or another did not result in an upload yet.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-third ELTS month.

During my allocated time I uploaded:

  • ELA-388-1 for zeromq3
  • ELA-390-1 for lxml
  • ELA-391-1 for jasper
  • ELA-393-1 for ldb
  • ELA-394-1 for leptonlib

Last but not least I did some days of frontdesk duties.

Other stuff

On my never-ending golang challenge I uploaded (or sponsored for thola dependencies):
golang-github-tombuildsstuff-giovanni, golang-github-apparentlymart-go-userdirs, golang-github-apparentlymart-go-shquot, golang-github-likexian-gokit, golang-gopkg-mail.v2, golang-gopkg-redis.v5, golang-github-facette-natsort, golang-github-opentracing-contrib-go-grpc, golang-github-felixge-fgprof, golang-github-gogo-status, golang-github-leanovate-gopter, golang-github-opentracing-basictracer-go, golang-github-lightstep-lightstep-tracer-common, golang-github-go-sourcemap-sourcemap, golang-github-igm-pubsub, golang-github-igm-sockjs-go, golang-github-centrifugal-protocol, golang-github-mna-redisc, golang-github-fzambia-eagle, golang-github-centrifugal-centrifuge, golang-github-chromedp-sysutil, golang-github-client9-misspell, golang-github-knq-snaker, cdproto-gen, golang-github-mattermost-xml-roundtrip-validator, golang-github-crewjam-saml, ssllabs-scan, golang-uber-automaxprocs, golang-uber-goleak, golang-github-k0kubun-go-ansi, golang-github-schollz-progressbar, golang-github-komkom-toml, golang-github-labstack-echo, golang-github-inexio-go-monitoringplugin

Worse Than FailureCodeSOD: A True Leader's Enhancement

Chuck had some perfectly acceptable C# code running in production. There was nothing terrible about it. It may not be the absolute "best" way to build this logic in terms of being easy to change and maintain in the future, but nothing about it is WTF-y.

if (report.spName == "thisReport" || report.spName == "thatReport")
{
    LoadUI1();
}
else if (report.spName == "thirdReport" || report.spName == "thirdReportButMoreSpecific")
{
    LoadUI2();
}
else
{
    LoadUI3();
}

At worst, we could argue that using string-ly typed logic for deciding the UI state is suboptimal, but this code is hardly "burn it down" bad.

Fortunately, Chuck's team leader didn't like this code. So that team leader "fixed" it.

if ("thisReport, thatReport".Contains(report.spName)) { LoadUI1(); } else if ("thirdReport, thirdReportButMoreSpecific".Contains(spName)) { LoadUI2(); } else { LoadUI3(); }

So we keep the string-ly typed logic, but instead of straight equality comparisons, we change it into a Contains check. A Contains check on a string which contains all the possible report names, as a comma-separated list. Not only is it less readable, it also performs significantly worse, and if spName is an invalid value, we might get some fun, unexpected results.

Perhaps the team lead was going for an ["array", "of", "allowed", "names"] and missed?
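
If that was the intent, here is a minimal sketch of the idea (shown in C++ for illustration; ChooseUI and the LoadUI* stubs are hypothetical stand-ins for the article's C# members): an exact-match lookup against a set of allowed names, instead of a substring search over one comma-separated string.

#include <iostream>
#include <string>
#include <unordered_set>

// Hypothetical stand-ins for the article's LoadUI1/LoadUI2/LoadUI3.
void LoadUI1() { std::cout << "UI1\n"; }
void LoadUI2() { std::cout << "UI2\n"; }
void LoadUI3() { std::cout << "UI3\n"; }

void ChooseUI(const std::string& spName)
{
    // Exact-match membership test, so partial names no longer match by accident.
    static const std::unordered_set<std::string> ui1Reports{"thisReport", "thatReport"};
    static const std::unordered_set<std::string> ui2Reports{"thirdReport", "thirdReportButMoreSpecific"};

    if (ui1Reports.count(spName)) {
        LoadUI1();
    } else if (ui2Reports.count(spName)) {
        LoadUI2();
    } else {
        LoadUI3();
    }
}

int main()
{
    ChooseUI("thatReport");   // UI1
    ChooseUI("hirdReport");   // UI3 - a typo no longer silently matches UI2
    return 0;
}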

The end result though is that this change definitely made the code worse. The team lead, though, doesn't get their code reviewed by their peers. They're the leader, they have no peers, clearly.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianRyan Kavanagh: Writing BASIC-8 on the TSS/8

I recently discovered SDF’s PiDP-8. You can access it over SSH and watch the blinkenlights over its twitch stream. It runs TSS/8, a time-sharing operating system written in 1967 by Adrian van de Goor while a grad student here at CMU. I’ve been having fun tinkering with it, and I just wrote my first BASIC program1 since high school. It plots the graph of some user-specified univariate function. I don’t claim that it’s elegant or well-engineered, but it works!

10  DEF FNC(X) = 19 * COS(X/2)
20  FOR Y = 20 TO -20 STEP -1
30     FOR X = -25 TO 24
40     LET V = FNC(X)
50     GOSUB 90
60  NEXT X
70  PRINT ""
80  NEXT Y
85  STOP
90  REM SUBROUTINE PRINTS AXES AND PLOT
100 IF X = 0 THEN 150
110 IF Y = 0 THEN 150
120 REM X != 0 AND Y != 0 SO IN QUADRANT
130 GOSUB 290
140 RETURN
150 GOSUB 170
160 RETURN
170 REM SUBROUTINE PRINTS AXES (X = 0 OR Y = 0)
180 IF X + Y = 0 THEN 230
190 IF X = 0 THEN 250
200 IF Y = 0 THEN 270
210 PRINT "AXES INVARIANT VIOLATED"
220 STOP
230 PRINT "+";
240 GOTO 280
250 PRINT "I";
260 GOTO 280
270 PRINT "-";
280 RETURN
290 REM SUBROUTINE PRINTS FUNCTION GRAPH (X != 0 AND Y != 0)
300 IF 0 <= Y THEN 350
310 REM Y < 0
320 IF V <= Y THEN 410
330 REM Y < 0 AND Y < V SO OUTSIDE OF PLOT AREA
340 GOTO 390
350 REM 0 <= Y
360 IF Y <= V THEN 410
370 REM 0 <= Y  AND V < Y SO OUTSIDE OF PLOT AREA
380 GOTO 390
390 PRINT " ";
400 RETURN
410 PRINT "*";
420 RETURN
430 REM COPYRIGHT 2021 RYAN KAVANAGH RAK AT RAK.AC
440 END

It produces the following output:

                         I
                         I
*           **           I           **
*           **           I           **
**          **          *I*          **          *
**          **          *I*          **          *
**         ***          *I*          ***         *
**         ****         *I*         ****         *
**         ****         *I*         ****         *
**         ****         *I*         ****         *
**         ****        **I**        ****         *
***        ****        **I**        ****        **
***        ****        **I**        ****        **
***        ****        **I**        ****        **
***       *****        **I**        *****       **
***       ******       **I**       ******       **
***       ******       **I**       ******       **
***       ******       **I**       ******       **
***       ******       **I**       ******       **
***       ******      ***I***      ******       **
-------------------------+------------------------
    ******      ******   I   ******      ******
    ******      ******   I   ******      ******
    *****       ******   I   ******       *****
    *****       ******   I   ******       *****
    *****        *****   I   *****        *****
    *****        *****   I   *****        *****
    *****        *****   I   *****        *****
    *****        ****    I    ****        *****
    *****        ****    I    ****        *****
     ****        ****    I    ****        ****
     ****        ****    I    ****        ****
     ***         ****    I    ****         ***
     ***          ***    I    ***          ***
     ***          ***    I    ***          ***
     ***          ***    I    ***          ***
      **          **     I     **          **
      **          **     I     **          **
      *            *     I     *            *
                         I
                         I

Next up, I am going to try my hand at writing some FORTRAN or some FOCAL69. If you like tinkering with old systems, then you should give the TSS/8 a try.


  1. It’s written in the BASIC-8 dialect. ↩︎

,

Planet DebianEmmanuel Kasper: Manually install a single node Kubernetes cluster on Debian

Debian has work-in-progress packages for Kubernetes, which work well enough for a testing and learning environment. Bootstrapping a cluster with the kubeadm deployer using these packages is not that hard, and is similar to the upstream kubeadm documentation.

Install necessary packages in a VM

Install a throwaway VM with Vagrant.

apt install vagrant vagrant-libvirt
vagrant init debian/testing64

Bump the RAM and CPU of the VM, Kubernetes needs at least 2 gigs and 2 cores.

awk  -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print "  config.vm.provider :libvirt do |vm|  vm.memory=2048 end"}' Vagrantfile
awk -i inplace '1;/^Vagrant.configure\("2"\) do \|config/ {print " config.vm.provider :libvirt do |vm| vm.cpus=2 end"}' Vagrantfile

Start the VM, login, update the package index.

vagrant up
vagrant ssh
sudo apt update

Install a container engine; here we use docker.io, but we could also use containerd (both are packaged in Debian) or cri-o.

sudo apt install --yes --no-install-recommends docker.io curl

Install kubernetes binaries. This will install kubelet, the system service which will manage the containers, and kubectl the user/admin tool to manage the cluster.

sudo apt install --yes kubernetes-{node,client} containernetworking-plugins

Although it is not technically mandatory, we will use kubeadm, the most popular installer for creating a Kubernetes cluster. Kubeadm is not packaged in Debian, so we have to download an upstream binary.

wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz

sha512sum kubernetes-server-linux-amd64.tar.gz
28529733bf34f5d5b72eabe30a81df98cc7f8e529590f807745cd67986a2c5c3eb86cebc7ecbcfc3df3c50416306e5d150948f2483933ea46c2aebaeb871ea8f kubernetes-server-linux-arm64.tar.gz

sudo tar --directory=/usr/local/sbin --strip-components 3 -xaf kubernetes-server-linux-amd64.tar.gz kubernetes/server/bin/kubeadm
sudo chmod +x /usr/local/sbin/kubeadm
sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Add a kubelet systemd unit:

RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sudo tee /etc/systemd/system/kubelet.service
sudo systemctl enable kubelet

and a default config file for kubeadm

RELEASE_VERSION="v0.4.0"
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

finally we need to help kubelet find the components needed for container networking

echo 'KUBELET_EXTRA_ARGS="--cni-bin-dir=/usr/lib/cni"' | sudo tee /etc/default/kubelet

Create a cluster

Initialize a cluster with kubeadm: this will download container images for the Kubernetes control plane (= the brain of the cluster), and start the containers via the kubelet service. Yes, a good part of Kubernetes itself runs in containers.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
...
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Follow the instructions from the kubeadm output, and verify you have a single-node cluster with the status NotReady.

kubectl get nodes 
NAME STATUS ROLES AGE VERSION
testing NotReady control-plane,master 9m9s v1.20.5

At that point you should also have a bunch of containers running on the node:

sudo docker ps --format '{{.Names}}'
k8s_kube-apiserver_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_POD_kube-apiserver-testing_kube-system_2711c230d39ccda1e74d1d6386a05cee_0
k8s_etcd_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
k8s_POD_etcd-testing_kube-system_4749b1bca3b1a73fd09c8e299d7030fe_0
...

The kubelet service also needs an external network plugin to get the cluster into the Ready state.

sudo systemctl status kubelet
...
Mar 28 09:28:43 testing kubelet[9405]: E0328 09:28:43.958059 9405 kubelet.go:2188] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Let’s add that network plugin. Download the flannel network plugin definition, and schedule flannel to run on all nodes of your cluster:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply --filename=kube-flannel.yml

After a dozen seconds or so, your node should be in Ready status.

kubectl get nodes 
NAME STATUS ROLES AGE VERSION
testing Ready control-plane,master 16m v1.20.5

Deploy a test application

Our node is now in Ready status, but we cannot run applications on it yet, since we only have a master node, an administrative node which by default cannot run user applications.

kubectl describe node testing | grep ^Taints
Taints: node-role.kubernetes.io/master:NoSchedule

Let’s allow node testing to run user applications:

kubectl taint node testing node-role.kubernetes.io/master-

Deploy a nginx container:

kubectl run my-nginx-pod --image=docker.io/library/nginx --port=80 --labels="app=http-content" 

Create a Kubernetes service to access this pod externally:

cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-k8s-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: http-content

kubectl create --filename service.yaml

Access the service via IP address:

curl 192.168.121.63:30000
...
Thank you for using nginx.

Notes

I will try to turn this blog post into a Debian Wiki article, or maybe into the kubernetes-node documentation. Blog posts go stale and disappear; wiki and project docs live longer.

Worse Than FailureCodeSOD: We All Expire

Code, like anything else, ages with time. Each minor change we make to a piece of already-in-use software speeds up that process. And while a piece of software can be running for decades unchanged, its utility will still decline over time, as its user interface becomes more distant from common practices, as the requirements drift from their intent, and people forget what the original purpose of certain features even was.

Code ages, but some code is born with an expiration date.

For example, at Jose's company, each year is assigned a letter label. The reasons are obscure, and rooted in somebody's project planning process, but the year 2000 was "A". The year 2001 was "B", and so on. 2025 would be "Z", and then 2026 would roll back over to "A".

At least, that's what the requirement was. What was implemented was a bit different.

if DateTime.Today.year = 2010 then year = "K"
else if DateTime.Today.year = 2011 then year = "L"
else if DateTime.Today.year = 2012 then year = "M"
else if DateTime.Today.year = 2013 then year = "N"
else if DateTime.Today.year = 2014 then year = "O"
else if DateTime.Today.year = 2015 then year = "P"
else if DateTime.Today.year = 2016 then year = "Q"
else if DateTime.Today.year = 2017 then year = "R"
else if DateTime.Today.year = 2018 then year = "S"
else if DateTime.Today.year = 2019 then year = "T"
else if DateTime.Today.year = 2020 then year = "U"
else if DateTime.Today.year = 2021 then year = "V"
else if DateTime.Today.year = 2022 then year = "W"
else if DateTime.Today.year = 2023 then year = "X"
else if DateTime.Today.year = 2024 then year = "Y"
else year = "Z"
end if

For want of a mod, 2026 was lost. But hey, this code was clearly written in 2010, which means it will work just fine for a decade and a half. We should all be so lucky.
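
For comparison, here is a minimal sketch of the modulo approach the requirement implies, written in C++ rather than the article's dialect (the year value and variable names are hypothetical): 2000 maps to "A", 2025 to "Z", and 2026 wraps back around to "A".

#include <iostream>

int main()
{
    int currentYear = 2026;  // hypothetical example year

    // 2000 -> 'A', 2001 -> 'B', ..., 2025 -> 'Z', 2026 wraps back to 'A'.
    char label = static_cast<char>('A' + (currentYear - 2000) % 26);

    std::cout << label << '\n';  // prints A
    return 0;
}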

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram Google’s Project Zero Finds a Nation-State Zero-Day Operation

Google’s Project Zero discovered, and caused to be patched, eleven zero-day exploits against Chrome, Safari, Microsoft Windows, and iOS. This seems to have been exploited by “Western government operatives actively conducting a counterterrorism operation”:

The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.

[…]

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.

Planet DebianNorbert Preining: Debian KDE/Plasma and Digikam Status 2021-04-07

Two months have passed since the last status update, but not much has changed since Debian is more or less frozen for the release of Bullseye and only critical bugfixes are allowed. As reported before, Debian/bullseye will have Plasma 5.20.5, Frameworks 5.78, Apps 20.12. Debian/experimental already carries Plasma 5.21.4 and Frameworks 5.80, and that is also the level of the OBS builds.

Debian Bullseye

We are in hard freeze now, and only targeted fixes are allowed, but Bullseye is carrying a good mixture consisting of KDE Frameworks 5.78, including several backports of fixes from 5.79 to get smooth operation. Plasma 5.20.5, again with several cherry-picked bug fixes, will be in Bullseye, too. The KDE/Apps are mostly at the 20.12 level, and the KDE PIM group packages (akonadi, kmail, etc.) are at 20.08.

Debian experimental

Frameworks 5.80 (and soon 5.81) and Plasma 5.21.4 are in Debian/experimental.

OBS packages

(short reminder: you need to import my OBS gpg key to make these repos work!)

The OBS packages as usual follow the latest release, and currently ship KDE Frameworks 5.80, KDE Apps 20.12.3, and Plasma 5.21.4. The package sources are as usual (note the different path for the Plasma packages and the App packages, containing the release version!), for Debian/unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma521/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2012/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and the same with Testing instead of Unstable for Debian/testing.

Digikam

Digikam has seen a new release 7.2.0, and packages are available in my OBS archives:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and again, same with Testing instead of Unstable for Debian/testing.

,

Planet DebianJelmer Vernooij: Automatic Fixing of Debian Build Dependencies

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

In my last blogpost, I introduced the buildlog consultant - a tool that can identify many reasons why a Debian build failed.

For example, here’s a fragment of a build log where the Build-Depends lack python3-setuptools:

849  dpkg-buildpackage: info: host architecture amd64
850   fakeroot debian/rules clean
851  dh clean --with python3,sphinxdoc --buildsystem=pybuild
852     dh_auto_clean -O--buildsystem=pybuild
853  I: pybuild base:232: python3.9 setup.py clean
854  Traceback (most recent call last):
855    File "/<<PKGBUILDDIR>>/setup.py", line 2, in <module>
856      from setuptools import setup
857  ModuleNotFoundError: No module named 'setuptools'
858  E: pybuild pybuild:353: clean: plugin distutils failed with: exit code=1: python3.9 setup.py clean

The buildlog consultant can identify line 857, the ModuleNotFoundError, as the key one, and interprets it:

 % analyse-sbuild-log --json ~/build.log

 {
    "stage": "build",
    "section": "Build",
    "lineno": 857,
    "kind": "missing-python-module",
    "details": {"module": "setuptools", "python_version": 3, "minimum_version": null}
 }

Automatically acting on buildlog problems

A common reason why Debian builds fail is missing dependencies or incorrect versions of dependencies declared in the package build depends.

Based on the output of the buildlog consultant, it is possible in many cases to determine what dependency needs to be added to Build-Depends. In the example given above, we can use apt-file to look for the package that contains the path /usr/lib/python3/dist-packages/setuptools/__init__.py - and voila, we find python3-setuptools:

 % apt-file search /usr/lib/python3/dist-packages/setuptools/__init__.py
 python3-setuptools: /usr/lib/python3/dist-packages/setuptools/__init__.py

The deb-fix-build command automates these steps:

  1. It builds the package using sbuild; if the package successfully builds then it just exits successfully
  2. It tries to identify the problem by looking through the build log; if it can't or if it's a problem it has seen before (but apparently failed to resolve), then it exits with a non-zero exit code
  3. It tries to find a dependency that can address the problem
  4. It updates Build-Depends in debian/control or Depends in debian/tests/control
  5. Go to step 1

This takes away the tedious manual process of building a package, discovering that a dependency is missing, updating Build-Depends and trying again.

For example, when I ran deb-fix-build while packaging saneyaml, the output looks something like this:

 % deb-fix-build
 Using output directory /tmp/tmpyz0nkgqq
 Using sbuild chroot unstable-amd64-sbuild
 Using fixers: …
 Building debian packages, running 'sbuild --no-clean-source -A -s -v'.
 Attempting to use fixer upstream requirement fixer(apt) to address MissingPythonDistribution('setuptools_scm', python_version=3, minimum_version='4')
 Using apt-file to search apt contents
 Adding build dependency: python3-setuptools-scm (>= 4)
 Building debian packages, running 'sbuild --no-clean-source -A -s -v'.
 Attempting to use fixer upstream requirement fixer(apt) to address MissingPythonDistribution('toml', python_version=3, minimum_version=None)
 Adding build dependency: python3-toml
 Building debian packages, running 'sbuild --no-clean-source -A -s -v'.
 Built 0.5.2-1- changes files at [‘saneyaml_0.5.2-1_amd64.changes’].

And in our Git repository, we see these changes as well:

% git log -p
 commit 5a1715f4c7273b042818fc75702f2284034c7277 (HEAD -> master)
 Author: Jelmer Vernooij <jelmer@jelmer.uk>
 Date:   Sun Apr 4 02:35:56 2021 +0100

     Add missing build dependency on python3-toml.

 diff --git a/debian/control b/debian/control
 index 5b854dc..3b27b73 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -1,6 +1,6 @@
  Rules-Requires-Root: no
  Standards-Version: 4.5.1
 -Build-Depends: debhelper-compat (= 12), dh-sequence-python3, python3-all, python3-setuptools (>= 50), python3-wheel, python3-setuptools-scm (>= 4)
 +Build-Depends: debhelper-compat (= 12), dh-sequence-python3, python3-all, python3-setuptools (>= 50), python3-wheel, python3-setuptools-scm (>= 4), python3-toml
  Testsuite: autopkgtest-pkg-python
  Source: python-saneyaml
  Priority: optional

 commit f03047da80fcd8468ee231fbc4cf8488d7a0acd1
 Author: Jelmer Vernooij <jelmer@jelmer.uk>
 Date:   Sun Apr 4 02:35:34 2021 +0100

     Add missing build dependency on python3-setuptools-scm (>= 4).

 diff --git a/debian/control b/debian/control
 index a476cc2..5b854dc 100644
 --- a/debian/control
 +++ b/debian/control
 @@ -1,6 +1,6 @@
  Rules-Requires-Root: no
  Standards-Version: 4.5.1
 -Build-Depends: debhelper-compat (= 12), dh-sequence-python3, python3-all, python3-setuptools (>= 50), python3-wheel
 +Build-Depends: debhelper-compat (= 12), dh-sequence-python3, python3-all, python3-setuptools (>= 50), python3-wheel, python3-setuptools-scm (>= 4)
  Testsuite: autopkgtest-pkg-python
  Source: python-saneyaml
  Priority: optional

Using deb-fix-build

You can run deb-fix-build by installing the ognibuild package from unstable. The only requirements for using it are that:

  • The package is maintained in Git
  • A sbuild schroot is available for use

Caveats

deb-fix-build is fairly easy to understand, and if it doesn't work then you're no worse off than you were without it - you'll have to add your own Build-Depends.

That said, there are a couple of things to keep in mind:

  • At the moment, it doesn't distinguish between general, Arch or Indep Build-Depends.
  • It can only add dependencies for things that are actually in the archive
  • Sometimes there are multiple packages that can provide a file, command or python package - it tries to find the right one with heuristics but doesn't always get it right

Krebs on SecurityAre You One of the 533M People Who Got Facebooked?

Ne’er-do-wells leaked personal data — including phone numbers — for some 533 million Facebook users this week. Facebook says the data was collected before 2020 when it changed things to prevent such information from being scraped from profiles. To my mind, this just reinforces the need to remove mobile phone numbers from all of your online accounts wherever feasible. Meanwhile, if you’re a Facebook product user and want to learn if your data was leaked, there are easy ways to find out.

The HaveIBeenPwned project, which collects and analyzes hundreds of database dumps containing information about billions of leaked accounts, has incorporated the data into its service. Facebook users can enter the mobile number (in international format) associated with their account and see if those digits were exposed in the new data dump (HIBP doesn’t show you any data, just gives you a yes/no on whether your data shows up).

The phone number associated with my late Facebook account (which I deleted in Jan. 2020) was not in HaveIBeenPwned, but then again Facebook claims to have more than 2.7 billion active monthly users.

It appears much of this database has been kicking around the cybercrime underground in one form or another since last summer at least. According to a Jan. 14, 2021 Twitter post from Under the Breach’s Alon Gal, the 533 million Facebook accounts database was first put up for sale back in June 2020, offering Facebook profile data from 100 countries, including name, mobile number, gender, occupation, city, country, and marital status.

Under The Breach also said back in January that someone had created a Telegram bot allowing users to query the database for a low fee, and enabling people to find the phone numbers linked to a large number of Facebook accounts.

A cybercrime forum ad from June 2020 selling a database of 533 Million Facebook users. Image: @UnderTheBreach

Many people may not consider their mobile phone number to be private information, but there is a world of misery that bad guys, stalkers and creeps can visit on your life just by knowing your mobile number. Sure they could call you and harass you that way, but more likely they will see how many of your other accounts — at major email providers and social networking sites like Facebook, Twitter, Instagram, e.g. — rely on that number for password resets.

From there, the target is primed for a SIM-swapping attack, where thieves trick or bribe employees at mobile phone stores into transferring ownership of the target’s phone number to a mobile device controlled by the attackers. From there, the bad guys can reset the password of any account to which that mobile number is tied, and of course intercept any one-time tokens sent to that number for the purposes of multi-factor authentication.

Or the attackers take advantage of some other privacy and security wrinkle in the way SMS text messages are handled. Last month, a security researcher showed how easy it was to abuse services aimed at helping celebrities manage their social media profiles to intercept SMS messages for any mobile user. That weakness has supposedly been patched for all the major wireless carriers now, but it really makes you question the ongoing sanity of relying on the Internet equivalent of postcards (SMS) to securely handle quite sensitive information.

My advice has long been to remove phone numbers from your online accounts wherever you can, and avoid selecting SMS or phone calls for second factor or one-time codes. Phone numbers were never designed to be identity documents, but that’s effectively what they’ve become. It’s time we stopped letting everyone treat them that way.

Any online accounts that you value should be secured with a unique and strong password, as well as the most robust form of multi-factor authentication available. Usually, this is a mobile app like Authy or Google Authenticator that generates a one-time code. Some sites like Twitter and Facebook now support even more robust options — such as physical security keys.

Removing your phone number may be even more important for any email accounts you may have. Sign up with any service online, and it will almost certainly require you to supply an email address. In nearly all cases, the person who is in control of that address can reset the password of any associated services or accounts– merely by requesting a password reset email.

Unfortunately, many email providers still let users reset their account passwords by having a link sent via text to the phone number on file for the account. So remove the phone number as a backup for your email account, and ensure a more robust second factor is selected for all available account recovery options.

Here’s the thing: Most online services require users to supply a mobile phone number when setting up the account, but do not require the number to remain associated with the account after it is established. I advise readers to remove their phone numbers from accounts wherever possible, and to take advantage of a mobile app to generate any one-time codes for multifactor authentication.

Why did KrebsOnSecurity delete its Facebook account early last year? Sure, it might have had something to do with the incessant stream of breaches, leaks and privacy betrayals by Facebook over the years. But what really bothered me were the number of people who felt comfortable sharing extraordinarily sensitive information with me on things like Facebook Messenger, all the while expecting that I can vouch for the privacy and security of that message just by virtue of my presence on the platform.

In case readers want to get in touch for any reason, my email here is krebsonsecurity at gmail dot com, or krebsonsecurity at protonmail.com. I also respond at Krebswickr on the encrypted messaging platform Wickr.

Worse Than FailureCodeSOD: He Sed What?

Today's code is only part of the WTF. The code is bad, it's incorrect, but the mistake is simple and easy to make.

Lowell was recently digging into a broken feature in a legacy C application. The specific error was a failure when invoking a sed command from inside the application.

// use the following to remove embedded newlines: sed ':a;N;$!ba;s/\n,/,/g'
snprintf(command, sizeof(command), "sed -i ':a;N;$!ba;s/\n,/,/g' %s/%s.txt", path, file);
system(command);

While regular expressions have a reputation for being cryptic, this one is at least easy to read, or at least easier to read than the pile of sed flags that precede it. s/\n,/,/g finds every newline character followed by a comma and replaces it with just a comma. At least, that was the intent, but there's one problem with that: we're not calling sed from inside the shell.

We're calling it from C, and C is going to interpret the \n as a newline itself. The actual command which gets sent to the shell is:

sed -i ':a;N;$!ba;s/
,/,/g' /var/tmp/backup.txt

This completely broke one of the features of this legacy application. Specifically, as you might guess from the shell command above, the backup functionality. The application had the ability to backup its data in a way that would let users revert to prior application states or migrate to other hosts. The commit which introduced the sed call broke that feature.

In 2018. For nearly three years, all of the customers running this application have been running it without backups.
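
The fix itself is tiny: escape the backslash in the source so that the two characters \ and n survive into the command string, and sed, not the C compiler, turns them into a newline match. A minimal sketch of that corrected call, with path and file as stand-ins for the application's real variables:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Stand-ins for the application's real variables.
    const char *path = "/var/tmp";
    const char *file = "backup";
    char command[512];

    // "\\n" keeps a literal backslash-n in the string, so sed sees \n
    // and matches the embedded newlines as intended.
    snprintf(command, sizeof(command),
             "sed -i ':a;N;$!ba;s/\\n,/,/g' %s/%s.txt", path, file);
    return system(command);
}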

Lowell sums it up:

The real WTF may be the first part of my reply: "Looks like backup was broken by a commit in December 2018. The 2014 version should work."

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityRansom Gangs Emailing Victim Customers for Leverage

Some of the top ransomware gangs are deploying a new pressure tactic to push more victim organizations into paying an extortion demand: Emailing the victim’s customers and partners directly, warning that their data will be leaked to the dark web unless they can convince the victim firm to pay up.

This letter is from the Clop ransomware gang, putting pressure on a recent victim named on Clop’s dark web shaming site.

“Good day! If you received this letter, you are a customer, buyer, partner or employee of [victim],” the missive reads. “The company has been hacked, data has been stolen and will soon be released as the company refuses to protect its peoples’ data.”

“We inform you that information about you will be published on the darknet [link to dark web victim shaming page] if the company does not contact us,” the message concludes. “Call or write to this store and ask to protect your privacy!!!!”

The message above was sent to a customer of RaceTrac Petroleum, an Atlanta company that operates more than 650 retail gasoline convenience stores in 12 southeastern states. The person who shared that screenshot above isn’t a distributor or partner of RaceTrac, but they said they are a RaceTrac rewards member, so the company definitely has their email address and other information.

Several gigabytes of the company’s files — including employee tax and financial records — have been posted to the victim shaming site for the Clop ransomware gang.

In response to questions from KrebsOnSecurity, RaceTrac said it was recently impacted by a security incident affecting one of its third-party service providers, Accellion Inc.

For the past few months, attackers have been exploiting a zero-day vulnerability in Accellion File Transfer Appliance (FTA) software, a flaw that has been seized upon by Clop to break into dozens of other major companies like oil giant Shell and security firm Qualys.

“By exploiting a previously undetected software vulnerability, unauthorized parties were able to access a subset of RaceTrac data stored in the Accellion File Transfer Service, including email addresses and first names of some of our RaceTrac Rewards Loyalty users,” the company wrote. “This incident was limited to the aforementioned Accellion services and did not impact RaceTrac’s corporate network. The systems used for processing guest credit, debit and RaceTrac Rewards transactions were not impacted.”

The same extortion pressure email has been going out to people associated with the University of California, which was one of several large U.S. universities that got hit with Clop ransomware recently. Most of those university ransomware incidents appeared to be tied to attacks on the same Accellion vulnerability, and the company has acknowledged roughly a third of its customers on that appliance got compromised as a result.

Clop is one of several ransom gangs that will demand two ransoms: One for a digital key needed to unlock computers and data from file encryption, and a second to avoid having stolen data published or sold online. That means even victims who opt not to pay to get their files and servers back still have to decide whether to pay the second ransom to protect the privacy of their customers.

As I noted in Why Paying to Delete Stolen Data is Bonkers, leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.

The email in the screenshot above differs slightly from those covered last week by Bleeping Computer, which was the first to spot the new victim notification wrinkle. Those emails say that the recipient is being contacted as they are a customer of the store, and their personal data, including phone numbers, email addresses, and credit card information, will soon be published if the store does not pay a ransom, writes Lawrence Abrams.

“Perhaps you bought something there and left your personal data. Such as phone, email, address, credit card information and social security number,” the Clop gang states in the email.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said the direct appeals to victim customers is a natural extension of other advertising efforts by the ransomware gangs, which recently included using hacked Facebook accounts to post victim shaming advertisements.

Wosar said Clop isn’t the only ransomware gang emailing victim customers.

“Clop likes to do it and I think REvil started as well,” Wosar said.

Earlier this month, Bleeping Computer reported that the REvil ransomware operation was planning on launching crippling distributed denial of service (DDoS) attacks against victims, or making VOIP calls to victims’ customers to apply further pressure.

“Sadly, regardless of whether a ransom is paid, consumers whose data has been stolen are still at risk as there is no way of knowing if ransomware gangs delete the data as they promise,” Abrams wrote.

Cory DoctorowHow To Destroy Surveillance Capitalism (Part 01)

This week on my podcast, part one of a serialized reading of my 2020 Onezero/Medium book How To Destroy Surveillance Capitalism, now available in paperback (you can also order signed and personalized copies from Dark Delicacies, my local bookstore).

MP3

Worse Than FailureCodeSOD: Switching Your Template

Many years ago, Kari got a job at one of those small companies that lives in the shadow of a university. It was founded by graduates of that university, mostly recruited from that university, and the CEO was a fixture at alumni events.

Kari was a rare hire not from that university, but she knew the school had a reputation for having an excellent software engineering program. She was prepared to be a little behind her fellow employees, skills-wise, but looked forward to catching up.

Kari was unprepared for the kind of code quality these developers produced.

First, let's take a look at how they, as a company standard, leveraged C++ templates. C++ templates are similar to (though more complicated than) the generics you find in other languages. Defining a function template like template<typename T> void myfunction(T param) creates a function which can be applied to any type, so myfunction(5) and myfunction("a string") and myfunction(someClassVariable) are all valid. The beauty, of course, is that you can write a template function once, but use it in many ways.
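
As a quick illustration of ordinary template use (a generic sketch, not code from Kari's employer), one definition serves every argument type:

#include <iostream>
#include <string>

// One template definition; the compiler instantiates it per argument type.
template <typename T>
void myfunction(const T& param)
{
    std::cout << param << '\n';
}

int main()
{
    myfunction(5);                        // T deduced as int
    myfunction(std::string("a string"));  // T deduced as std::string
    return 0;
}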

Kari provided some generic examples of how her employer leveraged this feature, to give us a sense of what the codebase was like:

enum SomeType {
    SOMETYPE_TYPE1
    // ... more types here
};

template<SomeType t> void Function1();

template<> void Function1<SOMETYPE_TYPE1>()
{
    // Implementation of Function1 for TYPE1 as a template specialization
}

template<> void Function1<SOMETYPE_TYPE2>()
{
    // Implementation of Function1 for TYPE2 as a template specialization
}

// ... more specializations here

void CallFunction1(SomeType type)
{
    switch(type) {
    case SOMETYPE_TYPE1:
        Function1<SOMETYPE_TYPE1>();
        break;
    case SOMETYPE_TYPE2:
        Function1<SOMETYPE_TYPE2>();
        break;
    // ... I think you get the picture
    default:
        assert(false);
        break;
    }
}

This technique allows them to define multiple versions of a method called Function1, and then decide which version needs to be invoked by using a type flag and a switch statement. This simultaneously misses the point of templates and overloading. And honestly, while I'm not sure exactly what business problem they were trying to solve, this is a textbook case for using polymorphism to dispatch calls to concrete implementations via inheritance.

Which raises the question, if this is how they do templates, how do they do inheritance? Oh, you know how they do inheritance.

enum ClassType {
    CLASSTYPE_CHILD1
    // ... more enum values here
};

class Parent {
public:
    Parent(ClassType type) : type_(type) { }

    ClassType get_type() const { return type_; }

    bool IsXYZSupported() const
    {
        switch(type_) {
        case CHILD1:
            return true;
        // ... more cases here
        default:
            assert(false);
            return false;
        }
    }

private:
    ClassType type_;
};

class Child1 : public Parent {
public:
    Child1() : Parent(CLASSTYPE_CHILD1) { }
};

// Somewhere else in the application, buried deep within hundreds of lines of obscurity...
bool IsABCSupported(Parent *obj)
{
    switch(obj->get_type()) {
    case CLASSTYPE_CHILD1:
        return true;
    // ... more cases here
    default:
        assert(false);
        return false;
    }
}

Yes, once again, we have a type flag and a switch statement. Inheritance would do this for us. They've reinvented the wheel, but this time, it's a triangle. An isosceles triangle, at that.

All that's bad, but the thing which elevates this code to transcendentally bad are the locations of the definitions of IsXYZSupported and IsABCSupported. IsXYZSupported is unnecessary, something which shouldn't exist, but at least it's in the definition of the class. Well, it's in the definition of the parent class, which means the parent has to know each of its children, which opens up a whole can of worms regarding fragility. But there are also stray methods like IsABCSupported, defined someplace else, to do something else, and this means that doing any tampering to the class hierarchy means tracking down possibly hundreds of random methods scattered in the code base.
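The same virtual-function approach would also keep capability queries next to the classes they describe. Again, a hypothetical sketch rather than anything from the real codebase:

class Parent {
public:
  virtual ~Parent() = default;
  virtual bool IsXYZSupported() const { return false; }  // sensible default
  virtual bool IsABCSupported() const { return false; }
};

class Child1 : public Parent {
public:
  bool IsXYZSupported() const override { return true; }  // Child1 answers for itself
  bool IsABCSupported() const override { return true; }
};

Each child answers for itself, and the parent never needs to know who its children are.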

And, if you're wondering how long these switch statements could get? Kari says: "The record I saw was a switch with approximately 100 cases."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityUbiquiti All But Confirms Breach Response Iniquity

For four days this past week, Internet-of-Things giant Ubiquiti did not respond to requests for comment on a whistleblower’s allegations the company had massively downplayed a “catastrophic” two-month breach ending in January to save its stock price, and that Ubiquiti’s insinuation that a third-party was to blame was a fabrication. I was happy to add their eventual public response to the top of Tuesday’s story on the whistleblower’s claims, but their statement deserves a post of its own because it actually confirms and reinforces those claims.

Ubiquiti’s IoT gear includes things like WiFi routers, security cameras, and network video recorders. Their products have long been popular with security nerds and DIY types because they make it easy for users to build their own internal IoT networks without spending many thousands of dollars.

But some of that shine started to come off recently for Ubiquiti’s more security-conscious customers after the company began pushing everyone to use a unified authentication and access solution that makes it difficult to administer these devices without first authenticating to Ubiquiti’s cloud infrastructure.

All of a sudden, local-only networks were being connected to Ubiquiti’s cloud, giving rise to countless discussion threads on Ubiquiti’s user forums from customers upset over the potential for introducing new security risks.

And on Jan. 11, Ubiquiti gave weight to that angst: It told customers to reset their passwords and enable multifactor authentication, saying a breach involving a third-party cloud provider might have exposed user account data. Ubiquiti told customers they were “not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed.”

Ubiquiti’s notice on Jan. 12, 2021.

On Tuesday, KrebsOnSecurity reported that a source who participated in the response to the breach said Ubiquiti should have immediately invalidated all credentials because all of the company’s key administrator passwords had been compromised as well. The whistleblower also said Ubiquiti never kept any logs of who was accessing its databases.

The whistleblower, “Adam,” spoke on condition of anonymity for fear of reprisals from Ubiquiti. Adam said the place where those key administrator credentials were compromised — Ubiquiti’s presence on Amazon’s Web Services (AWS) cloud services — was in fact the “third party” blamed for the hack.

From Tuesday’s piece:

“In reality, Adam said, the attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there.

“They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,” Adam said.

Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.

Such access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

Ubiquiti finally responded on Mar. 31, in a post signed “Team UI” on the company’s community forum online.

“Nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11. In response to this incident, we leveraged external incident response experts to conduct a thorough investigation to ensure the attacker was locked out of our systems.”

“These experts identified no evidence that customer information was accessed, or even targeted. The attacker, who unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials, never claimed to have accessed any customer information. This, along with other evidence, is why we believe that customer data was not the target of, or otherwise accessed in connection with, the incident.”

Ubiquiti’s response this week on its user forum.

Ubiquiti also hinted it had an idea of who was behind the attack, saying it has “well-developed evidence that the perpetrator is an individual with intricate knowledge of our cloud infrastructure. As we are cooperating with law enforcement in an ongoing investigation, we cannot comment further.”

Ubiquiti’s statement largely confirmed the reporting here by not disputing any of the facts raised in the piece. And while it may seem that Ubiquiti is quibbling over whether data was in fact stolen, Adam said Ubiquiti can say there is no evidence that customer information was accessed because Ubiquiti failed to keep logs of who was accessing its databases.

“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” Adam wrote in a whistleblower letter to European privacy regulators last month. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”

It appears investors noticed the incongruity as well. Ubiquiti’s share price hardly blinked at the January breach disclosure. On the contrary, from Jan. 13 to Tuesday’s story its stock had soared from $243 to $370. By the end of trading day Mar. 30, UI had slipped to $349. By close of trading on Thursday (markets were closed Friday) the stock had fallen to $289.

Sam VargheseTime for ABC to bite the bullet and bring Tony Jones back to Q+A

Finally, someone from the mainstream Australian media has called it: Q+A, once one of the more popular shows on the ABC, is really not worth watching any more.

Of course, being Australian, the manner in which this sentiment was expressed was oblique, more so given that it came from a critic who writes for the Nine newspapers, Craig Mathieson.

Hamish Macdonald: his immature approach to Q+A has led to the program going downhill. Courtesy YouTube

A second critical review has appeared on April 5, this time in The Australian.

Newspapers from Nine are generally classed as being on the left — they once were, when they were owned by Fairfax Media, but centrist or right of centre would be more accurate these days — and given that the ABC is also considered to be part of the left, criticism was generally absent.

Mathieson did not come right out and call the program atrocious – which is what it is right now. The way the headline on Mathieson’s article put it was that Q+A was once an agenda setter, but was no longer essential viewing. He was right about the former, but to call it essential viewing at any stage of its existence is probably an exaggeration.

He cited viewing figures to bolster his views: “Audience figures for Q+A have plummeted this year. Last week [25 March], it failed to crack the top 20 free-to-air programs on the Thursday night it aired, indicating a capital city audience of just 237,000. In March 2020, the number was above 500,000, and likewise in March 2016,” he wrote.

“This was meant to be the year that Q+A ascended to new prominence. Since its debut in 2008 it had aired about 9.30pm on Mondays, the feisty debate chaser to Four Corners and Media Watch.

“In 2021, it moved to 8.30pm on Thursday, an hour earlier presumably to give it access to a larger audience and its own anchoring role on the ABC’s schedule. But even with Back Roads, one of the national broadcaster’s quiet achievers, as an 8pm lead-in, the viewing figures are starting to resemble a death spiral.”

Veteran ABC journalist Tony Jones was the Q+A host until just two seasons ago. Then Hamish Macdonald, from the tabloid TV channel 10, was given the job. And things have generally gone downhill from that point onwards.

Courtesy The Australian

Jones brought a mature outlook to the show and was generally able to keep the discussion interesting. He always had things in check and the panellists were kept in line when they tried to ramble on. Quite often, the show was prevented from going down a difficult path by a simple “I’ll take that as a comment” from Jones.

Macdonald often loses control of things. He seems to be trying too hard to differentiate himself from Jones, bringing too many angles to a single episode and generally trying to engineer gotcha situations. It turns out to be quite juvenile. One word describes him: callow. It is one that can be applied to many of the ABC’s recent recruits.

Had the previous host been anyone but Jones, the difference would not have been so stark. But then even when others like Virginia Trioli or Annabel Crabb stood in for Jones, the show was watchable as nobody tried out gimmicks. Again, Trioli and Crabb are very good at their jobs. The same cannot be said for Macdonald.

Now that Jones has had to shelve his plan of accompanying his partner, Sarah Ferguson, to China, the ABC might like to think of bringing him back to Q+A. The plan was for Ferguson to be the ABC’s regular correspondent in China, but that was dropped after the previous correspondent, Bill Birtles, fled the country last September, along with Michael Smith, a correspondent for the Australian Financial Review. Jones had planned to write a book while in China.

The ABC needs to bite the bullet and rescue what was once one of its flagship shows. It is worth pointing out, as Mathieson did, that two other popular shows, 7.30 and Four Corners, have held their own during the same period that Q+A has gone downhill, even improving on previous audience numbers.

If change does come, it would be at the end of this season. Another season of Macdonald will mean that Q+A may have to be pensioned off like Lateline, which was killed largely because the main host, Emma Alberici, had made it into a terrible program. Under Jones, and others like Maxine McKew, Trioli and even the comparatively younger Stephen Cannane, Lateline was always compulsory watching for any Australian who followed news somewhat seriously.

,

Kevin RuddSaturday Paper: A Foreign Policy for the Climate

By Kevin Rudd and Thom Woodroofe

Britain’s Conservative government last month declared the fight against climate change its top diplomatic priority after a comprehensive review of its foreign, defence and security policy. In the United States, Joe Biden has mainstreamed climate change across his own national security apparatus. And the European Union has begun taking steps towards putting climate change at the heart of its trading relationships through the implementation of a carbon border adjustment tax.

These examples show tackling climate change is no longer purely the purview of environmental policy. It has crossed the geopolitical Rubicon and countries are now mainstreaming climate action as part of their foreign policy. It is time for Australia to do the same.

The fight against climate change must become a new pillar of our foreign policy, on a par with our commitment to the US alliance, the Indo–Pacific region and the multilateral order. And the progressive side of politics has an opportunity to lead the way.

Acting on climate change not only makes economic sense for Australia, it makes diplomatic sense as well. Our refusal to act meaningfully on climate change will increasingly be a thorn in the side of our relations with all of the world’s advanced economies.

The entirety of the G7 is now committed to net zero emissions by 2050. Australia will confront an uncomfortable reality as a special guest of the group’s next gathering in June when we find ourselves isolated among the developed countries in the room.

For the first time, China also now has a time line to decarbonise its economy, and more than 70 per cent of Australia’s trade is now with jurisdictions committed to making the same transition.

Closer to home, our refusal to act on climate change will continue to hamstring any effort to genuinely step up our engagement in the Pacific Islands, which are on the front line of this crisis.

Australia is a creative middle power. Both sides of politics have admittedly demonstrated our country’s ability to achieve landmark diplomatic outcomes. Whether it be brokering peace in Cambodia, the formation of APEC or the G20, securing a seat on the United Nations Security Council, or – most recently – Mathias Cormann’s appointment as the new head of the OECD.

When we put our diplomatic minds and might to something, we often succeed. This is no different when it comes to galvanising the world’s efforts to tackle climate change. Despite the common refrain, Australia’s environmental leadership during the past 30 years has often made a difference, including on larger emitters such as China.

Under the Hawke and Keating governments, for example, Australia secured an international ban on mining in Antarctica. We also became one of the first countries to propose a quantifiable emissions reduction target years before this became the norm through the UN Framework Convention and the Kyoto Protocol that followed.

The Howard government may have cynically advocated Kyoto’s inclusion of emissions from agriculture to allow them to be seen to be doing more while actually doing less, and then refused to ratify the agreement. But ironically this position on agriculture has proved pivotal for ensuring the land sector – which represents 20 per cent of global emissions – has not been excluded from global efforts.

Copenhagen is often remembered for what it didn’t deliver. But Australia, as a “friend of the chair”, was essential for what was able to be salvaged and ensuring that from its ashes the Paris Agreement was able to rise. The concept born there – of countries’ climate targets being set individually from the bottom up, rather than from the top down based on our relative contribution or economic capacity – was an Australian idea.

So, too, in part was the concept of a global 2-degree temperature limit being a guardrail for our global efforts, which Australia tabled with the Maldives. The fact that six years later, in Paris, Greg Hunt played a role in then ensuring the calls of island nations to bring this guardrail down to 1.5 degrees were not ignored also deserves credit.

And while Australia was then shut out of progressive groupings, including the High Ambition Coalition, there were others we originally helped form, such as the Cartagena Dialogue for Progressive Action, only to be forced, embarrassingly, to step away during the Abbott era.

Yet today, with Biden’s new climate envoy, John Kerry, openly identifying Australia – alongside Saudi Arabia and Brazil under Jair Bolsonaro – as responsible for the collapse of the most-recent round of UN climate talks in Madrid, it is clear just how much of an international pariah we have become.

Had Labor prevailed at the 2019 election, the world would see us in a very different light today. Instead of refusing to honour the letter and spirit of the Paris Agreement by not increasing the ambition of our existing 2030 target, we would have been the first G20 country to do so ahead of this year’s deadline. We would have re-entered the Green Climate Fund – which until 2018 was led by an Australian, Howard Bamsey – rather than now being the only major Western donor that refuses to take part. And we would have been welcomed as a hero at a critical UN summit in 2019, rather than choosing to instead parade around America’s coal country with Donald Trump.

While the Labor Party may have failed to sell its climate message at the last election, now is a time for courage. This year is not the same as 2019, especially now there is no longer a climate denier in the White House. As the new US president likes to say, when he thinks of climate change, he thinks of jobs. Painting a similar vision of a just transition in Australia, especially for our coal industry, will be key.

But it must come with detail, too. The harsh reality is that the global transition to net zero has tolled the death knell for this industry, and unless we are prepared to embrace the serious conversation about how we diversify our domestic economy and export markets, we will be left naked in the wind.

While Labor might be committed to net zero by 2050, the party cannot afford to simply give the government a free pass on the urgency of short-term action.  Being a party of opposition means being an alternative government.

If the Labor Party was to form government this year, the world’s expectation would be that it brings forward an enhanced 2030 target. Not merely to set its sights on what to do in 2035, which isn’t even up for discussion under the Paris framework for another five years. More than anything else, this will require Labor to better explain the economic benefits of taking stronger action now, rather than being forced to make deeper cuts later.

A report similar to the 2007 Garnaut Climate Change Review – commissioned by the then Labor opposition but focused on the economic benefits of action in the short term – could be the circuit-breaker needed internally within the party on this very question.

Thankfully, when Biden announces his own new 2030 target at an Earth Day Summit this month, it will set a new global benchmark for Australia’s own target. In 2014, Abbott deliberately set Australia’s current target on the basis of what he said the Americans were doing, albeit five years earlier. So by the conservatives’ own rhetoric, if the Americans can do more, so should we.

More importantly, we also clearly have the capacity to do more if we are on track to “meet and beat” our current target, as the government says. Not least because in the seven years since that target was developed, we have also begun to bring online the largest renewable energy project in our history in “Snowy 2.0”.

A more ambitious climate policy will require Labor to continue to take the fight up to the government. But just as the government was dragged kicking and screaming away from its insistence on using dodgy accounting tricks to bolster its efforts, it will likely be dragged kicking and screaming to adopting a net zero by 2050 goal. It must similarly be forced to increase its short-term ambition, too. And as we have seen with countries including Japan and Canada in recent months, the possibility of an about-face on this question is not impossible with Biden now in the White House.

The good news is that if given the chance, Australia’s diplomats are primed to once again make a difference in the global fight against climate change. The merger of both AusAID and parts of the Department of Climate Change into the Department of Foreign Affairs and Trade means they are among the best in the world when it comes to understanding the real world and foreign policy dimensions of climate change. The fact our foreign service also doubles as our trade agency will also be crucial for the new era we have entered.

It is time for Australia to adopt a foreign policy for the climate. We have made a difference before and can do so again. All that is missing is the right political leadership.

Published in The Saturday Paper

The post Saturday Paper: A Foreign Policy for the Climate appeared first on Kevin Rudd.

,

Worse Than FailureError'd: Everybody Has A Testing Environment

“Some people,” said the sage, “are lucky enough to also have a completely separate environment for production.” Today's nuggets of web joy are pudding-proof.

Hypothetically hypochondriac STUDENTS[$RANDOM] gasped “I tried to look up information about Covid tests at the institution. Instead I found…this.”

 

An anonymous gastronome delivered this tasty morsel with a pun too cheesy to permit in this staid column. “It must have got lost in the mail.”

 

Hapless hirer Fred G. wonders “Why aren't we getting any resumes?” ruminating “it worked well enough on HR's machine!”

 

“Wrong.” snapped Scott B. testily.

 

Armchair analyst David accidentally unmasks this editor's archetype, exclaiming “I didn't even know this was one of the types!”

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityNew KrebsOnSecurity Mobile-Friendly Site

Dear Readers, this has been long overdue, but at last I give you a more responsive, mobile-friendly version of KrebsOnSecurity. We tried to keep the visual changes to a minimum and focus on a simple theme that presents information in a straightforward, easy-to-read format. Please bear with us over the next few days as we hunt down the gremlins in the gears.

We were shooting for responsive (fast) and uncluttered. Hopefully, we achieved that and this new design will render well in whatever device you use to view it. If something looks amiss, please don’t hesitate to drop a note in the comments below.

NB: KrebsOnSecurity has not changed any of its advertising practices: The handful of ads we run are still image-only creatives that are vetted by me and served in-house. If you’re blocking ads on this site, please consider adding an exception here. Thank you!

MECensoring Images

A client asked me to develop a system for “censoring” images from an automatic camera. The situation is that we have a camera taking regular photos from a fixed location which includes part of someone else’s property. So my client made a JPEG with some black rectangles in the sections that need to be covered. The first thing I needed to do was convert the JPEG to a PNG with transparency for the sections that aren’t to be covered.

To convert it I loaded the JPEG in the GIMP and went to the Layer->Transparency->Add Alpha Channel menu to enable the Alpha channel. Then I selected the “Bucket Fill tool” and used “Mode Erase” and “Fill by Composite” and then clicked on the background (the part of the JPEG that was white) to make it transparent. Then I exported it to PNG.

If anyone knows of an easy way to convert the file then please let me know. It would be nice if there was a command-line program I could run to convert a specified color (default white) to transparent. I say this because I can imagine my client going through a dozen iterations of an overlay file that doesn’t quite fit.
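One candidate I haven’t yet tested for this (so treat it as a guess rather than a recipe) is ImageMagick’s -transparent operator, something like “convert overlay.jpg -fuzz 5% -transparent white overlay.png”, which should turn white (and, with -fuzz, near-white) pixels transparent in the output PNG while leaving the black rectangles opaque.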

To censor the image I ran the “composite” command from imagemagick. The command I used was “composite -gravity center overlay.png in.jpg out.jpg“. If anyone knows a better way of doing this then please let me know.

The platform I’m using is an ARM926EJ-S rev 5 (v5l) which takes 8 minutes of CPU time to convert a single JPEG at full DSLR resolution (4 megapixel). It also required enabling swap on an SD card to avoid running out of RAM and running “systemctl disable tmp.mount” to stop using tmpfs for /tmp as the system only has 256M of RAM.

Worse Than FailureAnnouncing the launch of TFTs

Totally Fungible Tokens

NFTs, or non-fungible tokens, are an exciting new application of Blockchain technology that allows us to burn down a rainforest every time we want to trade a string representing an artist's signature on a creative work.

Many folks are eagerly turning JPGs, text files, and even Tweets into NFTs, but since not all of us have a convenient rainforest to destroy, The Daily WTF is happy to offer an alternative: the Totally Fungible Token.

What Is a Totally Fungible Token?

A TFT is a unique identifier which we can generate for any file or group of files. It combines the actual data in the file(s) with a Universally Unique Identifier, and then condenses that data using a SHA-256 hashing algorithm. This guarantees that you have a unique token which represents that you have created a unique token for that data.
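For the curious, here's a rough sketch of how such a token could be computed in a browser with the Web Crypto API — this is illustrative only, not the actual generator wired to the button below:

// Hypothetical TFT generation: SHA-256 over a random UUID plus the file bytes.
async function generateTft(file: File): Promise<string> {
  const uuid = crypto.randomUUID();                 // fresh UUID, so every token is unique
  const uuidBytes = new TextEncoder().encode(uuid);
  const fileBytes = new Uint8Array(await file.arrayBuffer());

  // Concatenate the UUID and the file contents, then hash the result.
  const combined = new Uint8Array(uuidBytes.length + fileBytes.length);
  combined.set(uuidBytes);
  combined.set(fileBytes, uuidBytes.length);
  const digest = await crypto.subtle.digest("SHA-256", combined);

  // Hex-encode the 32-byte digest to produce the token.
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

Reading the entire file into memory before hashing is, of course, what makes large files such an adventure.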

How is this better than an NFT?

There are a few key advantages that TFTs offer. First, they're computationally very cheap to make, allowing even a relatively underpowered computer to participate actively in the token ecosystem.

In addition, this breaks all dependencies on the blockchain, meaning that you don't need to use or spend cryptocurrency to create, purchase, or trade these tokens.

Most important: much like NFTs, a TFT is absolutely worthless, but we're not promoting these as some sort of arcane investment instrument, so there won't be any sort of bubble. The value of your TFT will remain essentially zero, for the entire life of your TFT. There is no volatility.

In the interests of efficiency, this also performs terribly on large files. How big is too big? That depends on your browser! Enjoy finding out what's too big to encode!

Generate a TFT

Use the button below to browse for a file on your computer, and this will generate a unique token showing that you generated a unique token. Feel free to share, sell, or trade these tokens with your friends! No information about your files is in the token, so it's guaranteed to be completely meaningless! Give it away, sell it, just write it down on a napkin, your TFT is yours to use as you please!
