Planet Russell


Planet Debian: Julian Andres Klode: Google Pixel 4a: Initial Impressions

Yesterday I got a fresh new Pixel 4a, to replace my dying OnePlus 6. The OnePlus had developed some faults over time: It repeatedly loses connection to the AP and the network, and it got a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel?

Camera: OnePlus focuses on stuffing as many sensors as it can into a phone rather than on a good main sensor, resulting in pictures that are mediocre, blurry messes - the dreaded oil-painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don’t come out blurry half the time, and the post-processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.

Security updates: Pixels provide 3 years of monthly updates, with security patches published on the 5th of each month. OnePlus only provides updates every 2 months, and the updates it does release are almost a month out of date by then; worse, they only contain the 1st-of-month patch level, so the vendor blob updates that ship in the 5th-of-month patches are a further month older. Given that all my banking runs on the phone, I don’t want it to be constantly behind.

Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.

Size and weight: OnePlus phones keep getting bigger and bigger. By today’s standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has a 6.44" display and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions

Accessories

The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter, and a SIM tray ejector. Unlike what we’ve grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi, no pre-installed screen protector or bumper is provided. The SIM tray ejector has a circular end instead of the standard oval one - I assume so it looks like the ‘o’ in Google?

Google sells you fabric cases for 45€. That seems a bit excessive, although I like that a lot of it is recycled.

Haptics

Coming from a 6.18" phablet, the Pixel 4a with its 5.81" display feels tiny. In fact, it’s so tiny my thumb and my index finger can touch while holding it. Cute! Bezels are a bit bigger, resulting in a slightly lower screen-to-body ratio. The bottom chin is impractically small - this was already a problem on the OnePlus 6, and this one is even smaller. Oh well, form over function.

The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it’s a Model M. It just feels great.

The plastic back feels really good, it’s that sort of high quality smooth plastic you used to see on those high-end Nokia devices.

The fingerprint reader is super fast. Setup takes just a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take half a minute to a minute to set up.

Software

The software - stock Android 11 - is fairly similar to OnePlus’ OxygenOS. It’s a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It’s cleaner than OxygenOS in some ways - there are no duplicate photo apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps have been transferred from the old phone.

There are various things I miss coming from OnePlus, such as off-screen gestures, a network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always-on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcription, are unfortunately not available in Germany.

The display is set to show the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I’ll think “oh, the font is huge”, but right now it feels a bit small on the Pixel.

You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I’d love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, which I feel is a bit easier on the eyes. Or just give me the ability to load an ICM profile (though I’d then need to calibrate the screen - work!).

Migration experience

Restoring the apps from my old phone only restored settings for a handful out of 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It’s a mystery to me why people do not allow their apps to be backed up, especially something as innocent as a weight-tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not.

I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess some things up. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND settings. I assume that’s the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup).

I’ve set up Bitwarden as my auto-fill service so I can log in to most of my apps and websites using the stored credentials. I found that this often did not work. Chrome, for example, autofills fine once, but if I then want to autofill again, I have to kill and restart it, otherwise I don’t get the auto-fill menu. Other apps did not allow auto-fill at all and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.

Performance

It hangs a bit sometimes, but that was likely because I had set 2 million iterations on my Bitwarden KDF, was using Bitwarden a lot, and then opened up all 130 apps to log in to them, which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6, which was to be expected, given that benchmarks only show a slight loss in performance.

Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.

Audio

The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom firing speaker doing most of the work. Still, it’s better than just having the bottom firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I’ve never had this before, and it will take some time getting used to.

Final thoughts

This is a boring phone. There’s no wow factor at all. It’s neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition; it doesn’t want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like an “only 350€” phone, and yet it is. 128GB of storage is plenty, 1080p resolution is plenty, 12.2MP is … you guessed it, plenty.

The same applies to the other two Pixel phones - the 4a 5G and the 5. Neither is a particularly exciting phone, and I personally find it hard to justify spending 620€ on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As for 5G, I wouldn’t get much use out of it, seeing as it’s not available anywhere I am - because I’m on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already, and it might make sense to get a 5G phone rather than sticking to the budget choice.

Outlook

The big question for me is whether I’ll be able to adjust to the smaller display. I now have a tablet, so I’m less often using the phone (which my hands thank me for), which means that a smaller phone is probably a good call.

Oh, while we’re talking about calls - I only have a data-only SIM in it, so I could not test calling. I’m transferring to a new phone contract this month, and I’ll give it a go then. This will be the first time I get VoLTE and WiFi calling - although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on Twitter for quickly setting up my benefits of an additional 5GB per month and a 10€ discount for being an existing cable customer.

I’m also looking forward to playing around with the camera (especially night sight) and eSIM. And I’m getting a case from China, which was handed over to the airline on Sep 17 according to AliExpress, so I guess it should arrive in the next few weeks. Oh, and the screen protector is not here yet, so I can’t really judge the screen quality much, as I still have the factory protection film on, and that’s just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case.

I might report back in two weeks when I have spent some more time with the device.

Planet Debian: Steve Kemp: Writing an assembler.

Recently I've been writing a couple of simple compilers, which take input in a particular format and generate assembly language output. This output can then be piped through gcc to generate a native executable.

Public examples include this trivial math compiler and my brainfuck compiler.

Of course there's always the nagging thought that relying upon gcc (or nasm) is a bit of a cheat. So I wondered: how hard is it to write an assembler? Something that would take an assembly-language program and generate a native (ELF) binary?

And the answer is "It isn't hard, it is just tedious".

I found some code to generate an ELF binary, and after that assembling simple instructions was pretty simple. I remember from my assembly-language days that the encoding of instructions can be pretty much handled by tables, but I've not yet gone into that.

(Specifically there are instructions like "add rax, rcx", and the encoding specifies the source/destination registers - with different forms for various sized immediates.)
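
To make that concrete, here's a tiny table-driven sketch in Python (illustrative, not the assembler's actual code) of one such encoding: "mov r64, imm64" is a REX.W prefix (0x48), then opcode 0xB8 plus the register number, then the immediate in little-endian:

import struct

# Register numbers as used in x86-64 instruction encoding.
REGS = {"rax": 0, "rcx": 1, "rdx": 2, "rbx": 3,
        "rsp": 4, "rbp": 5, "rsi": 6, "rdi": 7}

def mov_r64_imm64(reg, imm):
    # REX.W prefix + (0xB8 + register number) + 64-bit little-endian immediate.
    return bytes([0x48, 0xB8 + REGS[reg]]) + struct.pack("<Q", imm)

# "mov rdx, 13" assembles to 48 ba 0d 00 00 00 00 00 00 00:
assert mov_r64_imm64("rdx", 13).hex() == "48ba0d00000000000000"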

Anyway I hacked up a simple assembler, it can compile a.out from this input:

.hello   DB "Hello, world\n"
.goodbye DB "Goodbye, world\n"

        mov rdx, 13        ;; write this many characters
        mov rcx, hello     ;; starting at the string
        mov rbx, 1         ;; output is STDOUT
        mov rax, 4         ;; sys_write
        int 0x80           ;; syscall

        mov rdx, 15        ;; write this many characters
        mov rcx, goodbye   ;; starting at the string
        mov rax, 4         ;; sys_write
        mov rbx, 1         ;; output is STDOUT
        int 0x80           ;; syscall


        xor rbx, rbx       ;; exit-code is 0
        xor rax, rax       ;; syscall will be 1 - so set to zero, then increase
        inc rax            ;;
        int 0x80           ;; syscall

The obvious omission is support for "JMP", "JMP_NZ", etc. That's painful because jumps are encoded with relative offsets. For the moment if you want to jump:

        push foo     ; "jmp foo" - indirectly.
        ret

:bar
        nop          ; Nothing happens
        mov rbx,33   ; first syscall argument: exit code
        mov rax,1    ; system call number (sys_exit)
        int 0x80     ; call kernel

:foo
        push bar     ; "jmp bar" - indirectly.
        ret
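
(To show why jumps are painful: a near "jmp" is opcode 0xE9 followed by a signed 32-bit displacement counted from the end of the 5-byte instruction, so the assembler must know both addresses before it can emit the bytes - hence two passes, or back-patching. A quick Python sketch of just the encoding:)

import struct

def jmp_rel32(insn_addr, target_addr):
    # The displacement is relative to the *next* instruction (E9 + 4 bytes).
    disp = target_addr - (insn_addr + 5)
    return b"\xE9" + struct.pack("<i", disp)

# A jump at address 0x20 back to 0x10 encodes as e9 eb ff ff ff (disp -21):
assert jmp_rel32(0x20, 0x10).hex() == "e9ebffffff"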

I'll update to add some more instructions, and see if I can use it to handle the output I generate from a couple of other tools. If so, that's a win; if not, it was a fun learning experience.


Cryptogram: COVID-19 and Acedia

Note: This isn’t my usual essay topic. Still, I want to put it on my blog.

Six months into the pandemic with no end in sight, many of us have been feeling a sense of unease that goes beyond anxiety or distress. It’s a nameless feeling that somehow makes it hard to go on with even the nice things we regularly do.

What’s blocking our everyday routines is not the anxiety of lockdown adjustments, or the worries about ourselves and our loved ones — real though those worries are. It isn’t even the sense that, if we’re really honest with ourselves, much of what we do is pretty self-indulgent when held up against the urgency of a global pandemic.

It is something more troubling and harder to name: an uncertainty about why we would go on doing much of what for years we’d taken for granted as inherently valuable.

What we are confronting is something many writers in the pandemic have approached from varying angles: a restless distraction that stems not just from not knowing when it will all end, but also from not knowing what that end will look like. Perhaps the sharpest insight into this feeling has come from Jonathan Zecher, a historian of religion, who linked it to the forgotten Christian term: acedia.

Acedia was a malady that apparently plagued many medieval monks. It’s a sense of no longer caring about caring, not because one had become apathetic, but because somehow the whole structure of care had become jammed up.

What could this particular form of melancholy mean in an urgent global crisis? On the face of it, all of us care very much about the health risks to those we know and don’t know. Yet lurking alongside such immediate cares is a sense of dislocation that somehow interferes with how we care.

The answer can be found in an extreme thought experiment about death. In 2013, philosopher Samuel Scheffler explored a core assumption about death. We all assume that there will be a future world that survives our particular life, a world populated by people roughly like us, including some who are related to us or known to us. Though we rarely acknowledge it, this presumed future world is the horizon towards which everything we do in the present is oriented.

But what, Scheffler asked, if we lose that assumed future world — because, say, we are told that human life will end on a fixed date not far after our own death? Then the things we value would start to lose their value. Our sense of why things matter today is built on the presumption that they will continue to matter in the future, even when we ourselves are no longer around to value them.

Our present relations to people and things are, in this deep way, future-oriented. Symphonies are written, buildings built, children conceived in the present, but always with a future in mind. What happens to our ethical bearings when we start to lose our grip on that future?

It’s here, moving back to the particular features of the global pandemic, that we see more clearly what drives the restlessness and dislocation so many have been feeling. The source of our current acedia is not the literal loss of a future; even the most pessimistic scenarios surrounding COVID-19 have our species surviving. The dislocation is more subtle: a disruption in pretty much every future frame of reference on which just going on in the present relies.

Moving around is what we do as creatures, and for that we need horizons. COVID-19 has erased many of the spatial and temporal horizons we rely on, even if we don’t notice them very often. We don’t know how the economy will look, how social life will go on, how our home routines will be changed, how work will be organized, how universities or the arts or local commerce will survive.

What unsettles us is not only fear of change. It’s that, if we can no longer trust in the future, many things become irrelevant, retrospectively pointless. And by that we mean from the perspective of a future whose basic shape we can no longer take for granted. This fundamentally disrupts how we weigh the value of what we are doing right now. It becomes especially hard under these conditions to hold on to the value in activities that, by their very nature, are future-directed, such as education or institution-building.

That’s what many of us are feeling. That’s today’s acedia.

Naming this malaise may seem more trouble than it’s worth, but the opposite is true. Perhaps the worst thing about medieval acedia was that monks struggled with its dislocation in isolation. But today’s disruption of our sense of a future must be a shared challenge. Because what’s disrupted is the structure of care that sustains why we go on doing things together, and this can only be repaired through renewed solidarity.

Such solidarity, however, has one precondition: that we openly discuss the problem of acedia, and how it prevents us from facing our deepest future uncertainties. Once we have done that, we can recognize it as a problem we choose to face together — across political and cultural lines — as families, communities, nations and a global humanity. Which means doing so in acceptance of our shared vulnerability, rather than suffering each on our own.

This essay was written with Nick Couldry, and previously appeared on CNN.com.

Krebs on Security: Attacks Aimed at Disrupting the Trickbot Botnet

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

A text snippet from one of the bogus Trickbot configuration updates. Source: Intel 471

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

It’s not known how many Trickbot-infected systems received the phony update, but it seems clear this wasn’t just a mistake by Trickbot’s overlords. Intel 471 found that it happened yet again on Oct. 1, suggesting someone with access to the inner workings of the botnet was trying to disrupt its operations.

“Shortly after the bogus configs were pushed out, all Trickbot controllers stopped responding correctly to bot requests,” Intel 471 wrote in a note to its customers. “This possibly means central Trickbot controller infrastructure was disrupted. The close timing of both events suggested an intentional disruption of Trickbot botnet operations.”

Intel 471 CEO Mark Arena said it’s anyone’s guess at this point who is responsible.

“Obviously, someone is trying to attack Trickbot,” Arena said. “It could be someone in the security research community, a government, a disgruntled insider, or a rival cybercrime group. We just don’t know at this point.”

Arena said it’s unclear how successful these bogus configuration file updates will be given that the Trickbot authors built a fail-safe recovery system into their malware. Specifically, Trickbot has a backup control mechanism: A domain name registered on EmerDNS, a decentralized domain name system.

“This domain should still be in control of the Trickbot operators and could potentially be used to recover bots,” Intel 471 wrote.

But whoever is screwing with the Trickbot purveyors appears to have adopted a multi-pronged approach: Around the same time as the second bogus configuration file update was pushed on Oct. 1, someone stuffed the control networks that the Trickbot operators use to keep track of data on infected systems with millions of new records.

Alex Holden is chief technology officer and founder of Hold Security, a Milwaukee-based cyber intelligence firm that helps recover stolen data. Holden said at the end of September Trickbot held passwords and financial data stolen from more than 2.7 million Windows PCs.

By October 1, Holden said, that number had magically grown to more than seven million.

“Someone is flooding the Trickbot system with fake data,” Holden said. “Whoever is doing this is generating records that include machine names indicating these are infected systems in a broad range of organizations, including the Department of Defense, U.S. Bank, JP Morgan Chase, PNC and Citigroup, to name a few.”

Holden said the flood of new, apparently bogus, records appears to be an attempt by someone to dilute the Trickbot database and confuse or stymie the Trickbot operators. But so far, Holden said, the impact has been mainly to annoy and aggravate the criminals in charge of Trickbot.

“Our monitoring found at least one statement from one of the ransomware groups that relies on Trickbot saying this pisses them off, and they’re going to double the ransom they’re asking for from a victim,” Holden said. “We haven’t been able to confirm whether they actually followed through with that, but these attacks are definitely interfering with their business.”

Intel 471’s Arena said this could be part of an ongoing campaign to dismantle or wrest control over the Trickbot botnet. Such an effort would hardly be unprecedented. In 2014, for example, U.S. and international law enforcement agencies teamed up with multiple security firms and private researchers to commandeer the Gameover Zeus Botnet, a particularly aggressive and sophisticated malware strain that had enslaved up to 1 million Windows PCs globally.

Trickbot would be an attractive target for such a takeover effort because it is widely viewed as a platform used to find potential ransomware victims. Intel 471 describes Trickbot as “a malware-as-a-service platform that caters to a relatively small number of top-tier cybercriminals.”

One of the top ransomware gangs in operation today, which deploys ransomware strains known variously as “Ryuk” and “Conti,” is known to be closely associated with Trickbot infections. Both ransomware families have been used in some of the most damaging and costly malware incidents to date.

The latest Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption has reportedly caused the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Cryptogram: Friday Squid Blogging: After Squidnight

Review of a squid-related children’s book.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram: Friday Squid Blogging: COVID-19 Found on Chinese Squid Packaging

I thought the virus doesn’t survive well on food packaging:

Authorities in China’s northeastern Jilin province have found the novel coronavirus on the packaging of imported squid, health authorities in the city of Fuyu said on Sunday, urging anyone who may have bought it to get themselves tested.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Planet Debian: Ian Jackson: Mailman vs DKIM - a novel solution

tl;dr: Do not configure Mailman to replace the mail domains in From: headers. Instead, try out my small new program which can make your Mailman transparent, so that DKIM signatures survive.

Background and narrative

DKIM

NB: This explanation is going to be somewhat simplified. I am going to gloss over some details and make some slightly approximate statements.

DKIM is a new anti-spoofing mechanism for Internet email, intended to help fight spam. DKIM, paired with the DMARC policy system, has been remarkably successful at stemming the flood of joe-job spams. As usually deployed, DKIM works like this:

When a message is originally sent, the author's MUA sends it to the MTA for their From: domain for outward delivery. The From: domain mailserver calculates a cryptographic signature of the message, and puts the signature in the headers of the message.

Obviously not the whole message can be signed, since at the very least additional headers need to be added in transit, and sometimes headers need to be modified too. The signing MTA gets to decide what parts of the message are covered by the signature: they nominate the header fields that are covered by the signature, and specify how to handle the body.

A recipient MTA looks up the public key for the From: domain in the DNS, and checks the signature. If the signature doesn't match, depending on policy (originator's policy, in the DNS, and recipient's policy of course), typically the message will be treated as spam.

The originating site has a lot of control over what happens in practice. They get to publish a formal (DMARC) policy in the DNS which advises recipients what they should do with mails claiming to be from their site. As mentioned, they can say which headers are covered by the signature - including the ability to sign the absence of a particular header - so they can control which headers downstreams can get away with adding or modifying. And they can set a normalisation policy, which controls how precisely the message must match the one that they sent.
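
To make the two halves concrete, here is a minimal Python sketch using the dkimpy library; the domain, selector and key file here are invented:

import dkim  # from the "dkimpy" package

message = (b"From: author@example.org\r\n"
           b"Subject: hello\r\n"
           b"\r\n"
           b"body text\r\n")
privkey = open("example.org.private", "rb").read()  # hypothetical key file

# Signing MTA: sign the nominated headers plus the body.
sig = dkim.sign(message, selector=b"sel1", domain=b"example.org",
                privkey=privkey, include_headers=[b"From", b"Subject"])

# Recipient MTA: look up sel1._domainkey.example.org in the DNS and check.
ok = dkim.verify(sig + message)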

Mailman

Mailman is, of course, the extremely popular mailing list manager. There are a lot of things to like about it. I choose to run it myself not just because it's popular but also because it provides a relatively competent web UI and a relatively competent email (un)subscription interfaces, decent bounce handling, and a pretty good set of moderation and posting access controls.

The Xen Project mailing lists also run on mailman. Recently we had some difficulties with messages sent by Citrix staff (including myself), to Xen mailing lists, being treated as spam. Recipient mail systems were saying the DKIM signatures were invalid.

This was in fact true. Citrix has chosen a fairly strict DKIM policy; in particular, they have chosen "simple" normalisation - meaning that signed message headers must precisely match in syntax as well as in a semantic sense. Examining the failing-DKIM messages showed that this was definitely a factor.

Applying my Opinions about email

My Bayesian priors tend to suggest that a mail problem involving corporate email is the fault of the corporate email. However in this case that doesn't seem true to me.

My starting point is that I think mail systems should not modify messages unnecessarily. None of the DKIM-breaking modifications made by Mailman seemed necessary to me. I have on previous occasions gone to corporate IT and requested quite firmly that things I felt were broken should be changed. But it seemed wrong to go to corporate IT and ask them to change their published DKIM/DMARC policy to accommodate a behaviour in Mailman which I didn't agree with myself. I felt that instead I should put (with my Xen Project hat on) my own house in order.

Getting Mailman not to modify messages

So, I needed our Mailman to stop modifying the headers. I needed it to not even reformat them. A brief look at the source code to Mailman showed that this was not going to be so easy. Mailman has a lot of features whose very purpose is to modify messages.

Personally, as I say, I don't much like these features. I think the subject line tags, CC list manipulations, and so on, are a nuisance and not really Proper. But they are definitely part of why Mailman has become so popular and I can definitely see why the Mailman authors have done things this way. But these features mean Mailman has to disassemble incoming messages, and then reassemble them again on output. It is very difficult to do that and still faithfully reassemble the original headers byte-for-byte in the case where nothing actually wanted to modify them. There are existing bug reports[1] [2] [3] [4]; I can see why they are still open.

Rejected approach: From:-mangling

This situation is hardly unique to the Xen lists. Many others have struggled with it. The best that anyone seems to have come up with so far is to turn on a new Mailman feature which rewrites the From: header of the messages that go through it, to contain the list's domain name instead of the originator's.

I think this is really pretty nasty. It breaks normal use of email, such as reply-to-author. It is having Mailman do additional mangling of the message in order to solve the problems caused by other undesirable manglings!

Solution!

As you can see, I asked myself: I want Mailman not to modify messages at all; how can I get it to do that? Given the existing structure of Mailman - with a lot of message-modifying functionality - that would really mean adding a bypass mode. It would have to spot, presumably depending on config settings, that messages were not to be edited; and then it would avoid disassembling and reassembling the message at all, and bypass the message modification stages. The message would still have to be parsed, of course - it's just that the copy sent out ought to be pretty much the incoming message.

When I put it to myself like that, I had a thought: couldn't I implement this outside Mailman? What if I took a copy of every incoming message, and then post-processed Mailman's output to restore the original?

It turns out that this is quite easy and works rather well!

outflank-mailman

outflank-mailman is a 233-line script, plus documentation, installation instructions, etc.

It is designed to run from your MTA, on all messages going into, and coming from, Mailman. On input, it saves a copy of the message in a sqlite database, and leaves a note in a new Outflank-Mailman-Id header. On output, it does some checks, finds the original message, and then combines the original incoming message with carefully-selected headers from the version that Mailman decided should be sent.
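
In outline, it works something like this (a much-simplified Python sketch, not the script itself; the header selection is illustrative):

import sqlite3, uuid
from email.parser import BytesParser

db = sqlite3.connect("outflank.db")
db.execute("CREATE TABLE IF NOT EXISTS msgs (id TEXT PRIMARY KEY, raw BLOB)")

def inbound(raw: bytes) -> bytes:
    # Save the pristine message and tag it so we can find it again later.
    mid = uuid.uuid4().hex
    db.execute("INSERT INTO msgs VALUES (?, ?)", (mid, raw))
    db.commit()
    return b"Outflank-Mailman-Id: " + mid.encode() + b"\r\n" + raw

def outbound(raw: bytes) -> bytes:
    # Find the original, then keep a few headers from Mailman's version.
    hdrs = BytesParser().parsebytes(raw, headersonly=True)
    (orig,) = db.execute("SELECT raw FROM msgs WHERE id = ?",
                         (hdrs["Outflank-Mailman-Id"],)).fetchone()
    extra = b""
    for h in ("List-Id", "List-Unsubscribe"):  # illustrative selection
        if hdrs[h] is not None:
            extra += f"{h}: {hdrs[h]}\r\n".encode()
    return extra + orig  # the incoming message survives byte-for-byte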

This was deployed for the Xen Project lists on Tuesday morning and it seems to be working well so far.

If you administer Mailman lists, and fancy some new software to address this problem, please do try it out.

Matters arising - Mail filtering, DKIM

Overall I think DKIM is a helpful contribution to the fight against spam (unlike SPF, which is fundamentally misdirected and also broken). Spam is an extremely serious problem; most receiving mail servers experience more attempts to deliver spam than real mail, by orders of magnitude. But DKIM is not without downsides.

Inherent in the design of anything like DKIM is that arbitrary modification of messages by list servers is no longer possible. In principle it might be possible to design a system which tolerated modifications reasonable for mailing lists but it would be quite complicated and have to somehow not tolerate similar modifications in other contexts.

So DKIM means that lists can no longer add those unsubscribe footers to mailing list messages. The "new way" (RFC2369, July 1998) to do this is with the List-Unsubscribe header. Hopefully a good MUA will be able to deal with unsubscription semi-automatically, and I think by now an adequate MUA should at least display these headers by default.
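
For example (list address hypothetical), such a header looks like:

    List-Unsubscribe: <mailto:discuss-request@lists.example.org?subject=unsubscribe>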

Sender:

There are implications for recipient-side filtering too. The "traditional" correct way to spot mailing list mail was to look for Resent-To:, which can be added without breaking DKIM; the "new" (RFC2919, March 2001) correct way is List-Id:, likewise fine. But during the initial deployment of outflank-mailman I discovered that many subscribers were detecting that a message was list traffic by looking at the Sender: header. I'm told that some mail systems (apparently Microsoft's included) make it inconvenient to filter on List-Id.

Really, I think a mailing list ought not to be modifying Sender:. Given Sender:'s original definition and semantics, there might well be reasonable reasons for a mailing list posting to have different From: and Sender: headers, and then the original Sender: ought not to be lost. And a mailing list's operation does not fit well into the original definition of Sender:. I suspect that list software likes to put in Sender: mostly for historical reasons; notably, a long time ago it was not uncommon for broken mail systems to send bounces to the Sender: header rather than the envelope sender (SMTP MAIL FROM).

DKIM makes this more of a problem. Unfortunately the DKIM specifications are vague about what headers one should sign, but they pretty much definitely include Sender: if it is present, and some materials encourage signing the absence of Sender:. The latter is Exim's default configuration when DKIM-signing is enabled.

Frankly, there seems little excuse for systems not to readily support and encourage filtering on List-Id, 20 years later, but I don't want to make life hard for my users. For now we are running a compromise configuration: if there wasn't a Sender: in the original, take Mailman's added one. This will result in (i) misfiltering for some messages whose poster put in a Sender:, and (ii) DKIM failures for messages whose originating system signed the absence of a Sender:. I'm going to mine the db for some stats after it's been deployed for a week or so, to see which of these problems is worst and decide what to do about it.

Mail routing

For DKIM to work, messages being sent From: a particular mail domain must go through a system trusted by that domain, so they can be signed.

Most users tend to do this anyway: their mail provider gives them an IMAP server and an authenticated SMTP submission server, and they configure those details in their MUA. The MUA has a notion of "accounts" and according to the user's selection for an outgoing message, connects to the authenticated submission service (usually using TLS over the global internet).

Trad unix systems where messages are sent using the local sendmail or localhost SMTP submission (perhaps by automated systems, or perhaps by human users) are fine too. The smarthost can do the DKIM signing.

But this solution is awkward for a user of a trad MUA in what I'll call "alias account" setups: where a user has an address at a mail domain belonging to different people than the system on which they run their MUA (perhaps even several such aliases for different hats). Traditionally this worked by the mail domain forwarding the incoming mail, and the user simply self-declaring their identity at the alias domain. Without DKIM there is nothing stopping anyone self-declaring their own From: line.

If DKIM is to be enabled for such a user (preventing people forging mail as that user), the user will have to somehow arrange that their trad unix MUA's outbound mail stream goes via their mail alias provider. For a single-user sending unix system this can be done with tolerably complex configuration in an MTA like Exim. For shared systems this gets more awkward and might require some hairy shell scripting etc.

edited 2020-10-01 21:22 and 21:35 and -02 10:50 +0100 to fix typos and 21:28 to linkify "my small program" in the tl;dr




Worse Than Failure: Error'd: No Escape

"Last night's dinner was delicious!" Drew W. writes.


"Gee, thanks Microsoft! It's great to finally see progress on this long-awaited enhancement. That bot is certainly earning its keep," writes Tim R.


Derek wrote, "Well, I guess the Verizon app isn't totally useless when it comes to telling me how many notifications I have. I mean, one out of three isn't THAT bad."


"Perhaps Ubisoft UK are trying to reach bilingual gamers?" writes Craig.


Jeremy H. writes, "Whether this is a listing for a pair of pet training underwear or a partial human torso, I'm not sure if I'm comfortable with that 'metal clicker'."



Planet Debian: Junichi Uekawa: Already October.

Already October. I tried moving to vscode from emacs, but I have so far only installed the editor. emacs is my workflow engine, so it's hard to migrate everything.


Krebs on Security: Ransomware Victims That Pay Up Could Incur Steep Fines from Uncle Sam

Companies victimized by ransomware and firms that facilitate negotiations with ransomware extortionists could face steep fines from the U.S. federal government if the crooks who profit from the attack are already under economic sanctions, the Treasury Department warned today.

Image: Shutterstock

In its advisory (PDF), the Treasury’s Office of Foreign Assets Control (OFAC) said “companies that facilitate ransomware payments to cyber actors on behalf of victims, including financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response, not only encourage future ransomware payment demands but also may risk violating OFAC regulations.”

As financial losses from cybercrime activity and ransomware attacks in particular have skyrocketed in recent years, the Treasury Department has imposed economic sanctions on several cybercriminals and cybercrime groups, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

A number of those sanctioned have been closely tied with ransomware and malware attacks, including the North Korean Lazarus Group; two Iranians thought to be tied to the SamSam ransomware attacks; Evgeniy Bogachev, the developer of Cryptolocker; and Evil Corp, a Russian cybercriminal syndicate that has used malware to extract more than $100 million from victim businesses.

Those that run afoul of OFAC sanctions without a special dispensation or “license” from Treasury can face several legal repercussions, including fines of up to $20 million.

The Federal Bureau of Investigation (FBI) and other law enforcement agencies have tried to discourage businesses hit by ransomware from paying their extortionists, noting that doing so only helps bankroll further attacks.

But in practice, a fair number of victims find paying up is the fastest way to resume business as usual. In addition, insurance providers often help facilitate the payments because the amount demanded ends up being less than what the insurer might have to pay to cover the cost of the affected business being sidelined for days or weeks at a time.

While it may seem unlikely that companies victimized by ransomware might somehow be able to know whether their extortionists are currently being sanctioned by the U.S. government, they still can be fined either way, said Ginger Faulk, a partner in the Washington, D.C. office of the law firm Eversheds Sutherland.

Faulk said OFAC may impose civil penalties for sanctions violations based on “strict liability,” meaning that a person subject to U.S. jurisdiction may be held civilly liable even if it did not know or have reason to know it was engaging in a transaction with a person that is prohibited under sanctions laws and regulations administered by OFAC.

“In other words, in order to be held liable as a civil (administrative) matter (as opposed to criminal), no mens rea or even ‘reason to know’ that the person is sanctioned is necessary under OFAC regulations,” Faulk said.

But Fabian Wosar, chief technology officer at computer security firm Emsisoft, said Treasury’s policies here are nothing new, and that they mainly constitute a warning for individual victim firms who may not already be working with law enforcement and/or third-party security firms.

Wosar said companies that help ransomware victims negotiate lower payments and facilitate the financial exchange are already aware of the legal risks from OFAC violations, and will generally refuse clients who get hit by certain ransomware strains.

“In my experience, OFAC and cyber insurance with their contracted negotiators are in constant communication,” he said. “There are often even clearing processes in place to ascertain the risk of certain payments violating OFAC.”

Along those lines, OFAC said the degree of a person/company’s awareness of the conduct at issue is a factor the agency may consider in assessing civil penalties. OFAC said it would consider “a company’s self-initiated, timely, and complete report of a ransomware attack to law enforcement to be a significant mitigating factor in determining an appropriate enforcement outcome if the situation is later determined to have a sanctions nexus.”

Planet Debian: Sylvain Beucler: Debian LTS and ELTS - September 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In September, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 19.75h for LTS (out of my 30 max; all done) and 20h for ELTS (out of my 20 max; all done).

ELTS - Jessie

  • qemu: jessie triage: finish work started in August
  • qemu: backport 5 CVE fixes, perform virtual and physical testing, security upload ELA-283-1
  • libdbi-perl: global triage: clarifications, confirm incomplete and attempt to get upstream action, request new CVE following discussion with security team
  • libdbi-perl: backport 5 CVE fixes, test, security upload ELA-285-1

LTS - Stretch

  • qemu: stretch triage, while working on ELTS update; mark several CVEs unaffected, update patch/status
  • wordpress: global triage: reference new patches, request proper CVE to fix our temporary tracking
  • wordpress: revamp package: upgrade to upstream's stable 4.7.5->4.7.18 to ease future updates, re-apply missing patches, fix past regression and notify maintainer, security upload DLA-2371-1
  • libdbi-perl: common work with ELTS, security upload DLA-2386-1
  • public IRC team meeting

Documentation/Scripts

  • LTS/TestSuites/wordpress: new page with testsuite import and manual tests
  • LTS/TestSuites/qemu: minor update
  • wiki.d.o/Sympa: update Sympa while using it as a libdbi-perl reverse-dep test (update for newer versions, explain how to bootstrap admin access)
  • www.d.o/lts/security: import a couple missing announcements and notify uploaders about procedures
  • Check status for pdns-recursor, following user request
  • Check status for golang-1.7 / CVE-2019-9514 / CVE-2019-9512
  • Attempt to improve cooperation after seeing my work discarded and redone as-is, which sadly isn't the first time; no answer
  • Historical analysis of our CVE fixes: experiment to gather per-CVE tracker history

Planet Debian: Molly de Blanc: Free Software Activities – September 2020

I haven’t done one of these in a while, so let’s see how it goes.

Debian

The Community Team has been busy. We’re planning a sprint to work on a bigger writing project and have some tough discussions that need to happen.
I personally have only worked on one incident, but we’ve had a few others come in.
I’m attempting to step down from the Outreach team, which is more work than I thought it would be. I had a very complicated relationship with the Outreach team. When no one else was there to take on making sure we did GSoC and Outreachy, I stepped up. It wasn’t really what I wanted to be doing, but it’s important. I’m glad to have more time to focus on other things that feel more aligned with what I’m trying to work on right now.

GNOME

In addition to, you know, work, I joined the Code of Conduct Committee. Always a good time! Rosanna and I presented at GNOME Onboard Africa Virtual about the GNOME CoC. It was super fun!

Digital Autonomy

Karen and I did an interview on FLOSS Weekly with Doc Searls and Dan Lynch. Super fun! I’ve been doing some more writing, which I still hope to publish soon, and a lot of organization on it. I’m also in the process of investigating some funding, as there are a few things we’d like to do that come with price tags. Separately, I started working on a video to explain the Principles. I’m excited!

Misc

I started a call that meets every other week where we talk about Code of Conduct stuff. Good peeps. Into it.

Kevin Rudd: CNBC: On US-China tensions, growing export restrictions

E&OE TRANSCRIPT
TV INTERVIEW
CNBC WORLDWIDE EXCHANGE
28 SEPTEMBER 2020

 

 

The post CNBC: On US-China tensions, growing export restrictions appeared first on Kevin Rudd.

Planet Debian: Michael Stapelberg: Linux package managers are slow

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix C for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              114 MB  33s               3.4 MB/s
Debian        apt               16 MB  10s               1.6 MB/s
NixOS         Nix               15 MB   5s               3.0 MB/s
Arch Linux    pacman           6.5 MB   3s               2.1 MB/s
Alpine        apk               10 MB   1s              10.0 MB/s

qemu (large C program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              226 MB  4m37s             1.2 MB/s
Debian        apt              224 MB  1m35s             2.3 MB/s
Arch Linux    pacman           142 MB  44s               3.2 MB/s
NixOS         Nix              180 MB  34s               5.2 MB/s
Alpine        apk               26 MB  2.4s             10.8 MB/s


(Looking for older measurements? See Appendix B (2019).)

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases, otherwise they hopefully would have removed it altogether. Still, the amount of metadata seems excessive for the use case of installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence executing arbitrary package maintainer-provided code (hooks and triggers), all tested package managers need to install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.
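
To illustrate what the sequencing costs, here is a toy Python sketch (package URLs made up) of hook-free installation, where nothing stops us fetching and extracting all packages at the same time:

import concurrent.futures, io, tarfile, urllib.request

PACKAGES = [
    "https://pkgs.example.org/ack-3.4.0.tar.gz",    # hypothetical mirror
    "https://pkgs.example.org/perl-5.32.0.tar.gz",
]

def install(url, root="/tmp/pkgroot"):
    # Fetch and unpack; with no hooks or triggers to sequence, it is safe
    # to run this for many packages concurrently.
    data = urllib.request.urlopen(url).read()
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        tar.extractall(root)

with concurrent.futures.ThreadPoolExecutor() as pool:
    list(pool.map(install, PACKAGES))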

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store; see the sketch after this list)
    • automatically clean up the package store after crashes
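
As promised above, here is a sketch of what the last two bullet points could look like (hypothetical store layout): each package version lives in its own immutable directory, and installation is an atomic symlink flip, so fsync can be skipped and a crash merely leaves an unreferenced directory behind for a garbage collector:

import os

STORE = "/var/lib/pkgstore"  # hypothetical append-only package store

def activate(pkg: str, version: str) -> None:
    # The unpacked package at STORE/<pkg>-<version> is never modified.
    os.makedirs(STORE, exist_ok=True)
    link = os.path.join(STORE, pkg)
    tmp = link + ".tmp"
    if os.path.lexists(tmp):   # leftover from a crash: safe to discard
        os.unlink(tmp)
    os.symlink(f"{pkg}-{version}", tmp)
    os.rename(tmp, link)       # atomic on POSIX; old version stays on disk

activate("qemu", "5.1.0")  # /var/lib/pkgstore/qemu -> qemu-5.1.0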

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name        scope   package file format                 hooks/triggers
AppImage    apps    image: ISO9660, SquashFS            no
snappy      apps    image: SquashFS                     yes: hooks
FlatPak     apps    archive: OSTree                     no
0install    apps    archive: tar.bz2                    no
nix, guix   distro  archive: nar.{bz2,xz}               activation script
dpkg        distro  archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm         distro  archive: cpio.{bz2,lz,xz}           scriptlets
pacman      distro  archive: tar.xz                     install
slackware   distro  archive: tar.{gz,xz}                yes: doinst.sh
apk         distro  archive: tar.gz                     yes: .post-install
Entropy     distro  archive: tar.bz2                    yes
ipkg, opkg  distro  archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

There are a couple of recent developments going in the same direction.

Appendix C: measurement details (2020)

ack


Fedora’s dnf takes almost 33 seconds to fetch and unpack 114 MB.

% docker run -t -i fedora /bin/bash
[root@62d3cae2e2f9 /]# time dnf install -y ack
Fedora 32 openh264 (From Cisco) - x86_64     1.9 kB/s | 2.5 kB     00:01
Fedora Modular 32 - x86_64                   6.8 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         5.6 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 9.9 MB/s |  23 MB     00:02
Fedora 32 - x86_64                            39 MB/s |  70 MB     00:01
[…]
real	0m32.898s
user	0m25.121s
sys	0m1.408s

NixOS’s Nix takes a little over 5s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.ack'
unpacking channels...
created 1 symlinks in user environment
installing 'perl5.32.0-ack-3.3.1'
these paths will be fetched (15.55 MiB download, 85.51 MiB unpacked):
  /nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man
  /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31
  /nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18
  /nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10
  /nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53
  /nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0
  /nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31
  /nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0
  /nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48
  /nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1
copying path '/nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man' from 'https://cache.nixos.org'...
copying path '/nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10' from 'https://cache.nixos.org'...
copying path '/nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18' from 'https://cache.nixos.org'...
copying path '/nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0' from 'https://cache.nixos.org'...
copying path '/nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31' from 'https://cache.nixos.org'...
copying path '/nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0' from 'https://cache.nixos.org'...
copying path '/nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1' from 'https://cache.nixos.org'...
building '/nix/store/m0rl62grplq7w7k3zqhlcz2hs99y332l-user-environment.drv'...
created 49 symlinks in user environment
real	0m 5.60s
user	0m 3.21s
sys	0m 1.66s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@1996bb94a2d1:/# time (apt update && apt install -y ack-grep)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (8088 kB/s)
[…]
The following NEW packages will be installed:
  ack libfile-next-perl libgdbm-compat4 libgdbm6 libperl5.30 netbase perl perl-modules-5.30
0 upgraded, 8 newly installed, 0 to remove and 23 not upgraded.
Need to get 7341 kB of archives.
After this operation, 46.7 MB of additional disk space will be used.
[…]
real	0m9.544s
user	0m2.839s
sys	0m0.775s

Arch Linux’s pacman takes a little under 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9f6672688a64 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            130.8 KiB  1090 KiB/s 00:00
 extra          1655.8 KiB  3.48 MiB/s 00:00
 community         5.2 MiB  6.11 MiB/s 00:01
resolving dependencies...
looking for conflicting packages...

Packages (2) perl-file-next-1.18-2  ack-3.4.0-1

Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m2.936s
user	0m0.375s
sys	0m0.160s

Alpine’s apk takes a little over 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing libbz2 (1.0.8-r1)
(2/4) Installing perl (5.30.3-r0)
(3/4) Installing perl-file-next (1.18-r0)
(4/4) Installing ack (3.3.1-r0)
Executing busybox-1.31.1-r16.trigger
OK: 43 MiB in 18 packages
real	0m 1.24s
user	0m 0.40s
sys	0m 0.15s

qemu


Fedora’s dnf takes over 4 minutes to fetch and unpack 226 MB.

% docker run -t -i fedora /bin/bash
[root@6a52ecfc3afa /]# time dnf install -y qemu
Fedora 32 openh264 (From Cisco) - x86_64     3.1 kB/s | 2.5 kB     00:00
Fedora Modular 32 - x86_64                   6.3 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         6.0 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 334 kB/s |  23 MB     01:10
Fedora 32 - x86_64                            33 MB/s |  70 MB     00:02
[…]

Total download size: 181 M
Downloading Packages:
[…]

real	4m37.652s
user	0m38.239s
sys	0m6.321s

NixOS’s Nix takes almost 34s to fetch and unpack 180 MB.

% docker run -t -i nixos/nix
83971cf79f7e:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.qemu'
unpacking channels...
created 1 symlinks in user environment
installing 'qemu-5.1.0'
these paths will be fetched (180.70 MiB download, 1146.92 MiB unpacked):
[…]
real	0m 33.64s
user	0m 16.96s
sys	0m 3.05s

Debian’s apt takes over 95 seconds to fetch and unpack 224 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (5998 kB/s)
[…]
Fetched 216 MB in 43s (5006 kB/s)
[…]
real	1m25.375s
user	0m29.163s
sys	0m12.835s

Arch Linux’s pacman takes almost 44s to fetch and unpack 142 MB.

% docker run -t -i archlinux/base
[root@58c78bda08e8 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core          130.8 KiB  1055 KiB/s 00:00
 extra        1655.8 KiB  3.70 MiB/s 00:00
 community       5.2 MiB  7.89 MiB/s 00:01
[…]
Total Download Size:   135.46 MiB
Total Installed Size:  661.05 MiB
[…]
real	0m43.901s
user	0m4.980s
sys	0m2.615s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Appendix B: measurement details (2019)

ack

Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Worse Than Failure: CodeSOD: Imploded Code

Cassi’s co-worker (previously) was writing some more PHP. This code needed to take a handful of arguments, like id and label and value, and generate HTML text inputs.

Well, that seems like a perfect use case for PHP. I can’t possibly see how this could go wrong.

echo $sep."<SCRIPT>CreateField(" . implode(', ', array_map('json_encode', $args)) . ");</SCRIPT>\n";

Now, PHP’s array_map is a beast of a function, and its documentation has some pretty atrocious examples. It’s not just a map, but also potentially an n-ary zip, but if you use it that way, then the function you’re passing needs to be able to handle the right number of arguments, and– sorry. Lost my train of thought there when checking the PHP docs. Back to the code.

In this case, we use array_map to make all our fields JavaScript-safe by json_encode-ing them. Then we use implode to mash them together into a comma-separated string. Then we concatenate all that into a CreateField call.
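
To make that concrete (the values here are invented for illustration, not taken from Cassi’s submission): if $args held the three strings id1, Label, and val, that echo would hand the browser something like

<SCRIPT>CreateField("id1", "Label", "val");</SCRIPT>

prefixed by whatever $sep contains, which the browser then executes at parse time so that CreateField can document.write the actual input element.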

CreateField is a JavaScript function. Cassi didn’t supply the implementation, but lets us know that it uses document.write to output the input tag into the document body.

So, this is server side code which generates calls to client side code, where the client side code runs at page load to document.write HTML elements into the document… elements which the server side code could have easily supplied.

I’ll let Cassi summarize:

I spent 5 minutes staring at this trying to figure out what to say about it. I just… don’t know. What’s the worst part? json_encoding individual arguments and then imploding them? Capital HTML tags? A $sep variable that doesn’t need to exist? (Trust me on this one.) Maybe it’s that it’s a line of PHP outputting a Javascript function call that in turn uses document.write to output a text HTML input field? Yeah, it’s probably that one.

Planet Debian: Russ Allbery: Review: Harrow the Ninth

Review: Harrow the Ninth, by Tamsyn Muir

Series: The Locked Tomb #2
Publisher: Tor
Copyright: 2020
ISBN: 1-250-31320-1
Format: Kindle
Pages: 510

Harrow the Ninth is a direct sequel to Gideon the Ninth and under absolutely no circumstances should you start reading here. You would be so lost. If you plan on reading this series, read the books as closely together as you can so that you can remember the details of the previous book. You may still resort to re-reading or searching through parts of the previous book as you go.

Muir is doing some complex structural work with Harrow the Ninth, so it's hard to know how much to say about it without spoiling some aspect of it for someone. I think it's safe to say this much: As advertised by the title, we do get a protagonist switch to Harrowhark. However, unlike Gideon the Ninth, it's not a single linear story. The storyline that picks up after the conclusion of Gideon is interwoven with apparent flashbacks retelling the story of the previous book from Harrowhark's perspective. Or at least it might have been the story of the previous book, except that Ortus is Harrowhark's cavalier, Gideon does not appear, and other divergences from the story we previously read become obvious early on.

(You can see why memory of Gideon the Ninth is important.)

Oh, and one of those storylines is written in the second person. Unlike some books that use this as a gimmick, this is for reasons that are eventually justified and partly explained in the story, but it's another example of the narrative complexity. Harrow the Ninth is dropping a lot of clues (and later revelations) in both story events and story structure, many of which are likely to upend reader expectations from the first book.

I have rarely read a novel that is this good at fulfilling the tricky role of the second book of a trilogy. Gideon the Ninth was, at least on the surface, a highly entertaining, linear, and relatively straightforward escape room mystery, set against a dying-world SF background that was more hinted at than fleshed out. Harrow the Ninth revisits and reinterprets that book in ways that add significant depth without feeling artificial. Bits of scenery in the first book take on new meaning and intention. Characters we saw only in passing get a much larger role (and Abigail is worth the wait). And we get a whole ton of answers: about the God Emperor, about Lyctors, about the world, about Gideon and Harrowhark's own pasts and backgrounds, and about the locked tomb that is at the center of the Ninth House. But there is still more than enough for a third book, including a truly intriguing triple cliffhanger ending. Harrow the Ninth is both satisfying in its own right and raises new questions that I'm desperate to see answered in the third book.

Also, to respond to my earlier self on setting, this world is not a Warhammer 40K universe, no matter how much it may have appeared in the glimpses we got in Gideon. The God Emperor appears directly in this book and was not at all what I was expecting, if perhaps even more disturbing. Muir is intentionally playing against type, drawing a sharp contrast between the God Emperor and the dramatic goth feel of the rest of the universe and many of the characters, and it's creepily effective and goes in a much different ethical direction than I had thought. (That said, I will warn that properly untangling the ethical dilemmas of this universe is clearly left to the third book.)

I mentioned in my review of Gideon the Ninth that I was happy to see more SF pulling unapologetically from fanfic. I'm going to keep beating that drum in this review in part because I think the influence may be less obvious to the uninitiated. Harrow the Ninth is playing with voice, structure, memory, and chronology in ways that I suspect the average reader unfamiliar with fanfic may associate more with literary fiction, but they would be wrongly underestimating fanfic if they did so. If anything, the callouts to fanfic are even clearer. There are three classic fanfic alternate universe premises that appear in passing, the story relies on the reader's ability to hold a canonical narrative and an alternate narrative in mind simultaneously, and the genre inspiration was obvious enough to me that about halfway through the novel I correctly guessed one of the fanfic universes in which Muir has written. (I'm not naming it here since I think it's a bit of a spoiler.)

And of course there's the irreverence. There are some structural reasons why the narrative voice isn't quite as good as Gideon the Ninth at the start, but rest assured that Muir makes up for that by the end of the book. My favorite scenes in the series so far happen at the end of Harrow the Ninth: world-building, revelations, crunchy metaphysics, and irreverent snark all woven beautifully together. Muir has her characters use Internet meme references like teenagers, which is a beautiful bit of characterization because they are teenagers. In a world that's heavy on viscera, skeletons, death, and horrific monsters, it's a much needed contrast and a central part of how the characters show defiance and courage. I don't think this will work for everyone, but it very much works for me. There's a Twitter meme reference late in the book that had me laughing out loud in delight.

Harrow the Ninth is an almost perfect second book, in that if you liked Gideon the Ninth, you will probably love Harrow the Ninth and it will make you like Gideon the Ninth even more. It does have one major flaw, though: pacing.

This was also my major complaint about Gideon, primarily around the ending. I think Harrow the Ninth is a bit better, but the problem has a different shape. The start of the book is a strong "what the hell is going on" experience, which is very effective, and the revelations are worth the build-up once they start happening. In between, though, the story drags on a bit too long. Harrow is sick and nauseated at the start of the book for rather longer than I wanted to read about, there is one more Lyctor banquet than I think was necessary to establish the characters, and I think there's a touch too much wandering the halls.

Muir also interwove two narrative threads and tried to bring them to a conclusion at the same time, but I think she had more material for one than the other. There are moments near the end of the book where one thread is producing all the payoff revelations the reader has been waiting for, and the other thread is following another interminable and rather uninteresting fight scene. You don't want your reader saying "argh, no" each time you cut away to the other scene. It's better than Gideon the Ninth, where the last fifth of the book is mostly a running battle that went on way longer than it needed to, but I still wish Muir had tightened the story throughout and balanced the two threads so that we could stay with the most interesting one when it mattered.

That said, I mostly noticed the pacing issues in retrospect and in talking about them with a friend who was more annoyed than I was. In the moment, there was so much going on here, so many new things to think about, and so much added depth that I devoured Harrow the Ninth over the course of two days and then spent the next day talking to other people who had read it, trading theories about what happened and what will happen in the third book. It was the most enjoyable reading experience I've had so far this year.

Gideon the Ninth was fun; Harrow the Ninth was both fun and on the verge of turning this series into something truly great. I can hardly wait for Alecto the Ninth (which doesn't yet have a release date, argh).

As with Gideon the Ninth, content warning for lots and lots of gore, rather too detailed descriptions of people's skeletons, restructuring bits of the body that shouldn't be restructured, and more about bone than you ever wanted to know.

Rating: 9 out of 10

Planet Debian: Norbert Preining: Plasma 5.20 coming to Debian

The KDE Plasma desktop is soon getting an update to 5.20, and beta versions are out for testing.

Plasma 5.20 is going to be one absolutely massive release! More features, more fixes for longstanding bugs, more improvements to the user interface!

There are lots of new features mentioned in the release announcement; in particular I like that settings changed from their defaults can now be highlighted.

I have been providing builds of KDE related packages for quite some time now, see everything posted under the KDE tag. In the last days I have prepared Debian packages for Plasma 5.19.90 on OBS, for now only targeting Debian/experimental and the amd64 architecture.

These packages require Qt 5.15, which is only available in the experimental suite, and there is no way to simply update to Qt 5.15 since all Qt related packages need to be recompiled against it. So as long as Qt 5.15 doesn’t hit unstable, I cannot really run these packages on my main machine, but I tried a clean Debian virtual machine installing only Plasma 5.19.90 and its dependencies, plus some more packages for a pleasant desktop experience. This worked out quite well: the VM runs Plasma 5.19.90.

Well, bottom line, as soon as we have Qt 5.15 in Debian/unstable, we are also ready for Plasma 5.20!

Cryptogram: Detecting Deep Fakes with a Heartbeat

Researchers can detect deep fakes because they don’t convincingly mimic human blood circulation in the face:

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months.

Planet Debian: Jonathan Carter: Free Software Activities for 2020-09

This month I started working on ways to make hosting access easier for Debian Developers. I also did some work and planning for the MiniDebConf Online Gaming Edition that we’ll likely announce within the next 1-2 days. Just a bunch of content needs to be fixed and a registration bug squashed, and then I think we’ll be ready to send out the call for proposals.

In the meantime, here are my package uploads and sponsoring for September:

2020-09-07: Upload package calamares (3.2.30-1) to Debian unstable.

2020-09-07: Upload package gnome-shell-extension-dash-to-panel (39-1) to Debian unstable.

2020-09-08: Upload package gnome-shell-extension-draw-on-your-screen (6.2-1) to Debian unstable.

2020-09-08: Sponsor package sqlobject (3.8.0+dfsg-2) for Debian unstable (Python team request).

2020-09-08: Sponsor package bidict (0.21.0-1) for Debian unstable (Python team request).

2020-09-11: Upload package catimg (2.7.0-1) to Debian unstable.

2020-09-16: Sponsor package gamemode (1.6-1) for Debian unstable (Games team request).

2020-09-21: Sponsor package qosmic (1.6.0-3) for Debian unstable (Debian Mentors / e-mail request).

2020-09-22: Upload package gnome-shell-extension-draw-on-your-screen (6.4-1) to Debian unstable.

2020-09-22: Upload package bundlewrap (4.2.0-1) to Debian unstable.

2020-09-25: Upload package gnome-shell-extension-draw-on-your-screen (7-1) to Debian unstable.

2020-09-27: Sponsor package libapache2-mod-python (3.5.0-1) for Debian unstable (Python team request).

2020-09-27: Sponsor package subliminal (2.1.0-1) for Debian unstable (Python team request).

Planet Debian: Paul Wise: FLOSS Activities September 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The gensim, cython-blis, python-preshed, pytest-rerunfailures, morfessor, nmslib, visdom and pyemd work was sponsored by my employer. All other work was done on a volunteer basis.

Planet Debian: Utkarsh Gupta: FOSS Activities in September 2020

Here’s my (twelfth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 21st month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

I’ve been busy with my undergraduate studies, but I still squeezed out some time for the regular Debian work. Here’s what I did in Debian this month:

Uploads and bug fixes:

Other $things:

  • Attended the Debian Ruby team meeting. Logs here.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored trace-cmd for Sudip, ruby-asset-sync for Nilesh, and mariadb-mysql-kbs for William.

RuboCop::Packaging - Helping the Debian Ruby team! \o/

This Google Summer of Code, I worked on writing a linter that could flag offenses for lines of code that are very troublesome for Debian maintainers while trying to package and maintain Ruby libraries and applications!

Whilst the GSoC period is over, I’ve been working on improving that tool and have extended that linter to now “auto-correct” these offenses by itself! \o/
You can now just use the -A flag and you’re done! Boom! The ultimate game-changer!

A few quick updates on RuboCop::Packaging:

I’ve also spent a considerable amount of time raising awareness about this and, in a more general sense, about downstream maintenance.
As a result, I raised a bunch of PRs which got a really good response: all 20 of them were merged upstream, fixing these issues.


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twelfth month as a Debian LTS and third month as a Debian ELTS paid contributor.
I was assigned 19.75 hours for LTS and 15.00 hours for ELTS and worked on the following things:
(for LTS, I over-worked for 11 hours last month on the survey so only had 8.75 hours this month!)

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 274-1, fixing CVE-2020-11984, for uwsgi.
    For Debian 8 Jessie, these problems have been fixed in version 2.0.7-1+deb8u3.
  • Issued ELA 275-1, fixing CVE-2020-14363, for libx11.
    For Debian 8 Jessie, these problems have been fixed in version 2:1.6.2-3+deb8u4.
  • Issued ELA 278-1, fixing CVE-2020-8184, for ruby-rack.
    For Debian 8 Jessie, these problems have been fixed in version 1.5.2-3+deb8u4.
  • Also worked on updating the version of clamAV from v0.101.5 to v0.102.4.
    This was a bit of a tricky package to work on since it involved an ABI/API change and was more or less a transition. Super thanks to Emilio for his invaluable help and for taking over the package, finishing it, and uploading it in the end.

Other (E)LTS Work:

  • Front-desk duty from 31-08 to 06-09 and from 28-09 onward for both LTS and ELTS.
  • Triaged apache2, cryptsetup, nasm, node-bl, plinth, qemu, rsync, ruby-doorkeeper, and uwsgi.
  • Marked CVE-2020-15094/symfony as not-affected for Stretch.
  • Marked CVE-2020-{9490,11993}/apache2 as ignored for Stretch.
  • Marked CVE-2020-8244/node-bl as no-dsa for Stretch.
  • Marked CVE-2020-24978/nasm as no-dsa for Stretch.
  • Marked CVE-2020-25073/plinth as no-dsa for Stretch.
  • Marked CVE-2020-15094/symfony as not-affected for Jessie.
  • Marked CVE-2020-14382/cryptsetup as not-affected for Jessie.
  • Marked CVE-2020-14387/rsync as not-affected for Jessie.
  • Auto EOL’ed ark, collabtive, linux, nasm, node-bl, and thunderbird for Jessie.
  • Use mktemp instead of tempfile in bin/auto-add-end-of-life.sh.
  • Attended the fifth LTS meeting. Logs here.
  • General discussion on LTS private and public mailing list.

Until next time.
:wq for today.

Planet Debian: Steinar H. Gunderson: plocate improvements

Since announcing plocate, a number of major and minor improvements have happened, and despite its prototype status, I've basically stopped using mlocate entirely now.

First of all, the database building now uses 90% less RAM, so if you had issues with plocate-build OOM-ing before, you're unlikely to see that happening anymore.

Second, while plocate was always lightning-fast on SSDs or with everything in cache, that isn't always the situation for everyone. It's so annoying having a tool usually be instant, and then suddenly have a 300 ms hiccup just because you searched for something rare. To get that case right, real work had to be done; I couldn't just mmap up the index anymore and search randomly around in it.

Long story short, mmap is out, and io_uring is in. (This requires Linux 5.1 or later and liburing; if you don't have either, it will transparently fall back to synchronous I/O. It will still be faster than before, but not nearly as good.) I've been itching to try io_uring for a while now, and this was suddenly the perfect opportunity. Not because I needed more efficient I/O (in fact, I believe I drive it fairly inefficiently, with lots of syscalls), but because it allows running asynchronous I/O without the pain of threads or old-style aio. It's unusual in that I haven't heard of anyone else doing io_uring specifically to gain better performance on non-SSDs; usually, it's about driving NVMe drives or large amounts of sockets more efficiently.

plocate needs a fair amount of gather reads; e.g., if you search for “plocate”, it needs to go to disk and fetch disk offsets for the posting lists “plo”, “loc”, “oca”, “cat” and “ate”; and most likely, it will be needing all five of them. io_uring allows me to blast off a bunch of reads at once, letting the kernel reorder them as it sees fit; with some luck from the elevator algorithm, I'll get all of them in one revolution of the disk, instead of reading the first one and discovering the disk head had already passed the spot where the second one was. (After that, it needs to look at the offsets and actually get the posting lists, which can then be decoded and intersected. This work can be partially overlapped with the positions.) Similar optimizations exist for reading the actual filenames.
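
To make that batching pattern concrete, here is a minimal sketch of it with liburing. This is my own illustration, not plocate's actual code; the database name, buffer sizes and offsets are all made up:

/* Sketch of the batched-read pattern with liburing.
 * Illustrative only, not plocate's code. Build: cc demo.c -o demo -luring */
#include <fcntl.h>
#include <liburing.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/uio.h>

#define NREADS 5  /* e.g. the posting lists for "plo", "loc", "oca", "cat", "ate" */

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(NREADS, &ring, 0) < 0)
        return 1;  /* pre-5.1 kernel: fall back to synchronous reads here */

    int fd = open("plocate.db", O_RDONLY);
    if (fd < 0) return 1;

    static char buf[NREADS][4096];
    struct iovec iov[NREADS];
    const off_t offset[NREADS] = {0, 65536, 131072, 262144, 524288};  /* made up */

    /* Queue all five reads at once; the kernel (and the disk's elevator)
     * is free to reorder them into a single sweep over the platter. */
    for (int i = 0; i < NREADS; i++) {
        iov[i].iov_base = buf[i];
        iov[i].iov_len = sizeof buf[i];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_readv(sqe, fd, &iov[i], 1, offset[i]);
        io_uring_sqe_set_data(sqe, (void *)(intptr_t)i);
    }
    io_uring_submit(&ring);

    /* Reap completions in whatever order the hardware served them. */
    for (int i = 0; i < NREADS; i++) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0)
            break;
        printf("read %d done: %d bytes\n",
               (int)(intptr_t)io_uring_cqe_get_data(cqe), cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
    return 0;
}

The offsets stand in for wherever the five posting lists happen to live in the index; the point is simply that all five requests are in flight before the first completion is reaped.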

All in all, this reduces long-tail latency significantly; it's hard to benchmark cold-cache behavior faithfully (drop_caches doesn't actually always drop all the caches, it seems), but generally, a typical cold-cache query on my machine seems to go from 200–400 to 40–60 ms.

There's one part left that is synchronous; once a file is found, plocate needs to go call access() to check that you're actually allowed to see it. (Exercise: Run a timing attack against mlocate or plocate to figure out which files exist on the system that you are not allowed to see.) io_uring doesn't support access() as a system call yet; I managed to sort-of fudge it by running a statx() asynchronously, which then populates the dentry cache enough that synchronous access() on the same directory is fast, but it didn't seem to help actual query times. I guess that in a typical query (as opposed to “plocate a/b”, which will give random results all over the disk), you hit only a few directories anyway, and then you're just at the mercy of the latency distribution of getting that metadata. And you still get the overlap with the loads of the file name list, so it's not fully synchronous.

plocate also now no longer crashes if you run it without a pattern :-)

Get it at https://git.sesse.net/?p=plocate. There still is no real release. You will need to regenerate your plocate.db, as the file format has changed to allow for fewer seeks.

Cory Doctorow: Attack Surface on the MMT Podcast

I was incredibly happy to appear on the MMT Podcast again this week, talking about economics, science fiction, interoperability, tech workers and tech ethics, and my new novel ATTACK SURFACE, which comes out in the UK tomorrow (Oct 13 US/Canada):

https://pileusmmt.libsyn.com/68-cory-doctorow-digital-rights-surveillance-capitalism-interoperable-socks

We also delved into my new nonfiction book, HOW TO DESTROY SURVEILLANCE CAPITALISM, and how it looks at the problem of misinformation, and how that same point of view is weaponized by Masha, the protagonist and antihero of Attack Surface.

https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59

It’s no coincidence that both of these books came out at the same time, as they both pursue the same questions from different angles:

  • What does technology do?

  • Who does it do it TO and who does it do it FOR?

  • How can we change how technology works?

And:

  • Why do people make tools of oppression, and what will it take to make them stop?

Here’s the episode’s MP3:

http://traffic.libsyn.com/preview/pileusmmt/The_MMT_Podcast_ep_68_Cory_Doctorow_v2.mp3

and here’s their feed:

http://pileusmmt.libsyn.com/rss

Planet Debian: Antoine Beaupré: Presentation tools

I keep forgetting how to make presentations. I had a list of tools in a wiki from a previous job, but that's now private and I don't see why I shouldn't share this (even if for myself!).

So here it is. What's your favorite presentation tool?

Tips

  • if you have some text to present, outline keywords so that you can present your subject without reading every word
  • ideally, don't read from your slides - they are there to help people follow, not for people to read
  • even better: make your slides pretty with only a few words, or don't make slides at all

Further advice:

I'm currently using Pandoc with PDF output (with a trip through LaTeX) for most slides, because PDFs are more reliable and portable than web pages. I've also used Libreoffice, Pinpoint, and S5 (through RST) in the past. I miss Pinpoint, too bad that it died.

Some of my presentations are available in my GitLab.com account:

See also my list of talks and presentations which I can't seem to keep up to date.

Tools

Beamer (LaTeX)

  • LaTeX class
  • Do not use directly unless you are a LaTeX expert or masochist, see Pandoc below
  • see also powerdot
  • Home page

Darkslide

  • HTML, Javascript
  • presenter notes, table of contents, Markdown, RST, Textile, themes, code samples, auto-reload
  • Home page, demo

Impress.js

Impressive

  • simply displays PDFs or images
  • page transitions, overview screen, highlighting
  • Home page

Libreoffice Impress

  • Powerpoint clone
  • Makes my life miserable
  • PDF export, presenter notes, outline view, etc
  • Home page, screenshots

Magicpoint

  • ancestor of everyone else (1997!)
  • text input format, image support, talk timer, slide guides, HTML/Postscript export, draw on slides, X11 output
  • no release since 2008
  • Home page

mdp and lookatme (commandline)

Pandoc

  • Allows converting from basically whatever into slides, including Beamer, DZSlides, reveal.js, slideous, slidy, Powerpoint
  • PDF, HTML, Powerpoint export, presentation notes, full screen background images
  • nice plain text or markdown input format
  • Home page, documentation

PDF Presenter

  • PDF presentation tool, shows presentation notes
  • basically "Keynote for Linux"
  • Home page, pdf-presenter-console in Debian

Pinpoint

  • Native GNOME app
  • Full screen slides, PDF export, live change, presenter notes, pango markup, video, image backgrounds
  • Home page
  • Abandoned since at least 2019

Reveal.js

  • HTML, Javascript
  • PDF export, Markdown, LaTeX support, syntax-highlighting, nested slides, speaker notes
  • Source code, demo

S5

  • HTML, CSS
  • incremental, bookmarks, keyboard controls
  • can be transformed from ReStructuredText (RST) with rst2s5 with python-docutils
  • Home page, demo

sent

  • X11 only
  • plain text, black on white, image support, and that's it
  • from the suckless.org elitists
  • Home page

Sozi

  • Entire presentation is one poster, zooming and jumping around
  • SVG + Javascript
  • Home page, demo

Other options

Another option I have seriously considered is to just generate a series of images with good resolution, hopefully matching the resolution (or at least aspect ratio) of the output device. Then you flip through the images one by one. In that case, any of those image viewers (not an exhaustive list) would work:

Update: it turns out I already wrote a somewhat similar thing when I did a recent presentation. If you're into rants, you might enjoy the README file accompanying the Kubecon rant presentation. TL;DR: "makes me want to scream" and "yet another unsolved problem space, sigh" (referring to "display images full-screen" specifically).

Planet Debian: Bastian Blank: Booting Debian on ThinkPad X13 AMD

Running new hardware is always fun. The problems are endless. The solutions not so much.

So I've got a brand new ThinkPad X13 AMD. It features an AMD Ryzen 5 PRO 4650U, 16GB of RAM and a 256GB NVME SSD. The internal type identifier is 20UF. It runs the latest firmware as of today with version 1.09.

So far I found two problems with it:

  • It refuses to boot my Debian image with Secure Boot enabled.
  • It produces ACPI errors on every key press on the internal keyboard.

Disable Secure Boot

The system silently fails to boot a signed shim and grub from a USB thumb drive. I used one of the Debian Cloud images, which should work properly in this setup and do on my other systems.

The only fix I found was to disable Secure Boot altogether.

Select Linux in firmware

Running Linux 5.8 with the default firmware settings produces ACPI errors on each key press:

ACPI BIOS Error (bug): Could not resolve symbol [\_SB.PCI0.GPP3], AE_NOT_FOUND (20200528/psargs-330)
ACPI Error: Aborting method \_SB.GPIO._EVT due to previous error (AE_NOT_FOUND) (20200528/psparse-529)

This can be "fixed" by setting a strategic setting inside the firmware:
Config > Power > Sleep State to Linux

Planet Debian: Chris Lamb: Free software activities in September 2020

Here is my monthly update covering what I have been doing in the free software world during September 2020 (previous month):

  • As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet as well as the usual internal discussions, etc. I participated in the OSI's inaugural State of the Source conference and began the 'onboarding' of a new project to SPI.

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


diffoscope

I made the following changes to diffoscope, including preparing and uploading versions 159 and 160 to Debian:

  • New features:

    • Show "ordering differences" only in strings(1) output by applying the ordering check to all differences across the codebase. [...]
  • Bug fixes:

    • Mark some PGP tests that they require pgpdump, and check that the associated binary is actually installed before attempting to run it. (#969753)
    • Don't raise exceptions when cleaning up after guestfs cleanup failure. [...]
    • Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump against all files that are recognised by file(1) as data. [...]
  • Codebase improvements:

    • Add some documentation for the EXTERNAL_TOOLS dictionary. [...]
    • Abstract out a variable we use a couple of times. [...]
  • diffoscope.org website improvements:

    • Make the (long) demonstration GIF less prominent. [...]

§


Debian

Lintian

For Lintian, the static analysis tool for Debian packages, I uploaded versions 2.93.0, 2.94.0, 2.95.0 & 2.96.0 (not counting uploads to the backports repositories), as well as:

  • Bug fixes:

    • Don't emit odd-mark-in-description for large numbers such as 300,000. (#969528)
    • Update the expected Vcs-{Browser,Git} location of modules and applications maintained by recently-merged Python module/app teams. (#970743)
    • Relax checks around looking for the dh(1) sequencer by not looking for the preceding target:. (#970920)
    • Don't try and open debian/patches/series if it does not exist. [...]
    • Update all $LINTIAN_VERSION assignments in scripts and not just the ones we specify; we had added and removed some during development. [...]
  • Tag updates:

  • Developer documentation updates:

    • Add prominent and up-to-date information on how to run the testsuite. (#923696)
    • Drop recommendation to update debian/changelog manually. [...]
    • Apply wrap-and-sort -sa to the debian subdirectory. [...]
    • Merge data/README into CONTRIBUTING.md for greater visibility [...] and move CONTRIBUTING.md to use #-style Markdown headers [...].

Debian LTS

This month I've worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

Uploads

Bugs filed

  • bluez-source: Contains bluez-source-tmp directory. (#970130)

  • bookworm: Manual page contains debugging/warning/error information from running binary. (#970277)

  • jhbuild: Missing runtime dependency on python3-distutils. (#971418)

  • wxwidgets3.0: Links in documentation points to within the original build path, not the installed path. (#970431)

Sam Varghese: One feels sorry for Emma Alberici, but that does not mask the fact that she was incompetent

Last month, the Australian Broadcasting Corporation, a taxpayer-funded entity, made several people redundant due to a cut in funding by the Federal Government. Among them was Emma Alberici, a presenter who has been lionised a great deal as someone with great talents, but who is actually a mediocre hack who lacks ability.

What marked Alberici out is the fact that she had the glorious title of chief economics correspondent at the ABC, but was never seen on any TV show giving her opinion about anything to do with economics. Over the last two years, China and the US have been engaged in an almighty stoush; Australia, as a country that considers the US as its main ally and has China as its major trading partner, has naturally been of interest too.

But the ABC always put forward people like Peter Ryan, a senior business correspondent, or Ian Verrender, the business editor, when there was a need for someone to appear during a news bulletin and provide a little insight into these matters. Alberici, it seemed, was persona non grata.

The reason why she was kept behind the curtains, it turns out, was because she did not really know much about economics. This was made plain in April 2018 when she wrote what columnist Joe Aston of the Australian Financial Review described as “an undergraduate yarn touting ‘exclusive analysis’ of one publicly available document from which she derived that ‘one in five of Australia’s top companies has paid zero tax for the past three years’ and that ‘Australia’s largest companies haven’t paid corporate tax in 10 years’.”

This article drew much criticism from many people, including politicians, who complained to the ABC. Her magnum opus has now disappeared from the web – an archived version is here. The fourth paragraph read: “Exclusive analysis released by ABC today reveals one in five of Australia’s top companies has paid zero tax for the past three years.” A Twitter teaser said: “Australia’s largest companies haven’t paid corporate tax in 10 years.”

As Aston pointed out in withering tones: “Both premises fatally expose their author’s innumeracy. The first is demonstrably false. Freely available data produced by the Australian Taxation Office show that 32 of Australia’s 50 largest companies paid $19.33 billion in company tax in FY16 (FY17 figures are not yet available). The other 18 paid nothing. Why? They lost money, or were carrying over previous losses.

“Company tax is paid on profits, so when companies make losses instead of profits, they don’t pay it. Amazing, huh? And since 1989, the tax system has allowed losses in previous years to be carried forward – thus companies pay tax on the rolling average of their profits and losses. This is stuff you learn in high school. Except, obviously, if your dream by then was to join the socialist collective at Ultimo, to be a superstar in the cafes of Haberfield.”

As expected, Alberici protested and even called in her lawyer when the ABC took down the article. It was put up again with several corrections. But the damage had been done and the great economics wizard had been unmasked. She did not take it well, protesting that 17 years ago she was nominated for a Walkley Award on tax minimisation.

Underlining her lack of knowledge of economics, Alberici, who was paid a handsome $189,000 per annum by the ABC, exposed herself again in May the same year. In an interview with Labor shadow treasurer Jim Chalmers, she kept pushing him as to what he would do with the higher surpluses that he proposed to run were a Labor government to be elected.

Aston has his own inimitable style, so let me use his words: “This time, Alberici’s utter non-comprehension of public sector accounting is laid bare in three unwitting confessions in her studio interview with Labor’s finance spokesman Jim Chalmers after Tuesday’s Budget.

“[Shadow Treasurer] Chris Bowen on the weekend told my colleague Barrie Cassidy that you want to run higher surpluses than the Government. How much higher than the Government and what would you do with that money?”

“Wearing that unmistakable WTF expression on his dial, Chalmers was careful to evade her illogical query.

“Undeterred, Alberici pressed again. ‘And what will you do with those surpluses?’ A second time, Chalmers dissembled.

“A third time, the cock crowed (this really was as inevitable as the betrayal of Jesus by St Peter). ‘Sorry, no, you said you would run higher surpluses, so what happens to that money?’

“Hanged [sic] by her own persistence. Chalmers, at this point, put her out of her blissful ignorance – or at least tried! ‘Surpluses go towards paying down debt’.

“Bingo. C’est magnifique! Hey, we majored in French. After 25 years in business and finance reporting, Alberici apparently thinks a budget surplus is an account with actual money in it, not a figure reached by deducting the Commonwealth’s expenditure from its revenue.”

Alberici has continued to complain that her knowledge of economics is adequate. She has been extremely annoyed when such criticism comes from any Murdoch publication. But the fact is that she is largely ignorant of the basics of economics.

Her incompetence is not limited to this one field alone. As I have pointed out, there have been occasions in the past, when she has shown that her knowledge of foreign affairs is as good as her knowledge of economics.

After she was made redundant, Alberici published a series of tweets which she later removed. An archive of that is here.

Alberici was the host of a program called Business Breakfast some years ago. It failed and had to be taken off the air. Then she was made the main host of the ABC’s flagship late news program, Lateline. That program was taken off the air last year due to budget cuts. However, Alberici’s performance did not set the Yarra on fire, to put it mildly.

Now she has joined an insurance comparison website, Compare The Market. That company has a bit of a dodgy reputation, as the Australian news website The New Daily pointed out in 2014. As reporter George Lekakis wrote: “The website is owned by a leading global insurer that markets many of its own products on the site. While comparethemarket.com.au offers a broad range of products for life insurance and travel cover, most of its general insurance offerings are underwritten by an insurer known as Auto & General.

“Auto & General is a subsidiary of global financial services group Budget Holdings Limited and is the ultimate owner of comparethemarket.com.au. An investigation by The New Daily of the brands marketed by the website for auto and home insurance reveals a disproportionate weighting to A&G products.”

Once again, the AFR, this time through columnist Myriam Robyn, was not exactly complimentary about Alberici’s new billet. But perhaps it is a better fit for Alberici than the ABC where she was really a square peg in a round hole. She is writing a book in which, no doubt, she will again try to put her case forward and prove she was a brilliant journalist who lost her job due to other people politicking against her. But the facts say otherwise.

Worse Than Failure: CodeSOD: A Poor Facsimile

For every leftpad debacle, there are a thousand “utility belt” libraries for JavaScript. Whether it’s the “venerable” JQuery, or lodash, or maybe Google’s Closure library, there’s a pile of things that usually end up in a 50,000 line util.js file available for use, and packaged up a little more cleanly.

Dan B had a co-worker who really wanted to start using Closure. But they also wanted to be “abstract”, and “loosely coupled”, so that they could swap out implementations of these utility functions in the future.

This meant that they wrote a lot of code like:

initech.util.clamp = function(value, min, max) {
  return goog.math.clamp(value, min, max);
}

But it wasn’t enough to just write loads of functions like that. This co-worker “knew” they’d need to provide their own implementations for some methods, reflecting how the utility library couldn’t always meet their specific business needs.

Unfortunately, Dan noticed some of those “reimplementations” didn’t always behave correctly, for example:

initech.util.isDefAndNotNull(null); //returns true

Let’s look at the Google implementations of some methods, annotated by Dan:

goog.isDef = function(val) {
  return val !== void 0; // tests for undefined
}
goog.isDefAndNotNull = function(val) {
  return val != null; // Also catches undefined since ==/!= instead of ===/!== allows for implicit type conversion to satisfy the condition.
}

Setting aside the problems of coercing vs. non-coercing comparison operators even being a thing, these both do the job they’re supposed to do. But Dan’s co-worker needed to reinvent this particular wheel.

initech.util.isDef = function (val) {
  return val !== void 0; 
}

This code is useless, since we already have it in our library, but whatever. It’s fine and correct.

initech.util.isDefAndNotNull = initech.util.isDef; // bug: isDef(null) is true, since null !== undefined

This developer is the kid who copied someone else’s homework and still somehow managed to get a failing grade. They had a working implementation they could have referenced to see what they needed to do. The names of the methods themselves should have probably provided a hint that they needed to be mildly different. There was no reason to write any of this code, and they still couldn’t get that right.

Dan adds:

Thankfully a lot of this [functionality] was recreated in C# for another project…. I’m looking to bring all that stuff back over… and just destroy this monstrosity we created. Goodbye JavaScript!

Krebs on Security: Who’s Behind Monday’s 14-State 911 Outage?

Emergency 911 systems were down for more than an hour on Monday in towns and cities across 14 U.S. states. The outages led many news outlets to speculate the problem was related to Microsoft‘s Azure web services platform, which also was struggling with a widespread outage at the time. However, multiple sources tell KrebsOnSecurity the 911 issues stemmed from some kind of technical snafu involving Intrado and Lumen, two companies that together handle 911 calls for a broad swath of the United States.

Image: West.com

On the afternoon of Monday, Sept. 28, several states including Arizona, California, Colorado, Delaware, Florida, Illinois, Indiana, Minnesota, Nevada, North Carolina, North Dakota, Ohio, Pennsylvania and Washington reported 911 outages in various cities and localities.

Multiple news reports suggested the outages might have been related to an ongoing service disruption at Microsoft. But a spokesperson for the software giant told KrebsOnSecurity, “we’ve seen no indication that the multi-state 911 outage was a result of yesterday’s Azure service disruption.”

Inquiries made with emergency dispatch centers at several of the towns and cities hit by the 911 outage pointed to a different source: Omaha, Neb.-based Intrado — until last year known as West Safety Communications — a provider of 911 and emergency communications infrastructure, systems and services to telecommunications companies and public safety agencies throughout the country.

Intrado did not respond to multiple requests for comment. But according to officials in Henderson County, NC, which experienced its own 911 failures yesterday, Intrado said the outage was the result of a problem with an unspecified service provider.

“On September 28, 2020, at 4:30pm MT, our 911 Service Provider observed conditions internal to their network that resulted in impacts to 911 call delivery,” reads a statement Intrado provided to county officials. “The impact was mitigated, and service was restored and confirmed to be functional by 5:47PM MT.  Our service provider is currently working to determine root cause.”

The service provider referenced in Intrado’s statement appears to be Lumen, a communications firm and 911 provider that until very recently was known as CenturyLink Inc. A look at the company’s status page indicates multiple Lumen systems experienced total or partial service disruptions on Monday, including its private and internal cloud networks and its control systems network.

Lumen’s status page indicates the company’s private and internal cloud and control system networks had outages or service disruptions on Monday.

In a statement provided to KrebsOnSecurity, Lumen blamed the issue on Intrado.

“At approximately 4:30 p.m. MT, some Lumen customers were affected by a vendor partner event that impacted 911 services in AZ, CO, NC, ND, MN, SD, and UT,” the statement reads. “Service was restored in less than an hour and all 911 traffic is routing properly at this time. The vendor partner is in the process of investigating the event.”

It may be no accident that both of these companies are now operating under new names, as this would hardly be the first time a problem between the two of them has disrupted 911 access for a large number of Americans.

In 2019, Intrado/West and CenturyLink agreed to pay $575,000 to settle an investigation by the Federal Communications Commission (FCC) into an Aug. 2018 outage that lasted 65 minutes. The FCC found that incident was the result of a West Safety technician bungling a configuration change to the company’s 911 routing network.

On April 6, 2014, some 11 million people across the United States were disconnected from 911 services for eight hours thanks to an “entirely preventable” software error tied to Intrado’s systems. The incident affected 81 call dispatch centers, rendering emergency services inoperable in all of Washington and parts of North Carolina, South Carolina, Pennsylvania, California, Minnesota and Florida.

According to a 2014 Washington Post story about a subsequent investigation and report released by the FCC, that issue involved a problem with the way Intrado’s automated system assigns a unique identifying code to each incoming call before passing it on to the appropriate “public safety answering point,” or PSAP.

“On April 9, the software responsible for assigning the codes maxed out at a pre-set limit,” The Post explained. “The counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure.”

Compounding the length of the 2014 outage, the FCC found, was that the Intrado server responsible for categorizing and keeping track of service interruptions classified them as “low level” incidents that were never flagged for manual review by human beings.

The FCC ultimately fined Intrado and CenturyLink $17.4 million for the multi-state 2014 outage. An FCC spokesperson declined to comment on Monday’s outage, but said the agency was investigating the incident.

Planet Debian: Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 03)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri.

Recent Uploads to Debian related to Lomiri

Over the past 4 months I worked on the following bits and pieces regarding Lomiri in Debian:

  • Work on lomiri-app-launch (Debian packaging, upstream work, upload to Debian)
  • Fork lomiri-url-dispatcher from url-dispatcher (upstream work)
  • Upload lomiri-url-dispatcher to Debian
  • Fork out suru-icon-theme and make it its own upstream project
  • Package and upload suru-icon-theme to Debian
  • First glance at lomiri-ui-toolkit (currently FTBFS, needs to be revisited)
  • Update of Mir (1.7.0 -> 1.8.0) in Debian
  • Fix net-cpp FTBFS in Debian
  • Fix FTBFS in gsettings-qt.
  • Fix FTBFS in mir (support of binary-only and arch-indep-only builds)
  • Coordinate with Marius Gripsgard and Robert Tari on shift over from Ubuntu Indicator to Ayatana Indicators
  • Upload ayatana-indicator-* (and libraries) to Debian (new upstream releases)
  • Package and upload to Debian: qmenumodel (still in Debian's NEW queue)
  • Package and upload to Debian: ayatana-indicator-sound
  • Symbol-Updates (various packages) for non-standard architectures
  • Fix FTBFS of qtpim-opensource-src in Debian since Qt5.14 had landed in unstable
  • Fix FTBFS on non-standard architectures of qtsystems, qtpim and qtfeedback
  • Fix wlcs in Debian (for non-standard architectures), more Symbol-Updates (esp. for the mir DEB package)
  • Symbol-Updates (mir, fix upstream tinkering with debian/libmiral3.symbols)
  • Fix FTBFS in lomiri-url-dispatcher against Debian unstable, file merge request upstream
  • Upstream release of qtmir 0.6.1 (via merge request)
  • Improve check_whitespace.py script as used in lomiri-api to ignore debian/ subfolder
  • Upstream release of lomiri-api 0.1.1 and upload to Debian unstable

The next two big projects / packages ahead are lomiri-ui-toolkit and qtmir.

Credits

Many big thanks go to Marius and Dalton for their work on the UBports project and for always being available for questions, feedback, etc.

Thanks to Ratchanan Srirattanamet for providing some of his time for debugging some non-thread-safe unit tests (currently unsure which package we actually looked at...).

Thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Previous Posts about my Debian UBports Team Efforts

Kevin RuddThe Market: China’s fears about America

E&OE TRANSCRIPT
PAPER INTERVIEW
THE MARKET
29 SEPTEMBER 2020

Few Western observers know China better than The Honorable Kevin Rudd. As a young diplomat, the Australian, who speaks fluent Mandarin, was stationed in Beijing in the 1980s. As Australia’s Prime Minister and then Foreign Minister from 2007 to 2012, he led his country through the delicate tension between its most important alliance partner (the USA) and its largest trading partner (China).

Today Mr. Rudd is President of the Asia Society Policy Institute in New York. In an in-depth conversation via Zoom, he explains why a fundamental competition has begun between the two great powers. He would not rule out a hot war: «We know from history that it is easy to start a conflict, but it is bloody hard to end it», he warns.

Mr. Rudd, the conflict between the U.S. and China has escalated significantly over the past three years. To what extent has that escalation been driven by the presence of two strongmen, i.e. Donald Trump and Xi Jinping?

The strategic competition between the two countries is the product of both structural and leadership factors. The structural factors are pretty plain, and that is the continuing change in the balance of power in military, economic, and technological terms. This has an impact on China’s perception of its ability to be more assertive in the region and the world against America. The second dynamic is Xi Jinping’s leadership style, which is more assertive and aggressive than any of his post-Mao predecessors, Deng Xiaoping, Jiang Zemin and Hu Jintao. The third factor is Donald Trump, who obsesses about particular parts of the economy, namely trade and to a lesser degree technology.

Would America’s position be different if someone else than Trump was President?

No. The structural factors about the changing balance of power, as well as Xi Jinping’s leadership style, have caused China to rub up against American interests and values very sharply. Indeed, China is rubbing up against the interests and values of most other Western countries and some Asian democracies as well. Had Hillary Clinton won in 2016, her response would have been very robust. Trump has for the most part been superficially robust, principally on trade and technology. He was only triggered into more comprehensive robustness by the Covid-19 crisis threatening his reelection. If the next President of the U.S. is a Democrat, my judgement would be that the new Administration will be equally but more systematically hard-line in their reaction to China.

Has a new Cold War started?

I don’t like to embrace the language of a Cold War 2.0, because we should not forget that the Cold War of the 20th century had three big defining characteristics: One, the Soviets and the Americans threatened each other with nuclear Armageddon for forty years; two, they fought more than twenty proxy wars around the world; and three, they had zero economic engagement with each other. The current conflict between the U.S. and China on the other hand is characterised by two things: One, an economic decoupling in areas such as trade, supply chains, foreign direct investment, capital markets, technology, and talent; and two, an increasingly sharp ideological war on values. The Chinese authoritarian capitalist model has asserted itself beyond China and challenges America.

How do you see that economic decoupling playing out?

The three formal instruments of power in the U.S. to enforce decoupling are entity listing, the new export control regime, and thirdly, the new powers given to the Committee on Foreign Investment in the United States, CFIUS. Those are powerful instruments which potentially affect third countries as well, through sanctions imposed under the entity list. Take the example of semiconductors, where the recent changes to the entity list virtually limit the export of semiconductors to a defined list of Chinese enterprises from anywhere in the world, as long as they are based on American intellectual property.

These measures have cut off Chinese companies like Huawei or SMIC from acquiring high-end semiconductor technology anywhere in the world. The reaction in Beijing has been muted so far, with no direct retaliation. Why?

In China there is a division of opinion on the question of how to respond. The hawks have an «eye for an eye» posture, that’s driven both by a perception of strategy, but also with an eye on domestic sentiment. The America doves within the leadership – and they do exist – argue a different proposition. They think China is not yet ready for a complete decoupling. If it’s going to happen, they at least try to slow it down. Plus, they want to keep their powder dry until they see the outcome of the election and what the next Administration will do. That’s the reason why we have seen only muted responses so far.

Isn’t it the case that both sides would lose if they drive decoupling too far? And given that, could it be that there won’t be any further decoupling?

We are past that point. Whoever wins the election, America is resolved on decoupling in a number of defined areas. First and foremost in those global supply chains where the products are too crucial to the U.S. to depend on Chinese supply. Think medical supplies or pharmaceuticals. The second area is in defined critical technologies. The Splinternet is not just a term, it’s becoming a reality. Thirdly, you will see a partial decoupling on the global supply of semiconductors to China. Not just those relevant to 5G and Artificial Intelligence, but semiconductors in general. The centrality of microchips to computing power for all purposes, and their spectrum of application in the civilian and military economy, is huge. Fourth, I think foreign direct investment in both directions will shrink to zero. The fifth area of decoupling is happening in talent markets. The hostility towards Chinese students in the U.S. is reaching ridiculous proportions.

Do you see a world divided into two technology spheres, one with American standards and one with Chinese standards?

This is the logical consequence. Assume you have Huawei 5G systems rolled out across the 75 countries that take part in the Belt and Road Initiative; what follows from that is a series of industry standards that become accepted and compatible within those countries, as opposed to those that rely on American systems. But then another set of questions arises: Let’s say China is effectively banned from purchasing semiconductors based on American technology and is dependent on domestic supply. Chinese semiconductors are slower than their American counterparts, and are likely to remain so for the decade ahead. Do the BRI countries accept a slower microprocessor product standard for being part of the Chinese technological ecosystem? I don’t know the answer to that, but I think your prognosis of two technology spheres is correct.

China throws huge amounts of money into the project of building up its semiconductor capabilities. Are they still so far behind?

Despite their efforts to buy, borrow or steal intellectual property, they consistently remain three to seven years behind the U.S., Taiwan and South Korea, i.e. behind the likes of Intel, TSMC and Samsung. It’s obviously hard to reverse engineer semiconductors, as opposed to a Tupolev, which, as the Soviets had to find out, can be reverse engineered over a weekend.

Wouldn’t America hurt itself too much if it cut off China from its semiconductor industry altogether?

There is an argument that 50% of the profits of the US semiconductor industry come from their clients in China. That money funds their R&D in order to keep three to seven years ahead of China. The U.S. Department of Defense knows that, which is why the Pentagon doesn’t necessarily side with the anti-China hawks on this issue. So the debate between the US semiconductor industry and the perceived national security interest has yet to be resolved. It has been resolved in terms of AI chips, and Huawei is the first big casualty of that. But for semiconductors in general the question is not settled yet.

Will countries in Europe or Southeast Asia be forced to decide which technology sphere they want to belong to?

Until the 5G revolution, they have managed to straddle both worlds. But now China has banked on the strategy of being the technology leader in certain categories, and the one they are in at the moment is in 5G technology and systems. If you look at the next five years, and if you look at the success of China in the other technology categories in its Made in China 2025 Strategy, then it becomes clear that we increasingly are going to end up in a binary technology world. Policy makers in various nations will have to answer questions around the relative sophistication of the technology, industry standards, concerns on privacy and data governance, and of course a very big question: What are our points of exposure to the U.S. or China? What will we lose in our China relationship by joining the American sphere in certain fields, and vice versa? Those variables will impact the decision making processes everywhere from Bern to Berlin to Bangkok.

But in the end, they will have to choose?

Yes. India for example has done it in the field of 5G. For India, that was a big call, given the size of its market and China’s desire to bring India slowly but surely towards its side of the Splinternet.

The third field of conflict after trade and technology lies in financial markets. We know that Washington can weaponise the Dollar if it wants to. So far, this front has been rather quiet. What are your expectations?

Two measures have been taken so far by the Trump Administration. One, the direction to U.S. government pension funds not to invest in Chinese listed companies; and two, the direction to the New York Stock Exchange not to sustain the listing of Chinese firms unless they conform to global accounting standards. I regard these as two simple shots across the bow.

With more to follow?

Like on most other things including technology, the U.S. is divided in terms of where its interests lie. Just like Silicon Valley, Wall Street has a big interest in not having too harsh restrictions on financial markets. Just look at the volume of business. Financial market collaboration between the Chinese and American financial systems in terms of investment flows for Treasuries, bonds and equities is a $5 trillion per year business. This is not small. I presume the reason we have only seen two rather small warning shots so far is that the Administration is deeply sensitive to Wall Street interests, led by Secretary of the Treasury Steven Mnuchin. Make no mistake: Given the serious Dollars at stake in financial markets, an escalation there will make the trade war look like child’s play.

Which way will it resolve?

The risk I see is the Chinese cracking down further in Hong Kong. If there is an eruption of protests resulting in violence, we should not be surprised by the possibility of Washington deciding to de-link the U.S. financial system from the Hong Kong Dollar and the Hong Kong financial market. That would be a huge step.

How probable is it that Washington would choose to weaponise the Dollar?

We don’t know. The Democrats in my judgement do not have a policy on that at present. Perhaps the best way to respond to your question is to say this: There are three things that China still fears about America. The US military, its semiconductor industry, and the Dollar. If you are attentive to the internalities of the Chinese national economic self-sufficiency debate at present, it often is expressed in terms of «Let’s not allow to happen to us in financial markets what is happening in technology markets». But if the U.S. goes into hardline mode against China for general strategy purposes, then the only thing that would deter Washington is the amount of self-harm it would inflict on Wall Street, if it is forced to decouple from the Chinese market. If that would happen, it would place Frankfurt, Zurich, Paris or the City of London in a pretty interesting position.

Would you say that there is a form of mutually assured destruction, MAD, in financial markets, which would prevent the U.S. from going into full hardline mode?

If the Americans wanted to send a huge warning shot to the Chinese, they are probably more disposed towards using sectoral measures, like the one I outlined for Hong Kong, and not comprehensive measures. But never forget: American political elites, Republicans and Democrats, have concluded that Xi Jinping’s China is not a status quo power, but that it wishes to replace the U.S. in its position of global leadership. Therefore, the inherent rationality or irrationality of individual measures is no longer necessarily self-evident against the general strategic question. The voices in America to prevent a financial decoupling from China are strong at present, but that does not necessarily mean they will prevail.

China’s strategy, meanwhile, is to welcome U.S. banks with open arms. Is it working?

The general strategy of China is that the more economic enmeshment occurs – not just with the U.S., but also with Europe, Japan and the like – the less likely countries are to take a hard-line policy against Beijing. China is a master at using its economic leverage to secure foreign policy and national security policy ends. They know this tool very well. The more friends you have, be it JPMorgan, Morgan Stanley or the big technology firms of Silicon Valley, the more it complicates the decision making process in the West. China knows that. I’m sure you’ve heard it a thousand times from Swiss companies as well: How can we grow a global business and ignore the Chinese market? Every company in the world is asking that question.

You wrote an article in Foreign Affairs in August, warning of a potential military conflict triggered by events in the South China Sea or Taiwan. Do you really see the danger of a hot war?

I don’t mean to be alarmist, not at all. But I was talking to too many people both in Washington and Beijing that were engaged in scenario planning, to believe that this was any longer just theoretical. My simple thesis is this: These things can happen pretty easily once you have whipped up nationalist narratives on both sides and then have a major incident that goes wrong. A conflict is easy to start, but history tells us they are bloody hard to stop.

Of course the main argument against that is that there is too much at stake on both sides, which will prevent an escalation into a hot war.

You see, within that argument lies the perceived triumph of European rationality over East Asian reality. All that European rationality worked really well in 1914, when nobody thought that war was inevitable. The title of my article, Beware the Guns of August, referred to the period between the assassination in Sarajevo at the end of June and the failure of diplomacy in July, when miscommunication, poor signalling and the dynamics of mobilisation in the end led to a situation that neither the Kaiser nor the Czar could stop. Nationalism is as poisonous today as it was in Europe for centuries. It’s just that you’ve all killed each other twice before you found out it was a bad idea. Remember, in East Asia, we have the rolling problems of our own version of Alsace-Lorraine: it’s called Taiwan.

Influential voices in Washington say that the time of ambiguity is over. The U.S. should make its support for Taiwan explicit. Do you agree?

I don’t. If you change your declaratory policy on Taiwan, then there is a real danger that you by accident create the crossing of a red line in Chinese official perception, and you bring on the very crisis you are seeking to avoid. It’s far better if you simply had an operational strategy, which aims to maximally enhance Taiwan’s ability to deter a Chinese attack.

Over the past years, the Chinese Communist Party has morphed into the Party of Xi. How do you see the internal dynamics within the CCP playing out over the coming years?

Xi Jinping’s position as Paramount Leader makes him objectively the most powerful Chinese leader since Mao. During the days of Deng Xiaoping, there were counterweighting voices to Deng, represented at the most senior levels, and there was a debate of economic and strategic policy between them. The dynamics of collective leadership applied then, they applied under Jiang Zemin, they certainly applied under Hu Jintao. They now no longer apply.

What will that mean for the future?

In the seven years he’s been in power so far, China moved to the left on domestic politics, giving a greater role to the Party. In economic policy, we’ve seen it giving less headroom for the private sector. China has become more nationalist and more internationally assertive as a consequence of it becoming more nationalist. There are however opposing voices among the top leadership, and the open question is whether these voices can have any coalescence in the lead-up to the 20th Party Congress in 2022, which will decide on whether Xi Jinping’s term is extended. If it is extended, you can say he then becomes leader for life. That will be a seminal decision for the Party.

What’s your prediction?

For a range of internal political reasons, which have to do with power politics, plus disagreements on economic policy and some disagreements on foreign policy, the internal political debate in China will become sharper than we have seen so far. If I was a betting man, at this stage, I would say it is likely that Xi will prevail.

«It's obviously hard to reverse engineer semiconductors.»

The post The Market: China’s fears about America appeared first on Kevin Rudd.

Planet DebianVincent Bernat: Speeding up bgpq4 with IRRd in a container

When building route filters with bgpq4 or bgpq3, the speed of rr.ntt.net or whois.radb.net can be a bottleneck. Updating many filters may take several tens of minutes, depending on the load:

$ time bgpq4 -h whois.radb.net AS-HURRICANE | wc -l
909869
1.96s user 0.15s system 2% cpu 1:17.64 total
$ time bgpq4 -h rr.ntt.net AS-HURRICANE | wc -l
927865
1.86s user 0.08s system 12% cpu 14.098 total

A possible solution is to have your own IRRd instance in your network, mirroring the main routing registries. A close alternative is to bundle IRRd with all the data in a ready-to-use Docker image. This also has the advantage of easy integration into a Docker-based CI/CD pipeline.

$ git clone https://github.com/vincentbernat/irrd-legacy.git -b blade/master
$ cd irrd-legacy
$ docker build . -t irrd-snapshot:latest
[…]
Successfully built 58c3e83a1d18
Successfully tagged irrd-snapshot:latest
$ docker container run --rm --detach --publish=43:43 irrd-snapshot
4879cfe7413075a0c217089dcac91ed356424c6b88808d8fcb01dc00eafcc8c7
$ time bgpq4 -h localhost AS-HURRICANE | wc -l
904137
1.72s user 0.11s system 96% cpu 1.881 total

The Dockerfile contains three stages:

  1. building IRRd,1
  2. retrieving various IRR databases, and
  3. assembling the final container with the result of the two previous stages.

The second stage fetches the databases used by rr.ntt.net: NTTCOM, RADB, RIPE, ALTDB, BELL, LEVEL3, RGNET, APNIC, JPIRR, ARIN, BBOI, TC, AFRINIC, ARIN-WHOIS, and REGISTROBR. However, it misses RPKI.2 Feel free to adapt!

The image can be scheduled to be rebuilt daily or weekly, depending on your needs. The repository includes a .gitlab-ci.yaml file automating the build and triggering the compilation of all filters by your CI/CD upon success.
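
For example, a nightly rebuild can be driven from cron. The following is only a hypothetical sketch; the schedule, checkout path and image tag are placeholders to adapt:

# /etc/cron.d/irrd-snapshot: rebuild the image every night at 02:30
30 2 * * * root cd /srv/irrd-legacy && docker build --pull -t irrd-snapshot:latest .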


  1. Instead of using the latest version of IRRd, the image relies on an older version that does not require a PostgreSQL instance and uses flat files instead. ↩︎

  2. Unlike the others, the RPKI database is built from the published RPKI ROAs. They can be retrieved with rpki-client and transformed into RPSL objects to be imported in IRRd. ↩︎

Worse Than FailureCodeSOD: Taking Your Chances

A few months ago, Sean had some tasks around building a front-end for some dashboards. Someone on the team picked a UI library for managing their widgets. It had lovely features like handling flexible grid layouts, collapsing/expanding components, making components full screen and even drag-and-drop widget rearrangement.

It was great, until one day it wasn't. As more and more people started using the front end, they kept getting more and more reports about broken dashboards, misrendering, duplicated widgets, and all sorts of other difficult-to-explain bugs. These were bugs which Sean and the other developers had a hard time replicating, and even on the rare occasions that they did replicate one, they couldn't do it twice in a row.

Stumped, Sean took a peek at the code. Each on-screen widget was assigned a unique ID. Except at those times when the screen broke: the IDs weren't unique. So clearly, something went wrong with the ID generation, which seemed unlikely. All the IDs were random junk, like 7902699be4, so there shouldn't be a collision, right?

generateID() {
    return sha256(Math.floor(Math.random() * 100)).substring(1, 10);
}

Good idea: feeding random values into a hash function to generate unique IDs. Bad idea: constraining your range of random values to integers between 0 and 99.

SHA256 will take inputs like 0, and output nice long, complex strings like "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9". And a mildly different input will generate a wildly different output. This would be good for IDs, if you had a reasonable expectation that the values you're seeding are themselves unique. For this use case, Math.random() is probably good enough. Even after trimming to the first ten characters of the hash, I only hit two collisions for every million IDs in some quick tests. But Math.floor(Math.random() * 100) is going to give you a collision a lot more frequently. Even if you have a small number of widgets on the screen, a collision is probable. Just rare enough to be hard to replicate, but common enough to be a serious problem.
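
Just how probable is easy to check with the birthday problem. Here is a standalone sketch (my own, not the library's code) that computes the chance of at least one duplicate:

def collision_probability(n, d):
    # Chance of at least one duplicate when drawing n IDs from d possible values.
    p_unique = 1.0
    for i in range(n):
        p_unique *= (d - i) / d
    return 1 - p_unique

print(collision_probability(12, 100))    # ~0.50: a 12-widget dashboard collides half the time
print(collision_probability(12, 2**52))  # ~1.5e-14: Math.random()'s full range would be fine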

Sean did not share what they did with this library. Based on my experience, it was probably "nothing, we've already adopted it" and someone ginned up an ugly hack that patches the library at runtime to replace the method. Despite the library being open source, nobody in the organization wants to maintain a fork, and legal needs to get involved if you want to contribute back upstream, so just runtime patch the object to replace the method with one that works.

At least, that's a thing I've had to do in the past. No, I'm not bitter.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianNorbert Preining: Performance with Intel i218/i219 NIC

I always had the feeling that my server, hosted by Hetzner, somehow had a slow internet connection. I put it down to the distance between Finland and Japan, and didn’t care too much. Until yesterday, when my server stopped reacting to pings/ssh and needed a hard reset. It turned out that the server was running fine, only the ethernet card had hung. Hetzner support answered promptly and directed me to this web page, which described a change in the kernel concerning fragmentation offloading, and suggested the following configuration to regain connection speed:

ethtool -K <interface> tso off gso off

And to my surprise, this simple thing did wonders: the connection speed improved dramatically, even from Japan (something like a factor of 10 in large rsync transfers). I have added this incantation to the system crontab and run it every hour, just to be sure that it stays in effect even after a reboot.
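
Something like the following /etc/cron.d entry does the job; this is a sketch, and the interface name is a placeholder you need to adapt:

# Re-apply the offload settings every hour
0 * * * * root /sbin/ethtool -K eth0 tso off gso off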

If you have bad connection speed with this kind of ethernet card, give it a try.

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 16)

Here’s part sixteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Cryptogram Negotiating with Ransomware Gangs

Really interesting conversation with someone who negotiates with ransomware gangs:

For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others), and even if payment is arguably unlawful, it seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand seems less of a legal, and more of a practical, determination, almost like a cost-benefit analysis.

The arguments for rendering a ransomware payment include:

  • Payment is the least costly option;
  • Payment is in the best interest of stakeholders (e.g. a hospital patient in desperate need of an immediate operation whose records are locked up);
  • Payment can avoid being fined for losing important data;
  • Payment means not losing highly confidential information; and
  • Payment may mean not going public with the data breach.

The arguments against rendering a ransomware payment include:

  • Payment does not guarantee that the right encryption keys with the proper decryption algorithms will be provided;
  • Payment further funds additional criminal pursuits of the attacker, enabling a cycle of ransomware crime;
  • Payment can do damage to a corporate brand;
  • Payment may not stop the ransomware attacker from returning;
  • If victims stopped making ransomware payments, the ransomware revenue stream would stop and ransomware attackers would have to move on to perpetrating another scheme; and
  • Using Bitcoin to pay a ransomware attacker can put organizations at risk. Most victims must buy Bitcoin on entirely unregulated and free-wheeling exchanges that can also be hacked, leaving buyers’ bank account information stored on these exchanges vulnerable.

When confronted with a ransomware attack, the options all seem bleak. Pay the hackers, and the victim may not only prompt future attacks, but there is also no guarantee that the hackers will restore a victim’s dataset. Ignore the hackers, and the victim may incur significant financial damage or even find themselves out of business. The only guarantees during a ransomware attack are the fear, uncertainty and dread inevitably experienced by the victim.

Cryptogram Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — and possibly other appliances made by Smarter — to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

Kevin RuddBBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge

E&OE TRANSCRIPT
TV INTERVIEW
BBC World
26 SEPTEMBER 2020

 

Mike Embley
Let’s speak to a man who has a lot of experience with China. Kevin Rudd, former Prime Minister of Australia, now president of the Asia Society Policy Institute. Mr. Rudd, that is a headline-grabbing thing to say at the United Nations General Assembly, isn’t it? Do you think that in a very directed economy, with a population that’s given very little individual choice, it is possible? And if so, how?

Kevin Rudd
I think it is possible. And the reason I say so is because China does have a state planning system. And on top of that, China has concluded, I think, that it’s in its own national interest to bring about a radical reduction in carbon over time. So the two big announcements coming out of Xi Jinping at the UN General Assembly have been, A, the one you’ve just referred to, which is to achieve carbon neutrality before 2060; and also for China to reach what’s called peak greenhouse gas emissions before 2030. The international community has been pressuring them to bring those forward as much as possible: we hope to close to 2050 for carbon neutrality, and close to 2025 as far as peak emissions are concerned, but we’ll have to wait and see until China produces its 14th five-year plan.

Mike Embley
I guess we can probably assume that President Trump wasn’t impressed; he’s made his position pretty clear on these matters. How do you think other countries will respond? Or will they just say: well, that’s China, China can do that, we really can’t?

Kevin Rudd
Well, I think you’re right to characterize President Trump’s reaction as dismissive, because ultimately, he does not seem to be persuaded at all by the climate change science. And the United States under his presidency has been completely missing in action in terms of global work on climate change mitigation. But when you look at the other big player on global climate change action, the European Union, then positively speaking, the European Union, working as they did through their virtual discussions with the Chinese leadership last week, has been directly encouraging the Chinese to take these sorts of actions. I think the Europeans will be pleased by what they have heard. But there’s a caveat to this as well. One of the other things the Europeans asked is for China not just to make concrete its commitments in terms of peak emissions and carbon neutrality, but also not to offshore China’s responsibilities by building a lot of carbon-based or fossil-fuel-based power generation plants in the Belt and Road Initiative countries beyond China’s borders. That’s where we have to see further action by China as well.

Mike Embley
Just briefly, if you can: there will be people yelling at the screen, given how fast climate change is becoming a climate emergency, that 2050-2060 is nowhere near soon enough, and that we have to inflict really massive changes on ourselves.

Kevin Rudd
Well, the bottom line is that if you look at the science, it requires us as a planet to be carbon neutral by 2050. Given that China is the world’s largest emitter by a country mile, I’d say to those throwing shoes at the screen: getting China to that position, given where they were frankly 10 years ago, when I first began negotiating with the Chinese leadership on climate change matters, is a big step forward. And secondly, what I find most encouraging is that China’s national political interests, I think, have been engaged. They don’t want to suffocate their own national economic future any more than they want to suffocate the world’s.

The post BBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge appeared first on Kevin Rudd.

Kevin RuddCNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites

E&OE TRANSCRIPT
TV INTERVIEW
CNN
25 SEPTEMBER 2020

Michael Holmes
Former Australian Prime Minister and president of the Asia Society Policy Institute, Kevin Rudd joins me now. Prime Minister, I remember this well; it was back in 2008 when you did something that was historic: you presented an apology to Indigenous Australians for past wrongs. 12 years later, this is happening. What has changed in Australia in relation to how Aboriginal people are treated?

Kevin Rudd
Well, the Apology in 2008 helped turn a page in the recognition by white Australia of the level of mistreatment of black Australia over the previous couple of hundred years. And some progress has been made in closing the gap in some areas of health and education between indigenous and non-Indigenous Australians. But when you see actions like we’ve just seen with Rio Tinto, in this willful destruction of a 46,000-year-old archaeological site, which even their own internal archaeological advisors said was a major site of archaeological significance in Australia, you scratch your head and can only conclude that Rio Tinto really do see themselves as above and beyond government, and above and beyond the public interest shared by all Australians.

Michael Holmes
And we heard in Angus’ report that the notion of destroying some of these sites for a coal mine or whatever is, to indigenous people, like destroying a war cemetery. Why are indigenous voices not being heard the way they should be? And, let’s be honest, coal is not going to be a thing in 20-30 years, and meanwhile you destroy something that’s been there for 30,000 years. Why aren’t they being heard?

Kevin Rudd
Well, you’re right to pose the question, because when I was Prime Minister, I began by acknowledging indigenous people of this country as the oldest continuing culture on Earth. That is not a small statement. It’s a very large statement, but it’s also an historical and pre-historical fact. And therefore, when we look at the architectural and the archaeological legacy of indigenous settlement in this country, we have a unique global responsibility to act responsibly. I can only put this buccaneer, cavalier attitude down to the corporate arrogance of Rio Tinto and one or two of the other major mining companies (I have a very skeptical view of BHP Billiton in this respect) and their attitude to Indigenous Australians, but also to a very comfortable, far too comfortable relationship with the conservative federal government of Australia, which replaced my government. Which, as you’ve just demonstrated through one of the decisions made by their environment minister, is quite happy to allow indigenous interests and archaeological significance to take a back seat.

Michael Holmes
Yeah, as I said, I remember 2008 well, when that apology was made. It was such a significant moment for Australia and Australians. I mean, I haven’t lived in Australia for 25 years, but have attitudes changed in recent years in Australia, or changed enough when it comes to these sorts of national treasures and the appreciation of them? Has white Australia altered its view of Aboriginal heritage enough?

Kevin Rudd
I think the National Apology and the broader processes of reconciliation in this country, which my government took forward from 2008 onwards, have a continuing effect on the whole country, including white Australians. And what is my evidence of that? The evidence is that when Rio Tinto, in their monumental arrogance, thought they could just slide through by blowing these 46,000-year-old, archaeologically significant ancient caves to bits, even, let me call it, white Australia turned around and said: this is just a bridge too far. Even conservative members of parliament in a relevant parliamentary inquiry scratched their heads and said this is just beyond the pale. And so if you want evidence of the fact that underlying community attitudes have changed, and political attitudes have changed, that’s it. But we still have a buccaneer approach on the part of Rio Tinto, who I think not just in Australia but globally have demonstrated a level of arrogance towards elected democratic governments around the world, whereby Rio Tinto thinks it’s up here, and the elected governments and the norms of the societies in which they operate are down here. Rio Tinto has a major challenge itself, as do other big miners like BHP Billiton, which has to understand that they are dealing with the people’s resource. And they are working within a framework of laws which places an absolute priority on the centrality of the indigenous heritage of all these countries, including Australia.

 

The post CNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites appeared first on Kevin Rudd.

LongNowCharting Earth’s (Many) Mass Extinctions

How many mass extinctions has the Earth had, really? Most people talk today as if it’s five, but where one draws the line determines everything, and some say over twenty.

However many it might be, new mass extinctions seem to reveal themselves with shocking frequency. Just last year researchers argued for another giant die-off just preceding the Earth’s worst, the brutal end-Permian.

Now, one more has come into focus as stratigraphy improves. With that, four of the most awful things that life has ever suffered all came down (in many cases, literally, as volcanic ash) within 60 million years. Not a great span of time with which to spin the wheel, in case your time machine is imprecise…

Planet DebianKentaro Hayashi: dnsZoneEntry: field should be removed when DD is retired

It is known that a Debian Developer can set up *.debian.net entries.

wiki.debian.org

When a Debian Developer retires, the actual DNS entry is removed, but the dnsZoneEntry: field is kept in LDAP (db.debian.org).

So you cannot reuse a *.debian.net subdomain if a retired Debian Developer already owns your preferred one.
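
Whether a name is still claimed can be checked with an anonymous LDAP query. This is a hypothetical sketch, assuming db.debian.org permits anonymous search of this attribute:

ldapsearch -x -H ldap://db.debian.org -b dc=debian,dc=org \
    '(dnsZoneEntry=*)' uid dnsZoneEntry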

I've posted a question about this currently undocumented specification.

lists.debian.org

Planet DebianVincent Bernat: Syncing RIPE, ARIN and APNIC objects with a custom Ansible module

The Internet is split into five regional Internet registries: AFRINIC, ARIN, APNIC, LACNIC and RIPE. Each RIR maintains an Internet Routing Registry. An IRR allows one to publish information about the routing of Internet number resources.1 Operators use this to determine the owner of an IP address and to construct and maintain routing filters. To ensure your routes are widely accepted, it is important to keep the prefixes you announce up-to-date in an IRR.

There are two common tools to query this database: whois and bgpq4. The first one allows you to do a query with the WHOIS protocol:

$ whois -BrG 2a0a:e805:400::/40
[…]
inet6num:       2a0a:e805:400::/40
netname:        FR-BLADE-CUSTOMERS-DE
country:        DE
geoloc:         50.1109 8.6821
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:58Z
last-modified:  2020-05-19T08:04:58Z
source:         RIPE

route6:         2a0a:e805:400::/40
descr:          Blade IPv6 - AMS1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

The second one allows you to build route filters using the information contained in the IRR database:

$ bgpq4 -6 -S RIPE -b AS64476
NN = [
    2a0a:e805::/40,
    2a0a:e805:100::/40,
    2a0a:e805:300::/40,
    2a0a:e805:400::/40,
    2a0a:e805:500::/40
];

There is no module available on Ansible Galaxy to manage these objects. Each IRR has different ways of being updated. Some RIRs propose an API but some don’t. If we restrict ourselves to RIPE, ARIN and APNIC, the only common method to update objects is email updates, authenticated with a password or a GPG signature.2 Let’s write a custom Ansible module for this purpose!

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction, as well as “Syncing MySQL tables” for a more instructive example.

Code

The module takes a list of RPSL objects to synchronize and returns the body of an email update if a change is needed:

- name: prepare RIPE objects
  irr_sync:
    irr: RIPE
    mntner: fr-blade-1-mnt
    source: whois-ripe.txt
  register: irr

Prerequisites

The source file should be a set of objects to sync using the RPSL language. This would be the same content you would send manually by email. All objects should be managed by the same maintainer, which is also provided as a parameter.

Signing and sending the result is not the responsibility of this module. You need two additional tasks for this purpose:

- name: sign RIPE objects
  shell:
    cmd: gpg --batch --user noc@example.com --clearsign
    stdin: "{{ irr.objects }}"
  register: signed
  check_mode: false
  changed_when: false

- name: update RIPE objects by email
  mail:
    subject: "NEW: update for RIPE"
    from: noc@example.com
    to: "auto-dbm@ripe.net"
    cc: noc@example.com
    host: smtp.example.com
    port: 25
    charset: us-ascii
    body: "{{ signed.stdout }}"

You also need to authorize the PGP keys used to sign the updates by creating a key-cert object and adding it as a valid authentication method for the corresponding mntner object:

key-cert:  PGPKEY-A791AAAB
certif:    -----BEGIN PGP PUBLIC KEY BLOCK-----
certif:    
certif:    mQGNBF8TLY8BDADEwP3a6/vRhEERBIaPUAFnr23zKCNt5YhWRZyt50mKq1RmQBBY
[…]
certif:    -----END PGP PUBLIC KEY BLOCK-----
mnt-by:    fr-blade-1-mnt
source:    RIPE

mntner:    fr-blade-1-mnt
[…]
auth:      PGPKEY-A791AAAB
mnt-by:    fr-blade-1-mnt
source:    RIPE

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    irr=dict(type='str', required=True),
    mntner=dict(type='str', required=True),
    source=dict(type='path', required=True),
)

result = dict(
    changed=False,
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

Getting existing objects

To grab existing objects, we use the whois command to retrieve all the objects from the provided maintainer.

# Per-IRR variations:
# - whois server
whois = {
    'ARIN': 'rr.arin.net',
    'RIPE': 'whois.ripe.net',
    'APNIC': 'whois.apnic.net'
}
# - whois options
options = {
    'ARIN': ['-r'],
    'RIPE': ['-BrG'],
    'APNIC': ['-BrG']
}
# - objects excluded from synchronization
excluded = ["domain"]
if irr == "ARIN":
    # ARIN does not return these objects
    excluded.extend([
        "key-cert",
        "mntner",
    ])

# Grab existing objects
args = ["-h", whois[irr],
        "-s", irr,
        *options[irr],
        "-i", "mnt-by",
        module.params['mntner']]
proc = subprocess.run(["whois", *args], capture_output=True)
if proc.returncode != 0:
    raise AnsibleError(
        f"unable to query whois: {args}")
output = proc.stdout.decode('ascii')
got = extract(output, excluded)

The first part of the code sets up some IRR-specific constants: the server to query, the options to provide to the whois command and the objects to exclude from synchronization. The second part invokes the whois command, requesting all objects whose mnt-by field is the provided maintainer. Here is an example of output:

$ whois -h whois.ripe.net -s RIPE -BrG -i mnt-by fr-blade-1-mnt
[…]

inet6num:       2a0a:e805:300::/40
netname:        FR-BLADE-CUSTOMERS-FR
country:        FR
geoloc:         48.8566 2.3522
admin-c:        BN2763-RIPE
tech-c:         BN2763-RIPE
status:         ASSIGNED
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2020-05-19T08:04:59Z
last-modified:  2020-05-19T08:04:59Z
source:         RIPE

[…]

route6:         2a0a:e805:300::/40
descr:          Blade IPv6 - PA1
origin:         AS64476
mnt-by:         fr-blade-1-mnt
remarks:        synced with cmdb
created:        2019-10-01T08:19:34Z
last-modified:  2020-05-19T08:05:00Z
source:         RIPE

[…]

The result is passed to the extract() function. It parses and normalizes the results into a dictionary mapping object names to objects. We store the result in the got variable.

def extract(raw, excluded):
    """Extract objects."""
    # First step, remove comments and unwanted lines
    objects = "\n".join([obj
                         for obj in raw.split("\n")
                         if not obj.startswith((
                                 "#",
                                 "%",
                         ))])
    # Second step, split objects
    objects = [RPSLObject(obj.strip())
               for obj in re.split(r"\n\n+", objects)
               if obj.strip()
               and not obj.startswith(
                   tuple(f"{x}:" for x in excluded))]
    # Last step, put objects in a dict
    objects = {repr(obj): obj
               for obj in objects}
    return objects

RPSLObject() is a class enabling normalization and comparison of objects. Look at the module code for more details.
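
For reference, here is a minimal sketch of what such a class could look like. It is my own illustration, not the module's actual code, which handles more normalization:

class RPSLObject:
    """Sketch: normalize and compare RPSL objects."""
    def __init__(self, raw):
        self.raw = raw
        # Drop attributes that change on every update and would create noise.
        self.lines = [line for line in raw.split("\n")
                      if not line.startswith(("created:", "last-modified:"))]

    def __repr__(self):
        # Identify an object by its first attribute, e.g. "route6: 2a0a:e805:400::/40".
        key, _, value = self.lines[0].partition(":")
        return f"<Object:{key}:{value.strip()}>"

    def __str__(self):
        return "\n".join(self.lines)

    def __eq__(self, other):
        return self.lines == other.lines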

>>> output="""
... inet6num:       2a0a:e805:300::/40
... […]
... """
>>> pprint({k: str(v) for k,v in extract(output, excluded=[])})
{'<Object:inet6num:2a0a:e805:300::/40>':
   'inet6num:       2a0a:e805:300::/40\n'
   'netname:        FR-BLADE-CUSTOMERS-FR\n'
   'country:        FR\n'
   'geoloc:         48.8566 2.3522\n'
   'admin-c:        BN2763-RIPE\n'
   'tech-c:         BN2763-RIPE\n'
   'status:         ASSIGNED\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE',
 '<Object:route6:2a0a:e805:300::/40>':
   'route6:         2a0a:e805:300::/40\n'
   'descr:          Blade IPv6 - PA1\n'
   'origin:         AS64476\n'
   'mnt-by:         fr-blade-1-mnt\n'
   'remarks:        synced with cmdb\n'
   'source:         RIPE'}

Comparing with wanted objects

Let’s build the wanted dictionary using the same structure, thanks to the extract() function we can use verbatim:

with open(module.params['source']) as f:
    source = f.read()
wanted = extract(source, excluded)

The next step is to compare got and wanted to build the diff object:

if got != wanted:
    result['changed'] = True
    if module._diff:
        result['diff'] = [
            dict(before_header=k,
                 after_header=k,
                 before=str(got.get(k, "")),
                 after=str(wanted.get(k, "")))
            for k in set((*wanted.keys(), *got.keys()))
            if k not in wanted or k not in got or wanted[k] != got[k]]

Returning updates

The module does not have a side effect. If there is a difference, we return the updates to send by email. We choose to include all wanted objects in the updates (contained in the source variable) and let the IRR ignore unmodified objects. We also append the objects to be deleted by adding a delete: attribute to each of them.

# We send all source objects and deleted objects.
deleted_mark = f"{'delete:':16}deleted by CMDB"
deleted = "\n\n".join([f"{got[k].raw}\n{deleted_mark}"
                       for k in got
                       if k not in wanted])
result['objects'] = f"{source}\n\n{deleted}"

module.exit_json(**result)

The complete code is available on GitHub. The module supports both --diff and --check flags. It does not return anything if no change is detected. It can work with APNIC, RIPE and ARIN. It is not perfect: it may not detect some changes,3 it is not able to modify objects not owned by the provided maintainer4 and some attributes cannot be modified, requiring one to manually delete and recreate the updated object.5 However, this module should automate 95% of your needs.


  1. Other IRRs exist without being attached to a RIR. The most notable one is RADb. ↩︎

  2. ARIN is phasing out this method in favor of IRR-online. RIPE has an API available, but email updates are still supported and not planned to be deprecated. APNIC plans to expose an API. ↩︎

  3. For ARIN, we cannot query key-cert and mntner objects and therefore we cannot detect changes in them. It is also not possible to detect changes to the auth mechanisms of a mntner object. ↩︎

  4. APNIC do not assign top-level objects to the maintainer associated with the owner. ↩︎

  5. Changing the status of an inetnum object requires deleting and recreating the object. ↩︎

Worse Than FailureThe Watchdog Hydra

Ammar A uses Node to consume an HTTP API. The API uses cookies to track session information, and ensures those cookies expire, so clients need to periodically re-authenticate. How frequently?

That's an excellent question, and one that neither Ammar nor the docs for this API can answer. The documentation provides all the clarity and consistency of a religious document, which is to say you need to take it on faith that these seemingly random expirations happen for a good reason.

Ammar, not feeling particularly faithful, wrote a little "watchdog" function. Once you log in, every thirty seconds it tries to fetch a URL, hopefully keeping the session alive, or, if it times out, starts a new session.

That's what it was supposed to do, but Ammar made a mistake.

function login(cb) {
    request.post({url: '/login', form: {username: config.username, password: config.password}}, function(err, response) {
        if (err) return cb(err);
        if (response.statusCode != 200) return cb(response.statusCode);
        console.log("[+] Login succeeded");
        setInterval(watchdog, 30000);
        return cb();
    });
}

function watchdog() {
    // Random request to keep session alive
    request.get({url: '/getObj', form: {id: 1}}, (err, response) => {
        if (!err && response.statusCode == 200) return console.log("[+] Watchdog ping succeeded");
        console.error("[-] Watchdog encountered error, session may be dead");
        login((err) => {
            if (err) console.error("[-] Failed restarting session:", err);
        });
    });
}

login sends the credentials, and if we get a 200 status back, we setInterval to schedule a call to the watchdog every 30,000 milliseconds.

In the watchdog, we fetch a URL, and if it fails, we call login. Which attempts to login, and if it succeeds, schedules the watchdog every 30,000 milliseconds.

Late one night, Ammar got a call from the API vendor's security team. "You are attacking our servers, and not only have you already cut off access to most of our customers, you've caused database corruption! There will be consequences for this!"

Ammar checked, and sure enough, his code was sending hundreds of thousands of requests per second. It didn't take him long to figure out why: requests from the watchdog were failing with a 500 error, so it called the login method. The login method had been succeeding, so another watchdog got scheduled. Thirty seconds later, that failed, as did all the previously scheduled watchdogs, which all called login again. Which, on success, scheduled a fresh round of watchdogs. Every thirty seconds, the number of scheduled calls doubled. Before long, Ammar's code was DoSing the API.
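
A back-of-the-envelope model shows how quickly that doubling gets out of hand. The numbers below are hypothetical, but the mechanics match the bug:

timers = 1
for cycle in range(1, 18):
    timers *= 2  # each timer's failed ping triggers a login, which schedules one more timer
    print(f"t={cycle * 30:4d}s  scheduled watchdog timers: {timers}")
# After roughly eight minutes, more than 100,000 timers fire every 30 seconds.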

In the aftermath, it turned out that Ammar hadn't caused any database corruption, to the contrary, it was an error in the database which caused all the watchdog calls to fail. The vendor corrupted their own database, without Ammar's help. Coupled with Ammar's careless setInterval management, that error became even more punishing.

Which raises the question: why wasn't the vendor better prepared for this? Ammar's code was bad, sure, a lurking timebomb, just waiting to go off. But any customer could have produced something like that. A hostile entity could easily have done something like that. And somehow, this vendor had nothing in place to deal with what appeared to be a denial-of-service attack, short of calling the customer responsible?

I don't know what was going on with the vendor's operations team, but I suspect that the real WTF is somewhere in there.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianSteinar H. Gunderson: Introducing plocate

In continued annoyance over locate's slowness, I made my own locate using posting lists (thus the name plocate) and compression, and it turns out that you hardly need any tuning at all to make it fast. Example search on a system with 26M files:

cassarossa:~/nmu/plocate> ls -lh /var/lib/mlocate  
total 1,5G                
-rw-r----- 1 root mlocate 1,1G Sep 27 06:33 mlocate.db
-rw-r----- 1 root mlocate 470M Sep 28 00:34 plocate.db

cassarossa:~/nmu/plocate> time mlocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
mlocate info/mlocate  20.75s user 0.14s system 99% cpu 20.915 total

cassarossa:~/nmu/plocate> time plocate info/mlocate
/var/lib/dpkg/info/mlocate.conffiles
/var/lib/dpkg/info/mlocate.list
/var/lib/dpkg/info/mlocate.md5sums
/var/lib/dpkg/info/mlocate.postinst
/var/lib/dpkg/info/mlocate.postrm
/var/lib/dpkg/info/mlocate.prerm
plocate info/mlocate  0.01s user 0.00s system 83% cpu 0.008 total

It will be slower if files are on rotating rust and not cached, but still much faster than mlocate.
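
The core trick is easy to sketch: build posting lists from filename trigrams, intersect the lists for a query's trigrams, and only verify the few remaining candidates. This toy version is my own illustration; plocate's real index is compressed and far more refined:

from collections import defaultdict

def trigrams(s):
    return {s[i:i + 3] for i in range(len(s) - 2)}

paths = ["/var/lib/dpkg/info/mlocate.list",
         "/var/lib/dpkg/info/mlocate.md5sums",
         "/usr/bin/locate"]
index = defaultdict(set)  # trigram -> posting list of path IDs
for pid, path in enumerate(paths):
    for t in trigrams(path):
        index[t].add(pid)

def search(needle):
    # Intersect the posting lists, then verify candidates with a real substring check.
    candidates = set.intersection(*(index[t] for t in trigrams(needle)))
    return [paths[pid] for pid in sorted(candidates) if needle in paths[pid]]

print(search("info/mlocate"))  # both dpkg info files, without scanning every path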

It's a prototype, and it free-rides on updatedb from mlocate (mlocate.db is converted to plocate.db). Case-sensitive matches only, no regexes or other funny business. Get it from https://git.sesse.net/?p=plocate (clone with --recursive so that you get the TurboPFOR submodule). GPLv2+.

Planet DebianNorbert Preining: Cinnamon for Debian – imminent possible removal from testing

Update 2020-09-30: The post has created considerable movement, and a pull request by Fedora developers to rebase cjs onto the more current gjs and libmozjs78 is being tested. I have uploaded packages of cinnamon and cjs to experimental based on these patches (400 files, about 50000 lines of code touched, not what I normally like to have in a debian patch) and would appreciate testing and feedback.

I have been more or less maintaining Cinnamon for quite some time now, but using it only sporadically due to my switch to KDE/Plasma. Currently, Cinnamon's cjs package depends on mozjs52, which is also probably going to be orphaned soon. This will precipitate a lot of changes, not the least of which is Cinnamon's removal from Debian/testing.

I have pinged upstream several times, without much success. So for now the future looks bleak for cinnamon in Debian. If there are interested developers (Debian or not), please get in touch with me, or directly try to update cjs to mozjs78.

Planet DebianEnrico Zini: Coup d'état in recent Italian history

During the Cold War, Italy was always in too strategic a position, with too strong a left-wing movement, for the CIA not to get involved.

Here are a few stories of coups d'état and other kinds of efforts to manipulate Italian politics:

Planet DebianIain R. Learmonth: Multicast IPTV

For almost a decade, I’ve been very slowly making progress on a multicast IPTV system. Recently I’ve made a significant leap forward in this project, and I wanted to write a little on the topic so I’ll have something to look at when I pick this up next. I was aspiring to have a usable system by the end of today, but for a couple of reasons, it wasn’t possible.

When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I’m now looking at IPv6 rather than IPv4.

Initially, I’d been looking at DVB-T PCI cards. USB devices have become common and are available cheaply. There are also DVB-T hats available for the Raspberry Pi. I’m now looking at a combination of Raspberry Pi hats and USB devices with one of each on a couple of Pis.

Two Raspberry Pis with DVB hats installed, TV antenna sockets showing

The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC.
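As a rough illustration of the client side, receiving one of those multicast streams is just a matter of joining the group and reading UDP datagrams. A minimal Python sketch (IPv4 for brevity; the group address and port are made up for the example):

import socket
import struct

GROUP = "239.255.1.1"  # hypothetical multicast group for one DVB-T mux
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the group on all interfaces (sends an IGMP report).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    datagram, _ = sock.recvfrom(2048)
    # Each datagram typically carries several 188-byte MPEG-TS packets
    # (possibly behind a 12-byte RTP header); hand them to a demuxer/player.
    print(f"received {len(datagram)} bytes")

VLC does essentially the same join internally when you open a udp://@group:port or rtp://@group:port URL.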

I’ve not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback so is useful even if the multicast streams can be viewed directly.

So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge as that’s where the antenna comes down from the roof. There’s no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network.

Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss are unacceptable.

The second problem is that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but an ordinary 802.11 station frame only carries the station’s own address, so this can only be done when the wireless interface is in access point mode.

So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I’ll also be on the lookout for a cheap OpenBSD supported mini-PCIe wireless card that can do 5GHz.

Planet DebianJoachim Breitner: Learn Haskell on CodeWorld writing Sokoban

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and led the students towards implementing the game Sokoban.

As is customary for CIS194, I put my lecture notes and exercises online, and this has been used as a learning resource by people from all over the world. But since I have left the University of Pennsylvania, I lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work.

Some recent complaints about that, in bug reports against CodeWorld and in unrealistically flattering tweets (“Shame, this was the best Haskell course ever!!!”) motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself.

So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to https://haskell-via-sokoban.nomeata.de/ and get started! Improvements can now also be contributed at https://github.com/nomeata/haskell-via-sokoban.

Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.

Planet DebianDirk Eddelbuettel: pkgKitten 0.2.0: Now with tinytest and new docs

kitten

A new release 0.2.0 of pkgKitten just hit CRAN today, about eleven months after the previous release.

This release brings support for tinytest by having pkgKitten::kitten() automagically call tinytest::puppy() if the latter package is installed (and the user did not opt out of calling it). So your newly created minimal package now also uses a wonderful yet tiny testing framework. We also added a new documentation site using the previously tweeted-about wrapper for Material for MkDocs I really dig. And last but not least we switched to BSPM-based Continuous Integration (which I wrote about yesterday in R4 #30) and fixed one bug regarding the default NAMESPACE file.

Changes in version 0.2.0 (2020-09-27)

  • Continuous Integration uses the updated BSPM-based script on Travis and with GitHub Actions (Dirk in #11 plus earlier commits).

  • A new default NAMESPACE file is now installed (Dirk in #12).

  • A package documentation website was added (Dirk in #13).

  • Call tinytest::puppy if installed and not opted out (Dirk in #14).

More details about the package are at the pkgKitten webpage, the (new) pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianAndrew Cater: Final post from media team for the day - most of the ordinary images and live images have been tested

Winding down slightly - we've worked our way through most of the images and testing. Schweer tested all of the Debian Edu/Skolelinux images, for which many thanks.

Sledge, RattusRattus, Isy and I have been working pretty much solidly for 10 3/4 hours. There are still some images to build - mips, mipsel and s390x - but these are all images that we don't really have the hardware to test on.

Another good and useful day - bits and pieces done throughout. NOTE: There appear to have been some security updates since the main release this morning so, as ever, it's worth updating machines on a regular basis.

Waiting for the final images to finish building so that we can check the archive for completeness and then publish to the media mirrors. All the best until next time: thanks as ever to Sledge for his invaluable help. See you again in a couple of months in all likelihood. 

A much smaller release: some time in the next month we hope to be able to build and test an Alpha release for Bullseye. Bullseye is likely to be released somewhere round the middle of next year so we'll have additional Buster stable point releases in the meantime.

Planet DebianAndrew Cater: There's a Debian point release for Debian stable happening this weekend - 10.6

Nothing particularly new or unexpected: there's a point release happening at some point this weekend for Debian stable. Usual rules apply: if you've already got a system current and up to date, there's not much to do, but the base-files version will change at some point to reflect 10.6 when you next update.

If you have media from 10.5, you may not _have_ to go and get media this weekend but it's always useful to get new media in due course. There's an updated kernel and an ABI bump. You _will_ need to reboot at some time to use the new kernel image.

This point release will contain security fixes, consequent changes etc. as usual - it is always good and useful to keep machines up to date.

Working with the CD team to eventually test, build and release CD / DVD images and media as and when files gradually become available. As ever, this may take 12-16 hours. As ever, I'll post some blog entries as we go.

Currently "sitting in Cambridge" via video link with Sledge, RattusRattus and Isy who are all involved in the testing and we'll have a great day, as ever.


Planet DebianFrançois Marier: Repairing a corrupt ext4 root partition

I ran into filesystem corruption (ext4) on the root partition of my backup server which caused it to go into read-only mode. Since it's the root partition, it's not possible to unmount it and repair it while it's running. Normally I would boot from an Ubuntu live CD / USB stick, but in this case the machine is using the mipsel architecture and so that's not an option.

Repair using a USB enclosure

I had to shut down the server and then pull the SSD drive out. I then moved it to an external USB enclosure and connected it to my laptop.

I started with an automatic filesystem repair:

fsck.ext4 -pf /dev/sde2

which failed for some reason and so I moved to an interactive repair:

fsck.ext4 -f /dev/sde2

Once all of the errors were fixed, I ran a full surface scan to update the list of bad blocks:

fsck.ext4 -c /dev/sde2

Finally, I forced another check to make sure that everything was fixed at the filesystem level:

fsck.ext4 -f /dev/sde2

Fix invalid alternate GPT

The other thing I noticed is this message in my dmesg log:

scsi 8:0:0:0: Direct-Access     KINGSTON  SA400S37120     SBFK PQ: 0 ANSI: 6
sd 8:0:0:0: Attached scsi generic sg4 type 0
sd 8:0:0:0: [sde] 234441644 512-byte logical blocks: (120 GB/112 GiB)
sd 8:0:0:0: [sde] Write Protect is off
sd 8:0:0:0: [sde] Mode Sense: 31 00 00 00
sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 8:0:0:0: [sde] Optimal transfer size 33553920 bytes
Alternate GPT is invalid, using primary GPT.
 sde: sde1 sde2

I therefore checked to see if the partition table looked fine and got the following:

$ fdisk -l /dev/sde
GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 799CD830-526B-42CE-8EE7-8C94EF098D46

Device       Start       End   Sectors   Size Type
/dev/sde1     2048   8390655   8388608     4G Linux swap
/dev/sde2  8390656 234441614 226050959 107.8G Linux filesystem

It turns out that all I had to do, since only the backup / alternate GPT partition table was corrupt and the primary one was fine, was to re-write the partition table:

$ fdisk /dev/sde

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (234441643 != 234441647) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.

Command (m for help): w

The partition table has been altered.
Syncing disks.

Run SMART checks

Since I still didn't know what caused the filesystem corruption in the first place, I decided to do one last check: SMART errors.

I couldn't do this via the USB enclosure since the SMART commands aren't forwarded to the drive and so I popped the drive back into the backup server and booted it up.

First, I checked whether any SMART errors had been reported using smartmontools:

smartctl -a /dev/sda

That didn't show any errors and so I kicked off an extended test:

smartctl -t long /dev/sda

which ran for 30 minutes and then passed without any errors.

The mystery remains unsolved.

Planet DebianDirk Eddelbuettel: #30: Easy, Reliable, Fast and Portable Linux and macOS Continuous Integration

Welcome to the 30th post in the rarified R recommendation resources series or R4 for short. The last post introduced BSPM. In the four weeks since, we have worked some more on BSPM to bring it to the point where it is ready for use with continuous integration. Building on this, it is now used inside the run.sh script that has driven our CI use for many years (via the r-travis repo).

Which we actually use right now on three different platforms:

All three use the exact same script facilitating this, and run a ‘matrix’ over Linux and macOS. You read this right: one CI setup that is portable and which you can take to your CI provider of choice. No lock-in or tie-in. Use what works, change at will. Or run on all three if you like burning extra cycles.

This is already used by a handful of my repos as well as by at least two repos of friends also deploying r-travis. How does it work? In a nutshell (see the sketch after this list), we are

  • downloading run.sh via curl and changing its mode;
  • running run.sh bootstrap, which sets up the operating system defaults:
    • on Linux we use Ubuntu,
      • add two PPAs for R itself and over 4600 r-cran-* binaries,
      • and enable BSPM to use these from install.packages()
    • on macOS we use the standard setup also used on Travis, GitHub Actions and elsewhere;
    • this provides us with fast, reliable, easy, and portable access to binaries on two OSs under dependency resolution;
  • running run.sh install_deps to install just the required Depends:, Imports: and LinkingTo: packages;
  • running run.sh tests to build the tarball and test it via R CMD check --as-cran.
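Sketching those steps as code (Python's subprocess stands in for a shell here purely for illustration; a real CI job runs the commands directly, and the run.sh URL below is an assumption based on the r-travis repo):

import subprocess

# Assumed download location for run.sh; check the r-travis repo for the real one.
RUN_SH = "https://eddelbuettel.github.io/r-travis/run.sh"

subprocess.run(["curl", "-OLs", RUN_SH], check=True)
subprocess.run(["chmod", "0755", "run.sh"], check=True)

# bootstrap sets up the OS, install_deps installs the declared dependencies,
# tests builds the tarball and runs R CMD check --as-cran.
for step in ["bootstrap", "install_deps", "tests"]:
    subprocess.run(["./run.sh", step], check=True)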

Several customizations are possible via environment variables:

  • additional PPAs or drat repos can be added to offer even more package choice;
  • alternatively one could run run.sh install_all to also install Suggests:;
  • optionally one could run run.sh install_r pkgA pkgB ... to install packages explicitly listed;
  • optionally one could also run run.sh install_aptget r-cran-pkga r-cran-pkgb otherpackage to add more Ubuntu binaries.

We find this setup compelling. The scheme is simple: there really is just one shell script behind it which can also be downloaded and altered. The scheme is also portable as we can (as shown) rotate between CI providers. The scheme is also more flexible: in case of debugging needs one can simply run the script on a local Docker or VM instance. Lastly, the scheme moves away from single points of failure or breakage.

Currently the script uses only BSPM as I had the hunch that it would a) work and b) be compelling. Adding support for RSPM would be equally easy, but I have no immediate need to do so. Adding BioConductor installation may be next. That is easy when BioConductor uses r-release; it may be a little more challenging under r-devel, but it should work too. Stay tuned.

In the meantime, if the above sounds compelling, give run.sh from r-travis a go!

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndrew Cater: Chunking through the tests for various media images ...

We're working our way through some of the CD/DVD/Blu-Ray media images, doing test installs, noting failures and so on. It's repetitive work but vital if we're going to provide some assurance that folk can install from the images we make. 

There are always a few things that catch us out, and always something to note for next time. Schweer has joined us and is busy chasing down debian-edu/Skolelinux installs from Germany. We're getting there, one way and another, and significantly ahead of where we were last time around when the gremlins got in and delayed us. All good :)


Planet DebianAndrew Cater: There are things that money can't buy - and sensible Debian colleagues are worth gold and diamonds :)

Participating in the Debian media testing on debian-cd. One of my colleagues has just spent time sorting out an email issue, having spent a couple of hours with me the other night. I now have good, working email for the first time in years - I can't value that highly enough.

Sledge, RattusRattus, Isy and myself are all engaged in testing various CD images. At the same time, they're debugging a new application to save us from wiki problems when we do this - and we're also able to use a video link which is really handy to chat backwards and forwards and means I can sit virtually in Cambridge :)

Lots of backchat and messages flying backwards and forwards - couldn't wish for a better way to spend an afternoon with friends.



LongNowThe Language Keepers Podcast

A six-part podcast from Emergence Magazine explores the plight of four Indigenous languages spoken in California—Tolowa Dee-ni’, Karuk, Wukchumni, and Kawaiisu—among the most vulnerable in the world:

“Two centuries ago, as many as ninety languages and three hundred dialects were spoken in California; today, only half of these languages remain. In Episode One, we are introduced to the language revitalization efforts of these four Indigenous communities. Through their experiences, we examine the colonizing histories that brought Indigenous languages to the brink of disappearance and the struggle for Indigenous cultural survival in America today.”

,

Planet DebianRussell Coker: Bandwidth for Video Conferencing

For the Linux Users of Victoria (LUV) I’ve run video conferences on Jitsi and BBB (see my previous post about BBB vs Jitsi [1]). One issue with video conferences is the bandwidth requirements.

The place I’m hosting my video conference server has an NBN link with allegedly 40Mb/s transmission speed and 100Mb/s reception speed. My tests show that it can transmit at about 37Mb/s and receive at speeds significantly higher than that but also quite a bit lower than 100Mb/s (around 60 or 70Mb/s). For a video conference server you have a small number of sources of video and audio and a larger number of targets as usually most people will have their microphones muted and video cameras turned off. This means that the transmission speed is the bottleneck. In every test the reception speed was well below half the transmission speed, so the tests confirmed my expectation that transmission was the only bottleneck, but the reception speed was higher than I had expected.

When we tested bandwidth use, the maximum upload speed we saw was about 4MB/s (32Mb/s) with 8+ video cameras and maybe 20 people seeing some of the video (with a bit of lag). We used 3.5MB/s (28Mb/s) when we only had 6 cameras, which seemed to be the maximum for good performance.

In another test run we had 4 people all sending video and the transmission speed was about 260KB/s.

I don’t know how BBB manages the small versions of video streams. It might reduce the bandwidth when the display window is smaller.

I don’t know the resolutions of the cameras. When you start sending video in BBB you are prompted for the “quality” with “medium” being default. I don’t know how different camera hardware and different choices about “quality” affect bandwidth.

These tests showed that, with the cameras we had available and a 100/40 NBN link (the fastest Internet link in Australia that’s not really expensive), a small group of people can all send video, or a medium-sized group of people can watch video streams from a small group.

For meetings of the typical size of LUV meetings we won’t have a bandwidth problem.

There is one common case that I haven’t yet tested, where there is a single video stream that many people are watching. If 4 people are all sending video with 260KB/s transmission bandwidth then 1 person could probably send video to 4 for 65KB/s. Doing some simple calculations on those numbers implies that we could have 1 person sending video to 240 people without running out of bandwidth. I really doubt that would work, but further testing is needed.
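To make that back-of-envelope step explicit, here is the arithmetic in a few lines of Python, using only the figures quoted in this post:

# All numbers come from the measurements described above.
four_senders = 260.0            # KB/s with 4 people all sending video
one_sender = four_senders / 4   # ~65 KB/s for 1 person sending to 4 viewers
per_viewer = one_sender / 4     # ~16 KB/s of server uplink per extra viewer

uplink = 4 * 1000               # ~4 MB/s observed maximum upload, in KB/s

print(per_viewer)               # 16.25 KB/s
print(uplink / per_viewer)      # ~246 viewers, roughly the 240 quoted above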

Krebs on SecurityWho is Tech Investor John Bernard?

John Bernard, the subject of a story here last week about a self-proclaimed millionaire investor who has bilked countless tech startups, appears to be a pseudonym for John Clifton Davies, a U.K. man who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his wife on their honeymoon in India.

The Private Office of John Bernard, which advertises itself as a capital investment firm based in Switzerland, has for years been listed on multiple investment sites as the home of a millionaire who made his fortunes in the dot-com boom 20 years ago and who has oodles of cash to invest in tech startups.

But as last week’s story noted, Bernard’s investment company is a bit like a bad slot machine that never pays out. KrebsOnSecurity interviewed multiple investment brokers who all told the same story: After promising to invest millions after one or two phone calls and with little or no pushback, Bernard would insist that companies pay tens of thousands of dollars worth of due diligence fees up front.

However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Neither Mr. Bernard nor anyone from his various companies responded to multiple requests for comment over the past few weeks. What’s more, virtually all of the employee profiles tied to Bernard’s office have since last week removed those firms from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether.

Sometime on Thursday John Bernard’s main website — the-private-office.ch — replaced the content on its homepage with a note saying it was closing up shop.

“We are pleased to announce that we are currently closing The Private Office fund as we have reached our intended investment level and that we now plan to focus on helping those companies we have invested into to grow and succeed,” the message reads.

As noted in last week’s story, the beauty of a scam like the one multiple investment brokers said was being run by Mr. Bernard is that companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

Also, John Bernard’s office typically did not reach out to investment brokers directly. Rather, he had his firm included on a list of angel investors focused on technology companies, so those seeking investments usually came to him.

Finally, multiple sources interviewed for this story said Bernard’s office offered a finder’s fee for any investment leads that brokers brought his way. While such commissions are not unusual, the amount promised — five percent of the total investment in a given firm that signed an agreement — is extremely generous. However, none of the investment brokers who spoke to KrebsOnSecurity were able to collect those fees, because Bernard’s office never actually consummated any of the deals they referred to him.

PAY NO ATTENTION TO THE EMPTY BOOKSHELVES

After last week’s story ran, KrebsOnSecurity heard from a number of other investment brokers who had near identical experiences with Bernard. Several said they at one point spoke with him via phone or Zoom conference calls, and that he had a distinctive British accent.

When questioned about why his staff was virtually all based in Ukraine when his companies were supposedly in Switzerland, Bernard replied that his wife was Ukrainian and that they were living there to be closer to her family.

One investment broker who recently got into a deal with Bernard shared a screen shot from a recent Zoom call with him. That screen shot shows Bernard bears a striking resemblance to one John Clifton Davies, a 59-year-old from Milton Keynes, a large town in Buckinghamshire, England about 50 miles (80 km) northwest of London.

John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

In 2015, Mr. Davies was convicted of stealing more than GBP 750,000 from struggling companies looking to restructure their debt. For at least seven years, Davies ran multiple scam businesses that claimed to provide insolvency consulting to distressed companies, even though he was not licensed to do so.

“After gaining the firm’s trust, he took control of their assets and would later pocket the cash intended for creditors,” according to a U.K. news report from 2015. “After snatching the cash, Davies proceeded to spend the stolen money on a life of luxury, purchasing a new upmarket home fitted with a high-tech cinema system and new kitchen.”

Davies disappeared before he was convicted of fraud in 2015. Two years before that, Davies was released from prison after being held in custody for 16 months on suspicion of murdering his new bride in 2004 on their honeymoon in India.

Davies’ former wife Colette Davies, 39, died after falling 80 feet from a viewing point at a steep gorge in the Himachal Pradesh region of India. Mr. Davies was charged with murder and fraud after he attempted to collect GBP 132,000 in her life insurance payout, but British prosecutors ultimately conceded they did not have enough evidence to convict him.

THE SWISS AND UKRAINE CONNECTIONS

While the photos above are similar, there are other clues that suggest the two identities may be the same person. A review of business records tied to Davies’ phony insolvency consulting businesses between 2007 and 2013 provides some additional pointers.

John Clifton Davies’ former listing at the official U.K. business registrar Companies House shows his company was registered at the address 26 Dean Forest Way, Broughton, Milton Keynes.

A search on that street address at 4iq.com turns up several interesting results, including a listing for senecaequities.com registered to a John Davies at the email address john888@myswissmail.ch.

A Companies House official record for Seneca Equities puts it at John Davies’ old U.K. address at 26 Dean Forest Way and lists 46-year-old Iryna Davies as a director. “Iryna” is a uniquely Ukrainian spelling of the name Irene (the Russian equivalent is typically “Irina”).

A search on John Clifton Davies and Iryna turned up this 2013 story from The Daily Mirror which says Iryna is John C. Davies’ fourth wife, and that the two were married in 2010.

A review of the Swiss company registrar for The Inside Knowledge GmbH shows an Ihor Hubskyi was named as president of the company. This name is phonetically the same as Igor Gubskyi, a Ukrainian man who was listed in the U.K.’s Companies House records as one of five officers for Seneca Equities along with Iryna Davies.

KrebsOnSecurity sought comment from both the U.K. police district that prosecuted Davies’ case and the U.K.’s National Crime Agency (NCA). Neither wished to comment on the findings. “We can neither confirm nor deny the existence of an investigation or subjects of interest,” a spokesperson for the NCA said.

Planet DebianColin Watson: Porting Launchpad to Python 3: progress report

Launchpad still requires Python 2, which in 2020 is a bit of a problem. Unlike a lot of the rest of 2020, though, there’s good reason to be optimistic about progress.

I’ve been porting Python 2 code to Python 3 on and off for a long time, from back when I was on the Ubuntu Foundations team and maintaining things like the Ubiquity installer. When I moved to Launchpad in 2015 it was certainly on my mind that this was a large body of code still stuck on Python 2. One option would have been to just accept that and leave it as it is, maybe doing more backporting work over time as support for Python 2 fades away. I’ve long been of the opinion that this would doom Launchpad to being unmaintainable in the long run, and since I genuinely love working on Launchpad - I find it an incredibly rewarding project - this wasn’t something I was willing to accept. We’re already seeing some of our important dependencies dropping support for Python 2, which is perfectly reasonable on their terms but which is starting to become a genuine obstacle to delivering important features when we need new features from newer versions of those dependencies. It also looks as though it may be difficult for us to run on Ubuntu 20.04 LTS (we’re currently on 16.04, with an upgrade to 18.04 in progress) as long as we still require Python 2, since we have some system dependencies that 20.04 no longer provides. And then there are exciting new features like type hints and async/await that we’d like to be able to use.

However, until last year there were so many blockers that even considering a port was barely conceivable. What changed in 2019 was sorting out a trifecta of core dependencies. We ported our database layer, Storm. We upgraded to modern versions of our Zope Toolkit dependencies (after contributing various fixes upstream, including some substantial changes to Zope’s test runner that we’d carried as local patches for some years). And we ported our Bazaar code hosting infrastructure to Breezy. With all that in place, a port seemed more of a realistic possibility.

Still, even with this, it was never going to be a matter of just following some standard porting advice and calling it good. Launchpad has almost a million lines of Python code in its main git tree, and around 250 dependencies of which a number are quite Launchpad-specific. In a project that size, not only is following standard porting advice an extremely time-consuming task in its own right, but just about every strange corner case is going to show up somewhere. (Did you know that StringIO.StringIO(None) and io.StringIO(None) do different things even after you account for the native string vs. Unicode text difference? How about the behaviour of .union() on a subclass of frozenset?) Launchpad’s test suite is fortunately extremely thorough, but even just starting up the test suite involves importing most of the data model code, so before you can start taking advantage of it you have to make a large fraction of the codebase be at least syntactically-correct Python 3 code and use only modules that exist in Python 3 while still working in Python 2; in a project this size that turns out to be a large effort on its own, and can be quite risky in places.
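To make those two corner cases concrete, here is a small sketch (behaviour as I understand it; the TaggedSet class is invented for illustration):

import io

# Python 3: a None initial value means "empty buffer".
assert io.StringIO(None).getvalue() == ""
# Python 2's StringIO.StringIO(None) instead stringified its argument,
# so getvalue() returned the four characters "None".

class TaggedSet(frozenset):
    """A hypothetical frozenset subclass, purely for illustration."""

result = TaggedSet([1, 2]).union([3])
# Python 3 hands back a plain frozenset here; Python 2 preserved the subclass.
print(type(result))  # <class 'frozenset'>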

Canonical’s product engineering teams work on a six-month cycle, but it just isn’t possible to cram this sort of thing into six months unless you do literally nothing else, and “please can we put all feature development on hold while we run to stand still” is a pretty tough sell to even the most understanding management. Fortunately, we’ve been able to grow the Launchpad team in the last year or so, and so it’s been possible to put “Python 3” on our roadmap on the understanding that we aren’t going to get all the way there in one cycle, while still being able to do other substantial feature development work as well.

So, with all that preamble, what have we done this cycle? We’ve taken a two-pronged approach. From one end, we identified 147 classes that needed to be ported away from some compatibility code in our database layer that was substantially less friendly to Python 3: we’ve ported 38 of those, so there’s clearly a fair bit more to do, but we were able to distribute this work out among the team quite effectively. From the other end, it was clear that it would be very inefficient to do general porting work when any attempt to even run the test suite would run straight into the same crashes in the same order, so I set myself a target of getting the test suite to start up, and started hacking on an enormous git branch that I never expected to try to land directly: instead, I felt free to commit just about anything that looked reasonable and moved things forward even if it was very rough, and every so often went back to tidy things up and cherry-pick individual commits into a form that included some kind of explanation and passed existing tests so that I could propose them for review.

This strategy has been dramatically more successful than anything I’ve tried before at this scale. So far this cycle, considering only Launchpad’s main git tree, we’ve landed 137 Python-3-relevant merge proposals for a total of 39552 lines of git diff output, keeping our existing tests passing along the way and deploying incrementally to production. We have about 27000 more lines of patch at varying degrees of quality to tidy up and merge. Our main development branch is only perhaps 10 or 20 more patches away from the test suite being able to start up, at which point we’ll be able to get a buildbot running so that multiple developers can work on this much more easily and see the effect of their work. With the full unlanded patch stack, about 75% of the test suite passes on Python 3! This still leaves a long tail of several thousand tests to figure out and fix, but it’s a much more incrementally-tractable kind of problem than where we started.

Finally: the funniest (to me) bug I’ve encountered in this effort was the one I found in the test runner and fixed in zopefoundation/zope.testrunner#106: IDs of failing tests were written to a pipe, so if you have a test suite that’s large enough and broken enough, eventually that pipe fills to capacity and your test runner just gives up and hangs. Pretty annoying when it meant an overnight test run didn’t give useful results, but also eloquent commentary of sorts.
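That failure mode is easy to reproduce in miniature: keep writing into a pipe that nothing drains and see how far you get. A small Python experiment (the non-blocking flag makes it report instead of hang, which is what the blocking test runner did):

import fcntl
import os

r, w = os.pipe()
# Make the write end non-blocking so a full pipe raises instead of blocking.
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

written = 0
try:
    while True:
        written += os.write(w, b"x" * 4096)
except BlockingIOError:
    print(f"pipe filled after {written} bytes")  # typically 65536 on Linux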

Worse Than FailureError'd: Where to go, Next?

"In this screenshot, 'Lyckades' means 'Succeeded' and the buttons say 'Try again' and 'Cancel'. There is no 'Next' button," wrote Martin W.

 

"I have been meaning to send a card, but I just wasn't planning on using PHP's eval() to send it," Andrew wrote.

 

Martyn R. writes, "I was trying to connect to PA VPN and it seems they think that downgrading my client software will help."

 

"What the heck, Doordash? I was expecting a little more variety from you guys...but, to be honest, I gotta wonder what 'null' flavored cheesecake is like," Joshua M. writes.

 

Nicolas wrote, "NaN exclusive posts? I'm already on the inside loop. After all, I love my grandma!"

 


Planet DebianReproducible Builds: ARDC sponsors the Reproducible Builds project

The Reproducible Builds project is pleased to announce a donation from Amateur Radio Digital Communications (ARDC) in support of its goals. ARDC’s contribution will propel the Reproducible Builds project’s efforts in ensuring the future health, security and sustainability of our increasingly digital society.

About Amateur Radio Digital Communications (ARDC)

Amateur Radio Digital Communications (ARDC) is a non-profit that was formed to further research and experimentation with digital communications using radio, with a goal of advancing the state of the art of amateur radio and to educate radio operators in these techniques.

It does this by managing the allocation of network resources, encouraging research and experimentation with networking protocols and equipment, publishing technical articles, and a number of other activities to promote the public good of amateur radio and other related fields. ARDC has recently begun to contribute funding to organisations, groups, individuals and projects towards these and related goals, and their grant to the Reproducible Builds project is part of this new initiative.

Amateur radio is an entirely volunteer activity performed by knowledgeable hobbyists who have proven their ability by passing the appropriate government examinations. No remuneration is permitted. “Ham radio,” as it is also known, has proven its value in advancements of the state of the communications arts, as well as in public service during disasters and in times of emergency.

For more information about ARDC, please see their website at ampr.org.

About the Reproducible Builds project

One of the original promises of open source software was that peer review would result in greater end-user security and stability of our digital ecosystem. However, although it is theoretically possible to inspect and build the original source code in order to avoid maliciously-inserted flaws, almost all software today is distributed in prepackaged form.

This disconnect allows third-parties to compromise systems by injecting code into seemingly secure software during the build process, as well as by manipulating copies distributed from ‘app stores’ and other package repositories.

In order to address this, ‘Reproducible builds’ are a set of software development practices, ideas and tools that create an independently-verifiable path from the original source code, all the way to what is actually running on our machines. Reproducible builds can reveal the injection of backdoors introduced by the hacking of developers’ own computers, build servers and package repositories, but can also expose where volunteers or companies have been coerced into making changes via blackmail, government order, and so on.
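The core check is conceptually simple: build the same source twice, independently, and compare the artifacts bit for bit. A toy sketch of that comparison (the file paths are hypothetical; real tooling such as the project's diffoscope does far richer comparisons):

import hashlib

def sha256_of(path):
    """Hash a build artifact so two builds can be compared bit for bit."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical artifacts from two independent rebuilds of the same source.
a = sha256_of("builder-a/hello_1.0_amd64.deb")
b = sha256_of("builder-b/hello_1.0_amd64.deb")
print("reproducible" if a == b else "differs: check timestamps, paths, locales")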

A world without reproducible builds is a world where our digital infrastructure cannot be trusted and where online communities are slower to grow, collaborate less and are increasingly fragile. Without reproducible builds, we leave space for greater encroachments on our liberties both by individuals as well as powerful, unaccountable actors such as governments, large corporations and autocratic regimes.

The Reproducible Builds project began as a project within the Debian community, but is now working with many crucial and well-known free software projects such as Coreboot, openSUSE, OpenWrt, Tails, GNU Guix, Arch Linux, Tor, and many others. It is now an entirely Linux distribution independent effort and serves as the central ‘clearing house’ for all issues related to securing build systems and software supply chains of all kinds.

For more about the Reproducible Builds project, please see their website at reproducible-builds.org.


If you are interested in ensuring the ongoing security of the software that underpins our civilisation, and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

,

Krebs on SecurityMicrosoft: Attackers Exploiting ‘ZeroLogon’ Windows Flaw

Microsoft warned on Wednesday that malicious hackers are exploiting a particularly dangerous flaw in Windows Server systems that could be used to give attackers the keys to the kingdom inside a vulnerable corporate network. Microsoft’s warning comes just days after the U.S. Department of Homeland Security issued an emergency directive instructing all federal agencies to patch the vulnerability by Sept. 21 at the latest.

DHS’s Cybersecurity and Infrastructure Security Agency (CISA) said in the directive that it expected imminent exploitation of the flaw — CVE-2020-1472, dubbed “ZeroLogon” — because exploit code that can be used to take advantage of it was circulating online.

Last night, Microsoft’s Security Intelligence unit tweeted that the company is “tracking threat actor activity using exploits for the CVE-2020-1472 Netlogon vulnerability.”

“We have observed attacks where public exploits have been incorporated into attacker playbooks,” Microsoft said. “We strongly recommend customers to immediately apply security updates.”

Microsoft released a patch for the vulnerability in August, but it is not uncommon for businesses to delay deploying updates for days or weeks while testing to ensure the fixes do not interfere with or disrupt specific applications and software.

CVE-2020-1472 earned Microsoft’s most-dire “critical” severity rating, meaning attackers can exploit it with little or no help from users. The flaw is present in most supported versions of Windows Server, from Server 2008 through Server 2019.

The vulnerability could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

Scott Caveza, research engineering manager at security firm Tenable, said several samples of malicious .NET executables with the filename ‘SharpZeroLogon.exe’ have been uploaded to VirusTotal, a service owned by Google that scans suspicious files against dozens of antivirus products.

“Given the flaw is easily exploitable and would allow an attacker to completely take over a Windows domain, it should come as no surprise that we’re seeing attacks in the wild,” Caveza said. “Administrators should prioritize patching this flaw as soon as possible. Based on the rapid speed of exploitation already, we anticipate this flaw will be a popular choice amongst attackers and integrated into malicious campaigns.”

Kevin RuddWall Street Journal: China Backslides on Economic Reform

Published in the Wall Street Journal on 24 September 2020

 

China is the only major world economy reporting any economic growth today. It went first into Covid-19 and was first out, grinding out 3.2% growth in the most recent quarter while the U.S. shrank 9.5% and other advanced economies endured double-digit declines. High-tech monitoring, comprehensive testing and aggressive top-down containment measures enabled China to get the virus under control while others struggled. The Middle Kingdom may even deliver a modest year-over-year economic expansion in 2020.

This rebound is real, but behind the short-term numbers the economic restart is dubious. China’s growth spurt isn’t the beginning of a robust recovery but an uneven bounce fueled by infrastructure construction. Second-quarter data showed the same imbalance other nations are wrestling with: Investment contributed 5 percentage points to growth, while consumption fell, subtracting 2.3 points.

Since 2017, the China Dashboard, a joint project of Rhodium Group and the Asia Society Policy Institute, has tracked economic policy in China closely for signs of progress. Despite repeated commitments from Chinese authorities to open up and address the country’s overreliance on debt, the China Dashboard has observed delayed attempts and even backtracking on reforms. The Covid-19 outbreak offered an opportunity for Beijing to shift course and deliver on market reforms. Signals from leaders this spring hinted at fixing defunct market mechanisms. But notably, the long list of reforms promised in May was almost the same as previous lists—such as separating capital management from business management at state firms and opening up to foreign investment while increasing the “quality” of outbound investment—which were adopted by the Third Plenum in 2013. In other words, promised recent reforms didn’t happen, and nothing in the new pronouncements explains why or how this time will be different.

An honest look at the forces behind China’s growth this year shows a doubling down on state-managed solutions, not real reform. State-owned entities, or SOEs, drove China’s investment-led recovery. In the first half of 2020, according to China’s National Bureau of Statistics, fixed-asset investment grew by 2.1% among SOEs and decreased by 7.3% in the private sector. Finished product inventory for domestic private firms rose sharply in the same period—a sign of sales difficulty—while SOE inventory decreased slightly, showing the uneven nature of China’s growth.

Perhaps the most significant demonstration of mistrust in markets is the “internal circulation” program first floated by President Xi Jinping in May. On the surface, this new initiative is supposed to expand domestic demand to complement, but not replace, external demand. But Beijing is trying to boost home consumption by making it a political priority. With household demand still shrinking, expect more subsidies to producers and other government interventions, rather than measures that empower buyers. Dictating to markets and decreeing that consumption will rise aren’t the hallmarks of an advanced economy.

The pandemic has forced every country to put short-term stability above future concerns, but no other country has looming burdens like China’s. In June the State Council ordered banks to “give up” 1.5 trillion yuan (around $220 billion) in profit, the only lever of control authorities have to reduce debt costs and support growth. In August China’s economic planning agency ordered six banks to set aside a quota to fund infrastructure construction with long-term loans at below-market rates. These temporary solutions threaten the financial stability of a system that is already overleveraged, pointing to more bank defaults and restructurings ahead.

China also faces mounting costs abroad. By pumping up production over the past six months, as domestic demand stagnated, Beijing has ballooned its trade surplus, fueling an international backlash against state-driven capitalism that goes far beyond Washington. The U.S. has pulled the plug on some channels for immigration, financial flows and technology engagement. Without naming China, a European Commission policy paper in June took aim at “subsidies granted by non-EU governments to companies in the EU.” Governments and industry groups in Germany, the Netherlands, France and Italy are also pushing China to change.

Misgivings about Beijing’s direction are evident even among foreign firms that have invested trillions of dollars in China in recent decades. A July 2020 UBS Evidence Lab survey of more than 1,000 chief financial officers across sectors in the U.S., China and North Asia found that 75% of respondents are considering moving some production out of China or have already done so. Nearly half the American executives with plans to leave said they would relocate more than 60% of their firms’ China-based production. Earlier this year Beijing floated economic reforms intended to forestall an exodus of foreign companies, but nothing has come of it.

For years, the world has watched and waited for China to become more like a free-market economy, thereby reducing American security concerns. At a time of profound stress world-wide, the multiple gauges of reform we have been monitoring through the China Dashboard point in the opposite direction. China’s economic norms are diverging from, rather than converging with, the West’s. Long-promised changes detailed at the beginning of the Xi era haven’t materialized.

Though Beijing talks about “market allocation” efficiency, it isn’t guided by what mainstream economists would call market principles. The Chinese economy is instead a system of state capitalism in which the arbiter is an uncontestable political authority. That may or may not work for China, but it isn’t what liberal democracies thought they would get when they invited China to take a leading role in the world economy.

The post Wall Street Journal: China Backslides on Economic Reform appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Generic Comment

To my mind, code comments are important to explain why the code does what it does, not so much what it does. Ideally, the what is clear enough from the code that you don’t have to. Today, we have no code, but we have some comments.

Chris recently was reviewing some C# code from 2016, and found a little conversation in the comments, which may or may not explain whats or whys. Line numbers included for, ahem, context.

4550: //A bit funky, but entirely understandable: Something that is a C# generic on the storage side gets
4551: //represented on the client side as an array. Why? A C# generic is rather specific, i.e., Java
4552: //doesn't have, for example, a Generic List class. So we're messing with arrays. No biggie.

Now, honestly, I’m more confused than I probably would have been just from the code. Presumably as we’re sending things to a client, we’re going to serialize it to an intermediate representation, so like, sure, arrays. The comment probably tells me why, but it’s hardly a necessary explanation here. And what does Java have to do with anything? And also, Java absolutely does support generics, so even if the Java trivia were relevant, it’s not accurate.

I’m not the only one who had some… questions. The comment continues:

4553: //
4554: //WTF does that have to do with anything? Who wrote this inane, stupid comment, 
4555: //but decided not to put comments on anything useful?

Not to sound judgmental, but if you’re having flamewars in your code comments, you may not be on a healthy, well-functioning team.

Then again, if this is someplace in the middle of your file, and you’re on line 4550, you probably have some other problems going on.


,

Krebs on SecurityGovt. Services Firm Tyler Technologies Hit in Apparent Ransomware Attack

Tyler Technologies, a Texas-based company that bills itself as the largest provider of software and technology services to the United States public sector, is battling a network intrusion that has disrupted its operations. The company declined to discuss the exact cause of the disruption, but their response so far is straight out of the playbook for responding to ransomware incidents.

Plano, Texas-based Tyler Technologies [NYSE:TYL] has some 5,300 employees and brought in revenues of more than $1 billion in 2019. It sells a broad range of services to state and local governments, including appraisal and tax software, integrated software for courts and justice agencies, enterprise financial software systems, public safety software, records/document management software solutions and transportation software solutions for schools.

Earlier today, the normal content on tylertech.com was replaced with a notice saying the site was offline. In a statement provided to KrebsOnSecurity after the markets closed central time, Tyler Tech said early this morning the company became aware that an unauthorized intruder had gained access to its phone and information technology systems.

“Upon discovery and out of an abundance of caution, we shut down points of access to external systems and immediately began investigating and remediating the problem,” Tyler’s Chief Information Officer Matt Bieri said. “We have since engaged outside IT security and forensics experts to conduct a detailed review and help us securely restore affected equipment. We are implementing enhanced monitoring systems, and we have notified law enforcement.”

“At this time and based on the evidence available to us to-date, all indications are that the impact of this incident is limited to our internal network and phone systems,” their statement continues. “We currently have no reason to believe that any client data, client servers, or hosted systems were affected.”

While it may be comforting to hear that last bit, the reality is that it is still early in the company’s investigation. Also, ransomware has moved well past just holding a victim firm’s IT systems hostage in exchange for an extortion payment: These days, ransomware purveyors will offload as much personal and financial data as they can before unleashing their malware, and then often demand a second ransom payment in exchange for a promise to delete the stolen information or to refrain from publishing it online.

Tyler Technologies declined to say how the intrusion is affecting its customers. But several readers who work in IT roles at local government systems that rely on Tyler Tech said the outage had disrupted the ability of people to pay their water bills or court payments.

“Tyler has access to a lot of these servers in cities and counties for remote support, so it was very thoughtful of them to keep everyone in the dark and possibly exposed if the attackers made off with remote support credentials while waiting for the stock market to close,” said one reader who asked to remain anonymous.

Depending on how long it takes for Tyler to recover from this incident, it could have a broad impact on the ability of many states and localities to process payments for services or provide various government resources online.

Tyler Tech has pivoted on the threat of ransomware as a selling point for many of its services, using its presence on social media to promote ransomware survival guides and incident response checklists. With any luck, the company was following some of its own advice and will weather this storm quickly.

Update, Sept. 24, 6:00 p.m. ET: Tyler said in an updated statement on its website that a review of its logs, monitoring, traffic reports and cases related to utility and court payments revealed no outages with those systems. However, several sources interviewed for this story who work in tech roles at local governments which rely on Tyler Tech said they proactively severed their connections to Tyler Tech systems after learning about the intrusion. This is a fairly typical response for companies that outsource payment transactions to a third party when that third party ends up experiencing a ransomware attack, but the end result (citizens unable to make payments) is the same.

Update, 11:49 p.m. ET: Tyler is now encouraging all customers to change any passwords for any remote network access for Tyler staff, after receiving reports from some customers about suspicious logins. From a statement Tyler sent to customers:

“We apologize for the late-night communications, but we wanted to pass along important information as soon as possible. We recently learned that two clients have report suspicious logins to their systems using Tyler credentials. Although we are not aware of any malicious activity on client systems and we have not been able to investigate or determine the details regarding these logins, we wanted to let you know immediately so that you can take action to protect your systems.”

Cory DoctorowAnnouncing the Attack Surface tour

It’s been 12 years since I went on my first book tour and in the years since, I’ve met and spoken with tens of thousands of readers in hundreds of cities on five continents in support of more than a dozen books.

Now I’ve got another major book coming out: ATTACK SURFACE.

How do you tour a book during a pandemic? I think we’re still figuring that out. I’ll tell you one thing, I won’t be leaving Los Angeles this time around. Instead, my US publisher, Tor Books, has set up eight remote “Attack Surface Lectures.”

https://read.macmillan.com/torforge/cory-doctorow-virtual-lecture-series/

Each event has a different theme and different guest-hosts/co-discussants, chosen both for their expertise and their ability to discuss their subjects in ways that are fascinating and engaging.

ATTACK SURFACE is the third Little Brother book, a standalone book for adults.

It stars Masha, a young woman who is finally reckoning with the moral character of the work she’s done, developing surveillance tools to fight Iraqi insurgents and ex-Soviet democracy activists.

Masha has struggled with her work for years, compartmentalizing her qualms, rationalizing her way into worse situations.

She goes home to San Francisco and discovers her best friend, a BLM activist, is being targeted by the surveillance weapons Masha herself invented.

What follows is a Little Brother-style technothriller, full of rigorous description and extrapolation on cybersecurity, surveillance and resistance, that illuminates the tale of a tech worker grappling with their own life’s work.

Obviously, this covers a lot of ground, as is reflected in the eight nights of talks we’re announcing today:

I. Politics & Protest, Oct 13, with Eva Galperin and Ron Deibert, hosted by The Strand Bookstore

II. Cross-Medium SciFi, Oct 14, with Amber Benson and John Rogers, hosted by Brookline Booksmith

III. Intersectionality: Race, Surveillance, and Tech and Its History, Oct 15, with Malkia Cyril and Meredith Whittaker, hosted by Booksmith

IV. SciFi Genre, Oct 16, with Sarah Gailey and Chuck Wendig, hosted by Fountain Books

V. Cyberpunk and Post-Cyberpunk, Oct 19, with Bruce Sterling and Christopher Brown, hosted by Anderson’s Bookshop

VI. Tech in SciFi, Oct 20, with Ken Liu and Annalee Newitz, hosted by Interabang

VII. Little Revolutions, Oct 21, with Tochi Onyebuchi and Bethany C Morrow, hosted by Skylight Books

VIII. OpSec & Personal Cyber-Security: How Can You Be Safe?, Oct 22, with Runa Sandvik and Window Snyder, hosted by Third Place Books

Some of the events come with either a hardcover and a signed bookplate or, from some stores, actual signed books.

(those stores’ stock is being shipped to my house, and I’m signing after each event and mailing out from here)

(yes, really)

I’ve never done anything like this and I’d be lying if I said I wasn’t nervous about it. Book tours are crazy marathons – I did 35 cities in 3 countries in 45 days for Walkaway a couple years ago – and this is an entirely different kind of thing.

But I’m also (very) excited. Revisiting Little Brother after seven years is quite an experience. ATTACK SURFACE – a book about uprisings, police-state tactics, and the digital tools as oppressors and liberators – is (unfortunately) very timely.

Having an excuse to talk about this book and its themes with you all – and with so many distinguished and brilliant guests – is going to keep me sane next month. I really hope you can make it.

Cory DoctorowPoesy the Monster Slayer

POESY THE MONSTER SLAYER is my first-ever picture book, illustrated by Matt Rockefeller and published by Firstsecond. It’s an epic tale of toy-hacking, bedtime-avoidance and monster-slaying.

https://us.macmillan.com/books/9781626723627

The book’s publication was attended by a superb and glowing review from Kirkus:

“The lights are out, and the battle begins. She knows the monsters are coming, and she has a plan. First, the werewolf appears. No problem. Poesy knows the tools to get rid of him: silver (her tiara) and light. She wins, of course, but the ruckus causes a ‘cross’ Daddy to appear at her door, telling her to stop playing with toys and go back to bed. She dutifully lets him tuck her back in. But on the next page, her eyes are open. ‘Daddy was scared of monsters. Let DADDY stay in bed.’ Poesy keeps fighting the monsters that keep appearing out of the shadows, fearlessly and with all the right tools, to the growing consternation of her parents, a Black-appearing woman and a White-appearing man, who are oblivious to the monsters and clearly fed up and exhausted but used to this routine. Poesy is irresistible with her brave, two-sided personality. Her foes don’t stand a chance (and neither do her parents). Rockefeller’s gently colored cartoon art enhances her bravery with creepily drawn night creatures and lively, expressive faces.

“This nighttime mischief is not for the faint of heart. (Picture book. 5-8)”

Kirkus joins Publishers Weekly in its praise: “Strikes a gently edgy tone, and his blow-by-blow account races to its closing spread: of two tired parents who resemble yet another monster.”

Sociological ImagesIs Knowing Half the Battle?

We seem to have been struggling with science for the past few…well…decades. The CDC just updated what we know about COVID-19 in the air, misinformation about trendy “wellness products” abounds, and then there’s the whole climate crisis.

This is an interesting pattern because many public science advocates put a lot of work into convincing us that knowing more science is the route to a more fulfilling life. Icons like Carl Sagan and Neil deGrasse Tyson, as well as modern secular movements, talk about the sense of profound wonder that comes along with learning about the world. Even GI Joe PSAs told us that knowing was half the battle.

The problem is that we can be too quick to think that knowing more will automatically make us better at addressing social problems. That claim is based on two assumptions: one, that learning things feels good and motivates us to action, and two, that knowing more about a topic makes people more likely to appreciate and respect that topic. Both can be true, but they are not always true.

The first is a little hasty. Sure, learning can feel good, but research on teaching and learning shows that it doesn’t always feel good, and I think we often risk losing students’ interest because they assume that if a topic is a struggle, they are not meant to be studying it.

The second is often wrong, because having more information does not always empower us to make better choices. Research shows us that knowing more about a topic can fuel all kinds of other biases, and partisan identification is increasingly linked with attitudes toward science.

To see this in action, I took a look at some survey data collected by the Pew Research Center in 2014. The survey had seven questions checking attitudes about science – like whether people kept up with science or felt positively about it – and six questions checking basic knowledge about things like lasers and red blood cells. I totaled up these items into separate scales so that each person has a score for how much they knew and how positively or negatively they thought about science in general. These scales are standardized, so people with average scores are closer to zero. Plotting out these scores shows us a really interesting null finding documented by other research – there isn’t a strong relationship between knowing more and feeling better about science.

The purple lines mark average scores in each scale, and the relationship between science knowledge and science attitudes is fairly flat.

Here, both people who are a full standard deviation above the mean and multiple standard deviations below the mean on their knowledge score still hold pretty average attitudes about science. We might expect an upward sloping line, where more knowledge associates with more positive attitudes, but we don’t see that. Instead, attitudes about science, whether positive or negative, get more diffuse among people who get fewer answers correct. The higher the knowledge, the more tightly attitudes cluster around average.

This is an important point that bears repeating for people who want to change public policy or national debate on any science-based issue. It is helpful to inform people about these serious issues, but shifting their attitudes is not simply a matter of adding more information. To really change minds, we have to do the work to put that information into conversation with other meaning systems, emotions, and moral assumptions.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianVincent Fourmond: Tutorial: analyze Km data of CODHs

This is the first post of a series in which we will provide the readers with simple tutorial approaches to reproduce the data analysis of some of our published papers. All our data analysis is performed using QSoas. Today, we will show you how to analyze the experiments we used to characterize the behaviour of an enzyme, the Nickel-Iron CO dehydrogenase IV from Carboxydothermus hydrogenoformans. The experiments we analyze here are described in much more detail in the original publication, Domnik et al, Angewandte Chemie, 2017. The only things you need to know for now are the following:
  • the data is the evolution of the current over time given by the adsorbed enzyme;
  • at a given point, a certain amount of CO (50 µM) is injected, and its concentration decreases exponentially over time;
  • the enzyme has a Michaelis-Menten behaviour with respect to CO.
This means that we expect a response of the type: $$i(t) = \frac{i_m}{1 + \frac{K_m}{[\mathrm{CO}](t)}}$$ in which $$[\mathrm{CO}](t) = \begin{cases} 0, & \text{for } t < t_0 \\ C_0 \exp \frac{t_0 - t}{\tau}, & \text{for } t \geq t_0 \end{cases}$$

To begin this tutorial, first download the files from the github repository (direct links: data, parameter file and ruby script). Start QSoas, go to the directory where you saved the files, load the data file, and remove spikes in the data using the following commands:

QSoas> cd
QSoas> l Km-CODH-IV.dat
QSoas> R
First fit
Then, to fit the above equation to the data, the simplest approach is to take advantage of the time-dependent parameters feature of QSoas. Simply run:
QSoas> fit-arb im/(1+km/s) /with=s:1,exp
This simply launches the fit interface to fit the exact equations above. The im/(1+km/s) is simply the translation of the Michaelis-Menten equation above, and the /with=s:1,exp specifies that s is the result of the sum of 1 exponential, like in the definition of \([\mathrm{CO}](t)\) above. Then, load the Km-CODH-IV.params parameter file (using the "Parameters.../Load from file" action at the bottom, or the Ctrl+L keyboard shortcut). Your window should now look like this:
To fit the data, just hit the "Fit" button! (or Ctrl+F).
Including an offset
The fit is not bad, but not perfect. In particular, it is easy to see why: the current predicted by the fit goes to 0 at large times, but the actual current is below 0. We therefore need to include an offset to take this into consideration. Close the fit window, and re-run the fit, but now with this command:
QSoas> fit-arb im/(1+km/s)+io /with=s:1,exp
Notice the +io bit that corresponds to the addition of an offset current. Load the base parameters again, run the fit again... Your fit window should now look like:
See how the offset current is now much better taken into account. Let's talk a bit more about the parameters:
  • im is \(i_m\), the maximal current, around 120 nA (which matches the magnitude of the original current).
  • io is the offset current, around -3nA.
  • km is the \(K_m\), expressed in the same units as s_1, the first "injected" value of s (we used 50 because the injection is 50 µM CO). That means the value of \(K_m\) determined by this fit is around 9 nM! (but see below).
  • s_t_1 is the time of the injection of CO (it was in the parameter files you loaded).
  • Finally, s_tau_1 is the time constant of departure of CO, noted \(\tau\) in the equations above.
Taking into account mass-transport limitations
However, the fit is still unsatisfactory: the predicted curve fails to reproduce the curvature at the beginning and at the end of the decrease. This is due to issues linked to mass-transport limitations, which are discussed in detail in Merrouch et al, Electrochimica Acta, 2017. In short, what you need to do is to close the fit window again, load the transport.rb Ruby file that contains the definition of the itrpt function, and re-launch the fit window using:
QSoas> ruby-run transport.rb
QSoas> fit-arb itrpt(s,km,nFAm,nFAmu)+io /with=s:1,exp
Load the parameter file again... but this time you'll have to play a bit more with the starting parameters for QSoas to find the right values when you fit. Here are some tips:
  • the curve predicted with the current parameters (use "Update" to update the display) should "look like" the data;
  • apart from io, no parameter should be negative;
  • there may be hints about the correct values in the papers...
A successful fit should look like this:
Here you are! I hope you enjoyed analyzing our data, and that it will help you analyze yours! Feel free to comment and ask for clarifications.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Kevin RuddABC Radio Melbourne: On the Coalition’s changes to the NBN

E&OE TRANSCRIPT
RADIO INTERVIEW
ABC RADIO MELBOURNE DRIVE
23 SEPTEMBER 2020

Topics: NBN, Energy, Gas, Dan Andrews and Victoria

Rafael Epstein
Welcome. If you’re watching us on Facebook Live: today the Coalition government came forward and said they will fund NBN to your home and not just to your street. 6 million homes at least, some businesses as well, at a cost of somewhere north of $5 billion. Kevin Rudd is claiming vindication. He is, of course, the former prime minister whose government came up with the NBN idea. And he’s president of the Asia Society Policy Institute in New York. At the moment, he is in Queensland. Kevin Rudd, thanks for joining us.

Kevin Rudd
Good to be with you.

Rafael Epstein
They’re doing the same thing, but better and cheaper. They say.

Kevin Rudd
Well, I think in your intro, you said ‘claiming vindication.’ All I’m interested in is the facts here. We laid out a plan for a National Broadband Network back in 2009 to take fiber optic cable to every single premises, house, small business and school in the country, other than the most remote areas where we rely on a combination of satellite and wireless services, so that was 93% of the country. And it was to be 100 megabits per second, capable of further upgrade. So that’s what we were rolling out. Fast forward to the 2013 election. What did they then do? Partly in response to pressure from Murdoch, who didn’t want new competition for his failing Fox Entertainment Network on cable by…

Rafael Epstein
Disputed by Malcolm Turnbull, but continue? Yes.

Kevin Rudd
Well, that might be disputed, I’m just referring to basically unassailable facts. Murdoch did not want Netflix streamed live to people’s homes, because that would provide direct competition far earlier than he wanted for his Fox entertainment cable network, at that stage his only really profitable business in Australia. Anyway, we get to 2013. What did they do? They killed the original model, and instead of fiber optic to the premises we’d have fiber optic to this thing called the node, wherever that was; we still haven’t found ours in our suburb up in Sunshine Beach. And that’s where it’s languished ever since. And as a result, when COVID hit, people across the country have been screaming out because of the inadequacy of their broadband services. That’s the history, however they may choose to rewrite it.

Rafael Epstein
Well, they do. The essential argument, without getting into the technical detail, from the government is that you went too hard, too fast. You hadn’t connected nearly enough people by the time of that election, and they needed to go back and do it more slowly and with the best technology at the time. How do you respond to that?

Kevin Rudd
What a load of codswallop. We launched the thing in 2009; I was actually participating in the initial cable rollout ceremonies. I remember doing one in northwestern Tasmania, late 09, early 2010. And we had rolled out cable in a number of suburbs and a number of cities and regional towns right across Australia, and we deliberately started in regional areas. We did not start in the inner city areas, because we wanted to make sure that every Australian household and small business had a go.

Rafael Epstein
Is that what led to the higher cost, Kevin Rudd? Because the criticism from them is that what you were doing at the time was too expensive.

Kevin Rudd
Well, I make no apology for what it costs to lay out high speed broadband in Darwin, or in Burnie in northwestern Tasmania, or in these parts of the world where, yes, residences are further apart. But either you had a digital divide in Australia, separating rich from poor, inner city from outer suburbs, and capital cities from the country and regions, or you paid what it costs to avoid one. I grew up in the regions; I happen to believe that people out there are deserving of high speed broadband just as much as anybody else. And they killed this plan in 2013 and spent tens of billions of dollars on a botch job, which resulted in broadband speeds for Australia among the slowest in the OECD. How they defend that, as a policy achievement, beggars belief.

Rafael Epstein
Doesn’t it make sense though, to have a demand driven program? The minister was saying, look what we’re gonna do, we’ll roll the fiber down the street, and then the fiber will come into your home only when the customer wants it. Isn’t that a sensible way to build a network?

Kevin Rudd
So why have I, and others around the country, and members of parliament, government and opposition, received screaming petitions from people over the last, you know, four to five years, demanding their broadband services? And in fact, it’s just been virtually impossible for many people to get connected. The truth is this has been an absolute botch up in the conceptual design, that is, substituting fiber optic cable with copper for that last link. Which means that however fast the data has traveled along the fiber to the node, once it’s constricted and strangled at the copper intersection point it then slows down to a snail’s pace, which is why everyone listening to your program has been pulling their hair out if they haven’t been able to be connected to fiber optic themselves.

Rafael Epstein
We don’t know for sure which program would have been more expensive though, do we? I mean, they’re massive projects. You only really know the costs when you reach that point in the project. I mean, how do we know yours would have been any cheaper or any better?

Kevin Rudd
We planned ours thoroughly. The first conceptual design, contrary to Liberal Party propaganda, was put together over a six month period involving technical committees, presided over by the Secretary of the Commonwealth Treasury. The Liberal Party propaganda was that the NBN was invented on the back of an envelope or the back of a restaurant menu. That’s utter bullshit. The bottom line is it was well planned, well developed by a specialist committee of experts, both technical and financial, presided over by the Secretary of the Treasury. It was then rolled out, and we intended to have it rolled out across the nation by 2018-2019. That was how it was going. We had a battle plan with which to do that. But when it gets cut off amidships by the conservatives with an entirely different delivery model, the amount of waste of taxpayers’ dollars involved in going for plan Zed rather than Plan A…

Rafael Epstein
So can I ask you to expand on that, Kevin Rudd, because you’re accusing them of waste. Why is it a waste? What have they done that you say is a waste?

Kevin Rudd
Let me just give you one example of waste. One example of waste is when you’ve got a massive rollout program underway, with multiple contractors already engaged across the country preparing trenches and laying cable, and then a central fiat from the then Abbott-Turnbull-Morrison government saying stop. What happens is all these projects and all these contractual arrangements get thrown up into the air, many of which I imagine involve penalty payments for loss of work. Then you’ve got a whole problem in reconvening a massive rollout program, building up capacity and signing new contracts, again at loss to the taxpayer. I would invite the Auditor-General to investigate what has actually been lost in quantitative dollar terms through the mismanagement of this project.

Rafael Epstein
You’re listening to Kevin Rudd, of course, former Prime Minister of Australia and original conceiver of the NBN with his then Communications Minister Stephen Conroy. The phone number is 1300 222 774. Kevin Rudd, the government had a few announcements over the last week or two around energy, especially the announcement yesterday. There’s quite a lot of sense in there, in spending on every low emissions technology that is available; that might include something like gas as a transition fuel. That’s a decent step forward, isn’t it, even if they don’t have the emissions target you’d like them to have?

Kevin Rudd
Let me put it to you in these terms. The strategy we had was to give renewable energy an initial boost. We did that through the Mandatory Renewable Energy Target, which we legislated. And it worked. Remember, when we were elected to office, 4% of Australia’s electricity and energy supply came from renewables. We legislated for 20% by 2020. And guess what it is now? It’s 21%. The whole idea was to give that whole slew of renewable energy industries a starting chance, you know, a critical mass to participate in the national energy market. What they should be doing now, if they’re going to throw taxpayer dollars at a next tranche of energy, is throw it open to competitive renewables and see who comes up with the best bid.

Rafael Epstein
They can do that. I mean, they’ve announced funding for ARENA, which the Labor government came up with, and renewables people can bid for that money. There’s nothing stopping them bidding for that money.

Kevin Rudd
Yeah, but the whole point of investing taxpayer dollars in projects of this nature is to advance the renewables cause, or to advance, shall we say, zero emissions. You see, the reason the Renewable Energy Agency was established, again by the Labor government, was to provide a financing mechanism for renewables, that is, 100% clean energy. Gas projects exist on the basis of their own financial economics out there in the marketplace, and together with coal will ultimately feel the impact of international investment houses incrementally withdrawing financial support from any carbon based fuel; that’s happening over time. Coal’s the product leader in terms of people exiting those investments; others will follow.

Rafael Epstein
What do you make of the way the media has handled the Premier here in Victoria, Dan Andrews?

Kevin Rudd
I think the media has comprehensively failed to understand what it’s like to be in the hot seat. You see, I began my career in public life as a state official; I was Director-General of the Queensland Cabinet Office for five years. I’ve got some familiarity with how state governments go, how they work and how they operate. And against all the measures of what’s possible and not possible, I think Dan Andrews has done a really good job. And what people who criticize him fail to answer…

Rafael Epstein
Even with the hotel quarantine problem?

Kevin Rudd
Yeah, well, look, you know, it reminds me very much of when I was Prime Minister. And the Liberals then attacked me and held me personally accountable and responsible for the tragic deaths of four young men who were killed in the rollout of the home insulation program, as a consequence of a failure of contractors to adhere to state based occupational health and safety standards. So all I would say is when people start waving the finger at a head of government and saying you, Dan Andrews, are responsible for the second shift at two o’clock in the morning at hotel facility F, frankly, that doesn’t pass the common sense test, I think…

Rafael Epstein
Shouldn’t his health department be in charge of the…

Kevin Rudd
The voters of Victoria and the voters of the nation, I think are common sense people, and they would ask themselves, what would I do in the same situation? Tough call. But I think in overall terms, Andrews has handled it well.

Rafael Epstein
I know you’re a big critic of the Murdoch media. I just wonder what you see. Is there more criticism that comes from a newspaper like the Herald Sun or the Australian towards Dan Andrews, do you think?

Kevin Rudd
Well, I look at the Murdoch rags every day. And they are rags. If you don’t believe they are, go to the ABC and look on iview at the BBC series on the Murdoch dynasty and the manipulation of truth for editorial purposes. Something which I notice the ABC in Australia is always coy to probe, for fear there’ll be too much Murdoch retaliation against the ABC.

Rafael Epstein
I’m happy to probe it, but there are some good journalists at the Australian and the Herald Sun, aren’t there?

Kevin Rudd
Well, actually very few of your colleagues are. That’s the truth of it. Tell me this. When was the last time Four Corners did an expose into the abuse of media power in Australia by the Murdochs? Do you know what the answer to that one is?

Rafael Epstein
I don’t answer for Four Corners.

Kevin Rudd
No, hang on. You’re the ABC…

Rafael Epstein
Yes, but you can’t ask me about Four Corners, Kevin Rudd; I have nothing to do with it.

Kevin Rudd
The answer is 25 years. So what I’m saying is that your principal investigative program has left it as too hot to handle for a quarter of a century. So to answer your question about Murdoch’s handling of state Labor premiers: if I was to look each day at the coverage in the Queensland Courier-Mail and the Melbourne Herald Sun, versus the Daily Telegraph in Sydney, where there’s a Liberal Premier, and the Adelaide Advertiser, with a Liberal Premier, against any measure it is chalk and cheese. It is 95% attack in Victoria and Queensland. And it’s 95% defense in NSW.

Rafael Epstein
Don’t you think there are some significant criticisms produced by the journalists at those newspapers that are worthy of coverage in Victoria?

Kevin Rudd
When I speak to journalists who have left, for example, the Australian newspaper, they tell me that the reason they have left is because they are sick and tired of the number of times not only has their copy been rewritten, but in fact rebalanced, with often a contrary headline put across the top of it. Frankly, we all know the joke in the Murdoch newspaper empire. And that is: the publisher has a view. It’s a deeply conservative worldview. He doesn’t like Labor governments, federal or state, that don’t doff their caps in honor of his political and financial interests. And he seeks therefore to delegitimize and remove them from office. There’s a pattern of this over 10 years.

Rafael Epstein
I’m going to ask you to give a short answer to this one if I can, because it is a new topic that I know you could talk about for half an hour. Gut feel: who’s gonna win the American election?

Kevin Rudd
I think, given I’ve lived in the US for the last five years, there is sufficient turn-off factor in relation to Trump and his handling of Coronavirus that will cause, of itself, Biden to win what will still be a contested election, one that will probably end up under appeal to the United States Senate.

Rafael Epstein
Thanks for your time.

Kevin Rudd
Good to be with you.

The post ABC Radio Melbourne: On the Coalition’s changes to the NBN appeared first on Kevin Rudd.

Planet DebianVincent Fourmond: Define a function with inline Ruby code in QSoas

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas, or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, \(x\), \(v_m\) (vm) and \(k_m\) (km), with the formula: $$\mathrm{my\_func}(x) = \frac{v_m}{1 + k_m/x}$$

You can then test that the function has been correctly defined running for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For \(x = k_m\), the result is \(v_m/2\) (here 0.5). For \(x \gg k_m\), the result is almost \(v_m\). You can use the newly defined my_func in any place you would use any ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)

To redefine my_func, just run the ruby code again with a new definition, such as:
ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end
The previous version is just erased, and all new uses of my_func will refer to your new definition.


See for yourself

The code for this example can be found there. Browse the qsoas-goodies github repository for more goodies!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Planet DebianVincent Fourmond: Release 2.2 of QSoas

The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.

Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

Worse Than FailureCodeSOD: A Random While

A bit ago, Aurelia shared with us a backwards for loop. Code which wasn’t wrong, but was just… weird. Well, now we’ve got some code which is just plain wrong, in a number of ways.

The goal of the following Java code is to generate some number of random numbers between 1 and 9, and pass them off to a space-separated file.

StringBuffer buffer = new StringBuffer();
long count = 0;
long numResults = GetNumResults();

while (count < numResults)
{
	ArrayList<BigDecimal> numbers = new ArrayList<BigDecimal>();
	while (numbers.size() < 1)
	{
		int randInt = random.nextInt(10);
		long randLong = randInt & 0xffffffffL;
		if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))
		{
			buffer.append(randLong);
			buffer.append(" ");
			numbers.add(new BigDecimal(randLong));
		}
		System.out.println("Random Integer: " + randInt + ", Long Integer: " + randLong);	
	}
	
	outFile.writeLine(buffer.toString()); 
	buffer = new StringBuffer();
	
	count++;
}

Pretty quickly, we get a sense that something is up, with the while (count < numResults) – this begs to be a for loop. It’s not wrong to while this, but it’s suspicious.

Then, right away, we create an ArrayList<BigDecimal>. There is no reasonable purpose to using a BigDecimal to hold a value between 1 and 9. But the rails don’t really start to come off until we get into the inner loop.

while (numbers.size() < 1)
	{
		int randInt = random.nextInt(10);
		long randLong = randInt & 0xffffffffL;
    if (!numbers.contains(new BigDecimal(randLong)) && (randLong != 0))
    …

This loop condition guarantees that we’ll only ever have one element in the list, which means our numbers.contains check doesn’t mean much, does it?

But honestly, that doesn’t hold a candle to the promotion of randInt to randLong, complete with an & 0xffffffffL, which guarantees… well, nothing. It’s completely unnecessary here. We might do that sort of thing when we’re bitshifting and need to mask out for certain bytes, but here it does nothing.

Also note the (randLong != 0) check. Because they use random.nextInt(10), that generates a number in the range 0–9, but we want 1 through 9, so if we draw a zero, we need to re-roll. A simple, and common solution to this would be to do random.nextInt(9) + 1, but at least we now understand the purpose of the while (numbers.size() < 1) loop- we keep trying until we get a non-zero value.

And honestly, I should probably point out that they include a println to make sure that both the int and the long versions match, but how could they not?

Nothing here is necessary. None of this code has to be this way. You don’t need the StringBuffer. You don’t need nested while loops. You don’t need the ArrayList<BigDecimal>, you don’t need the conversion between integer types. You don’t need the debugging println.
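
For contrast, here is a minimal sketch of the apparent intent, assuming outFile, random, and GetNumResults() behave the way the original suggests:

long numResults = GetNumResults();
for (long count = 0; count < numResults; count++)
{
	// nextInt(9) yields 0-8; adding 1 gives the desired 1-9 range,
	// so there's no re-roll loop, no BigDecimal, and no long conversion.
	outFile.writeLine((random.nextInt(9) + 1) + " ");
}

One number per line, with a trailing space, which is exactly what the original ends up producing anyway.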

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianRussell Coker: Qemu (KVM) and 9P (Virtfs) Mounts

I’ve tried setting up the Qemu (in this case KVM as it uses the Qemu code in question) 9P/Virtfs filesystem for sharing files to a VM. Here is the Qemu documentation for it [1].

VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=mapped-xattr,id=zz,writeout=immediate,fmode=0600,dmode=0700,mount_tag=zz"
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=passthrough,id=zz,writeout=immediate,mount_tag=zz"

Above are the 2 configuration snippets I tried on the server side. The first uses mapped xattrs (which means that all files will have the same UID/GID and on the host XATTRs will be used for storing the Unix permissions) and the second uses passthrough, which requires KVM to run as root and gives the same permissions on the host as in the VM. The advantages of passthrough are better performance through writing less metadata and having the same permissions in host and VM. The advantages of mapped XATTRs are running KVM/Qemu as non-root and a SUID file in the VM not implying a SUID file in the host.
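
On the VM side, the filesystem is then mounted via 9p using the mount_tag defined above. A sketch of the mount command (the mount point and msize value here are only illustrative):

mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 zz /mnt/virtfs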

Here is the link to Bonnie++ output comparing Ext3 on a KVM block device (stored on a regular file in a BTRFS RAID-1 filesystem on 2 SSDs on the host), an NFS share from the host from the same BTRFS filesystem, and virtfs shares of the same filesystem. The only tests that Ext3 doesn’t win are some of the latency tests; latency is based on the worst case, not the average. I expected Ext3 to win most tests, but didn’t expect it to lose any latency tests.

Here is a link to Bonnie++ output comparing just NFS and Virtfs. It’s obvious that Virtfs compares poorly, giving about half the performance on many tests. Surprisingly the only tests where Virtfs compared well to NFS were the file creation tests, which I expected Virtfs with mapped XATTRs to do poorly on due to the extra metadata.

Here is a link to Bonnie++ output comparing only Virtfs. The options are mapped XATTRs with default msize, mapped XATTRs with 512k msize (I don’t know if this made a difference; the results are within the range of random differences), and passthrough. There’s an obvious performance benefit in passthrough for the small file tests due to the lower metadata overhead, but as creating small files isn’t a bottleneck on most systems a 20% to 30% improvement in that area probably doesn’t matter much. The result from the random seeks test in passthrough is unusual; I’ll have to do more testing on that.

SE Linux

On Virtfs the XATTR used for SE Linux labels is passed through to the host. So every label used in a VM has to be valid on the host and accessible to the context of the KVM/Qemu process. That’s not really an option so you have to use the context mount option. Having the mapped XATTR mode work for SE Linux labels is a necessary feature.

Conclusion

The msize mount option in the VM doesn’t appear to do anything, and it doesn’t appear in /proc/mounts; I don’t know if it’s even supported in the kernel I’m using.

The passthrough and mapped XATTR modes give near enough performance that there doesn’t seem to be a benefit of one over the other.

NFS gives significant performance benefits over Virtfs while also using less CPU time in the VM. It has the issue of files named .nfs* hanging around if the VM crashes while programs were using deleted files. It’s also more well known; ask for help with an NFS problem and you are more likely to get advice than when asking for help with a virtfs problem.

Virtfs might be a better option for accessing databases than NFS due to its internal operation probably being a better map to Unix filesystem semantics, but running database servers on the host is probably a better choice anyway.

Virtfs generally doesn’t seem to be worth using. I had hoped for performance that was better than NFS but the only benefit I seemed to get was avoiding the .nfs* file issue.

The best options for storage for a KVM/Qemu VM seem to be Ext3 for files that are only used on one VM and for which the size won’t change suddenly or unexpectedly (particularly the root filesystem) and NFS for everything else.


Cryptogram Documented Death from a Ransomware Attack

A Düsseldorf woman died when a ransomware attack against a hospital forced her to be taken to a different hospital in another city.

I think this is the first documented case of a cyberattack causing a fatality. UK hospitals had to redirect patients during the 2017 WannaCry ransomware attack, but there were no documented fatalities from that event.

The police are treating this as a homicide.

Cryptogram On Executive Order 12333

Mark Jaycox has written a long article on the US Executive Order 12333: “No Oversight, No Limits, No Worries: A Primer on Presidential Spying and Executive Order 12,333“:

Abstract: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order signed by President Ronald Reagan that, among other things, establishes an overarching policy framework for the Executive Branch’s spying powers. Although electronic surveillance programs authorized by EO 12333 generally target foreign intelligence from foreign targets, its permissive targeting standards allow for the substantial collection of Americans’ communications containing little to no foreign intelligence value. This fact alone necessitates closer inspection.

This working draft conducts such an inspection by collecting and coalescing the various declassifications, disclosures, legislative investigations, and news reports concerning EO 12333 electronic surveillance programs in order to provide a better understanding of how the Executive Branch implements the order and the surveillance programs it authorizes. The Article pays particular attention to EO 12333’s designation of the National Security Agency as primarily responsible for conducting signals intelligence, which includes the installation of malware, the analysis of internet traffic traversing the telecommunications backbone, the hacking of U.S.-based companies like Yahoo and Google, and the analysis of Americans’ communications, contact lists, text messages, geolocation data, and other information.

After exploring the electronic surveillance programs authorized by EO 12333, this Article proposes reforms to the existing policy framework, including narrowing the aperture of authorized surveillance, increasing privacy standards for the retention of data, and requiring greater transparency and accountability.

Cryptogram Iranian Government Hacking Android

The New York Times wrote about a still-unreleased report from Check Point and the Miaan Group:

The reports, which were reviewed by The New York Times in advance of their release, say that the hackers have successfully infiltrated what were thought to be secure mobile phones and computers belonging to the targets, overcoming obstacles created by encrypted applications such as Telegram and, according to Miaan, even gaining access to information on WhatsApp. Both are popular messaging tools in Iran. The hackers also have created malware disguised as Android applications, the reports said.

It looks like the standard technique of getting the victim to open a document or application.

LongNowSleeping Beauties of Prehistory and the Present Day

Changmiania liaoningensis, buried while sleeping by a prehistoric volcano. Image Source.

Although the sensitive can feel it in all seasons, Autumn seems to thin the veil between the living and the dead. Writing from the dying cusp of summer and the longer bardo marking humankind’s uneasy passage into a new world age (a transit paradoxically defined by floating signifiers and eroded, fluid categories), it seems right to constellate a set of sleeping beauties, both extant and extinct, recently discovered and newly understood. Much like the “sleeping beauties” of forgotten scientific research, as described by Sidney Redner in his 02005 Physics Today paper and elaborated on by David Byrne’s Long Now Seminar, these finds provide an opportunity to contemplate time’s cycles — and the difference between the dead, and merely dormant.

We start 125 million years ago in the unbelievably fossiliferous Liaoning Province of China, one of the world’s finest lagerstätten (an area of unusually rich floral or faunal deposits — such as Canada’s famous Burgess Shale, which captured the transition into the first bloom of complex, hard-shelled, eye-bearing life; or the Solnhofen Limestone in Germany, from which flew the “Urvogel” feathered dinosaur Archaeopteryx, one of the most significant fossil finds in scientific history). Liaoning’s Lujiatun Beds just offered up a pair of perfectly-preserved herbivorous small dinosaurs, named Changmiania or “Sleeping Beauty” for how they were discovered buried in repose within their burrows by what was apparently volcanic ash, a kind of prehistoric Pompeii:

Changmiania in its eternal repose. Image Source.

There’s something especially poignant about flash-frozen remains that render ancient life in its sweet, quiet moments — a challenge to the reigning iconography of the Dawn Ages with their battling giants and their bloodied teeth. Like the Lovers of Valdaro, or the family of Pinacosaurus buried together huddling up against a sandstorm, Changmiania makes the alien past familiar and reminds us of the continuity of life through all its transmutations.

Similarly precious is the new discovery of a Titanosaurid (long-necked dinosaur) embryo from Late Cretaceous Patagonia. These creatures laid the largest eggs known in natural history — a requisite to house what would become some of the biggest animals to ever walk the land. Even so, their contents were so small and delicate it is a marvelous surprise to find the face of this pre-natal “Littlefoot” in such great shape, preserving what looks like an “egg tooth”-bearing structure that would have helped it break free:

The face of a baby dinosaur brings the ancient and brand new into stereoscopic focus. Image Source.

From ancient Argentina to the present day, we move from dinosaurs caught sleeping by fossilization to the “lizard popsicles” of modern reptiles who have managed to adapt to freezing night-time temperatures in their alpine environment. Legends tell of these outliers in the genus Liolaemus walking on the Perito Moreno glacier, a very un-lizard-like haunt; they’re regularly studied well above 13,000 feet, where they may be using an internal form of anti-freeze to live through blisteringly cold night-time temperatures. The Andes, young by montane standards, offer a heterogeneous environment that may function like a “species pump,” making Liolaemus one of the most diverse lizard genera; 272 distinct varieties have been described, some of which give live birth to help their young survive where eggs would just freeze solid.

Liolaemus in half-popsicle mode. Image Source.

Even further south and further back in time, mammal-like reptile Lystrosaurus hibernated in Antarctica 250 million years ago — a discovery made when examining the pulsing growth captured in the records of ringed bone in its extraordinary tusks, much like the growth rings of a redwood:

Lystrosaurus tusks looking remarkably like a redwood tree cross-section. Image Source.

This prehistoric beast, however, lived in a world predating woody trees. Toothless, beaked, and tusked, its positively foreign face nonetheless slept through winter just like modern bears and turtles…which might be why it managed to endure the unimaginable hardships of the Permo-Triassic extinction, which wiped out even more life than the meteor that killed the dinosaurs some 185 million years later. Slowing down enough to imitate the dead appears to be, poetically, a strategy for dodging the draft into their ranks. And likely living on cuisine like roots and tubers during long Antarctic nights, it may have thrived both in and on the underworld. Lystrosaurus seems to have weathered the Great Dying by playing dead and preying on the kinds of flora that could also play dead through a crisis on the surface.

Lystrosaurus in the woodless landscape of the Triassic. Image Source.

And while we’re on the subject of the blurry boundary between the worlds of life and death, Yale University researchers recently announced they managed to restore activity in some parts of a pig’s brain four hours after death. In the 18th Century when the first proto-CPR resuscitation methods were invented, humans started adding horns to coffins so the not-quite-dead could call for help upon awakening, if necessary; perhaps we’re due for more of this, now that scientists have managed to turn certain regions of a dead brain back on just by simulating blood flow with a special formula called “BrainEx.”

Revived cells in a dead pig’s brain. Image Source.

At no point did the team observe coordinated patterns of electrical activity like those now correlated with awareness, but the research may deliver new techniques for studying an intact mammal brain that lead to innovations in brain injury repair. Discoveries like these suggest that life and death, once thought a binary, is a continuum instead — a slope more shallow every year.

If life and death are ultimately separated only by the paces at which processes at which the many layers of biology align, the future seems like it will be a twilight zone: a weird and wide slow Styx akin to Arthur C. Clarke’s 3001: The Final Odyssey, rife with de-extincted mammoths and revived celebrities of history, digital uploads and doubles and uncanny androids, cryogenic mummies in space ark sarcophagi — and more than a few hibernating Luddites waiting for a sign that it is safe to re-emerge.

Rondam RamblingsCan facts be racist?

Here’s a fact: “[D]ifferences in home and neighborhood quality do not fully explain the devaluation of homes in black neighborhoods. Homes of similar quality in neighborhoods with similar amenities are worth 23 percent less ($48,000 per home on average, amounting to $156 billion in cumulative losses) in majority black neighborhoods, compared to those with very few or no black residents.” (And

Planet DebianSteve Kemp: Using a FORTH-like language for something useful

So my previous post was all about implementing a simple FORTH-like language. Of course the obvious question is then "What do you do with it"?

So I present one possible use - turtle-graphics:

\ Draw a square of the given length/width
: square
  dup dup dup dup
  4 0 do
    forward
    90 turn
  loop
;

\ pen down
1 pen

\ move to the given pixel
100 100 move

\ draw a square of width 50 pixels
50 square

\ save the result (png + gif)
save

Exciting times!

Worse Than FailureCodeSOD: A Cutt Above

We just discussed ViewState last week, and that may have inspired Russell F to share with us this little snippet.

private ConcurrentQueue<AppointmentCuttOff> lstAppointmentCuttOff { get { object o = ViewState["lstAppointmentCuttOff"]; if (o == null) return null; else return (ConcurrentQueue<AppointmentCuttOff>)o; } set { ViewState["lstAppointmentCuttOff"] = value; } }

This pattern is used for pretty much all of the ViewState data that this code interacts with, and if you look at the null check, you can see that it's unnecessary. Our code checks for a null, and if we have one… returns null. The entire get block could just be: return (ConcurrentQueue<AppointmentCuttOff>)ViewState["lstAppointmentCuttOff"]

The bigger glitch here is the data type. While there is a queue of appointments, that queue is never accessed across threads, so there's no need for a threadsafe ConcurrentQueue.
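
Putting both fixes together, a minimal sketch of what the property could look like, keeping the original name, and assuming nothing else really does share this queue across threads:

private Queue<AppointmentCuttOff> lstAppointmentCuttOff
{
    // ViewState returns null for a missing key and the cast preserves it,
    // so no explicit null check is needed.
    get { return (Queue<AppointmentCuttOff>)ViewState["lstAppointmentCuttOff"]; }
    set { ViewState["lstAppointmentCuttOff"] = value; }
}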

But I really love the name of the variable we store in ViewState. We have Hungarian notation, which calls it a lst, which isn't technically correct, though it is iterable, so maybe that's what they meant, but if the point of Hungarian notation is to make the code more clear, this isn't helping.

But what I really love is that these are CuttOffs, which just sounds like some retail brand attempting to sell uncomfortably short denim. It'll be next year's summer trend, mark my words!

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianRuss Allbery: Review: Unconquerable Sun

Review: Unconquerable Sun, by Kate Elliott

Series: Sun Chronicles #1
Publisher: Tor
Copyright: 2020
ISBN: 1-250-19725-2
Format: Kindle
Pages: 526

Sun is the daughter and heir of the mercurial Queen-Marshal Eirene, ruler of the Republic of Chaonia. Chaonia, thanks to Eirene and her ancestors, has carved out a fiercely independent position between the Yele League and the Phene Empire. Sun's father, Prince João, is one of Eirene's three consorts, all chosen for political alliances to shore up that fragile position. João is Gatoi, a civilization of feared fighters and supposed barbarians from outside Chaonia who normally ally with the Phene, which complicates Sun's position as heir. Sun attempts to compensate for that by winning battles for the Republic, following in the martial footsteps of her mother.

The publisher's summary of this book is not great (I'm a huge fan of Princess Leia, but that is... not the analogy that comes to mind), so let me try to help. This is gender-swapped Alexander the Great in space. However, it is gender-swapped Alexander the Great in space with her Companions, which means the DNA of this novel is half space opera and half heist story (without, to be clear, an actual heist, although there are some heist-like maneuvers). It's also worth mentioning that Sun, like Alexander, is not heterosexual.

The other critical thing to know before reading, mostly because it will get you through the rather painful start, is that the most interesting viewpoint character in this book is not Sun, the Alexander analogue. It's Persephone, who isn't introduced until chapter seven.

Significant disclaimer up front: I got a reasonably typical US grade school history of Alexander the Great, which means I was taught that he succeeded his father, conquered a whole swath of the middle of the Eurasian land mass at a very young age, and then died and left his empire to his four generals who promptly divided it into four uninteresting empires that no one's ever heard of, and that's why Rome is more important than Greece. (I put in that last bit to troll one specific person.)

I am therefore not the person to judge the parallels between this story and known history, or to notice any damage done to Greek pride, or to pick up on elements that might cause someone with a better grasp of that history to break out in hives. I did enough research to know that one scene in this book is lifted directly out of Alexander's life, but I'm not sure how closely the other parallels track. Yele is probably southern Greece and Phene is probably Persia, but I'm not certain even of that, and some of the details don't line up. If I had to hazard a guess, I'd say that Elliott has probably mangled history sufficiently to make it clear that this isn't intended to be a retelling, but if the historical parallels are likely to bother you, you may want to do more research before reading.

What I can say is that the space opera setup, while a bit stock, has all the necessary elements to make me happy. Unconquerable Sun is firmly in the "lost Earth" tradition: The Argosy fleet fled the now-mythical Celestial Empire and founded a new starfaring civilization without any contact with their original home. Eventually, they invented (or discovered; the characters don't know) the beacons, which allow for instantaneous travel between specific systems without the long (but still faster-than-light) journeys of the knnu drive. More recently, the beacon network has partly collapsed, cutting off the characters' known world from the civilization that was responsible for the beacons and guarded their secrets. It's a fun space opera history with lots of lost knowledge to reference and possibly discover, and with plot-enabling military choke points at the surviving beacons that link multiple worlds.

This is all background to the story, which is the ongoing war between Chaonia and the Phene Empire mixed with cutthroat political maneuvering between the great houses of the Chaonian Republic. This is where the heist aspects come in. Each house sends one representative to join the household of the Queen-Marshal and (more to the point for this story) another to join her heir. Sun has encouraged the individual and divergent talents of her Companions and their cee-cees (an unfortunate term that I suspect is short for Companion's Companion) and forged them into a good working team. A team that's about to be disrupted by the maneuverings of a rival house and the introduction of a new team member whom no one wants.

A problem with writing tactical geniuses is that they often aren't good viewpoint characters. Sun's tight third-person chapters, which make up a little less than half the book, advance the plot and provide analysis of the interpersonal dynamics of the characters, but aren't the strength of the story. That lies with the interwoven first-person sections that follow Persephone, an altogether more interesting character.

Persephone is the scion of the house that is Sun's chief rival, but she has no interest in being part of that house or its maneuverings. When the story opens, she's a cadet in a military academy for recruits from the commoners, having run away from home, hidden her identity, and won a position through the open entrance exams. She of course doesn't stay there; her past catches up with her and she gets assigned to Sun, to a great deal of mutual suspicion. She also is assigned an impeccably dressed and stunningly beautiful cee-cee, Tiana, who has her own secrets and who was my favorite character in the book.

Somewhat unusually for the space opera tradition, this is a book that knows that common people exist and have interesting lives. It's primarily focused on the ruling houses, but that focus is not exclusive and the rulers do not have a monopoly on competence. Elliott also avoids narrowing the political field too far; the Gatoi are separate from the three rival powers, and there are other groups with traditions older than the Chaonian Republic and their own agendas. Sun and her Companions are following a couple of political threads, but there is clearly more going on in this world than that single plot.

This is exactly the kind of story I think of when I think space opera. It's not doing anything that original or groundbreaking, and it's not going to make any of my lists of great literature, but it's a fun romp with satisfyingly layered bits of lore, a large-scale setting with lots of plot potential, and (once we get through the confusing and somewhat tedious process of introducing rather too many characters in short succession) some great interpersonal dynamics. It's the kind of book in which the characters are in the middle of decisive military action in an interstellar war and are also near-teenagers competing for ratings in an ad hoc reality TV show, primarily as an excuse to create tactical distractions for Sun's latest scheme. The writing is okay but not great, and the first few chapters have some serious infodumping problems, but I thoroughly enjoyed the whole book and will pre-order the sequel.

One Amazon review complained that Unconquerable Sun is not a space opera like Hyperion or Use of Weapons. That is entirely true, but if that's your standard for space opera, the world may be a disappointing place. This is a solid entry in a subgenre I love, with some great characters, sarcasm, competence porn, plenty of pages to keep turning, a few twists, and the promise of more to come. Recommended.

Followed by the not-yet-published Furious Heaven.

Rating: 7 out of 10

Cryptogram Interview with the Author of the 2000 Love Bug Virus

No real surprises, but we finally have the story.

The story he went on to tell is strikingly straightforward. De Guzman was poor, and internet access was expensive. He felt that getting online was almost akin to a human right (a view that was ahead of its time). Getting access required a password, so his solution was to steal the passwords from those who’d paid for them. Not that de Guzman regarded this as stealing: He argued that the password holder would get no less access as a result of having their password unknowingly “shared.” (Of course, his logic conveniently ignored the fact that the internet access provider would have to serve two people for the price of one.)

De Guzman came up with a solution: a password-stealing program. In hindsight, perhaps his guilt should have been obvious, because this was almost exactly the scheme he’d mapped out in a thesis proposal that had been rejected by his college the previous year.

,

Planet DebianKees Cook: security things in Linux v5.7

Previously: v5.6

Linux v5.7 was released at the end of May. Here’s my summary of various security things that caught my attention:

arm64 kernel pointer authentication
While the ARMv8.3 CPU “Pointer Authentication” (PAC) feature landed for userspace already, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP, which protects the return address as it is saved to the stack (signed with the stack pointer as a modifier), similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well.

BPF LSM
The kernel’s Linux Security Module (LSM) API provides a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting).
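
For a flavor of what this enables, here is a minimal sketch of a BPF LSM program (assuming libbpf, a BTF-generated vmlinux.h, and a CONFIG_BPF_LSM=y kernel; the hook is real, but the inode number and policy are invented for illustration):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define EPERM 1

char LICENSE[] SEC("license") = "GPL";

/* Attach to the file_open LSM hook; a negative errno denies the open. */
SEC("lsm/file_open")
int BPF_PROG(deny_example_inode, struct file *file)
{
    /* Hypothetical policy: refuse to open one specific inode. */
    if (file->f_inode->i_ino == 1234)
        return -EPERM;
    return 0; /* allow */
}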

execve() deadlock refactoring
There have been a number of long-standing races in the kernel’s process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring their uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn’t be able to.

slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED‘s freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer’s location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up.
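
The obfuscation itself is tiny; a simplified sketch of the idea, modeled on slub's freelist_ptr() after the hardening (not a drop-in copy of the kernel code):

static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
                                 unsigned long ptr_addr)
{
    /* XOR the stored pointer with a per-cache secret and the
     * byte-swapped storage address, so a single leaked value does
     * not reveal the real freelist pointer. */
    return (void *)((unsigned long)ptr ^ s->random ^ swab(ptr_addr));
}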

RISCV page table dumping
Following v5.6’s generic page table dumping work, Zong Li landed the RISCV page dumping code. This means it’s much easier to examine the kernel’s page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables.
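
For example, on such a debug kernel the layout can be inspected with something like this (a sketch; it assumes debugfs is not already mounted):

mount -t debugfs none /sys/kernel/debug
cat /sys/kernel/debug/kernel_page_tables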

array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don’t do this:

int foo[5];
...
foo[8] = bar;   /* out of bounds: valid indexes are 0 through 4 */

The long version gets complicated by the evolution of “flexible array” structure members, so we’ll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, they don’t catch it in open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to “fake” flexible array members. Before flexible arrays were standardized, GNU C supported “zero sized” array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the “header” in front of some data blob that could be addressable through the last structure member:

/* 1-element array */
struct foo {
    ...
    char contents[1];
};

/* GNU C extension: 0-element array */
struct foo {
    ...
    char contents[0];
};

/* C standard: flexible array */
struct foo {
    ...
    char contents[];
};

instance = kmalloc(sizeof(struct foo) + content_size);

Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva’s goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel’s deprecation docs.
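
As a sketch of where such a conversion ends up, reusing the example above and the kernel's struct_size() helper from <linux/overflow.h> to keep the allocation size arithmetic overflow-safe:

/* after conversion: flexible array plus overflow-safe size calculation */
struct foo {
    ...
    char contents[];
};

instance = kmalloc(struct_size(instance, contents, content_size), GFP_KERNEL);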

However, that will only catch the “visible at compile time” cases. For runtime checking, the Undefined Behavior Sanitizer has an option that adds runtime array bounds checking, catching things like this, where the compiler cannot statically analyze the index values:

int foo[5];
...
for (i = 0; i < some_argument; i++) {
    ...
    foo[i] = bar;   /* out of bounds once i reaches 5 */
    ...
}

It was, however, not separately selectable (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP which effectively turns these conditions into a BUG() without needing additional sysctl settings.
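
Putting that together, a production kernel wanting this defense might carry a Kconfig fragment along these lines (a sketch built from the options named above):

CONFIG_UBSAN=y
CONFIG_UBSAN_BOUNDS=y
# optional: trap instead of WARN(), avoiding the need for panic_on_warn=1
CONFIG_UBSAN_TRAP=y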

Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:

/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);

The risk is that if these calls eventually walk off the end of the string buffer, it will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:

/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);

So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array, and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes.
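
For comparison, here is the earlier sequence rewritten with scnprintf() (a sketch reusing the names from the example above); because the return value is what was actually written, "wrote" can never exceed "available" and the pointer arithmetic stays in bounds:

wrote  = scnprintf(string, available, "Foo: %s\n", foo);
/* truncates safely: "wrote" only grows by what landed in "string" */
wrote += scnprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
wrote += scnprintf(string + wrote, available - wrote, "Baz: %s\n", baz);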

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

Cryptogram Matt Blaze on OTP Radio Stations

Matt Blaze discusses (also here) an interesting mystery about a Cuban one-time-pad radio station, and a random number generator error that probably helped arrest a pair of Russian spies in the US.

Planet DebianAntoine Beaupré: PSA: Mailman used to harass people

It seems that Mailman instances are being abused to harass people with subscribe spam. If some random people complain to you that they "never wanted to subscribe to your mailing list", you may be a victim of that attack, even if you run the latest Mailman 2.

TL;DR: IKR! HOW DO I FIX THIS!?

Make sure you have SUBSCRIBE_FORM_SECRET set in your mailman configuration:

SECRET=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30)
echo "SUBSCRIBE_FORM_SECRET = '$SECRET'" >> /etc/mailman/mm.cfg

This will add a magic token to all of Mailman's web forms, forcing the attacker to at least fetch a token before asking for registration. There are, of course, other ways of performing the attack even then, but it's more expensive than a single request for the attacker and keeps most of the junk out.

Other solutions

I originally deployed a different fix, using referrer checks and an IP block list:

RewriteMap hosts-deny  txt:/etc/apache2/blocklist.txt
RewriteCond ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]
RewriteCond %{HTTP_REFERER} !^https://lists.torproject.org/$ [NC]
RewriteRule ^/cgi-bin/mailman/subscribe/ - [F]
# see also https://www.w3.org/TR/referrer-policy/#referrer-policy-origin
Header always set Referrer-Policy "origin"

I kept those restrictions in place because it keeps the spammers from even hitting the Mailman CGI, which is useful to preserve our server resources. But if "they" escalate with smarter crawlers, the block list will still be useful.
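
For reference, a txt-type RewriteMap is just whitespace-separated key/value pairs, one per line; here is a sketch of /etc/apache2/blocklist.txt using documentation-range addresses (any value other than NOT-FOUND trips the deny rule above):

# offending address   value (anything but NOT-FOUND)
192.0.2.1             deny
198.51.100.7          deny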

You can use this query to extract the top 10 IP addresses used for subscription attempts:

awk '{ print $NF }' /var/log/mailman/subscribe | sort | uniq -c | sort -n | tail -10  | awk '{ print $2 " " $1 }'

Note that this might include email-based registration, but in our logs those are extremely rare: only two in three weeks, out of over 73,000 requests. I also use this to keep an eye on the logs:

tail -f  /var/log/mailman/subscribe /var/log/apache2/lists.torproject.org-access.log | grep -v 'GET /pipermail/'

The server-side mitigations might also be useful if you happen to run an extremely old version of Mailman (pre-2.1.18), but 2.1.18 is now over 6 years old and part of every supported Debian release out there (all the way back to Debian 8 jessie).

Why does that attack work?

Because Mailman 2 doesn't have CSRF tokens in its forms by default, anyone can send a POST request to /mailman/subscribe/LISTNAME to have Mailman send an email to the user. In the old "Internet is for nice people" universe, that wasn't a problem: all it does is ask the victim if they want to subscribe to LISTNAME. Innocuous, right?

But in the brave, new, post-Eternal-September, "Internet is for stupid" universe, some assholes think it's a good idea to make a form that collects hundreds of mailing list URLs and spam them through an iframe. To see what that looks like, you can look at the rendered source code behind samedyfreeday.co.uk (not linking to avoid promoting it). That site does what is basically a distributed cross-site request forgery (CSRF) attack against Mailman servers.

Obviously, CSRF protection should be enabled by default in Mailman, but there you go. Hopefully this will help some folks...

(The latest Mailman 3 release doesn't suffer from such idiotic defaults and ships with proper CSRF protection out of the box.)

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 15)

Here’s part fifteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet DebianJonathan McDowell: Mainline Linux on the MikroTik RB3011

I upgraded my home internet connection to fibre (FTTP) last October. I’m still on an 80M/20M service, so it’s no faster than my old VDSL FTTC connection was, and as a result for a long time I continued to use my HomeHub 5A running OpenWRT. However the FTTP ONT meant I was using up an additional ethernet port on the router, and I was already short, so I ended up with a GigE switch in use as well. Also my wifi is handled by a UniFi, which takes its power via Power-over-Ethernet. That meant I had a router, a switch and a PoE injector all in close proximity. I wanted to reduce the number of devices, and ideally upgrade to something that could scale once I decide to upgrade my FTTP service speed.

Looking around I found the MikroTik RB3011UiAS-RM, which is a rack mountable device with 10 GigE ports (plus an SFP slot) and a dual core Qualcomm IPQ8064 ARM powering it. There’s 1G RAM and 128MB NAND flash, as well as a USB3 port. It also has PoE support. On paper it seemed like an ideal device. I wasn’t particularly interested in running RouterOS on it (the provided software), but that’s based on Linux and there was some work going on within OpenWRT to add support, so it seemed like a worthwhile platform to experiment with (what, you expected this to be about me buying an off the shelf device and using it with only the supplied software?). As an added bonus a friend said he had one he wasn’t using, and was happy to sell it to me for a bargain price.

RB3011 router in use

I did try out RouterOS to start with, but I didn’t find it particularly compelling. I’m comfortable configuring firewalling and routing at a Linux command line, and I run some additional services on the router like my MQTT broker, and mqtt-arp, my wifi device presence monitor. I could move things around such that they ran on the house server, but I consider them core services and as a result am happier with them on the router.

The first step was to get something booting on the router. Luckily it has an RJ45 serial console port on the back, and a reasonably featured bootloader that can manage to boot via tftp over the network. It wants an ELF binary rather than a plain kernel, but Sergey Sergeev had done the hard work of getting u-boot working for the IPQ8064, which meant I could just build normal u-boot images to try out.

Linux upstream already had basic support for a lot of the pieces I was interested in. There’s a slight fudge around AUTO_ZRELADDR because the network coprocessors want a chunk of memory at the start of RAM, but there’s ongoing discussions about how to handle this cleanly that I’m hopeful will eventually mean I can drop that hack. Serial, ethernet, the QCA8337 switches (2 sets of 5 ports, tied to different GigE devices on the processor) and the internal NOR all had drivers, so it was a matter of crafting an appropriate DTB to get them working. That left niggles.

First, the second switch is hooked up via SGMII. It turned out the IPQ806x stmmac driver didn’t initialise the clocks in this mode correctly, and neither did the qca8k switch driver. So I need to fix up both of those (Sergey had handled the stmmac driver, so I just had to clean up and submit his patch). Next it turned out the driver for talking to the Qualcomm firmware (SCM) had been updated in a way that broke the old method needed on the IPQ8064. Some git archaeology figured that one out and provided a solution. Ansuel Smith helpfully provided the DWC3 PHY driver for the USB port. That got me to the point I could put a Debian armhf image onto a USB stick and mount that as root, which made debugging much easier.

At this point I started to play with configuring up the device to actually act as a router. I make use of a number of VLANs on my home network, so I wanted to make sure I could support those. Turned out the stmmac driver wasn’t happy reconfiguring its MTU because the IPQ8064 driver doesn’t configure the FIFO sizes. I found what seem to be the correct values and plumbed them in. Then the qca8k driver only supported port bridging. I wanted the ability to have a trunk port to connect to the upstairs switch, while also having ports that only had a single VLAN for local devices. And I wanted the switch to handle this rather than requiring the CPU to bridge the traffic. Thankfully it’s easy to find a copy of the QCA8337 datasheet and the kernel Distributed Switch Architecture is pretty flexible, so I was able to implement the necessary support.
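
To give an idea of the target configuration, here is a rough sketch using a VLAN-aware Linux bridge on top of DSA ports (the port names and VLAN IDs are invented; the real names come from the DTB):

ip link add name br0 type bridge vlan_filtering 1
ip link set lan1 master br0                    # trunk to the upstairs switch
bridge vlan add dev lan1 vid 10
bridge vlan add dev lan1 vid 20
ip link set lan2 master br0                    # access port for a local device
bridge vlan add dev lan2 vid 10 pvid untagged

With DSA doing its job, the switch hardware forwards tagged traffic between such ports without the CPU ever touching it.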

I stuck with Debian on the USB stick for actually putting the device into production. It makes it easier to fix things up if necessary, and the USB stick allows for a full Debian install which would be tricky on the 128M of internal NAND. That means I can use things like nftables for my firewalling, and use the standard Debian packages for things like collectd and mosquitto. Plus for debug I can fire up things like tcpdump or tshark. Which ended up being useful because when I put the device into production I started having weird IPv6 issues that turned out to be a lack of proper Ethernet multicast filter support in the IPQ806x ethernet device. The driver would try to set up the multicast filter for the IPv6 NDP related packets, but it wouldn’t actually work. The fix was to fall back to just receiving all multicast packets - this is what the vendor driver does.

Most of this work will be present once the 5.9 kernel is released - the basics are already in 5.8. The pieces not yet queued up (that I can think of) are the following:

  • stmmac IPQ806x FIFO sizes. I sent out an RFC patch for these, but didn’t get any replies. I probably just need to submit this.
  • NAND. This is missing support for the QCOM ADM DMA engine. I’ve sent out the patch I found to enable this, and have had some feedback, so I’m hopeful it will get in at some point.
  • LCD. AFAICT the LCD is an ST7735 device, which has kernel support, but I haven’t spent serious effort getting the SPI configuration to work.
  • Touchscreen. Again, this seems to be a zt2046q or similar, which has a kernel driver, but the basic attempts I’ve tried don’t get any response.
  • Proper SFP functionality. The IPQ806x has a PCS module, but the stmmac driver doesn’t have an easy way to plumb this in. I have ideas about how to get it working properly (and it can be hacked up with a fixed link config) but it’s not been a high priority.
  • Device tree additions. Some of the later bits I’ve enabled aren’t yet in the mainline RB3011 DTB. I’ll submit a patch for that at some point.

Overall I consider the device a success, and it’s been entertaining getting it working properly. I’m running a mostly mainline kernel, it’s handling my house traffic without breaking a sweat, and the fact it’s running Debian makes it nice and easy to throw more things on it as I desire. However it turned out the RB3011 isn’t as perfect a device as I’d hoped. The PoE support is passive, and the UniFi wants 802.3af. So I was going to end up with 2 devices. As it happened I picked up a cheap D-Link DGS-1210-10P switch, which provides the PoE support as well as some additional switch ports. Plus it runs Linux, so more on that later…

Google AdsenseA guide to common AdSense policy questions

A guide to understand the digital advertising policies and resolving policy violations for AdSense.

Sociological ImagesSurvivors or Victims?

The #MeToo movement that began in 2017 has reignited a long debate about how to name people who have had traumatic experiences. Do we call individuals who have experienced war, cancer, crime, or sexual violence “victims”? Or should we call them “survivors,” as recent activists like #MeToo founder Tarana Burke have advocated?

Strong arguments can be raised for both sides. In the sexual violence debate, advocates of “survivor” argue the term places women at the center of their own narrative of recovery and growth. Defenders of victim language, meanwhile, argue that victim better describes the harm and seriousness of violence against women and identifies the source of violence in systemic misogyny and cultures of patriarchy.

Unfortunately, while there has been much debate about the use of these terms, there has been little documentation of how service and advocacy organizations that work with individuals who have experienced trauma actually use these terms. Understanding the use of survivor and victim is important because it tells us what these terms mean in practice and where barriers to change are.

We sought to remedy this problem in a recent paper published in Social Currents. We used data from nonprofit mission statements to track language change among 3,756 nonprofits that once talked about victims in the 1990s. We found, in general, that relatively few organizations adopted survivor as a way to talk about trauma even as some organizations have moved away from talking about victims. However, we also found that, increasingly, organizations that focus on issues related to women tend to use victim and survivor interchangeably. In contrast, organizations that do not work with women appear to be moving away from both terms.

These findings contradict the way we usually think about “survivor” and “victim” as opposing terms. Does this mean that survivor and victim are becoming the “extremely reduced form” through which women are able to enter the public sphere? Or does it mean that feminist service providers are avoiding binary thinking? These questions, as well as questions about the strategic, linguistic, and contextual reasons that organizations choose victim- or survivor-based language, give advocates and scholars of language plenty to re-examine.

Andrew Messamore is a PhD student in the Department of Sociology at the University of Texas at Austin. Andrew studies changing modes of local organizing at work and in neighborhoods and how the ways people associate shapes community, public discourse, and economic inequality in the United States.

Pamela Paxton is the Linda K. George and John Wilson Professor of Sociology at The University of Texas at Austin. With Melanie Hughes and Tiffany Barnes, she is the co-author of the 2020 book, Women, Politics, and Power: A Global Perspective.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: Exceptional Standards Compliance

When we're laying out code standards and policies, we are, in many ways, relying on "policing by consent". We are trying to establish standards for behavior among our developers, but we can only do this with their consent. This means our standards have to have clear value, have to be applied fairly and equally. The systems we build to enforce those standards are meant to reduce conflict and de-escalate disagreements, not create them.

But that doesn't mean there won't always be developers who resist following the agreed upon standards. Take, for example, Daniel's co-worker. Their CI process also runs a static analysis step against their C# code, which lets them enforce a variety of coding standards.

One of those standards is: "Catch specific exceptions. Don't catch the generic Exception type unless explicitly necessary." If it is explicitly necessary, their CI system attaches "Comments" (not code comments) to the commit, so all you need to do is click the "resolve" button and provide a brief explanation of why it was necessary.

This wouldn't be an… exceptional standard. Specific is always better than vague, and in this case, the rule isn't hard and fast: you're allowed to violate it if you can explain why.

But explaining yourself sounds like a lot of work. Wouldn't it be easier to try and fool the static analysis tool?

try { {...} } catch (Exception ex) when (ex is Exception exception) { {...} }

C#'s catch block supports a when operator, which is meant to filter based on properties of the exception. The ex is Exception exception clause is a pattern match and also a cast: it's true if the type of ex is Exception, and it also casts ex to Exception and stores the result in exception.
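
For contrast, here is a sketch of what when filters are actually for; the handler and helper names are hypothetical, not from the article:

using System;
using System.Net.Http;
using System.Net.Sockets;

class RetryExample
{
    static void Run(Action processRequest, Action retry)
    {
        try
        {
            processRequest();
        }
        // The filter runs before the handler is chosen; exceptions that
        // don't match keep propagating with their stack intact.
        catch (HttpRequestException ex) when (ex.InnerException is SocketException)
        {
            retry();
        }
    }
}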

Or to word it another way, we catch the exception if it is of the type Exception but only when the type is Exception in which case we cast the exception which we know is an Exception to Exception and store it in exception, and I take exception to all of that.

Presumably, they'll change the rule in the CI system to exclude these, but hopefully they'll also have a talk with the developer responsible about the purpose of standards. Maybe they'll get some "standards by consent", or maybe somebody'll be looking for a new job.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianRuss Allbery: Review: Lower Ed

Review: Lower Ed, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2017
Printing: 2018
ISBN: 1-62097-472-X
Format: Kindle
Pages: 217

Lower Ed (subtitled The Troubling Rise of For-Profit Colleges in the New Economy) is the first book by sociologist Tressie McMillan Cottom. (I previously reviewed her second book, the excellent essay collection Thick.) It is a deep look at the sociology of for-profit higher education in the United States based on interviews with students and executives, analysis of Wall Street filings, tests of the admissions process, and her own personal experiences working for two of the schools. One of the questions that McMillan Cottom tries to answer is why students choose to enroll in these institutions, particularly the newer type of institution funded by federal student loans and notorious for being more expensive and less valuable than non-profit colleges and universities.

I was hesitant to read this book because I find for-profit schools depressing. I grew up with the ubiquitous commercials, watched the backlash develop, and have a strongly negative impression of the industry, partly influenced by having worked in traditional non-profit higher education for two decades. The prevailing opinion in my social group is that they're a con job. I was half-expecting a reinforcement of that opinion by example, and I don't like reading infuriating stories about people being defrauded.

I need not have worried. This is not that sort of book (nor, in retrospect, do I think McMillan Cottom would approach a topic from that angle). Sociology is broader than reporting. Lower Ed positions for-profit colleges within a larger social structure of education, credentialing, and changes in workplace expectations; takes a deep look at why they are attractive to their students; and humanizes and complicates the motives and incentives of everyone involved, including administrators and employees of for-profit colleges as well as the students. McMillan Cottom does of course talk about the profit motive and the deceptions surrounding that, but the context is less that of fraud that people are unable to see through and more a balancing of the drawbacks of a set of poor choices embedded in institutional failures.

One of my metrics for a good non-fiction book is whether it introduces me to a new idea that changes how I analyze the world. Lower Ed does that twice.

The first idea is the view of higher education through the lens of risk shifting. It used to be common for employers to hire people without prior job-specific training and do the training in-house, possibly through an apprenticeship structure. More notably, once one was employed by a particular company, the company routinely arranged or provided ongoing training. This went hand-in-hand with a workplace culture of long tenure, internal promotion, attempts to avoid layoffs, and some degree of mutual loyalty. Companies expected to invest significantly in an employee over their career and thus also had an incentive to retain that employee rather than train someone for a competitor.

However, from a purely financial perspective, this is a risk and an inefficiency, similar to the risk of carrying a large inventory of parts and components. Companies have responded to investor-driven focus on profits and efficiency by reducing overhead and shifting risk. This leads to the lean supply chain, where no one pays for parts to sit around in warehouses and companies aren't caught with large stockpiles of now-useless components, but which is more sensitive to any disruption (such as from a global pandemic). And, for employment, it leads to a desire to hire pre-trained workers, retain only enough workers to do the current amount of work, and replace them with new workers who already have appropriate training rather than retrain them.

The effect of the corporate decision to only hire pre-trained employees is to shift the risk and expense of training from the company to the prospective employee. The individual has to seek out training at their own expense in the hope (not guarantee) that at the conclusion of that training they will get or retain a job. People therefore turn to higher education to both provide that training and to help them decide what type of training will eventually be valuable. This has a long history with certain professional fields (doctors and lawyers, for example), but the requirements for completing training in those fields are relatively clear (a professional license to practice) and the compensation reflects the risk. What's new is the shift of training risk to the individual in more mundane jobs, without any corresponding increase in compensation.

This, McMillan Cottom explains, is the background for the growth in demand for higher education in general and the type of education offered by for-profit colleges in particular. Workers who in previous eras would be trained by their employers are now responsible for their own training. That training is no longer judged by the standards of a specific workplace, but is instead evaluated by a hiring process that expects constant job-shifting. This leads to increased demand by both workers and employers for credentials: some simple-to-check certificate of completion of training that says that this person has the skills to immediately start doing some job. It also leads to a demand for more flexible class hours, since the student is now often someone older with a job and a family to balance. Their ongoing training used to be considered a cost of business and happen during their work hours; now it is something they have to fit around the contours of their life because their employer has shifted that risk to them.

The risk-shifting frame makes sense of the "investment" language so common in for-profit education. In this job economy, education as investment is not a weird metaphor for the classic benefits of a liberal arts education: broadened perspective, deeper grounding in philosophy and ethics, or heightened aesthetic appreciation. It's an investment in the literal financial sense; it is money that you spend now in order to get a financial benefit (a job) in the future. People have to invest in their own training because employers are no longer doing so, but still require the outcome of that investment. And, worse, it's primarily a station-keeping investment. Rather than an optional expenditure that could reap greater benefits later, it's a mandatory expenditure to prevent, at best, stagnation in a job paying poverty wages, and at worst the disaster of unemployment.

This explains renewed demand for higher education, but why for-profit colleges? We know they cost more and have a worse reputation (and therefore their credentials have less value) than traditional non-profit colleges. Flexible hours and class scheduling explains some of this but not all of it. That leads to the second perspective-shifting idea I got from Lower Ed: for-profit colleges are very good at what they focus time and resources on, and they focus on enrolling students.

It is hard to enroll in a university! More precisely, enrolling in a university requires bureaucracy navigation skills, and those skills are class-coded. The people who need them the most are the least likely to have them.

Universities do not reach out to you, nor do they guide you through the process. You have to go to them and discover how to apply, something that is often made harder by the confusing state of many university web sites. The language and process is opaque unless other people in your family have experience with universities and can explain it. There might be someone you can reach on the phone to ask questions, but they're highly unlikely to proactively guide you through the remaining steps. It's your responsibility to understand deadlines, timing, and sequence of operations, and if you miss any of the steps (due to, for example, the overscheduled life of someone in need of better education for better job prospects), the penalty in time and sometimes money can be substantial. And admission is just the start; navigating financial aid, which most students will need, is an order of magnitude more daunting. Community colleges are somewhat easier (and certainly cheaper) than universities, but still have similar obstacles (and often even worse web sites).

It's easy for people like me, who have long professional expertise with bureaucracies, family experience with higher education, and a support network of people to nag me about deadlines, to underestimate this. But the application experience at a for-profit college is entirely different in ways far more profound than I had realized. McMillan Cottom documents this in detail from her own experience working for two different for-profit colleges and from an experiment where she indicated interest in multiple for-profit colleges and then stopped responding before signing admission paperwork. A for-profit college is fully invested in helping a student both apply and get financial aid, devotes someone to helping them through that process, does not expect them to understand how to navigate bureaucracies or decipher forms on their own, does not punish unexpected delays or missed appointments, and goes to considerable lengths to try to keep anyone from falling out of the process before they are enrolled. They do not expect their students to already have the skills that one learns from working in white-collar jobs or from being surrounded by people who do. They provide the kind of support that an educational institution should provide to people who, by definition, don't understand something and need to learn.

Reading about this was infuriating. Obviously, this effort to help people enroll is largely for predatory reasons. For-profit schools make their money off federal loans and they don't get that money unless they can get someone to enroll and fill out financial paperwork (and to some extent keep them enrolled), so admissions is their cash cow and they act accordingly. But that's not why I found it infuriating; that's just predictable capitalism. What I think is inexcusable is that nothing they do is that difficult. We could be doing the same thing for prospective community college students but have made the societal choice not to. We believe that education is valuable, we constantly advocate that people get more job training and higher education, and yet we demand prospective students navigate an unnecessarily baroque and confusing application process with very little help, and then stereotype and blame them for failing to do so.

This admission support is not a question of resources. For-profit colleges are funded almost entirely by federally-guaranteed student loans. We are paying them to help people apply. It is, in McMillan Cottom's term, a negative social insurance program. Rather than buffering people against the negative effects of risk-shifting of employers by helping them into the least-expensive and most-effective training programs (non-profit community colleges and universities), we are spending tax dollars to enrich the shareholders of for-profit colleges while underfunding the alternatives. We are choosing to create a gap that routes government support to the institution that provides worse training at higher cost but is very good at helping people apply. It's as if the unemployment system required one to use payday lenders to get one's unemployment check.

There is more in this book I want to talk about, but this review is already long enough. Suffice it to say that McMillan Cottom's analysis does not stop with market forces and the admission process, and the parts of her analysis that touch on my own personal experience as someone with a somewhat unusual college path ring very true. Speaking as a former community college student, the discussion of class credit transfer policies and the way that institutional prestige gatekeeping and the desire to push back against low-quality instruction becomes a trap that keeps students in the for-profit system deserves another review this length. So do the implications of risk-shifting and credentialism on the morality of "cheating" on schoolwork.

As one would expect from the author of the essay "Thick" about bringing context to sociology, Lower Ed is personal and grounded. McMillan Cottom doesn't shy away from including her own experiences and being explicit about her sources and research. This is backed up by one of the best methodological notes sections I've seen in a book. One of the things I love about McMillan Cottom's writing is that it's solidly academic, not in the sense of being opaque or full of jargon (the text can be a bit dense, but I rarely found it hard to follow), but in the sense of being clear about the sources of knowledge and her methods of extrapolation and analysis. She brings her receipts in a refreshingly concrete way.

I do have a few caveats. First, I had trouble following a structure and line of reasoning through the whole book. Each individual point is meticulously argued and supported, but they are not always organized into a clear progression or framework. That made Lower Ed feel at times like a collection of high-quality but somewhat unrelated observations about credentials, higher education, for-profit colleges, their student populations, their business models, and their relationships with non-profit schools.

Second, there are some related topics that McMillan Cottom touches on but doesn't expand sufficiently for me to be certain I understood them. One of the big ones is credentialism. This is apparently a hot topic in sociology and is obviously important to this book, but it's referenced somewhat glancingly and was not satisfyingly defined (at least for me). There are a few similar places where I almost but didn't quite follow a line of reasoning because the book structure didn't lay enough foundation.

Caveats aside, though, this was meaty, thought-provoking, and eye-opening, and I'm very glad that I read it. This is a topic that I care more about than most people, but if you have watched for-profit colleges with distaste but without deep understanding, I highly recommend Lower Ed.

Rating: 8 out of 10

,

Planet DebianEnrico Zini: Relationships links

The Bouletcorp » Love & Dragons is a strip I like about fairytale relationships.

There are a lot of mainstream expectations about relationships. These links challenge a few of them:

More about emotional work, some more links to follow a previous links post:

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.2: New upstream, awesome new stopwatch

Following up on the initial RcppSpdlog 0.0.1 release earlier this week, we are pumped to announce release 0.0.2. It contains upstream version 1.8.0 of spdlog, which utilizes (among other things) a new feature in the embedded fmt library, namely completely automated formatting of high-resolution time stamps. This allows for gems like this (taken from this file in the package and edited down for brevity):
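
A minimal sketch of the stopwatch idiom (assuming the spdlog 1.8.0 headers; an illustration rather than the exact file from the package):

#include <spdlog/spdlog.h>
#include <spdlog/stopwatch.h>

int main() {
    spdlog::stopwatch sw;              // starts timing on construction
    // ... do some work ...
    spdlog::info("elapsed: {}", sw);   // fmt formats the elapsed seconds
}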

What we see is all there is: One instantiates a stopwatch object, and simply references it. The rest, as they say, is magic. And we get tic/toc-like behaviour in modern C++ at essentially no cost to us (as code authors). So nice. Output from the (included in the package) function exampleRsink() (again edited down just a little):

We see that the two simple logging instances come 10 and 18 microseconds into the call.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.2 (2020-09-17)

  • Upgraded to upstream release 1.8.0

  • Switched Travis CI to using BSPM, also test on macOS

  • Added 'stopwatch' use to main R sink example

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppSpdlog page.

The only sour grapes, again, are once more over the CRAN processing. Just as 0.0.1 was delayed for no good reason for three weeks, 0.0.2 was delayed by three days just because … well that is how CRAN rules sometimes. I’d be even more mad if I had an alternative but I don’t. We remain grateful for all they do but they really could have let this one through even at a one-day update delta. Ah well, now we’re three days wiser and of course nothing changed in the package.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRussell Coker: Links September 2020

MD5 cracker, find plain text that matches MD5 hash [1].

Debian Quick Image Baker – Debian VM images for various architectures [2].

Insightful article on Scientific American about how dental and orthodontic problems are caused by our modern lifestyle [3].

Cory Doctorow wrote an insightful article for Locus Magazine about Intellectual Property [4]. He makes the case that we are losing our rights due to changes in the legal system that benefits corporations.

Anarcat has an informative blog post about how to stop people using your Mailman installation to attack others [5]. Since version 1:2.1.27-1 the Debian package of Mailman has created a SUBSCRIBE_FORM_SECRET by default on install, but any installation upgraded from an older version of the Debian package won’t have it.

Planet DebianAndy Simpkins: Using an IP camera in conference calls

My webcam has broken, something that I have been using a lot during the last few months for some reason.

A friend of mine suggested that I use the mic and camera on my mobile phone instead.  There is a simple app ‘droidcam’ that makes the phone behave as a webcam; it also has a client application to run on your PC to capture the web-stream and present it as a video device.  All well and good, but I would like to keep proprietary software off my PCs (I have a hard enough time accepting it on a phone but I have to draw a line somewhere).

I decided that there had to be a simple way to ingest that stream and present it as a local video device on a Linux box.  It turns out that it is a lot simpler than I thought it would be.  I had it working within 10 minutes!

Packages needed:  ffmpeg, v4l2loopback-utils

sudo apt-get install ffmpeg v4l2loopback-utils

Start the loop-back device:

sudo modprobe v4l2loopback
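
If you want a predictable device node and a recognisable name, the module takes parameters (a sketch; the defaults work fine too, and exclusive_caps helps some applications treat it as a plain capture device):

sudo modprobe v4l2loopback video_nr=0 card_label="DroidCam" exclusive_caps=1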

Ingest video and present on loop-back device:

ffmpeg -re -i <URL_VideoStream> -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

Where

-re
Read input at native frame rate. Mainly used to simulate a grab
device, or live input stream (e.g. when reading from a file). Should
not be used with actual grab devices or live input streams (where it
can cause packet loss). By default ffmpeg attempts to read the
input(s) as fast as possible. This option will slow down the reading
of the input(s) to the native frame rate of the input(s). It is
useful for real-time output (e.g. live streaming).
 -i <source>
In this case my phone running droidcam http://10.14.1.100:4747/video
-vcodec rawvideo
Select the output video codec to raw video (as expected for /dev/video#)
-pix_fmt yuv420p
Set pixel format. In this example, tell ffmpeg to emit planar YUV with
4:2:0 chroma subsampling (yuv420p is a pixel format, not a resolution)
-f v4l2 /dev/video0
Force the output to be in video for Linux 2 (v4l2) format bound to
device /dev/video0

,

Rondam RamblingsGame over for the USA

I would like to think that Ruth Bader Ginsburg's untimely passing is not the catastrophe that it appears to be.  I would like to think that Mitch McConnell is a man of principle, and having once said that the Senate should not confirm a Supreme Court justice in an election year he will not brazenly expose himself as a hypocrite and confirm a Supreme Court justice in an election year. I would like

,

Cryptogram CEO of NS8 Charged with Securities Fraud

The founder and CEO of the Internet security company NS8 has been arrested and “charged in a Complaint in Manhattan federal court with securities fraud, fraud in the offer and sale of securities, and wire fraud.”

I admit that I’ve never even heard of the company before.

Planet Linux AustraliaFrancois Marier: Setting up and testing an NPR modem on Linux

After acquiring a pair of New Packet Radio modems on behalf of VECTOR, I set them up on my Linux machine and ran some basic tests to check whether they could achieve the advertised 500 kbps transfer rates, which are much higher than AX.25 packet radio.

The exact equipment I used was:

Radio setup

After connecting the modems to the power supply and their respective antennas, I connected both modems to my laptop via micro-USB cables and used minicom to connect to their console on /dev/ttyACM[01]:

minicom -8 -b 921600 -D /dev/ttyACM0
minicom -8 -b 921600 -D /dev/ttyACM1

To confirm that the firmware was the latest one, I used the following command:

ready> version
firmware: 2020_02_23
freq band: 70cm

then I immediately turned off the radio:

radio off

which can be verified with:

status

Following the British Columbia 70 cm band plan, I picked the following frequency, modulation (bandwidth of 360 kHz), and power (0.05 W):

set frequency 433.500
set modulation 22
set RF_power 7

and then did the rest of the configuration for the master:

set callsign VA7GPL_0
set is_master yes
set DHCP_active no
set telnet_active no

and the client:

set callsign VA7GPL_1
set is_master no
set DHCP_active yes
set telnet_active no

and that was enough to get the two modems to talk to one another.

On both of them, I ran the following:

save
reboot

and confirmed that they were able to successfully connect to each other:

who

Monitoring RF

To monitor what is happening on the air and quickly determine whether or not the modems are chatting, you can use a software-defined radio along with gqrx with the following settings:

frequency: 433.500 MHz
filter width: user (80k)
filter shape: normal
mode: Raw I/Q

I found it quite helpful to keep this running the whole time I was working with these modems. The background "keep alive" sounds are quite distinct from the heavy traffic sounds.

IP setup

The radio bits out of the way, I turned to the networking configuration.

On the master, I set the following so that I could connect the master to my home network (192.168.1.0/24) without conflicts:

set def_route_active yes
set DNS_active no
set modem_IP 192.168.1.254
set IP_begin 192.168.1.225
set master_IP_size 29
set netmask 255.255.255.0

(My router's DHCP server is configured to allocate dynamic IP addresses from 192.168.1.100 to 192.168.1.224.)

At this point, I connected my laptop to the client using a CAT-5 network cable and the master to the ethernet switch, essentially following Annex 5 of the Advanced User Guide.

My laptop got assigned IP address 192.168.1.225 and so I used another computer on the same network to ping my laptop via the NPR modems:

ping 192.168.1.225

This gave me a round-trip time of around 150-250 ms.

Performance test

Having successfully established an IP connection between the two machines, I decided to run a quick test to measure the available bandwidth in an ideal setting (i.e. the two antennas very close to each other).

On both computers, I installed iperf:

apt install iperf

and then setup the iperf server on my desktop computer:

sudo iptables -A INPUT -s 192.168.1.0/24 -p TCP --dport 5001 -j ACCEPT
sudo iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 5001 -j ACCEPT
iperf --server

On the laptop, I set the MTU to 750 in NetworkManager and restarted the network.
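
The same change can be made from the command line (a sketch, assuming the wired profile is named "Wired connection 1"):

nmcli connection modify "Wired connection 1" 802-3-ethernet.mtu 750
nmcli connection up "Wired connection 1"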

Then I created a new user account (npr with a uid of 1001):

sudo adduser npr

and made sure that only that account could access the network by running the following as root:

# Flush all chains.
iptables -F

# Set defaults policies.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Don't block localhost and ICMP traffic.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Don't re-evaluate already accepted connections.
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Allow connections to/from the test user.
iptables -A OUTPUT -m owner --uid-owner 1001 -m conntrack --ctstate NEW -j ACCEPT

# Log anything that gets blocked.
iptables -A INPUT -j LOG
iptables -A OUTPUT -j LOG
iptables -A FORWARD -j LOG

then I started the test as the npr user:

sudo -i -u npr
iperf --client 192.168.1.8

Results

The results were as good as advertised both with modulation 22 (360 kHz bandwidth):

$ iperf --client 192.168.1.8 --time 30
------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58462 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-34.5 sec  1.12 MBytes   274 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58468 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-42.5 sec  1.12 MBytes   222 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58484 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-38.5 sec  1.12 MBytes   245 Kbits/sec

and modulation 24 (1 MHz bandwidth):

$ iperf --client 192.168.1.8 --time 30
------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58148 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-31.1 sec  1.88 MBytes   506 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58246 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.5 sec  2.00 MBytes   550 Kbits/sec

------------------------------------------------------------
Client connecting to 192.168.1.8, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.225 port 58292 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  2.00 MBytes   559 Kbits/sec

Worse Than FailureError'd: Just a Trial Run

"How does Netflix save money when making their original series? It's simple. They just use trial versions of VFX software," Nick L. wrote.

 

Chris A. writes, "Why get low quality pixelated tickets when you can have these?"

 

"Better make room! This USB disk enclosure is ready for supper and som really mega-bytes!" wrote Stuart L.

 

Scott writes, "Go Boncos!"

 

"With rewards like these, I can't believe more people don't pledge on Patreon!" writes Chris A.

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityChinese Antivirus Firm Was Part of APT41 ‘Supply Chain’ Attack

The U.S. Justice Department this week indicted seven Chinese nationals for a decade-long hacking spree that targeted more than 100 high-tech and online gaming companies. The government alleges the men used malware-laced phishing emails and “supply chain” attacks to steal data from companies and their customers. One of the alleged hackers was first profiled here in 2012 as the owner of a Chinese antivirus firm.

Image: FBI

Charging documents say the seven men are part of a hacking group known variously as “APT41,” “Barium,” “Winnti,” “Wicked Panda,” and “Wicked Spider.” Once inside of a target organization, the hackers stole source code, software code signing certificates, customer account data and other information they could use or resell.

APT41’s activities span from the mid-2000s to the present day. Earlier this year, for example, the group was tied to a particularly aggressive malware campaign that exploited recent vulnerabilities in widely-used networking products, including flaws in Cisco and D-Link routers, as well as Citrix and Pulse VPN appliances. Security firm FireEye dubbed that hacking blitz “one of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years.”

The government alleges the group monetized its illicit access by deploying ransomware and “cryptojacking” tools (using compromised systems to mine cryptocurrencies like Bitcoin). In addition, the gang targeted video game companies and their customers in a bid to steal digital items of value that could be resold, such as points, powers and other items that could be used to enhance the game-playing experience.

APT41 was known to hide its malware inside fake resumes that were sent to targets. It also deployed more complex supply chain attacks, in which they would hack a software company and modify the code with malware.

“The victim software firm — unaware of the changes to its product — would subsequently distribute the modified software to its third-party customers, who were thereby defrauded into installing malicious software code on their own computers,” the indictments explain.

While the various charging documents released in this case do not mention it per se, it is clear that members of this group also favored another form of supply chain attacks — hiding their malware inside commercial tools they created and advertised as legitimate security software and PC utilities.

One of the men indicted as part of APT41 — now 35-year-old Tan DaiLin — was the subject of a 2012 KrebsOnSecurity story that sought to shed light on a Chinese antivirus product marketed as Anvisoft. At the time, the product had been “whitelisted” or marked as safe by competing, more established antivirus vendors, although the company seemed unresponsive to user complaints and to questions about its leadership and origins.

Tan DaiLin, a.k.a. “Wicked Rose,” in his younger years. Image: iDefense

Anvisoft claimed to be based in California and Canada, but a search on the company’s brand name turned up trademark registration records that put Anvisoft in the high-tech zone of Chengdu in the Sichuan Province of China.

A review of Anvisoft’s website registration records showed the company’s domain originally was created by Tan DaiLin, an infamous Chinese hacker who went by the aliases “Wicked Rose” and “Withered Rose.” At the time of that story, DaiLin was 28 years old.

That story cited a 2007 report (PDF) from iDefense, which detailed DaiLin’s role as the leader of a state-sponsored, four-man hacking team called NCPH (short for Network Crack Program Hacker). According to iDefense, in 2006 the group was responsible for crafting a rootkit that took advantage of a zero-day vulnerability in Microsoft Word, and was used in attacks on “a large DoD entity” within the USA.

“Wicked Rose and the NCPH hacking group are implicated in multiple Office based attacks over a two year period,” the iDefense report stated.

When I first scanned Anvisoft at Virustotal.com back in 2012, none of the antivirus products detected it as suspicious or malicious. But in the days that followed, several antivirus products began flagging it for bundling at least two trojan horse programs designed to steal passwords from various online gaming platforms.

Security analysts and U.S. prosecutors say APT41 operated out of a Chinese enterprise called Chengdu 404 that purported to be a network technology company but which served as a legal front for the hacking group’s illegal activities, and that Chengdu 404 used its global network of compromised systems as a kind of dragnet for information that might be useful to the Chinese Communist Party.

Chengdu 404’s offices in China. Image: DOJ.

“CHENGDU 404 developed a ‘big data’ product named ‘SonarX,’ which was described…as an ‘Information Risk Assessment System,'” the government’s indictment reads. “SonarX served as an easily searchable repository for social media data that previously had been obtained by CHENGDU 404.”

The group allegedly used SonarX to search for individuals linked to various Hong Kong democracy and independence movements, and snoop on a U.S.-backed media outlet that ran stories examining the Chinese government’s treatment of Uyghur people living in its Xinjiang region.

As noted by TechCrunch, after the indictments were filed prosecutors said they obtained warrants to seize websites, domains and servers associated with the group’s operations, effectively shutting them down and hindering their operations.

“The alleged hackers are still believed to be in China, but the allegations serve as a ‘name and shame’ effort employed by the Justice Department in recent years against state-backed cyber attackers,” wrote TechCrunch’s Zack Whittaker.

Cryptogram Amazon Delivery Drivers Hacking Scheduling System

Amazon drivers — all gig workers who don’t work for the company — are hanging cell phones in trees near Amazon delivery stations, fooling the system into thinking that they are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to multiple nearby drivers in on the plot, according to drivers who have observed the process. They believe an unidentified person or entity is acting as an intermediary between Amazon and the drivers and charging drivers to secure more routes, which is against Amazon’s policies.

The perpetrators likely dangle multiple phones in the trees to spread the work around to multiple Amazon Flex accounts and avoid detection by Amazon, said Chetan Sharma, a wireless industry consultant. If all the routes were fed through one device, it would be easy for Amazon to detect, he said.

“They’re gaming the system in a way that makes it harder for Amazon to figure it out,” Sharma said. “They’re just a step ahead of Amazon’s algorithm and its developers.”

Cryptogram Friday Squid Blogging: Nano-Sized SQUIDS

SQUID news:

Physicists have developed a small, compact superconducting quantum interference device (SQUID) that can detect magnetic fields. The team focused on the instrument’s core, which contains two parallel layers of graphene.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Google AdsenseDirector of Global Partnerships Solutions

The GNI Digital Growth Program helps small and mid-sized publishers around the world grow their businesses online.

Worse Than FailureConfiguration Errors

Automation and tooling, especially around continuous integration and continuous deployment, are standard on applications large and small.

Paramdeep Singh Jubbal works on a larger one, with a larger team, and all the management overhead such a team brings. It needs to interact with a REST API, and as you might expect, the URL for that API is different in production and test environments. This is all handled by the CI pipeline, so long as you remember to properly configure which URLs map to which environments.

Paramdeep mistyped one of the URLs when configuring a build for a new environment. The application passed all the CI checks, and when it was released to the new environment, it crashed and burned. Their error handling system detected the failure, automatically filed a bug ticket, Paramdeep saw the mistake, fixed it, and everything was back to the way it was supposed to be within five minutes.

But a ticket had been opened. For a bug. And the team lead, Emmett, saw it. And so, in their next team meeting, Emmett launched with, “We should talk about Bug 264.”

“Ah, that’s already resolved,” Paramdeep said.

“Is it? I didn’t see a commit containing a unit test attached to the ticket,” Emmett said.

“I didn’t write one,” Paramdeep said, getting a little confused at this point. “It was just a misconfiguration.”

“Right, so there should be a test to ensure that the correct URL is configured before we deploy to the environment.” That was, in fact, the policy: any bug ticket which was closed was supposed to have an associated unit test which would protect against regressions.

“You… want a unit test which confirms that the environment URLs are configured correctly?”

“Yes,” Emmett said. “There should never be a case where the application connects to an incorrect URL.”

“But it gets that URL from the configuration.”

“And it should check that configuration is correct. Honestly,” Emmett said, “I know I’m not the most technical person on this team, but that just sounds like common sense to me.”

“It’s just…” Paramdeep considered how to phrase this. “How does the unit test tell which are the correct or incorrect URLs?”

Emmett huffed. “I don’t understand why I’m explaining this to a developer. But the URLs have a pattern. URLs which match the pattern are valid, and they pass the test. URLs which don’t match the pattern fail the test, and we should fall back to a known-good URL for that environment.”

“And… ah… how do we know what the known-good URL is?”

“From the configuration!”

“So you want me to write a unit test which checks the configuration to see if the URLs are valid, and if they’re not, it uses a valid URL from the configuration?”

“Yes!” Emmett said, fury finally boiling over. “Why is that so hard?”

Paramdeep thought the question explained why it was so hard, but after another moment’s thought, there was an even better reason that Emmett might actually understand.

“Because the configuration files are available during the release step, and the unit tests are run after the build step, which is before the release step.”

Emmett blinked and considered how their CI pipeline worked. “Right, fair enough, we’ll put a pin in that then, and come back to it in a future meeting.”

“Well,” Paramdeep thought, “that didn’t accomplish anything, but it was a good waste of 15 minutes.”
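
For what it’s worth, there is a sane version of Emmett’s idea: not a unit test, but a smoke check in the release step itself, where the configuration actually exists. Here is a minimal sketch in Python; the file layout, key name, and URL pattern are hypothetical, not taken from Paramdeep’s pipeline.

import json
import re
import sys

# Hypothetical allow-list for per-environment API URLs.
ALLOWED = re.compile(r"^https://api\.(dev|staging|prod)\.example\.com/v\d+$")

def check_config(path):
    # Read the per-environment config file the release step will deploy.
    with open(path) as f:
        config = json.load(f)
    url = config.get("apiBaseUrl", "")
    if not ALLOWED.match(url):
        print(f"Refusing to deploy: {url!r} does not match {ALLOWED.pattern}")
        return 1  # fail the release before the bad URL goes live
    return 0

if __name__ == "__main__":
    sys.exit(check_config(sys.argv[1]))

The difference from Emmett’s proposal is that this runs where the configuration is actually available, and it fails the deployment outright instead of “falling back” to a known-good URL read from the very file it just declared suspect.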

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

,

Krebs on SecurityTwo Russians Charged in $17M Cryptocurrency Phishing Spree

U.S. authorities today announced criminal charges and financial sanctions against two Russian men accused of stealing nearly $17 million worth of virtual currencies in a series of phishing attacks throughout 2017 and 2018 that spoofed websites for some of the most popular cryptocurrency exchanges.


The Justice Department unsealed indictments against Russian nationals Danil Potekhin and Dmitrii Karasavidi, alleging the duo was responsible for a sophisticated phishing and money laundering campaign that resulted in the theft of $16.8 million in cryptocurrencies and fiat money from victims.

Separately, the U.S. Treasury Department announced economic sanctions against Potekhin and Karasavidi, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

According to the indictments, the two men set up fake websites that spoofed login pages for the currency exchanges Binance, Gemini and Poloniex. Armed with stolen login credentials, the men allegedly stole more than $10 million from 142 Binance victims, $5.24 million from 158 Poloniex users, and $1.17 million from 42 Gemini customers.

Prosecutors say the men then laundered the stolen funds through an array of intermediary cryptocurrency accounts — including compromised and fictitiously created accounts — on the targeted cryptocurrency exchange platforms. In addition, the two are alleged to have artificially inflated the value of their ill-gotten gains by engaging in cryptocurrency price manipulation using some of the stolen funds.

For example, investigators alleged Potekhin and Karasavidi used compromised Poloniex accounts to place orders to purchase large volumes of “GAS,” the digital currency token used to pay the cost of executing transactions on the NEO blockchain — China’s first open source blockchain platform.

“Using digital currency in one victim Poloniex account, they placed an order to purchase approximately 8,000 GAS, thereby immediately increasing the market price of GAS from approximately $18 to $2,400,” the indictment explains.

Potekhin and others then converted the artificially inflated GAS in their own fictitious Poloniex accounts into other cryptocurrencies, including Ethereum (ETH) and Bitcoin (BTC). From the complaint:

“Before the Eight Fictitious Poloniex Accounts were frozen, POTEKHIN and others transferred approximately 759 ETH to nine digital currency addresses. Through a sophisticated and layered manner, the ETH from these nine digital currency addresses was sent through multiple intermediary accounts, before ultimately being deposited into a Bitfinex account controlled by Karasavidi.”

The Treasury’s action today lists several of the cryptocurrency accounts thought to have been used by the defendants. Searching on some of those accounts at various cryptocurrency transaction tracking sites points to a number of phishing victims.

“I would like to blow your bitch ass away, if you even had the balls to show yourself,” exclaimed one victim, posting in a comment on the Etherscan lookup service.

One victim said he contemplated suicide after being robbed of his ETH holdings in a 2017 phishing attack. Another said he’d been relieved of funds needed to pay for his 3-year-old daughter’s medical treatment.

“You and your team will leave a trail and will be found,” wrote one victim, using the handle ‘Illfindyou.’ “You’ll only be able to hide behind the facade for a short while. Go steal from whales you piece of shit.”

There is potentially some good news for victims of these phishing attacks. According to the Treasury Department, millions of dollars in virtual currency and U.S. dollars traced to Karasavidi’s account were seized in a forfeiture action by the United States Secret Service.

Whether any of those funds can be returned to victims of this phishing spree remains to be seen. And assuming that does happen, it could take years. In February 2020, KrebsOnSecurity wrote about being contacted by an Internal Revenue Service investigator seeking to return funds seized seven years earlier as part of the government’s 2013 seizure of Liberty Reserve, a virtual currency service that acted as a $6 billion hub for the cybercrime world.

Today’s action is the latest indication that the Treasury Department is increasingly willing to use its authority to restrict the financial resources tied to various cybercrime activities. Earlier this month, the agency’s Office of Foreign Asset Control (OFAC) added three Russian nationals and a host of cryptocurrency addresses to its sanctions lists in a case involving efforts by Russian online troll farms to influence the 2018 mid-term elections.

In June, OFAC took action against six Nigerian nationals suspected of stealing $6 million from U.S. businesses and individuals through Business Email Compromise fraud and romance scams.

And in 2019, OFAC sanctioned 17 members allegedly associated with “Evil Corp.,” an Eastern European cybercrime syndicate that has stolen more than $100 million from small businesses via malicious software over the past decade.

A copy of the indictments against Potekhin and Karasavidi is available here (PDF).

Kevin RuddSky UK: Green Recovery, Tony Abbott and Donald Trump

E&OE TRANSCRIPT
TV INTERVIEW
SKY NEWS UK
16 SEPTEMBER 2020

Topics: Project Syndicate Conference; Climate Change; Tony Abbott; Trade; US Election

Kay Burley
The world’s green recovery from the pandemic is at the centre of a virtual conference which will feature speakers from across the world, including former Prime Minister Gordon Brown. Former Australian Prime Minister and president of the Asia Society Policy Institute Kevin Rudd will also be speaking at the event, and he is with us now. Hello to Mr Rudd, it’s a pleasure to have you on the program this morning. Tell us a bit more about this conference. What are you hoping to achieve?

Kevin Rudd
Well, the purpose of the conference is two-fold. There is a massive global investment program underway at present to bring about economic recovery, given the impact of the COVID-19 crisis. And secondly, therefore, we either engineer a green economic recovery, or we lose this massive opportunity. So, therefore, these opportunities do not come along often to radically change the dial. And we’re talking about investments in renewable energy, we’re talking about decommissioning coal-fired stations, and we’re talking about also investment in the next generation of R&D so that we can, in fact, bring global greenhouse gas emissions down to the extent necessary to keep temperature increases this century within 1.5 degrees centigrade.

Kay Burley
It’s difficult though isn’t it with the global pandemic Covid infiltrating almost everyone’s lives around the globe. How are you going to get people focused on climate change?

Kevin Rudd
It’s called — starts with L ends with P — and it’s called leadership. Look, when we went through the Global Financial Crisis of a decade or so ago, many of us, President Obama, myself and others, we elected to bring about a green recovery then as well. In our case, we legislated for renewable energies in Australia to go from a then base of 4% of total electricity supply to 20% by 2020. Everyone said you can’t do that in the midst of a global financial crisis, all too difficult. But guess what, a decade later, we now have 21% renewables in Australia. We did the same with solar subsidies for panels on people’s houses and roofs, and for the insulation of people’s homes to bring down electricity demand. These are the practical things which can be done at scale when you’re seeking to invest to create new levels of economic activity, but at the same time doing so in a manner which keeps our greenhouse gas emissions heading south rather than north.

Kay Burley
Of course, you talk about leadership, the leader of the free world says that the planet is getting cooler.

Kevin Rudd
Yeah well President Trump on this question is stark raving mad. I realise that may not be regarded as your standard diplomatic reference to the leader of the free world but on the climate change question, he has abandoned the science all together. And mainstream Republicans and certainly mainstream Democrats fundamentally disagree with him. If there is a change in US administration, I think you will see climate change action move to the absolute centre of the domestic priorities of a Biden administration. And importantly, the international priorities. You in the United Kingdom will be hosting before too much longer the Conference of the Parties No.26, which will be critical in terms of increasing the ambition of the contracting parties to the Paris Agreement on climate change action, to begin to go down the road necessary to keep those temperature increases within 1.5 degrees centigrade. American leadership will be important. European leadership has been positive so far, but we’ll need to see the United Kingdom put its best foot forward as well given that you are the host country.

Kay Burley
You say that but of course Tony Abbott, who succeeded you as Prime Minister of Australia, is also a climate change denier.

Kevin Rudd
The climate change denialists seem to find their way into a range of conservative parties around the world regrettably. Not always the case — if you look in continental Europe, there’s basically a bipartisan consensus on climate change action — but certainly, Mr Abbott is notorious in this country, Australia, for describing climate change as, quote, absolute crap, and has been a climate change denialist for the better part of a decade and a half. And so if I was the United Kingdom, the government of the United Kingdom, I would certainly not be letting Mr Abbott anywhere near a policy matter which has to do with the future of greenhouse gas emissions, or for that matter the highly contentious policy question of carbon tariffs which will be considered by the Europeans before much longer: tariffs imposed against those countries which are not lifting their weight globally to take their own national action on climate change.

Kay Burley
Very good at trade though isn’t he? He did a great deal with Japan, didn’t he when he was Prime Minister?

Kevin Rudd
Well, Mr Abbott is good at self-promotion on these questions. The reality is that the free trade agreements we have concluded in recent years with China, with Japan and with the Republic of Korea were negotiations spread over many years and many Australian governments. For example, the one we did with China took 10 years, begun by my predecessor, continued under me and concluded finally under the Abbott government. So I think for anyone to beat their chest and say that they are uniquely responsible as Prime Minister for bringing about a particular trade agreement, that belies the facts. And I think professional trade negotiators would have that view as well.

Kay Burley
You don’t like him very much then by the sounds of it.

Kevin Rudd
Well, I’m simply being objective. You asked me. I mean, I came to talk about climate change and you’ve asked me questions about Mr Abbott, and I’m simply being frank in my response. He’s a climate change denier. And on the trade policy front, ask your average trade negotiator who was material in bringing about these outcomes; perhaps the Australian trade ministers at the time, but also spread over multiple governments. These are complex negotiations. And they take a lot of effort to finally reach the point of agreement. So at a personal level, of course, Mr Abbott is a former Prime Minister of Australia, I wish him well, but you’ve asked me two direct questions and I’ve tried to give you a frank answer.

Kay Burley
And we’d like that from our politicians. We don’t always see that from politicians in the United Kingdom, I have to say, especially on this show. Let me ask you a final question before I let you go, if I may, please former prime minister. Why are you supporting Joe Biden? Is it all about climate change?

Kevin Rudd
Well, there are two reasons why on balance — I mean, I head an independent American think tank and I’m normally based in New York, so on questions of political alignment we are independent and we will work with any US administration. If you’re asking my personal view as a former Prime Minister of Australia and someone who’s spent 10 or 15 years working on climate change policy at home and abroad, then it is critical for the planet’s future that America return to the global leadership table. It’s the world’s second-largest emitter. Unless the Americans and the Chinese are able to bring about genuine global progress on bringing down their separate national emissions, then what we see in California, what we saw in Sydney in January, what you’ll see in other parts of the world in terms of extreme weather events, more intense drought, more intense cyclonic activity, et cetera, will simply continue. That’s a principal reason. But I think there’s a broader reason as well. It’s that we need to have America respected in the world again. America is a great country: enormous strengths, a vibrant democracy, a massive economy, technological prowess. But we also need American global leadership which can bring together its friends and allies in the world to deal with the multiple global challenges that we face today, including the rise of China.

Kay Burley
Okay. It’s great to talk to you, Mr Rudd. First time I’ve interviewed you, I think, and thank you for being so honest and straightforward. And hopefully we’ll see you again before too long on the program. Thank you.

Kevin Rudd
Good to be with you.

The post Sky UK: Green Recovery, Tony Abbott and Donald Trump appeared first on Kevin Rudd.

Sociological ImagesOfficer Friendly’s Adventures in Wonderland

Many of us know the Officer Friendly story. He epitomizes liberal police virtues. He seeks the public’s respect and willing cooperation to follow the law, and he preserves their favor with lawful enforcement.

Norman Rockwell’s The Runaway, 1958

The Officer Friendly story also inspired contemporary reforms that seek and preserve public favor, including what most people know as Community Policing. Norman Rockwell’s iconic painting is an idealized depiction of this narrative. Officer Friendly sits in full uniform. His blue shirt contrasts sharply with the black boots, gun, and small ticket book that blend together below the lunch counter. He is a paternalistic guardian. The officer’s eyes are fixed on the boy next to him. The lunch counter operator surveying the scene seems to smirk. All of them do, in fact. And all of them are White. The original was painted from the White perspective and highlighted the harmonious relationship between the officer and the boy. But for some it may be easy to imagine a different depiction: a hostile relationship between a boy of color and an officer in the 1950s and a friendly one between a White boy and an officer now.

Desmond Devlin (Writer) and Richard Williams’s (Artist) The Militarization of the Police Department, a painting parody of Rockwell’s The Runaway, 2014

The parody of Rockwell’s painting offers us a visceral depiction of contemporary urban policing. Both pictures depict different historical eras and demonstrate how police have changed. Officer Unfriendly is anonymous, of unknown race, and presumably male. He is prepared for battle, armed with several weapons that extend beyond his imposing frame. Officer Unfriendly is outfitted in tactical military gear with “POLICE” stamped across his back. The images also differ in their depictions of the boy’s race and his relationship to the officer. Officer Unfriendly appears more punitive than paternalistic. He looms over the Black boy sitting on the adjacent stool and peers at him through a tear gas mask. The boy and White lunch counter operator back away in fright. All of the tenderness in the original has given way to hostility in this parody.

Inspired by the critical race tradition, my new project “Officer Friendly’s Adventures in Wonderland: A Counter-Story of Race Riots, Police Reform, and Liberalism” employs composite counter-storytelling to narrate the experiences of young men of color in their explosive encounters with police. Counter-stories force dominant groups to see the world through the “Other’s” (non-White person’s) eyes, thereby challenging their preconceptions. I document the evolution of police-community relations in the last eighty years, and I reflect on the interrupted career of our protagonist, Officer Friendly. He worked with the Los Angeles Police Department (LAPD) for several stints primarily between the 1940s and 1990s.

My story focuses on Los Angeles, a city renowned for its police force and riot history. This story is richly informed by ethnographic field data and is further supplemented with archival and secondary historical data. It complicates the nature of so-called race riots, highlights how Officer Friendly was repeatedly evoked in the wake of these incidents, and reveals the pressures on LAPD officials to favor increasingly unfriendly police tactics. More broadly, the story of Officer Friendly’s embattled career raises serious questions about how to achieve racial justice. This work builds on my recently published coauthored book, The Limits of Community Policing, and can shape future critical race scholarship and historical and contemporary studies of police-community relations.

Daniel Gascón is an assistant professor of sociology at the University of Massachusetts Boston. For more on his latest work, follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

LongNowTime’s Arrow Flies through 500 Years of Classical Music, Physicists Say

A new statistical study of 8,000 musical compositions suggests that there really is a difference between music and noise: time-irreversibility. From The Smithsonian:

Noise can sound the same played forwards or backward in time, but composed music sounds dramatically different in those two time directions.

Compared with systems made of millions of particles, a typical musical composition consisting of thousands of notes is relatively short. Counterintuitively, that brevity makes statistically studying most music much harder, akin to determining the precise trajectory of a massive landslide based solely on the motions of a few tumbling grains of sand. For this study, however, [Lucas Lacasa, a physicist at Queen Mary University of London] and his co-authors exploited and enhanced novel methods particularly successful at extracting patterns from small samples. By translating sequences of sounds from any given composition into a specific type of diagrams or graphs, the researchers were able to marshal the power of graph theory to calculate time irreversibility.

In a time-irreversible music piece, the sense of directionality in time may help the listener generate expectations. The most compelling compositions, then, would be those that balance between breaking those expectations and fulfilling them—a sentiment with which anyone anticipating a catchy tune’s ‘hook’ would agree.
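
The paper’s actual method turns note sequences into visibility graphs, but the underlying idea can be illustrated with a much cruder toy: compare a sequence’s transition statistics against those of its reversal. The sketch below is my illustration of that idea in Python, not the researchers’ algorithm.

from collections import Counter

def transitions(seq):
    # Count ordered pairs of successive values.
    return Counter(zip(seq, seq[1:]))

def irreversibility(seq):
    # 0.0 means the sequence is statistically identical backwards;
    # 1.0 means forward and backward transitions never overlap.
    fwd, bwd = transitions(seq), transitions(seq[::-1])
    diff = sum(abs(fwd[k] - bwd[k]) for k in fwd.keys() | bwd.keys())
    return diff / (2 * max(len(seq) - 1, 1))

print(irreversibility([0, 1, 2, 3, 4, 5, 6, 7]))  # 1.0, fully directional
print(irreversibility([0, 1, 2, 3, 3, 2, 1, 0]))  # 0.0, same forwards and backwards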

Worse Than FailureCodeSOD: Get My Switch

You know how it is. The team is swamped, so you’ve pulled on some junior devs, given them the bare minimum of mentorship, and then turned them loose. Oh, sure, there are code reviews, but it’s like, you just glance at it, because you’re already so far behind on your own development tasks and you’re sure it’s fine.

And then months later, if you’re like Richard, the requirements have changed, and now you’ve got to revisit the junior’s TypeScript code to make some changes.

		switch (false) {
			case (this.fooIsConfigured() === false && this.barIsConfigured() === false):
				this.contactAdministratorText = 'Foo & Bar';
				break;
			case this.fooIsConfigured():
				this.contactAdministratorText = 'Bar';
				break;
			case this.barIsConfigured():
				this.contactAdministratorText = 'Foo';
				break;
		}

We’ve seen more positive versions of this pattern before, where we switch on true. We’ve even seen the false version of this switch before.

What makes this one interesting, to me, is just how dedicated it is to this inverted approach to logic: if it’s false that “foo” is false and “bar” is false, then obviously they’re all true (well, at least one of them is; De Morgan apparently wasn’t invited to the code review), thus we output a message to that effect. If one of those is false, we need to figure out which one that is, and then do the opposite, because if “foo” is false, then clearly “bar” must be true, so we output that.
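
Untangled, what the author presumably meant is three plain branches that name whichever components are configured. Here is a rough Python rendering of that intent (Python for brevity; the original is TypeScript):

def contact_administrator_text(foo_configured, bar_configured):
    # The straightforward reading of the switch (false) above.
    if foo_configured and bar_configured:
        return "Foo & Bar"
    if bar_configured:
        return "Bar"
    if foo_configured:
        return "Foo"
    return ""

As a bonus, the original doesn’t even implement that intent: since cases match in order, any state with at least one component configured makes the first expression false and reports “Foo & Bar”, the only way to actually see “Bar” is to have nothing configured at all, and “Foo” is unreachable.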

Richard was able to remove this code, and then politely suggest that maybe they should be more diligent in their code reviews.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram New Bluetooth Vulnerability

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with a Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at one point. When they are, they’ll most likely be integrated as firmware or operating system updates for Bluetooth capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as much as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

Worse Than FailureCodeSOD: //article title here

Menno was reading through some PHP code and was happy to see that it was thoroughly commented:

function degToRad ($value) {
    return $value * (pi()/180); // convert excel timestamp to php date
}

Today's code is probably best explained in meme format:
expanding brain meme: write clear comments, comments recap the exact method name, write no comments, write wrong comments

As Menno summarizes: "It's a function that has the right name, does the right thing, in the right way. But I'm not sure how that comment came to be."

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

MEMore About the PowerEdge R710

I’ve got the R710 (mentioned in my previous post [1]) online. When testing the R710 at home I noticed that sometimes the VGA monitor I was using would start flickering in some parts of the BIOS setup; it seemed that the horizontal sync wasn’t working properly. It didn’t seem to be a big deal at the time. When I deployed it the KVM display that I had planned to use with it mostly didn’t display anything. When the display was working the KVM keyboard wouldn’t work (and would prevent a regular USB keyboard from working if they were both connected at the same time). The VGA output of the R710 also wouldn’t work with my VGA->HDMI device so I couldn’t get it working with my portable monitor.

Fortunately the Dell front panel has a display and tiny buttons that allow configuring the IDRAC IP address, so I was able to get IDRAC going. One thing Dell really should do is allow the down button to change 0 to 9 when entering numbers, that would make it easier to enter 8.8.8.8 for the DNS server. Another thing Dell should do is make the default gateway have a default value according to the IP address and netmask of the server.

When I got IDRAC going it was easy to setup a serial console, boot from a rescue USB device, create a new initrd with the driver for the MegaRAID controller, and then reboot into the server image.

When I transferred the SSDs from the old server to the newer Dell server the problem I had was that the Dell drive caddies had no holes in suitable places for attaching SSDs. I ended up just pushing the SSDs in so they are hanging in mid air attached only by the SATA/SAS connectors. Mounting them this way takes up the space of the drive bay above, so instead of having 2*3.5″ disks I have 1*2.5″ SSD and need the extra space to get my hand in. The R710 is designed for 6*3.5″ disks and I’m going to have trouble if I ever want to have more than 3*2.5″ SSDs. Fortunately I don’t think I’ll need more SSDs.

After booting the system I started getting alerts about a “fault” in one SSD, with no detail on what the fault might be. My guess is that the SSD in question is M.2 and it’s in a M.2 to regular SATA adaptor which might have some problems. The data seems fine though, a BTRFS scrub found no checksum errors. I guess I’ll have to buy a replacement SSD soon.

I configured the system to use the “nosmt” kernel command line option to disable hyper-threading (hyper-threading doesn’t provide much performance benefit for this workload, and it makes certain types of security attacks much easier). I’ve configured BOINC to run on 6/8 CPU cores and surprisingly that didn’t cause the fans to be louder than when the system was idle. It seems that a system that is designed for 6 SAS disks doesn’t need a lot of cooling when run with SSDs.

Update: It’s a R710 not a T710. I mostly deal with Dell Tower servers and typed the wrong letter out of habit.

,

Krebs on SecurityDue Diligence That Money Can’t Buy

Most of us automatically put our guard up when someone we don’t know promises something too good to be true. But when the too-good-to-be-true thing starts as our idea, sometimes that instinct fails to kick in. Here’s the story of how companies searching for investors to believe in their ideas can run into trouble.

Nick is an investment banker who runs a firm that helps raise capital for its clients (Nick is not his real name, and like other investment brokers interviewed in this story he spoke with KrebsOnSecurity on condition of anonymity). Nick’s company works primarily in the mergers and acquisitions space, and his job involves advising clients about which companies and investors might be a good bet.

In one recent engagement, a client of Nick’s said they’d reached out to an investor from Switzerland — The Private Office of John Bernard — whose name was included on a list of angel investors focused on technology startups.

“We ran into a group that one of my junior guys found on a list of data providers that compiled information on investors,” Nick explained. “I told them what we do and said we were working with a couple of companies that were interested in financing, and asked them to send some materials over. The guy had a British accent, claimed to have made his money in tech and in the dot-com boom, and said he’d sold a company to Geocities that was then bought by Yahoo.”

But Nick wasn’t convinced Mr. Bernard’s company was for real. Nick and his colleagues couldn’t locate the company Mr. Bernard claimed to have sold, and while Bernard said he was based in Switzerland, virtually all of his staff were listed on LinkedIn as residing in Ukraine.

Nick told his clients about his reservations, but each nevertheless was excited that someone was finally interested enough to invest in their ideas.

“The CEO of the client firm said, ‘This is great, someone is willing to believe in our company’,” Nick said. “After one phone call, he made an offer to invest tens of millions of dollars. I advised them not to pursue it, and one of the clients agreed. The other was very gung ho.”

When companies wish to link up with investors, what follows involves a process known as “due diligence” wherein each side takes time to research the other’s finances, management, and any lurking legal liabilities or risks associated with the transaction. Typically, each party will cover their own due diligence costs, but sometimes the investor or the company that stands to benefit from the transaction will cover the associated fees for both parties.

Nick said he wasn’t surprised when Mr. Bernard’s office insisted that its due diligence fees of tens of thousands of dollars be paid up front by his client. And he noticed the website for the due diligence firm that Mr. Bernard suggested using — insideknowledge.ch — also was filled with generalities and stock photos, just like John Bernard’s private office website.

“He said we used to use big accounting firms for this but found them to be ineffective,” Nick said. “The company they wanted us to use looked like a real accounting firm, but we couldn’t find any evidence that they were real. Also, we asked to see an investment portfolio. He said he’s invested in over 30 companies, so I would expect to see a document that says, ‘here’s the various companies we’ve invested in.’ But instead, we got two recommendation letters on letterhead saying how great these investors were.”

KrebsOnSecurity located two other investment bankers who had similar experiences with Mr. Bernard’s office.

“A number of us have been comparing notes on this guy, and he never actually delivers,” said one investment banker who asked not to be named because he did not have permission from his clients. “In each case, he agreed to invest millions with no push back, the documentation submitted from their end was shabby and unprofessional, and they seem focused on companies that will write a check for due diligence fees. After their fees are paid, the experience has been an ever increasing and inventive number of reasons why the deal can’t close, including health problems and all sorts of excuses.”

Mr. Bernard’s investment firm did not respond to multiple requests for comment. The one technology company this author could tie to Mr. Bernard was secureswissdata.com, a Swiss concern that provides encrypted email and data services. The domain was registered in 2015 by Inside Knowledge. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

SafeSwiss co-CEO and Secure Swiss Data founder David Bruno said he couldn’t imagine that Mr. Bernard would be involved in anything improper.

“I can confirm that I know John Bernard and have always found him very honourable and straight forward in my dealings with him as an investor,” Bruno said. “To be honest with you, I struggle to believe that he would, or would even need to be, involved in the activity you mentioned, and quite frankly I’ve never heard about those things.”

DUE DILIGENCE

John Bernard is named in historic WHOIS domain name registration records from 2015 as the owner of the due diligence firm insideknowledge.ch. Another “capital investment” company tied to John Bernard’s Swiss address is liftinvest.ch, which was registered in November 2017.

Curiously, in May 2018, its WHOIS ownership records switched to a new name with the same initials: one “Jonathan Bibi,” with an address in the offshore company haven of Seychelles. Likewise, Mr. Bibi was listed as a onetime owner of the domain for Mr. Bernard’s company — the-private-office.ch — as well as johnbernard.ch.

Running a reverse WHOIS search through domaintools.com [an advertiser on this site] reveals several other interesting domains historically tied to a Jonathan Bibi from the Seychelles. Among those is acheterdubitcoin.org, a business that was blacklisted by French regulators in 2018 for promoting cryptocurrency scams.

Another Seychelles concern tied to Mr. Bibi was effectivebets.com, which in 2017 and 2018 promoted sports betting via cryptocurrencies and offered tips on picking winners.

A Google search on Jonathan Bibi from Seychelles reveals he was listed as a respondent in a lawsuit filed in 2018 by the State of Missouri, which named him as a participant in an unlicensed “binary options” investment scheme that bilked investors out of their money.

Jonathan Bibi from Seychelles also was named as the director of another binary options scheme called the GoldmanOptions scam that was ultimately shut down by regulators in the Czech Republic.

Jason Kane is an attorney with Peiffer Wolf, a litigation firm that focuses on investment fraud. Kane said companies bilked by small-time investment schemes rarely pursue legal action, mainly because the legal fees involved can quickly surpass the losses. What’s more, most victims will likely be too ashamed to come forward.

“These are cases where you might win but you’ll never collect any money,” Kane said. “This seems like an investment twist on those fairly simple scams we all can’t believe people fall for, but as scams go this one is pretty good. Do this a few times a year and you can make a decent living and no one is really going to come after you.”

If you liked this post, please see Part II of the investigation, Who is John Bernard?

Cory DoctorowIP

This week on my podcast, I read the first half of my latest Locus Magazine column, “IP,” the longest, most substantial column I’ve written in my 14 years on Locus‘s masthead.

“IP” explores the history of how we have allowed companies to control more and more of our daily lives, and how the term has come to mean “any law that I can invoke that allows me to control the conduct of my competitors, critics, and customers.”

It represents a major realization on my part after decades of writing, talking and thinking about this stuff. I hope you give it a listen and/or a read.

MP3

Sociological ImagesStop Trying to Make Fetch Happen: Social Theory for Shaken Routines

It is hard to keep up habits these days. As the academic year starts up with remote teaching, hybrid teaching, and rapidly-changing plans amid the pandemic, many of us are thinking about how to design new ways to connect now that our old habits are disrupted. How do you make a new routine or make up for old rituals lost? How do we make them stick and feel meaningful?

Social science shows us how these things take time, and in a world where we would all very much like a quick solution to our current social problems, it can be tough to sort out exactly what new rules and routines can do for us.

For example, The New York Times recently profiled “spiritual consultants” in the workplace – teams that are tasked with creating a more meaningful and communal experience on the job. This is part of a larger social trend of companies and other organizations implementing things like mindfulness practices and meditation because they…keep workers happy? Foster a sense of community? Maybe just keep the workers just a little more productive in unsettled times?

It is hard to talk about the motives behind these programs without getting cynical, but that snark points us to an important sociological point. Some of our most meaningful and important institutions emerge from social behavior, and it is easy to forget how hard it is to design them into place.

This example reminded me of the classic Social Construction of Reality by Berger and Luckmann, who argue that some of our strongest and most established assumptions come from habit over time. Repeated interactions become habits, habits become routines, and suddenly those routines take on a life of their own that becomes meaningful to the participants in a way that “just is.” Trust, authority, and collective solidarity fall into place when people lean on these established habits. In other words: on Wednesdays we wear pink.

The challenge with emergent social institutions is that they take time and repetition to form. You have to let them happen on their own, otherwise they don’t take on the same sense of meaning. Designing a new ritual often invites cringe, because it skips over the part where people buy into it through their collective routines. This is the difference between saying “on Wednesdays we wear pink” and saying

“Hey team, we have a great idea that’s going to build office solidarity and really reinforce the family dynamic we’ve got going on. We’re implementing: Pink. Wednesdays.”

All of our usual routines are disrupted right now, inviting fear, sadness, anger, frustration, and disappointment. People are trying to persist with the rituals closest to them, sometimes to the extreme detriment of public health (see: weddings, rallies, and ugh). I think there’s some good sociological advice for moving through these challenges for ourselves and our communities: recognize those emotions, trust in the routines and habits that you can safely establish for yourself and others, and know that they will take a long time to feel really meaningful again, but that doesn’t mean they aren’t working for you. In other words, stop trying to make fetch happen.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Kevin RuddA Rational Fear Podcast: Climate Action

INTERVIEW AUDIO
A RATIONAL FEAR PRESENTS:
‘GREATEST MORAL PODCAST OF OUR GENERATION’
WITH DAN ILIC
RECORDED 2 SEPTEMBER 2020
PUBLISHED 14 SEPTEMBER 2020

The post A Rational Fear Podcast: Climate Action appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Nice Save

Since HTTP is fundamentally stateless, developers have found a million ways to hack state into web applications. One of my "favorites" was the ASP.NET ViewState approach.

The ViewState is essentially a dictionary, where you can store any arbitrary state values you might want to track between requests. When the server outputs HTML to send to the browser, the contents of ViewState are serialized, hashed, and base-64 encoded and dumped into an <input type="hidden"> element. When the next request comes in, the server unpacks the hidden field and deserializes the dictionary. You can store most objects in it, if you'd like. The goal of this, and all the other WebForm state stuff was to make handling web forms more like handling forms in traditional Windows applications.

It's "great". It's extra great when its default behavior is to ensure that the full state for every UI widget on the page. The incident which inpsired Remy's Law of Requirements Gathering was a case where our users wanted like 500 text boxes on a page, and we blew out our allowed request sizes due to gigundous ViewState because, at the time, we didn't know about that "helpful" feature.

Ryan N inherited some code which uses this, and shockingly, ViewState isn't the biggest WTF here:

protected void ValidateForm(object sender, EventArgs e)
{
    bool Save = true;
    if (sInstructions == string.Empty)
    {
        sInstructions = string.Empty;
    }
    else
    {
        Save = false;
    }
    if (Save) {...}
}

Let me just repeat part of that:

if (sInstructions == string.Empty) { sInstructions = string.Empty; }

If sInstructions is empty, set it to be empty. If sInstructions is not empty, then we set… Save to false? Reading this code, it implies if we have instructions we shouldn't save? What the heck is this sInstructions field anyway?

Well, Ryan explains: "sInstructions is a ViewState string variable, it holds error messages."

I had to spend a moment thinking "wait, why is it called 'instructions'?" But the method name is called ValidateForm, which explains it. It's because it's user instructions, as in, "please supply a valid email address". Honestly, I'm more worried about the fact that this approach is starting to make sense to me, than anything else.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Kevin RuddDoorstop: Queensland’s Covid Response

E&OE TRANSCRIPT
DOORSTOP INTERVIEW
SUNSHINE COAST
13 SEPTEMBER 2020

Topic: The Palaszczuk Government’s Covid-19 response

Journalist: Kevin, what’s your stance on the border closures at the moment? Do you think it’s a good idea to open them up, or to keep them closed?

Kevin Rudd: Anyone who’s honest will tell you these are really hard decisions and it’s often not politically popular to do the right thing. But I think Annastacia Palaszczuk, faced with really hard decisions, I think has made the right on-balance judgment, because she’s putting first priority on the health security advice for all Queenslanders. That I think is fundamental. You see, in my case, I’m a Queenslander, proudly so, but the last five years I’ve lived in New York. Let me tell you what it’s like living in a city like New York, where they threw health security to the wind. It was a complete debacle. And so many thousands of people died as a result. So I think we’ve got to be very cautious about disregarding the health security advice. So when I see the Murdoch media running a campaign to force Premier Palaszczuk to open the borders of Queensland, I get suspicious about what’s underneath all that. Here’s the bottom line: Do you want your Premier to make a decision based on political advice, or based on media bullying? Or do you want your Premier to make a decision based on what the health professionals are saying? And from my point of view — as someone who loves this state, loves the Sunshine Coast, grew up here and married to a woman who spent her life in business employing thousands of people — on balance, what Queenslanders want is for them to have their health needs put first. And you know something? It may not be all that much longer that we need to wait. But as soon as we say, ‘well, it’s time for politics to take over, it’s time for the Murdoch media to take over and dictate when elected governments should open the borders or not’, I think that’s a bad day for Queensland.

Journalist: Do you think the recession and the economic impact is something that Australia will be able to mitigate post pandemic?

Kevin Rudd: Well, the responsibility for mitigating the national recession lies in the hands of Prime Minister Morrison. I know something about what these challenges mean. I was Prime Minister during the Global Financial Crisis when the rest of the world went into recession. So he has it within his hands to devise the economic plans necessary for the nation; state governments are responsible primarily for the health of their citizens. That’s the division of labour in our commonwealth. So, it’s either you have a situation where Premiers are told that it’s in your political interests, because the Murdoch media has launched a campaign to throw open your borders, or you’re attentive to the health policy advice from the health professionals to keep all Queenslanders safe. It’s hard, it’s tough, but on balance I think she’s probably made the right call. In fact, I’m pretty sure she has.

The post Doorstop: Queensland’s Covid Response appeared first on Kevin Rudd.

MESetting Up a Dell R710 Server with DRAC6

I’ve just got a Dell R710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step by step guide as the situation doesn’t lend itself to that. I think that a list of how things are broken and what to avoid is more useful.

BIOS

Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something when it’s working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run, it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so doing a distribution upgrade at the same time is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the R710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP, you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge R710 recommends IE 6.

To get an ssl cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC. The only field that matters is the CN (Common Name); the rest have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory; the SSL key for DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run letsencrypt the file name of the output file you want will be in the format “*_chain.pem“. You then have to upload that to IDRAC to have it used. This is a major pain given the short lifetime of letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in.

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.
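
Here’s a sketch of what that scripting might look like with Python’s pexpect module instead of expect. The prompt string matches the session shown below, but I haven’t tested this against a DRAC, and the assumption that sslcsrgen echoes the CSR into the session is exactly that, an assumption.

import pexpect

def fetch_csr(host, out_path):
    # Assumes ssh key authentication and a known host key (see the SSH
    # section below for setting that up).
    child = pexpect.spawn("ssh root@%s" % host, encoding="utf-8", timeout=60)
    child.expect("/admin1->")
    child.sendline("racadm sslcsrgen")
    child.expect("/admin1->")
    # Drop the echoed command line, keep the CSR text (assumed output).
    csr = child.before.partition("\n")[2]
    child.sendline("exit")
    with open(out_path, "w") as f:
        f.write(csr)

fetch_csr("drac", "/etc/letsencrypt-drac/drac.csr")
# Then run certbot with "--config-dir /etc/letsencrypt-drac --csr ..." and
# upload the resulting "*_chain.pem" via the web interface as described above.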

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The R710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won’t run it, so you download the .jnlp file and pass it as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘; Google didn’t turn up any solutions to this. Java 8 didn’t have that problem, but gave a “connection failed” error that some people reported as being related to the SSL key; replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal, so I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising serial console setup for DRAC [1]. Once you have done that, put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console remotely, ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.

If I was supporting many Dell systems I’d probably set up a ssh to JavaScript gateway to provide web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, then any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside, having “racreset“ so close to “racresetcfg“ (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management

deb http://linux.dell.com/repo/community/ubuntu xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old, but the packages work on Debian/Buster). Installing srvadmin-storageservices-cli and srvadmin-omacore will probably drag in enough dependencies to get things going.

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0

RAID Control

The RAID controller is known as PERC (PowerEdge RAID Controller), and the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. It is statically linked and appears to have been built against several different library versions. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show” to give detailed information on controller 0 SEGVs, and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line, slightly reducing security for the whole system, to allow the system calls from perccli’s old libc code to work, or I could run code from a dubious source as root (see below).

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely; mine doesn’t have that. As an aside, the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find, so blog posts from other sysadmins are the best source of information.

I have configured IDRAC to send all of the BIOS output to the virtual serial console over ssh, so I can see the BIOS prompt for PERC administration, but it didn’t accept my key presses when I tried to use it. In any case that requires downtime, and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3]; it uses the ptrace interface to emulate vsyscall on a per-process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller, about 525MB of space at the end is used for RAID metadata. So when you create a RAID-0 with a single device, as in the above example, all data on the disk is preserved by default except for the last 525MB. I tested this by making a disk from another system into a RAID-0 on the PERC and then running a BTRFS scrub on it.

Update: It’s a R710 not a T710. I mostly deal with Dell Tower servers and typed the wrong letter out of habit.

Planet Linux AustraliaSimon Lyall: Talks from KubeCon + CloudNativeCon Europe 2020 – Part 1

Various talks I watched from their YouTube playlist.

Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling (KEDA) – Tom Kerkhove

I’ve been using Keda a little bit at work. Good way to scale on arbitrary metrics. At work I’m scaling pods against the length of AWS SQS queues and on a cron schedule. Lots of other options. This talk is a 9 minute intro. The small font on the screen is a bit hard to read in this talk.

Autoscaling at Scale: How We Manage Capacity @ Zalando – Mikkel Larsen, Zalando SE

  • These guys have their own HPA replacement for scaling: kube-metrics-adapter.
  • Outlines some new stuff in scaling in 1.18 and 1.19.
  • They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
  • Have up to 1000 nodes in some of their clusters. Have to play with the address space per node; they also scale their control plane nodes vertically (control plane autoscaler).
  • Use the Vertical Pod Autoscaler, especially for things like Prometheus that vary with the size of the cluster. Have had problems with it scaling down too fast. They have some of their own custom changes in a fork.

Keynote: Observing Kubernetes Without Losing Your Mind – Vicki Cheung

  • Lots of metrics don’t cover what you want, and they get complex and hard to maintain
  • Monitor core user workflows (i.e. just test a pod launch and stop)
  • Tiny tools
    • 1 watches for events on cluster and logs them -> elastic
    • 2 watches container events -> elastic
    • End up with one timeline for a deploy/job covering everything
    • Empowers users to do their own debugging

Autoscaling and Cost Optimization on Kubernetes: From 0 to 100 – Guy Templeton & Jiaxin Shan

  • Intro to HPA and metric types. Plus some of the newer stuff like multiple metrics
  • Vertical pod autoscaler. Good for single pod deployments. Doesn’t work well with JVM based workloads.
  • Cluster Autoscaler.
    • A few things like using prestop hooks to give pods time to shut down
    • Pod priorities for scaling.
    • --expendable-pods-priority-cutoff to not expand for low-priority jobs
    • Using the priority expander to try spot instances first and then fall back to more expensive node types
    • Using mixed instance policy with AWS. Lots of instance types (same CPU/RAM though) to choose from.
    • Look at PodDisruptionBudget
    • Some other CA flags like --scale-down-utilization-threshold to look at.
  • Mention of Keda
  • Best return is probably tuning HPA
  • There is also another similar talk. Note the male speaker talks very slowly, so crank up the playback speed.

Keynote: Building a Service Mesh From Scratch – The Pinterest Story – Derek Argueta

  • Changed to Envoy as an http proxy for incoming traffic
  • Wrote their own extension to make it feature complete
  • Also another project migrating to mTLS
    • Huge amount of work for Java.
    • Lots of work to repeat for other languages
    • Looked at getting Envoy to do the work
    • Ingress LB -> Inbound Proxy -> App
  • Used j2 to build the Static config (with checking, tests, validation)
  • Rolled out to put envoy in front of other services with good TLS termination default settings
  • Extra Mesh Use Cases
    • Infrastructure-specific routing
    • SLI Monitoring
    • http cookie monitoring
  • Became a platform that people wanted to use.
  • Solving one problem first and incrementally using other things. Many groups had similar problems. “Just a node in a mesh”.

Improving the Performance of Your Kubernetes Cluster – Priya Wadhwa, Google

  • Tools – Mostly tested locally with Minikube (she is a Minikube maintainer)
  • Minikube pause – pause the Kubernetes system processes and leave the app running; good if the cluster isn’t changing.
  • Looked at some articles from Brendan Gregg
  • Ran USE Method against Minikube
  • eBPF BCC tools against Minikube
  • biosnoop – noticed lots of writes from etcd
  • KVM Flamegraph – Lots of calls from ioctl
  • Theory that etcd writes might be a big contributor
  • How to tune etcd writes (updated the --snapshot-count flag to various numbers, but it didn’t seem to help)
  • Noticed CPU spikes every few seconds
  • “pidstat 1 60”. Noticed the kubectl command running often: “kubectl apply addons” was running regularly
  • Suspected the addon manager was running too often
  • Could increase the addon manager poll time, but then addons would take a while to show up.
  • But in Minikube that’s not a problem, because Minikube knows when new addons are added, so it can run the addon manager directly rather than having it poll.
  • 32% reduction in overhead from turning off addon polling
  • Also reduced the coredns replica count to one.
  • pprof – go tool
  • kube-apiserver pprof data
  • Spending lots of times dealing with incoming requests
  • Lots of requests from kube-controller-manager and kube-scheduler around leader-election
  • But Minikube is only running one of each. No need to elect a leader!
  • Flag to turn both off: --leader-elect=false
  • 18% reduction from reducing coredns to 1 and turning leader election off.
  • Back to looking at etcd overhead with pprof
  • writeFrameAsync in http calls
  • Theory: could increase --proxy-refresh-interval from 30s up to 120s. 70s seemed a good value, but unsure what the behaviour change was. Asked and it didn’t appear to be a big problem.
  • 4% reduction in overhead


,

Kevin RuddAustralian Jewish News: Rudd Responds To Critics

With the late Shimon Peres

Published in The Australian Jewish News on 12 September 2020

Since writing in these pages last month, I’ve been grateful to receive many kind messages from readers reflecting on my record in office. Other letters, including those published in the AJN, deserve a response.

First to Michael Gawenda, whose letter repeated the false assertion that I believe in some nefarious connection between Mark Leibler’s lobbying of Julia Gillard and the factional machinations that brought her to power. Challenged to provide evidence for this fantasy, Gawenda pointed to this passage in my book, The PM Years: “The meticulous work of moving Gillard from the left to the right on foreign policy has already begun in earnest more than a year before the coup”.

Hold the phone! Of course Gillard was on the move – on the US, on Israel, and even on marriage equality. On all these, unlike me, she had a socialist-left background and wanted to appeal to right-wing factional bosses. Even Gawenda must comprehend that a strategy on Gillard’s part does not equate to Leibler plotting my demise.

My complaint against Leibler is entirely different: his swaggering arrogance in insisting Labor back virtually every move by Benjamin Netanyahu, good or bad; that we remain silent over the Mossad’s theft of Australian passports to carry out an assassination, when they’d already been formally warned under Howard never to do it again; and his bad behaviour as a dinner guest at the Lodge when, having angrily disagreed with me over the passports affair, he leaned over the table and said menacingly to my face: “Julia is looking very good in the public eye these days, prime minister.”

There is a difference between bad manners and active conspiracy. Gawenda knows this. If he’d bothered to read my book, rather than flipping straight to the index, he would have found on page 293 the list of those actually responsible for plotting the coup. They were: Wayne Swan, Mark Arbib, David Feeney, Don Farrell and Stephen Conroy (with Bill Ludwig, Paul Howes, Bill Shorten, Tony Sheldon and Karl Bitar in supporting roles). All Labor Party members.

So what was Leibler up to? Unsurprisingly, he was cultivating Gillard as an influential contact in the government. Isn’t that, after all, what lobbyists do? They lobby. And Leibler, by his own account and Gawenda’s obsequious reporting, was very good at it.

Finally, I am disappointed by Gawenda’s lack of contrition for not contacting me before publication. This was his basic professional duty as a journalist. If he’d bothered, I could have dispelled his conspiracy theory inside 60 seconds. But, then again, it would have ruined his yarn.

A second letter, by Leon Poddebsky, asserted I engaged in “bombastic opposition to Israel’s blockade against the importation by Hamas of military and dual-purpose materials for its genocidal war against the Jewish state”.

I draw Mr Poddebsky to my actual remarks in 2010: “When it comes to a blockade against Gaza, preventing the supply of humanitarian aid, such a blockade should be removed. We believe that the people of Gaza … should be provided with humanitarian assistance.” It is clear my remarks were limited to curbs on humanitarian aid. The AJN has apologised for publishing this misrepresentation, and I accept that apology with no hard feeling.

Regarding David Singer’s letter, who denies Netanyahu’s stalled West Bank plan constitutes “annexation”, I decline to debate him on ancient history. I only note the term “annexation” is used by the governments of Australia and Britain, the European Union, the United Nations, the Israeli press and even the AJN.

Mr Singer disputes the illegality of annexation, citing documents from 1922. I direct him to the superseding Oslo II Accord of 1995 where Israel agreed “neither side shall initiate or take any step that will change the status of the West Bank and Gaza Strip pending the outcome of permanent status negotiations”.

Further, Mr Singer challenges my assertion that Britain’s Prime Minister agrees that annexation would violate international law. I direct Mr Singer to Boris Johnson’s article for Yedioth Ahronoth in July: “Annexation would represent a violation of International law. It would also be a gift to those who want to perpetuate old stories about Israel… If it does (go ahead), the UK will not recognise any changes to the 1967 lines, except those agreed by both parties.”

Finally, on Leibler’s continuing protestations about his behaviour at dinner, I will let others who know him better pass judgment on his character. More importantly, the net impact of Leibler’s lobbying has been to undermine Australia’s once-strong bipartisan support for Israel. His cardinal error, together with others, has been to equate support for Israel with support for Netanyahu’s policies. This may be tactically smart for their Likud friends, but it’s strategically dumb given the challenges Israel will face in the decades ahead.

The post Australian Jewish News: Rudd Responds To Critics appeared first on Kevin Rudd.

Sam VargheseWhen will Michael Hayden explain why the NSA did not predict 9/11?

As America marks the 19th anniversary of the destruction of the World Trade Centre towers by terrorists, it is a good time to ask when General Michael Hayden, head of the NSA at the time of 9/11, will come forward and explain why the agency was unable to detect the chatter among those who had banded together to wreak havoc in the US.

Before I continue, let me point out that nothing of what appears below is new; it was all reported some four years ago, but mainstream media have conspicuously avoided pursuing the topic because it would probably trouble some people in power.

The tale of how Hayden came to throw out a system known as ThinThread, devised by probably the most brilliant metadata analyst, William Binney, at that time the technical director of the NSA, has been told in a searing documentary titled A Good American.

Binney devised the system which would look at links between all people around the globe and added in measures to prevent violations of Americans’ privacy. Cryptologist Ed Loomis, analyst Kirk Wiebe, software engineer Tom Drake and Diane Roark, senior staffer at the House Intelligence Committee, worked along with him.

A skunkworks project, Binney’s ThinThread system handled data as it was ingested, unlike the existing method at the NSA which was to just collect all the data and analyse it later.

But when Hayden took the top job in 1999, he wanted to spread the NSA’s work around and also boost its budget – a bit of empire building, no less. Binney was asked what he could do with a billion dollars and more for the ThinThread system, but said he could not use more than US$300 million.

What followed was surprising. Though ThinThread had been demonstrated to be able to sort data fast and accurately and track patterns within it, Hayden banned its use and put in place plans to develop a system known as Trailblazer – which incidentally had to be trashed many years later as an unmitigated disaster after many billions had gone down the drain.

Trailblazer was being used when September 11 came around.

Drake points out in the film that after 9/11, they set up ThinThread again and analysed all the data flowing into the NSA in the lead-up to the attack and found clear indications of the terrorists’ plans.

At the beginning of the film, Binney’s voice is heard saying, “It was just revolting… and disgusting that we allowed it to happen,” as footage of people leaping from the burning Trade Centre is shown.

After the tragedy, ThinThread, sans its privacy protections, was used to conduct blanket surveillance on Americans. Binney and his colleagues left the NSA shortly thereafter.

Hayden is often referred to as a great American patriot by American talk-show host Bill Maher. He was offered an interview by the makers of A Good American but did not take it up. Perhaps he would like to break his silence now.

,

Sam VargheseSerena Williams, please go before people start complaining

The US Open 2020 represented the best chance for an aging Serena Williams to win that elusive 24th Grand Slam title and equal the record of Australian Margaret Court. Seeds Bianca Andreescu (6), Ashleigh Barty (1), Simona Halep (2), Kiki Bertens (7) and Elina Svitolina (5) are all not taking part.

But Williams, now 39, could not get past Victoria Azarenka in the semi-finals, losing 1-6, 6-3, 6-3.

Prior to this, Williams had lost four Grand Slam finals in pursuit of Court’s record: Andreescu defeated her at the US Open in 2019, Angelique Kerber beat her at Wimbledon in 2018, Naomi Osaka took care of her in the 2018 US Open and Halep accounted for Williams at Wimbledon in 2019. In all those finals, Williams was unable to win more than four games in any set.

Williams took a break to have a child some years ago; her last Grand Slam final win came just before that break, at the Australian Open in 2017, when she beat her sister, Venus, 6-4, 6-4.

It looks like it is time to bow out, if not gracefully, then clumsily. One, unfortunately, cannot associate grace with Williams.

But, no, Williams has already signed up to play in the French Open which has been pushed back to September 27 from its normal time of May due to the coronavirus pandemic. Only Barty has said she would not be participating.

Sportspeople are remembered for more than mere victories. One remembers the late Arthur Ashe not merely because he was the first black man to win Wimbledon, the US Open and the Australian Open, but for the way he carried himself.

He was a great ambassador for his country and the soul of professionalism. Sadly, he died at the age of 49 after contracting AIDS from a blood transfusion.

Another lesser known person who was in the Ashe mould is Larry Gomes, the West Indies cricketer who was part of the world-beating sides that never lost a Test series for 15 years from 1980 to 1995.

Gomes quit after playing 60 Tests, and the letter he sent to the Board when he did so was a wonderful piece of writing, reflecting both his humility and his professionalism.

In an era of dashing stroke-players, he was the steadying influence on the team many a time, and was thoroughly deserving of his place among the likes of the much better-known superstars like Viv Richards, Gordon Greenidge, Desmond Haynes and Clive Lloyd.

In Williams’ case, she has shown herself to be a poor sportsperson, far too focused on herself and unable to see the bigger picture.

She is at the other end of the spectrum compared to players like Chris Evert and Steffi Graf, both great champions, and also models of good fair-minded competitors.

It would be good for tennis if Williams leaves the scene after the French Open, no matter if she wins there or loses. There is more to a good sportsperson than mere statistics.

Cryptogram Friday Squid Blogging: Calamari vs. Squid

St. Louis Magazine answers the important question: “Is there a difference between calamari and squid?” Short answer: no.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin RuddWPR Trend Lines: China Under Xi Jinping

E&OE TRANSCRIPT
PODCAST INTERVIEW
WORLD POLITICS REVIEW
11 SEPTEMBER 2020

Topics: China under Xi Jinping

World Politics Review: Mr. Rudd, thank you so much for joining us on Trend Lines.

Kevin Rudd: Happy to be on the program.

WPR: I want to start with a look at this bipartisan consensus that’s emerged in the United States, this reappraisal of China in the U.S., but also in Europe and increasingly in Australia. This idea that the problems of China’s trade policies and the level playing fields within the domestic market, aren’t going away. If anything, they’re getting worse. Same thing with regard to human rights and political liberalization under Xi Jinping. And that the West really needs to start preparing for a period of strategic rivalry with China, including increased friction and maybe some bumping of elbows. Are you surprised by how rapidly this new consensus has emerged and how widespread it is?

Mr. Rudd: I’m not surprised by the emergence of a form of consensus across the democracies, whether they happen to be Western or Asian democracies, or democracies elsewhere in the world. And there’s a reason for that, and that is that the principle dynamic here has been China’s changing course itself.

And secondly, quite apart from a change in management under Xi Jinping and the change in direction we’ve seen under him, China is now increasingly powerful. It doesn’t matter what matrix of power we’re looking at—economic power, trade, investment; whether it’s capital markets, whether it’s technology or whether it’s the classic determinants of international power and various forms of military leverage.

You put all those things together, we have a new guy in charge who has decided to be more assertive about China’s interests and values in the world beyond China’s borders. And secondly, a more powerful China capable of giving that effect.

So as a result of that, many countries for the first time have had this experience rub up against them. In the past, it’s only been China’s near neighbors who have had this experience. Now it’s a much broader and shared experience around the region and around the world.

So there are the structural factors at work in terms of shaping an emerging consensus on the part of the democracies, and those who believe in an open, free trading system in the world, and those who are committed to the retention of the liberal international order—that these countries are beginning to find a common cause in dealing with their collective challenges with the Middle Kingdom.

WPR: Now you mentioned the new guy in charge, obviously that’s Xi Jinping who’s been central to all of the recent shifts in China, whether it’s domestic policy or foreign policy. At the same time there’s some suggestion that as transformational as Xi is, that there’s quite a bit of continuity in terms of some of the more assertive policies of his predecessor Hu Jintao. Is it a mistake to focus so much on Xi? Does he reflect a break in China’s approach to the world, or is it continuity or more of a transition that responds to the context of perhaps waning American power as perceived by China? So is it a mistake to focus so much on Xi, as opposed to trying to understand the Chinese leadership writ large, their perception of their own national interest?

Mr. Rudd: I think it’s a danger to see policy continuity and policy change in China as some binary alternative, because with Xi Jinping, yes, there are elements of continuity, but there are also profound elements of change. So on the continuity front, yes, in the period of Hu Jintao, we began to see a greater Chinese experimental behavior in the South China Sea, a more robust approach to the assertion of China’s territorial claims there, for example.

But with Xi Jinping, this was taken to several extra degrees in intensity when we saw a full-blown island reclamation exercise, where you saw rocky atolls suddenly being transformed into sand-filled islands, and which were then militarized. So is that continuity or change? That becomes, I think, more of a definitional question.

If I was trying to sum up what is new, however, about Xi Jinping in contrast to his predecessors, it would probably be in these terms.

In politics, Chinese domestic politics, he has changed the discourse by taking it further to the left—by which I mean a greater role for the party and ideology, and the personal control of the leader, compared with what existed before.

On the economy we see a partial shift to the left with a resuscitation of state-owned enterprises and some disincentives emerging for the further and continued growth of China’s own hitherto successful private-sector entrepreneurial champions.

On nationalism we have seen a further push under Xi Jinping further to the right than his predecessors.

And in terms of degrees of international assertiveness, whether it’s over Hong Kong, the South China Sea, or over Taiwan, in relation to Japan and the territorial claims in the East China Sea, or with India, or in the big bilateral relationship with the United States as well as with other American allies—the Canadians, the Australians and the Europeans, et cetera—as well as big, new large-canvas foreign policy initiatives like the Belt and Road Initiative, what we’ve seen is an infinitely more assertive China.

So, yes, there are elements of continuity, but it would be foolish for us to underestimate the degree of change as a consequence of the agency of Xi Jinping’s leadership.

WPR: You mentioned the concentration of power in the leader’s hands, Xi Jinping’s hands. He’s clearly the most powerful Chinese leader since Mao Zedong. At the same time and especially in the immediate aftermath of the coronavirus pandemic, but over the course of the last year or so, there’s been some reporting about rumbling within the Chinese Communist Party about his leadership. What do you make of those reports? Is he a leader who’s in firm control in China? Is that something that’s always somewhat at question, given the kind of internal politics of the Chinese Communist Party? Or are there potential challenges to his leadership?

Mr. Rudd: Well, in our analysis of Chinese politics, it hasn’t really changed a lot since I was a young diplomat working in Beijing in the mid-1980s, and that is: The name of the game is opacity. With Chinese domestic politics we are staring through a glass dimly. And that’s because the nature of a Marxist-Leninist party, in particular a Leninist party, is deeply secretive about its own internal operations. So we are left to speculate.

But what we know from the external record, we know that Xi Jinping, as you indicated in your question, has become China’s most powerful leader at least since Deng and probably since Mao, against most matrices of power. He’s become what is described as the “chairman of everything.” Every single leading policy group of the politburo is now chaired by him.

And on top of that, the further instruments of power consolidation have been reflected in his utilization of the anti-corruption campaign, and now the unleashing of a Party Rectification campaign, which is designed to reinforce compliance on the part of party members with central leadership diktat. So that’s the sort of individual that we have, and that’s the journey that he has traveled in the last six to seven years, and where most of his principal opponents have either been arrested or incarcerated or have committed suicide.

So where does that leave us in terms of the prospects for any organized political opposition? Under the party constitution, Xi Jinping is up for a reelect at the 20th Party Congress, and that is his reappointment as general secretary of the party and as chairman of the Central Military Commission. Separately, he’s up for a reelect by the National People’s Congress for a further term as China’s president.

For him to go a further term would be to break all the post-Deng Xiaoping conventions built around the principles of shared or collective leadership. But Xi Jinping, in my judgment, is determined to remain China’s paramount leader through the 2020s and into the 2030s. And how would he do that? He would perhaps see himself at the next Party Congress appointed as party chairman, a position last occupied by Chairman Mao and Mao’s immediate successor, Chairman Hua Guofeng.

He would probably retain the position of president of the country, given the constitutional changes he brought about three years or so ago to remove the two-term limit for the presidency. And he would, I think, most certainly retain the chairmanship of the Central Military Commission.

These predispositions, however, of themselves combined with the emerging cult of personality around Xi, combined with fundamental disagreements on elements of economic policy and international policy, have created considerable walls of opposition to Xi Jinping within the Chinese Communist Party.

And the existence of that opposition is best proven by the fact that Xi Jinping, in August of 2020, decided to launch this new Party Rectification campaign in order to reestablish, from his perspective, proper party discipline—that is, obedience to Xi Jinping. So the $6,000 question is, Can these different sources of dissent within the Chinese Communist Party coalesce? There’s no obvious candidate to do the coalescing, just as Deng Xiaoping in the past was a candidate for coalescing different political and policy views to those of Mao Zedong, which is why Deng Xiaoping was purged at least two times in his latter stages of his career before his final return and rehabilitation.

So we can’t see a ready candidate, but the dynamics of Chinese politics tend to have been that if there is, however, a catastrophic event—an economic implosion, a major foreign policy or international policy mishap, misstep, or crisis or conflict which goes wrong—then these tend to generate their own dynamics. So Xi Jinping’s watchword between now and the 20th Party Congress to be held in October/November of 2022 will be to prevent any such crises from emerging.

WPR: It’s a natural transition to my next question, because another point of disagreement among China watchers is whether a lot of these moves that we’ve seen over the past five years, accelerating over the past five years, but this reassertion of the party centrality within or across all spheres of Chinese society, but also some of the international moves, whether they’re signs of strength in the face of what they see as a waning United States, or whether they’re signs of weakness and a sense of having to move faster than perhaps was previously anticipated? A lot of the challenges that China faces, whether it can become a rich country before it becomes an old country, some of the environmental degradation that its development has caused, none of them are going away and none of them have gone away. So what’s your view on that in terms of China’s future prospects? Do you see a continued rise? A leveling off? The danger of a failing China and everything that that might imply?

Mr. Rudd: Well, there are a bunch of questions embedded in what you’re saying, but I’d begin by saying that Americans shouldn’t talk themselves out of global leadership in the future. America remains a powerful country in economic terms, in technological terms and in military terms, and against all three measures still today more powerful than China. In the case of the military, significantly more powerful than China.

Of course, the gap begins to narrow, but this narrowing process is not an overnight one; it takes a long period of time, and there are a number of potential mishaps for China. An anecdote from the Chinese commentariat recently: in a debate about America’s preparedness to go into armed conflict or war with China in the South China Sea or over Taiwan, the Chinese nationalist constituency was basically chanting, “Bring it on.” If people chant, “USA,” in the United States, similar gatherings would probably chant, “PRC,” in Beijing.

But it was interesting what a Chinese scholar had to say as this debate unfolded, when one of the commentators said, “Well, at the end of the day, America is a paper tiger.” Of course, that was a phrase used by Mao back in the 1950s, ‘60s. The response from the Chinese scholar in China’s online discourse was, “No, America is not a paper tiger. America is a tiger with real teeth.”

So it’s important for Americans to understand that they are still in an extraordinarily powerful position in relation to both China and the rest of the world. So the question becomes one ultimately of the future leadership direction in the United States.

The second point I’d make in response to your question is that, when we seek to understand China’s international behavior, the beginning of wisdom is to understand its domestic politics, to the extent that we can. So when we are asked questions about strength or weakness, or why is China in the COVID world doubling down on its posture towards Hong Kong, the South China Sea, towards Taiwan, the East China Sea, Japan, as well as other countries like India, Canada, Australia, and some of the Europeans and the United States.

Well, I think the determination on the part of Xi Jinping’s leadership was to make it plain to all that China had not been weakened by COVID-19. Is that a sign of weakness or of strength? I think that again becomes a definitional question.

In terms of Chinese domestic politics, however, if Xi Jinping is under attack in terms of his political posture at home and the marginalization of others within the Chinese Communist Party, if he is under attack in terms of its direction on economic policy, with the slowing of growth even in the pre-COVID period, then of course from Xi Jinping’s perspective, the best way to deal with any such dissension on the home front is to become more nationalist on the foreign front. And we’ve seen evidence of that. Is that strength, or is it weakness? Again, it’s a definitional question, but it’s the reality of what we’re dealing with.

For the future, I think ultimately what Xi Jinping’s administration will be waiting for is what sort of president emerges from November of 2020. If Trump is victorious, I think Xi Jinping will privately be pretty happy about that, because he sees the Trump administration on the one hand being hard-line towards China, but utterly chaotic in its management of America’s national China strategy—strong on some things, weak on others, vacillating between A and B above—and ultimately a divisive figure in terms of the solidarity of American alliances around the world.

If Biden wins, I think the judgment in China will be that America could seriously get its act back together again, run a coherent hard-line national comprehensive China strategy. Secondly, do so—unlike the Trump administration—with the friends and allies around the world in full harness and incorporative harness. So it really does depend on what the United States chooses to do in terms of its own national leadership, and therefore foreign policy direction for the future.

WPR: You mentioned the question of whether the U.S. is a paper tiger or not. Clearly in terms of military assets and capabilities, it’s not, but there’s been some strategic debate in Australia over the past five years or so—and I’m thinking particularly of Hugh White and his line of analysis—the idea that regardless of what America can do, that there really won’t be, when push comes to shove, the will to engage in the kind of military conflict that would be required to respond, for instance, to Chinese aggression or attempt to reunify militarily with Taiwan, let alone something like the Senkaku islands, the territorial dispute with Japan. And the recently released Australian Defense White Paper suggests that there’s a sense that the U.S. commitment to mutual defense in the Asia-Pacific might be waning. So what are your thoughts about the future of American primacy in Asia? And you mentioned it’s a question of leadership, but are you optimistic about the public support in America for that historical project? And if not, what are the implications for Australia, for other countries in Asia and the world?

Mr. Rudd: Well, it’s been a while since I’ve been in rural Pennsylvania. So it’s hard for me to answer your question. [Laughter] I am obviously both a diplomat by training and a politician, so I kind of understand the policy world, but I also understand rural politics.

It does depend on which way the American people go. When I look at the most recent series of findings from the Pew Research Project on American attitudes to the world, what I find interesting is that contrary to many of the assumptions by the American political class, that nearly three-quarters of Americans are strongly supportive of trade and strongly supportive of variations of free trade. If you listen to the populist commentary in the United States, you would think that notions of free trade had become a bit like the anti-Christ.

So therefore I think it will be a challenge for the American political class to understand that the American public, at least reflected in these polling numbers on trade, are still fundamentally internationally engaged, and therefore are not in support of America incrementally withdrawing from global economic leadership. On the question of political leadership and America’s global national security role and its regional national security role in the Asia Pacific, again it’s a question of the proper harnessing of American resources.

I think one of the great catastrophes of the last 20 years was America’s decision to invade Iraq. You flushed up a whole lot of political capital and foreign policy capital around the world against the wall. You expended so much in blood and treasure that I think the political wash through also in America itself was a deep set of reservations in American voters about, let’s call it extreme foreign policy folly.

Remember, that enterprise was about eliminating weapons of mass destruction, which the Bush administration alleged to exist in Saddam Hussein’s bathroom locker, and they never did. So we’re partly experiencing the wash up of all of that in the American body politic, where there is a degree of, shall we say, exhaustion about unnecessary wars and about unnecessary foreign policy engagement. So the wisdom for the future will be what is necessary, as opposed to that which is marginal.

Now, I would think, on that basis, given the galvanizing force of American public sentiment—on COVID questions, on trade and investment questions, on national security questions, and on human rights questions—that the No. 1 foreign policy priority for the United States, for the period ahead, will be China. And so what I therefore see is that the centrality of future U.S. administrations, Republican and Democrat, is likely to be galvanized by a finding from the American people that, contrary to any of the urban myths, that the American people still want to see their country become more prosperous through trade. They want their country to become bigger through continued immigration—another finding in terms of Pew research. And they want their country to deal with the greatest challenge to America’s future, which is China’s rise.

So that is different from becoming deeply engaged and involved in every nuance of national security policy in the eastern reaches of either Libya or in the southern slopes of Lebanon. There has been something of a long-term Middle Eastern quagmire, and I think the wake-up call of the last several years has been for America to say that if there’s to be a second century of American global leadership, another Pax Americana for the 21st century, then it will ultimately be resolved on whether America rises to the global economic challenge, the global technology leadership challenge and the global China challenge.

And I think the American people in their own wonderfully, shall we say unorchestrated way, but subject to the bully pulpit of American presidential leadership and politics, can harness themselves for that purpose.

WPR: So much of the West’s engagement with China over the past 30 years has been this calculated gamble on what kind of power China would be when it did achieve great power status, and this idea that trade and engagement would pull China more toward being the kind of power that’s compatible with the international order, the postwar American-led international order. Do you think we have an answer to that question yet, and has the gamble of engaging with China paid off?

Mr. Rudd: The bottom line is that if you look at the history of America’s engagement strategy with China and the engagement strategy of many of America’s allies over the decades—really since the emergence of Deng in the late ‘70s and recommenced in the aftermath of Tiananmen, probably from about 1992, 1993—it has always been engagement, but hedge. In other words, there was engagement with China across all the instrumentalities of the global economy and global political and economic governance, and across the wide panoply of the multilateral system.

But there was always a conditionality and there was always a condition, which is, countries like the United States were not about to deplete their military, even after it won the Cold War against the Soviet Union. And America did not deplete its military. America’s military remained formidable. There are ebbs and flows in the debate, but the capabilities remained world class and dominant. And so in fairness to those who prosecuted that strategy of engagement, it always was engagement plus hedge.

And so what’s happened as China’s leadership direction has itself changed over the last six or seven years under Xi Jinping, is the hedge component of engagement has reared its head and said, “Well, I’m glad we did so.” Because we still, as the United States, have the world’s most powerful military. We still, as the United States, are the world’s technology leaders—including in artificial intelligence, despite all the hype coming out of Beijing. We are the world’s leaders when it comes to the core driver of the future of artificial intelligence, which is the production of computer chips, of semiconductors and other fundamental advances in computing. And you still are the world’s leaders in the military. And so therefore America is not in a hugely disadvantageous position. And on top of all the above, you still retain the global reserve currency called the U.S. dollar.

Each of these is under challenge by China. China is pursuing a highly systematic strategy to close the gap in each of these domains. But I think there is a degree of excessive pessimism when you look at America through the Washington lens, that nothing is happening. Well, a lot is. You go to Silicon Valley, you go to the Pacific Fleet, you go to the New York Stock Exchange, and you look at the seminal debates in Hong Kong at the moment, about whether Hong Kong will remain linked between the Hong Kong dollar and the U.S. dollar.

All this speaks still to the continued strength of America’s position, but it really does depend on whether you choose to husband these strengths for the future, and whether you choose to direct them in the future towards continued American global leadership of what we call the rules-based liberal international order.

WPR: Mr. Rudd, thank you so much for being so generous with your time and your insights.

Kevin Rudd: Good to be with you.

The post WPR Trend Lines: China Under Xi Jinping appeared first on Kevin Rudd.

Cryptogram The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Cryptogram Ranking National Cyber Power

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.” The rankings: 1. US, 2. China, 3. UK, 4. Russia, 5. Netherlands, 6. France, 7. Germany, 8. Canada, 9. Japan, 10. Australia, 11. Israel. More countries are in the document.

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

LongNowStunning New Universe Fly-Through Really Puts Things Into Perspective

A stunning new video lets viewers tour the universe at superluminal speed. Miguel Aragon of Johns Hopkins, Mark Subbarao of the Adler Planetarium, and Alex Szalay of Johns Hopkins reconstructed the layout of 400,000 galaxies based on information from the Sloan Digital Sky Survey (SDSS) Data Release 7:

“Vast as this slice of the universe seems, its most distant reach is to redshift 0.1, corresponding to roughly 1.3 billion light years from Earth. SDSS Data Release 9 from the Baryon Oscillation Spectroscopic Survey (BOSS), led by Berkeley Lab scientists, includes spectroscopic data for well over half a million galaxies at redshifts up to 0.8 — roughly 7 billion light years distant — and over a hundred thousand quasars to redshift 3.0 and beyond.”

Worse Than FailureError'd: This Could Break the Bank!

"Sure, free for the first six months is great, but what exactly does happen when I hit month seven?" Stuart L. wrote.

 

"In order to add an app on the App Store Connect dashboard, you need to 'Register a new bundle ID in Certificates, Identifiers & Profiles'," writes Quentin, "Open the link, you have a nice 'register undefined' and cannot type anything in the identifier input field!"

 

"I was taught to keep money amounts as pennies rather than fractional dollars, but I guess I'm an old-fashioned guy!" writes Paul F.

 

Anthony C. wrote, "I was looking for headphones on Walmart.com and well, I guess they figured I'd like to look at something else for a change?"

 

"Build an office chair using only a spork, a napkin, and a coffee stirrer? Sounds like a job for McGuyver!"

 

“Translation from Swedish: ‘We assume that most people who watch Just Chatting probably also like Just Chatting.’ Yes, I bet it’s true!” Bill W. writes.

 


,

Planet Linux AustraliaDavid Rowe: Playing with PAPR

The average power of a FreeDV signal is surprisingly hard to measure, as the parallel carriers produce a waveform with many peaks and troughs as the various carriers come in and out of phase with each other. Peter, VK3RV has been working on some interesting experiments to measure FreeDV power using calorimeters. His work got me thinking about FreeDV power, and in particular ways to improve the Peak to Average Power Ratio (PAPR).

I’ve messed with a simple clipper for FreeDV 700C in the past, but decided to take a more scientific approach and use simulations to measure the effect of clipping on FreeDV PAPR and BER. As usual, asking a few questions blew up into a several week long project, with the usual bugs and strange, too-good-to-be-true initial results before I started to get numbers that felt sensible. I’ve tested some of the ideas over the air (blowing up an attenuator along the way), and learnt a lot about PAPR and related subjects like Peak Envelope Power (PEP).

The goal of this work is to explore the effect of a clipper on the average power and ultimately the BER of a received FreeDV signal, given a transmitter with a fixed peak output power.

Clipping to reduce PAPR

In normal operation we adjust our Tx drive so the peaks just trigger the ALC. This sets the average power PAPR dB below the peak power, for example Pav = 100W PEP – 10dB = 10W average.

The idea of the clipper is to chop the tops off the FreeDV waveform so the PAPR is decreased. We can then increase the Tx drive and get a higher average power. For example, if the PAPR is reduced from 10 to 4dB, we get Pav = 100W PEP – 4dB = 40W. That’s 4x the average power output of the 10dB PAPR case – Woohoo!
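
In linear terms that dB bookkeeping is just a division; here’s a trivial Python check of the arithmetic:

# Average power given peak power (PEP) and PAPR: Pav = Ppeak / 10^(PAPR/10)
def average_power_w(pep_w, papr_db):
    return pep_w / 10 ** (papr_db / 10)

print(average_power_w(100, 10))  # 10.0 W average at 10 dB PAPR
print(average_power_w(100, 4))   # ~39.8 W average at 4 dB PAPR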

In the example below the 16 carrier waveform was clipped and the PAPR reduced from 10.5 to 4.5dB. The filtering applied after the clipper smooths out the transitions (and limits the bandwidth to something reasonable).

However it gets complicated. Clipping actually reduces the average power, as we’ve removed the high energy parts of the waveform. It also distorts the signal. Here is a scatter diagram of the signal before and after clipping:


The effect looks like additive noise. Hmmm, and what happens on multipath channels, does the modem perform the same as for AWGN with clipped signals? Another question – how much clipping should we apply?

So I set about writing a simulation (papr_test.m) and doing some experiments to improve my understanding of clippers, PAPR, and OFDM modem performance with typical FreeDV waveforms. I started out trying a few different compression methods, such as different compander curves, but found that clipping plus a bandpass filter gives about the same result, so for simplicity I settled on clipping. Throughout this post many graphs are presented in terms of Eb/No – for the purpose of comparison just consider it the same thing as SNR: if the Eb/No goes up by 1dB, so does the SNR.

Here’s a plot of PAPR versus the number of carriers, showing PAPR getting worse with the number of carriers used:

Random data was used for each symbol. As the number of carriers increases, the randomly aligned carrier phases partially cancel, so the peaks grow more slowly than the worst case. Behaviour with real world data may be different; if there are instances where the phases of all carriers align, there may be larger peaks.
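
For anyone who wants to poke at this without Octave, here’s a minimal Python/numpy sketch of the measurement (papr_test.m is the real simulation; the FFT size and QPSK mapping here are just illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    """PAPR of a complex baseband signal in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def ofdm(n_carriers, n_symbols=10000, nfft=64):
    """OFDM symbols with random QPSK data on n_carriers sub-carriers,
    generated the usual way with an IFFT per symbol."""
    qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j],
                      size=(n_symbols, n_carriers))
    spec = np.zeros((n_symbols, nfft), dtype=complex)
    spec[:, 1:n_carriers + 1] = qpsk   # load sub-carriers 1..n
    return np.fft.ifft(spec, axis=1).ravel()

for n in (2, 4, 8, 16):
    print(f"{n:2d} carriers: PAPR = {papr_db(ofdm(n)):.1f} dB")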

To define the amount of clipping I used an estimate of the PDF and CDF:

The PDF (or histogram) shows how likely a certain level is, and the CDF shows the cumulative PDF. High level samples are quite unlikely. The CDF shows us what proportion of samples are above and below a certain level. This CDF shows us that 80% of the samples have a level of less than 4, so only 20% of the samples are above 4. So a clip level of 0.8 means the clipper hard limits at a level of 4, which would affect the top 20% of the samples. A clip value of 0.6, would mean samples with a level of 2.7 and above are clipped.
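
In the same Python terms, the clip threshold falls straight out of the empirical CDF via a percentile, and the clipper itself just hard-limits the magnitude while preserving phase (a sketch of the idea only; the real clipper in papr_test.m is followed by a bandpass filter to contain the spectral spread):

import numpy as np

def clip_signal(x, clip_level):
    """Hard-limit complex signal x at the magnitude that a fraction
    clip_level (e.g. 0.8) of the samples fall below, preserving phase."""
    threshold = np.percentile(np.abs(x), 100 * clip_level)
    mag = np.abs(x)
    scale = np.minimum(mag, threshold) / np.maximum(mag, 1e-12)
    return x * scale

# e.g. using ofdm() and papr_db() from the sketch above:
#   x = ofdm(16); print(papr_db(x), papr_db(clip_signal(x, 0.8)))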

Effect of clipping on BER

Here are a bunch of curves that show the effect of clipping on AWGN and multipath channels (the multipath channel is roughly CCIR poor). A 16 carrier signal was used, typical of FreeDV waveforms. The clipping level and resulting PAPR are shown in the legend. I also threw in a Tx diversity curve – sending each symbol twice on double the carriers – which is the approach used on FreeDV 700C and tends to help a lot on multipath channels.

As we clip the signal more and more, the BER performance gets worse (Eb/No x-axis), but the PAPR is reduced so we can increase the average power, which improves the BER. I’ve tried to show the combined effect on the (peak Eb/No x-axis) curves, which shift each curve right by its PAPR (peak Eb/No = Eb/No + PAPR, all in dB). This shows the peak power required for a given BER. Lower is better.




Take aways:

  1. The 0.8 and 0.6 clip levels work best on the peak Eb/No scale, i.e. when we combine the hit on BER performance (bad) and the PAPR improvement (good).
  2. There is about 4dB improvement across a range of operating points. This is pretty significant – similar to the gains we get from Tx diversity or a good FEC code.
  3. AWGN and multipath improvements are similar – good. Sometimes you get an algorithm that works well on AWGN but falls in a heap on multipath channels, which are typically much tougher to push bits through.
  4. I also tried 8 carrier waveforms, which produced results about 1dB better, as I guess fewer carriers have a lower PAPR to start with.
  5. Non-linear techniques like clipping spread the energy in frequency.
  6. Filtering to constrain the frequency spread brings the PAPR up again. We can trade off PAPR against bandwidth: lower PAPR, more bandwidth.
  7. Non-linear techniques will mess with QAM more, so we may hit a wall at high data rates.

Testing on a Real PA

All these simulations are great, but how do they compare with operation on a real HF radio? I designed an experiment to find out.

First, some definitions.

The same FreeDV OFDM signal is represented in different ways as it winds its way through the FreeDV system:

  1. Complex valued samples are used for much of the internal signal processing.
  2. Real valued samples at the interfaces, e.g. for getting samples in and out of a sound card and standard HF radio.
  3. Analog baseband signals, e.g. voltage inside your radio.
  4. Analog RF signals, e.g. at the output of your PA, and input to your receiver terminals.
  5. An electromagnetic wave.

It’s the same signal, as we can convert freely between the representations with no loss of fidelity, but its representation can change the way measures like PAPR work. This caused me some confusion – for example the PAPR of the real signal is about 3dB higher than the complex valued version! I’m still a bit fuzzy on this one, but have satisfied myself that the PAPR of the complex signal is the same as the PAPR of the RF signal – which is what we really care about.
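
A quick numerical check of that 3dB figure: upconvert the complex baseband signal from the earlier sketches to a real passband signal and measure both (the sample and carrier rates are arbitrary picks for illustration). The real version measures roughly 3dB worse, as the average power halves while the biggest peaks survive:

fs = 8000; fc = 1500;                        % arbitrary sample/carrier rates
n = 0:length(tx)-1;
tx_real = real(tx .* exp(j*2*pi*fc*n/fs));   % real passband representation
printf("complex: %4.1f dB  real: %4.1f dB\n", papr(tx), papr(tx_real));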

Another definition that I had to (re)study was Peak Envelope Power (PEP) – the peak power averaged over one or more carrier cycles. This is the RF equivalent of our “peak” in PAPR. When driven by any baseband input signal, it’s the maximum RF power of the radio, averaged over one or more carrier cycles. Signals such as speech and FreeDV waveforms will have occasional peaks that hit the PEP. A baseband sine wave driving the radio would generate an RF signal that sits at the PEP power continuously.

Here is the experimental setup:

The idea is to play canned files through the radio, and measure the average Tx power. It took me several attempts before my experiment gave sensible results. A key improvement was to make the peak power of each sampled signal the same. This means I don’t have to keep messing with the audio drive levels to ensure I have the same peak power. The samples are 16 bits, so I normalised each file such that the peak was at +/- 10000.
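
The normalisation step is only a couple of lines of Octave; something along these lines, with a placeholder file name:

[s, Fs] = audioread("sample.wav");        % placeholder file name
s = s / max(abs(s(:))) * (10000/32768);   % unit peak, then +/-10000 of 16-bit full scale
audiowrite("sample_norm.wav", s, Fs);     % written as 16 bit by default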

Here is the RF power sampler:

It works pretty well on signals from my FT-817 and IC-7200, and will help prevent any more damage to RF test equipment. I used my RF sampler after my first attempt using an SMA barrel attenuator resulted in its destruction when I accidentally put 5W into it! Suddenly it went from 30dB to 42dB attenuation. Oops.

For all the experiments I am tuned to 7.175 MHz and have the FT-817 on its lowest power level of 0.5W.

For my first experiment I played a 1000 Hz sine wave into the system, and measured the average power. I like to start with simple signals, something known that lets me check all the fiddly RF kit is actually working. After a few hours of messing about – I did indeed see 27dBm (0.5W) on my spec-an. So, for a signal with 0dB PAPR, we measure average power = PEP. Check.

In my next experiment, I measured the effect of ALC on Tx power. With the FT-817 on its lowest power setting (0.5W), I increased the drive until just before the ALC bars came on, then kept increasing it. Here is the relationship I found between the number of ALC bars and output power:

Bars  Tx Power (dBm)
0     26.7
1     26.4
2     26.7
3     27.0

So the ALC really does clamp the power at the peak value.

On to more complex FreeDV signals.

Measuring the average power of OFDM/parallel tone signals proved much harder on the spec-an. The power bounces around over a period of several seconds as the OFDM waveform evolves, which can derail many power measurement techniques. The time constant, or measurement window, is important – we want to capture the total power over a few seconds and average the value.

After several attempts and lots of head scratching I settled on the following spec-an settings:

  1. 10s sweep time so the RBW filter is averaging a lot of time varying power at each point in the sweep.
  2. 100kHz span.
  3. RBW/VBW of 10 kHz so we capture all of the 1kHz wide OFDM signal in the RBW filter peak when averaging.
  4. Power averaging over 5 samples.

The two-tone signal was included to help me debug my spec-an settings, as it has a known (3dB) PAPR.
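
The 3dB figure for two equal tones falls straight out of the maths: the complex envelope peaks at a power of 4 when the tones align, against an average power of 2. A quick check using the papr() helper from the earlier sketches:

t = (0:8000*5-1)/8000;                        % 5 seconds at 8 kHz
s2 = exp(j*2*pi*800*t) + exp(j*2*pi*1200*t);  % complex envelope of the two tones
printf("two-tone PAPR = %4.1f dB\n", papr(s2));   % 10*log10(4/2) = 3.0 dB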

Here is a table showing the results for several test signals, all of which have the same peak power:

Sample         Description                         PAPR Theory/Sim (dB)  PAPR Measured (dB)
sine1000       sine wave at 1000 Hz                0                     0
sine_800_1200  two tones at 800 and 1200Hz         3                     4
vanilla        700D test frames unclipped          7.1                   7
clip0.8        700D test frames clipped at 0.8     3.4                   4
ve9qrp         700D with real speech payload data  11                    10.5

Click on the file name to listen to a 5 second clip of each sample. The lower PAPR (higher average power) signals sound louder – I guess our ears work on average power too! I kept the drive constant and the PEP/peak just happened to hit 26dBm. It’s not critical, as long as the drive (and hence peak level) is the same across all waveforms tested.

Note the two tone “control” is 1dB off (4dB measured on a known 3dB PAPR signal); I’m not happy about that. This suggests a set-up issue or a limitation of my spec-an (e.g. the way it averages power).

However the other signals line up OK to the simulated values, within about +/- 0.5dB, which suggests I’m on the right track with my simulations.

The modulated 700D test frame signals were generated by the Octave ofdm_tx.m script, which reports the PAPR of the complex signal. The same test frame repeats continuously, which makes BER measurements convenient, but is slightly unrealistic. The PAPR was lower than the ve9qrp signal, which has real speech payload data. Perhaps the more random, real world payload data leads to occasional frames where the phases of the carriers align, leading to large peaks.

Another source of discrepancy is the non-flat frequency response of the baseband audio/crystal filter path the signal has to flow through before it emerges as RF.

The zero-span spec-an setting plots power over time, and is very useful for visualising PAPR. The first plot shows the power of our 1000 Hz sine signal (yellow), and the two tone test signal (purple):

You can see how mixing just two signals modulates the power over time, the effect on PAPR, and how the average power is reduced. Next we have the ve9qrp signal (yellow), and our clip 0.8 signal (purple):

It’s clear the clipped signal has a much higher average power. Note the random way the waveform power peaks and dips, as the various carriers come into phase. Note how few high power peaks there are in the ve9qrp signal – in this sample we don’t have any that hit +26dBm, as they are fairly rare.

I found eye-balling the zero-span plots gave me similar values to the non-zero-span results in the table above, a good cross check.
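
A software equivalent of the zero-span view is easy to knock up, and the same maths would work on samples captured from an SDR. Here is a sketch reusing tx and fs from the earlier snippets, with an arbitrary 10ms averaging window:

win = round(0.01*fs);                        % 10 ms averaging window
p = abs(tx(:)).^2;
np = floor(length(p)/win);
pw = mean(reshape(p(1:np*win), win, np));    % mean power in each window
plot((0:np-1)*win/fs, 10*log10(pw));
xlabel("Time (s)"); ylabel("Power (dB)");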

Takeaways:

  1. Clipping is indeed improving our measured average power, but there are some discrepancies between the measured PAPR values and those estimated from theory/simulation.
  2. Using an SDR to receive the signal and measuring PAPR with my own maths (along the lines of the zero-span sketch above) might be easier than fiddling with the spec-an and guessing at its internal algorithms.
  3. PAPR is worse for real world signals (e.g. ve9qrp) than my canned test frames due to relatively rare alignments of the carrier phases. This might only happen once every few seconds, but it significantly raises the PAPR and hurts our average power. These occasional peaks might be triggering the ALC, pushing the average power down every time they occur. As they are rare, these peaks can be clipped with no impact on perceived speech quality. This is why I like the CDF/PDF method of setting thresholds: it lets us discard rare (low probability) outliers that might be hurting our average power.

Conclusions and Further work

The simulations suggest we can improve FreeDV by 4dB using the right clipper/filter combination. Initial tests over a real PA show we can indeed reduce PAPR in line with our simulations.

This project has led me down an interesting rabbit hole that has kept me busy for a few weeks! Just in case I haven’t had enough, some ideas for further work:

  1. Align these clipping levels and filtering to FreeDV 700D (and possibly 2020). There is existing clipper and filter code, but the thresholds were set by educated guess several years ago for 700C.
  2. Currently each FreeDV waveform is scaled to have the same average power. This is the signal fed via the sound card to your Tx. Should the levels of each FreeDV waveform be adjusted to be the same peak value instead?
  3. Design an experiment to prove BER performance at a given SNR is improved by 4dB as suggested by these simulations. Currently all we have measured is the average power and PAPR – we haven’t actually verified the expected 4dB increase in performance (suggested by the BER simulations above) which is the real goal.
  4. Try the experiments on an SDR Tx – they tend to get results closer to theory due to no crystal filters/baseband audio filtering.
  5. Try the experiments on a 100W PEP Tx – I have ordered a dummy load to do that relatively safely.
  6. Explore the effect of ALC on FreeDV signals and why we set the signals to “just tickle” the ALC. This is something I don’t really understand, but have just assumed is good practice based on other people’s experiences with parallel tone/OFDM modems and on-air FreeDV use. I can see how ALC would compress the amplitude of the OFDM waveform – which this blog post suggests might be a good thing! Perhaps it does so in an uncontrolled manner – as the curves above show, the amount of compression is pretty important. “Just tickling the ALC” guarantees us a linear PA – so we can handle any needed compression/clipping carefully in the DSP.
  7. Explore other ways of reducing PAPR.

To peel away the layers of a complex problem is very satisfying. It always takes me several goes; improvements come as the bugs fall out one by one. Writing these blog posts often makes me sit back and say “huh?”, as I discover things that don’t make sense when I write them up. I guess that’s the review process in action.

Links

Design for a RF Sampler I built, mine has a 46dB loss.

Peak to Average Power Ratio for OFDM – Nice discussion of PAPR for OFDM signals from DSPlog.

Worse Than FailureCodeSOD: Put a Dent in Your Logfiles

Valencia made a few contributions to a large C++ project run by Harvey. Specifically, there were some pass-by-value uses of a large data structure, and changing those to pass-by-reference fixed a number of performance problems, especially on certain compilers.

“It’s a simple typo,” Valencia thought. “Anyone could have done that.” But they kept digging…

The original code-base was indented with spaces, but Harvey just used tabs. That was a mild annoyance, but Harvey used a lot of tabs, as his code style was “nest as many blocks as deeply as possible”. In addition to loads of magic numbers that should be enums, Harvey also had a stance that “never use an int type when you can store your number as a double”.

Then, for example, what if you have a char and you want to turn the char into a string? Do you just use the std::string constructor that takes a repeat count and a char – std::string(1, c)? Not if you’re Harvey!

std::string ToString(char c)
{
    std::stringstream ss;
    std::string out = "";
    ss << c;
    ss >> out;
    return out;
}

What if you wanted to cache some data in memory? A map would be a good place to start. How many times do you want to access a single key while updating a cache entry? How does “four times” work for you? It works for Harvey!

void WriteCache(std::string key, std::string value)
{
    Setting setting = mvCache["cache_"+key];
    if (!setting.initialized)
    {
        setting.initialized=true;
        setting.data = "";
        mvCache.insert(std::map<std::string,Cache>::value_type("cache_"+key,setting));
        mvCache["cache_"+key]=setting;
    }
    setting.data = value;
    mvCache["cache_"+key]=setting;
}

And I don’t know exactly what they are trying to communicate with the mv prefix, but people have invented all sorts of horrible ways to abuse Hungarian notation. Fortunately, Valencia clarifies: “Harvey used an incorrect Hungarian notation prefix while they were at it.”

That’s the easy stuff. Ugly, bad code, sure, but nothing that leaves you staring, stunned into speechlessness.

Let’s say you added a lot of logging messages, and you wanted to control how many logging messages appeared. You’ve heard of “logging levels”, and that gives you an inspiration for how to solve this problem:

bool LogLess(int iMaxLevel)
{
     int verboseLevel = rand() % 1000;
     if (verboseLevel < iMaxLevel) return true;
     return false;
}

//how it's used:
if(LogLess(500))
   log.debug("I appear half of the time");

Normally, I’d point out something about how they don’t need to return true or return false when they could just return the boolean expression, but what’d be the point? They’ve created probabilistic log levels. It’s certainly one way to solve the “too many log messages” problem: just randomly throw some of them away.

Valencia gives us a happy ending:

Needless to say, this has since been rewritten… the end result builds faster, uses less memory and is several orders of magnitude faster.


,

Kevin RuddNew Israel Fund: Australia and the Middle East

INTERVIEW VIDEO
NIF AUSTRALIA
WITH LIAM GETREU
9 SEPTEMBER 2020

The post New Israel Fund: Australia and the Middle East appeared first on Kevin Rudd.

Rondam RamblingsThis is what the apocalypse looks like

This is a photo of our house taken at noon today: [photo] This is not the raw image. I took this with an iPhone, whose auto-exposure made the image look much brighter than it actually is. I've adjusted the brightness and color balance to match the actual appearance as much as I can. Even so, this image doesn't do justice to the reality. For one thing, the sky is much too blue. The

Cryptogram How the FIN7 Cybercrime Gang Operates

The Grugq has written an excellent essay on how the Russian cybercriminal gang FIN7 operates. An excerpt:

The secret of FIN7’s success is their operational art of cyber crime. They managed their resources and operations effectively, allowing them to successfully attack and exploit hundreds of victim organizations. FIN7 was not the most elite hacker group, but they developed a number of fascinating innovations. Looking at the process triangle (people, process, technology), their technology wasn’t sophisticated, but their people management and business processes were.

Their business… is crime! And every business needs business goals, so I wrote a mock FIN7 mission statement:

Our mission is to proactively leverage existing long-term, high-impact growth strategies so that we may deliver the kind of results on the bottom line that our investors expect and deserve.

How does FIN7 actualize this vision? This is CrimeOps:

  • Repeatable business process
  • CrimeBosses manage workers, projects, data and money.
  • CrimeBosses don’t manage technical innovation. They use incremental improvement to TTP to remain effective, but no more
  • Frontline workers don’t need to innovate (because the process is repeatable)

Cryptogram Privacy Analysis of Ambient Light Sensors

Interesting privacy analysis of the Ambient Light Sensor API. And a blog post. Especially note the “Lessons Learned” section.

Cryptogram Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated, but it’s basically a man-in-the-middle attack that involves two smartphones. The first phone reads the actual smartcard, and then forwards the required information to a second phone. That second phone actually conducts the transaction on the POS terminal, and is able to convince the terminal to complete the transaction without the PIN that would normally be required.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the vulnerability in practice, although it is a fairly complex process. They first developed an Android app and installed it on two NFC-enabled mobile phones. This allowed the two devices to read data from the credit card chip and exchange information with payment terminals. Incidentally, the researchers did not have to bypass any special security features in the Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first mobile phone is used to scan the necessary data from the credit card and transfer it to the second phone. The second phone is then used to simultaneously debit the amount at the checkout, as many cardholders do nowadays. As the app declares that the customer is the authorized user of the credit card, the vendor does not realize that the transaction is fraudulent. The crucial factor is that the app outsmarts the card’s security system. Although the amount is over the limit and requires PIN verification, no code is requested.

The paper: “The EMV Standard: Break, Fix, Verify.”

Abstract: EMV is the international protocol standard for smartcard payment and is used in over 9 billion cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a state-of-the-art protocol verifier. Our model is the first that supports a fine-grained analysis of all relevant security guarantees that EMV is intended to offer. We use our model to automatically identify flaws that lead to two critical attacks: one that defrauds the cardholder and another that defrauds the merchant. First, criminals can use a victim’s Visa contact-less card for high-value purchases, without knowledge of the card’s PIN. We built a proof-of-concept Android application and successfully demonstrated this attack on real-world payment terminals. Second, criminals can trick the terminal into accepting an unauthentic offline transaction, which the issuing bank should later decline, after the criminal has walked away with the goods. This attack is possible for implementations following the standard, although we did not test it on actual terminals for ethical reasons. Finally, we propose and verify improvements to the standard that prevent these attacks, as well as any other attacks that violate the considered security properties.The proposed improvements can be easily implemented in the terminals and do not affect the cards in circulation.

Kevin RuddABC: Journalists in China and Rio Tinto

E&OE TRANSCRIPT
TV INTERVIEW
ABC AFTERNOON BRIEFING
9 SEPTEMBER 2020

Topics: Mike Smith, Bill Birtles, Cheng Lei, Rio Tinto and Queensland’s health border

Patricia Karvelas
Kevin Rudd, welcome.

Kevin Rudd
Good to be on the program, Patricia.

Patricia Karvelas
Does the treatment of the ABC and the AFR China correspondents suggest Australia and China and the relationship between the two nations has entered a new, more dangerous phase now?

Kevin Rudd
I believe it’s been trending that way for quite some time. And without being in the business of, you know, apportioning responsibility for that, the bottom line is the trajectory is all negative. So this is just one further step in that direction and deeply disturbing if your fundamental interest here is press freedom, and your interest is the proper protection of foreign correspondents in an authoritarian country like China.

Patricia Karvelas
Is there any doubt in your mind that these two correspondents were caught up in politics?

Kevin Rudd
No, listen, I actually know these two guys reasonably well. They’re highly professional journalists and when I’ve been in China myself, I’ve spent time with them. And these are journalists just doing the professional duties of their task. To infer that these folks were somehow involved in some, you know, nefarious plot against the Chinese government is just laughable. Absolutely laughable. If that was the case then, you know, you would have every foreign correspondent in Beijing up on the witness stand at the moment. It’s just nonsensical.

Patricia Karvelas
Okay. But these are Australians that have been targeted. What do you read into that? Should we be particularly concerned about the fact that they are Australians — obviously we are concerned about that — but does that demonstrate that the relationship with Australia is particularly troubled?

Kevin Rudd
Look at the end of the day, at this stage, we don’t know what has actually driven this particular, you know, roundup of these two Australian journalists. We know that it’s directly related to the Cheng Lei case and we don’t know what underpins that. And she’s an Australian citizen, and she needs to have her rights properly protected by the Australian Government, and I’m confident that DFAT is doing everything it can on that score. But in the broader bilateral relationship, I think you’re right. When Beijing looks at the world at the moment, it sees its number one problem as being the United States and the collapse in the US-China relationship which is now in its worst state in 50 years. And secondly, in terms of its most adverse relationships abroad, it would then place Australia, and I simply base that on what I read in each day’s official commentary in the Chinese media.

Patricia Karvelas
So is this retaliation? Is that how we should see this?

Kevin Rudd
As I said, we don’t know the full details here, other than that the Chinese themselves have confirmed that it’s related to the Cheng Lei case. But what underpins that in terms of her Australian citizenship, in terms of any other matters that she was engaged in, we don’t know. But certainly the overall frame of the Australia-China relationship is a factor here, and including the accusations contained in today’s Global Times alleging that Australian intelligence officials engaged with Chinese correspondents in their homes here in Australia on the 26th of June.

Patricia Karvelas
Bill Birtles says he was questioned about the case of this detained Chinese Australian broadcaster you’ve mentioned, Cheng Lei. Do you see these incidents as related? What does that demonstrate? And how concerned are you about Ms Cheng Lei?

Kevin Rudd
Well, she’s an Australian citizen and I’ve met her as well, in fact, at a conference I was attending in Beijing, from memory, last November. And as with any Australian, whoever they work for, and despite the fact that she worked for China’s international media arm CGTN, we’ve got a responsibility to do everything we can for her wellbeing as well. But I think if we stand back from the individual cases, there are two big facts we should focus on. One we’ve touched on, which is the spiralling nature of the Australia-China relationship and I’ve not seen anything like it in my 35-years-plus of working on Australia-China relations. But the second one is the changing nature of Chinese domestic politics. China is increasingly cracking down on all forms of dissent, both real and imagined, and as a result what we’ve seen in Chinese domestic politics is the assertion of greater and greater powers by China’s police, security and intelligence apparatus. And the traditional constraints exercised, for example, by the Chinese Foreign Ministry, not just on this case but on others, are being pushed to one side.

Patricia Karvelas
Chinese authorities have confirmed that Ms Cheng is suspected of endangering national security. What does that suggest about how she’ll be dealt with?

Kevin Rudd
Well, the Chinese domestic national security law, quite apart from the one which we focused on recently which has been legislated for Hong Kong, the Chinese domestic law is draconian, and the terms used in it are wide-reaching, and provide, you know, maximum leverage for the prosecutors to go after an individual. Regrettably, the precedent in Chinese judicial practice is that once a formal investigation has been launched, as I understand has happened in Cheng Lei’s case, then the prospects of securing a good outcome are radically reduced. I’ve seen this in so many cases over the years. That should not prevent the Australian Government from doing everything it possibly can, but I am concerned about where this case has already reached given that she was only arrested in the middle of August.

Patricia Karvelas
Do you see her case as an example of what’s being called hostage diplomacy?

Kevin Rudd
I don’t believe so, but I obviously am constrained by what we don’t know. What I have been concerned about and have been intimately — engaged is the wrong term — but in discussions with various people about is the case of the two Canadians who have now been incarcerated in China for well over a year in what was plainly a retaliatory action by the Chinese government over the incarceration of Madam Meng by the Canadian government at the request of the United States, Madam Meng being the daughter of the owner and chief executive of Huawei, China’s major 5G telecommunications company. So China has acted in this way already in relation to Canada. I presume that was the basis upon which the Australian government issued its own travel advisory in early July about the risks faced by Australians traveling in China. But on these circumstances concerning Cheng Lei, I’d be speculating rather than providing hard analysis, Patricia.

Patricia Karvelas
You say in 35 years of watching you haven’t seen relations this low or deteriorate quite like they have. You’ve talked about responsibility here, who is responsible? Is it ultimately China and the Chinese regime that’s making this relationship untenable?

Kevin Rudd
Well, after the last Australian election, I went and saw Mr Morrison and was very blunt with him, which is that this is a difficult relationship. It was difficult when I was in office, and it’s difficult for him. So I’m not about to pretend that it’s easy managing a relationship with an authoritarian state which is now the world’s second-largest economy and during the course of the next 10 years likely to become the largest, and whose foreign policy is becoming increasingly assertive. There’s a high degree of difficulty here. What I have been critical of in the Australian public media is the fact that every time an issue arises in the Australia-China relationship, I’ve seen a knee-jerk reaction by some in the Australian Government and various ministers to pull out the megaphone within 30 minutes and to take what is a problem, and frankly to play it into Australian domestic politics. So China is difficult to deal with, but I think the current Australian government from time to time has been less-than-skilled in its handling. I’ve got to say however the professionals in DFAT including, you know, Ambassador Graham Fletcher’s handling of this most recent case in Beijing and the Consul-General in Shanghai, has been first class and they should be congratulated for their efforts.

Patricia Karvelas
And how should the Morrison government deal with these issues? Clearly, they’ve been pretty key here too, as have the media companies involved, both the ABC, our own Managing Director, but also the Australian Financial Review. How do you think the Morrison government should be behaving now?

Kevin Rudd
Look on this question of Australian media access to China, which is important as a public policy objective in its own right, and broader Western media access to China, bearing in mind we’ve had something like 17 American journalists booted out in one form or another over the last year or two. What I would suggest is that the Australian Government and the media companies keep their powder dry for a bit. What China has done in my judgment is unacceptable in the treatment of these two individuals, these two Australian journalists, but there’s a structural interest here which is to try and find an opportunity for the re-establishment of an Australian media presence in China. So until we have greater clarity in terms of the direction of Chinese overall policy on these questions, I mean, my counsel, if I was in government at the time, would be to keep our powder dry. If possible, rebuild this element of the relationship. And if not take whatever actions are then necessary. But it’s far better to be cautious about your response to these questions than to automatically jump into the domestic political trenches and try and score domestic political points in Australia by beating your chest and showing how hairy-chested you are at the same time.

Patricia Karvelas
How will the fact that there are no correspondents from Australian media left in China affect how this incredibly important subject is covered?

Kevin Rudd
Well, Patricia, to state the bleeding obvious it’s not going to help. Both Birtles and the correspondent from the Australian Financial Review, Mike Smith, are first-class journalists and they write well, and so their coverage has been important in terms of the continuing awareness of Australians not just of the Australia-China relationship, but frankly, its fundamentals which is: what’s going on in China domestically? Ultimately, what we see at this end is driven by Chinese domestic politics and the Chinese domestic economy. China’s foreign policy is ultimately a product of those factors. And the more you have effective analysis of those things, the better we are in Australia and around the world in framing our own response to this phenomenally complex country which is now becoming increasingly assertive, requiring an increasingly sophisticated response from governments like Australia.

Patricia Karvelas
Just a couple of other issues. It would be odd for me not to ask is it reasonable for Queensland to continue keeping its border with New South Wales closed given how well authorities there have kept the virus under control?

Kevin Rudd
Look I think Annastacia Palaszczuk has played this pretty well from day one, and she like the other state premiers has been highly attentive to the professional advice of the chief medical officer. I mean, pardon me for being old fashioned about these things, Patricia, but I always think you should listen to the experts. We do believe, at least on our side of politics, in something called objective science. And if the experts are saying ‘be cautious about this’ then so you should be. And the other thing to say is, what would I do if I was in her position right now? Well, you could have listened to the leader of the LNP in Queensland, Ms Frecklington, and opened the borders to the south many, many months ago, and run all sorts of risks including people coming in from Victoria as well. Annastacia Palaszczuk resisted those demands. I think she’s been proven to be right as a result. So if I was her, and if I was in her shoes today, I think I’d be doing exactly what she’s doing, which is listening to what the chief medical officer is advising her to do.

Patricia Karvelas
Just finally, I spoke to Noel Pearson a little while ago about, of course, this issue around Rio Tinto and the conduct around the destruction of the Juukan Gorge caves. And of course, you know, you as prime minister delivered the apology and have, I know, a long interest in Indigenous affairs. What’s your assessment of the way the company has behaved here? And what is the sort of lasting legacy here that you’re concerned about?

Kevin Rudd
For Rio Tinto — which will soon be known in Australia as Rio TNT, I think — for Rio Tinto it has blown up its own reputation as anything approximating a responsible corporate citizen in Australia. What we know already from the parliamentary inquiry is that far from this being some sort of shock and surprise or accident, that in fact, senior management within Rio Tinto were already taking legal advice and PR advice about how to handle a reaction once the detonation occurred. So for the company, they should be hauled over the coals. If there’s a fining regime in place, it should be deployed. The executives responsible for this decision should no longer be executives. If I was a shareholder of Rio Tinto, this is just appalling for Indigenous people. Look, you know, something as old as 40,000 years. This is as old as the ancient caves of Altamira and Lascaux in Europe. And to simply allow someone to walk in with a stick of jelly and bang, you’re gone? I mean, this will be devastating for the Indigenous people who in this generation are responsible for the custodianship of the land which means these ancient sites. So for our Indigenous brothers and sisters, this has been an appalling development. For the company, I think their reputation now is mud.

Patricia Karvelas
Kevin Rudd, many thanks for joining us this afternoon.

Kevin Rudd
Good to be on the program.

The post ABC: Journalists in China and Rio Tinto appeared first on Kevin Rudd.

Worse Than FailureWeb Server Installation


Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from the Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support is this referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.


,

Cryptogram US Space Cybersecurity Directive

The Trump Administration just published “Space Policy Directive – 5“: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Krebs on SecurityMicrosoft Patch Tuesday, Sept. 2020 Edition

Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users.

The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server.

“That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” said Dustin Childs, of Trend Micro’s Zero Day Initiative. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon. This should be your top priority.”

Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019.

Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.

Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Cory DoctorowMy first-ever Kickstarter: the audiobook for Attack Surface, the third Little Brother book

I have a favor to ask of you. I don’t often ask readers for stuff, but this is maybe the most important ask of my career. It’s a Kickstarter – I know, ‘another crowdfunder?’ – but it’s:

a) Really cool;

b) Potentially transformative for publishing;

c) Anti-monopolistic.

Here’s the tldr: Attack Surface – AKA Little Brother 3 – is coming out in 5 weeks. I retained audio rights and produced an amazing edition that Audible refuses to carry. You can pre-order the audiobook, ebook (and previous volumes), DRM- and EULA-free.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

That’s the summary, but the details matter. First: the book itself. ATTACK SURFACE is a standalone Little Brother book about Masha, the young woman from the start and end of the other two books; unlike Marcus, who fights surveillance tech, Masha builds it.

Attack Surface is the story of how Masha has a long-overdue moral reckoning with the way that her work has hurt people, something she finally grapples with when she comes home to San Francisco.

Masha learns her childhood best friend is leading a BLM-style uprising – and she’s being targeted by the same cyberweapons that Masha built to hunt Iraqi insurgents and post-Soviet democracy movements.

I wrote Little Brother in 2006, it came out in 2008, and people tell me it’s “prescient” because the digital human rights issues it grapples with – high-tech authoritarianism and high-tech resistance – are so present in our current world.

But it’s not so much prescient as observant. I wrote Little Brother during the Bush administration’s vicious, relentless, tech-driven war on human rights. Little Brother was a bet that these would not get better on their own.

And it was a bet that tales of seizing the means of computation would inspire people to take up digital arms of their own. It worked. Hundreds of cryptographers, security experts, cyberlawyers, etc have told me that Little Brother started them on their paths.

ATTACK SURFACE – a technothriller about racial injustice, police brutality, high-tech turnkey totalitarianism, mass protests and mass surveillance – was written between May 2016 and Nov 2018, before the current uprisings and the tech worker walkouts.

https://twitter.com/search?q=%20(%23dailywords)%20(from%3Adoctorow)%20%22crypto%20wars%22&src=typed_query&f=live

But just as with Little Brother, the seeds of the current situation were all around us in 2016, and if Little Brother inspired a cohort of digital activists, I hope Attack Surface will give a much-needed push to a group of techies (currently) on the wrong side of history.

As I learned from Little Brother, there is something powerful about technologically rigorous thrillers about struggles for justice – stories that marry excitement, praxis and ethics. Of all my career achievements, the people I’ve reached this way matter the most.

Speaking of careers and ethics. As you probably know, I hate DRM with the heat of 10000 suns: it is a security/privacy nightmare, a monopolist’s best friend, and a gross insult to human rights. As you may also know, Audible will not carry any audiobooks unless they have DRM.

Audible is Amazon’s audiobook division, a monopolist with a total stranglehold on the audiobook market. Audiobooks currently account for almost as much revenue as hardcovers, and if you don’t sell on Audible, you sacrifice about 95% of that income.

That’s a decision I’ve made, and it means that publishers are no longer willing to pay for my audiobook rights (who can blame them?). According to my agent, living my principles this way has cost me enough to have paid off my mortgage and maybe funded my retirement.

I’ve tried a lot of tactics to get around Audible; selling through the indies (libro.fm, downpour.com, etc), through Google Play, and through my own shop (craphound.com/shop).

I appreciate the support there but it’s a tiny fraction of what I’m giving up – both in terms of dollars and reach – by refusing to lock my books (and my readers) (that’s you) to Amazon’s platform for all eternity with Audible DRM.

Which brings me to this audiobook.

Look, this is a great audiobook. I hired Amber Benson (a brilliant writer and actor who played Tara on Buffy), Skyboat Media and director Cassandra de Cuir, and Wryneck Studios, and we produced a 15h long, unabridged masterpiece.

It’s done. It’s wild. I can’t stop listening to it. It drops on Oct 13, with the print/ebook edition.

It’ll be on sale in all audiobook stores (except Audible) on the 13th, for $24.95.

But! You can get it for a mere $20 via my first Kickstarter.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

What’s more, you can pre-order the ebook – and also buy the previous ebooks and audiobooks (read by Wil Wheaton and Kirby Heyborne) – all DRM free, all free of license “agreements.”

The deal is: “You bought it, you own it, don’t violate copyright law and we’re good.”

And here’s the groundbreaking part. For this Kickstarter, I’m the retailer. If you pre-order the ebook from my KS, I get the 30% that would otherwise go to Jeff Bezos – and I get the 25% that is the standard ebook royalty.

This is a first-of-its-kind experiment in letting authors, agents, readers and a major publisher deal directly with one another in a transaction that completely sidesteps the monopolists who have profited so handsomely during this crisis.

Which is where you come in: if you help me pre-sell a ton of ebooks and audiobooks through this crowdfunder, it will show publishing that readers are willing to buy their ebooks and audiobooks without enriching a monopolist, even if it means an extra click or two.

So, to recap:

Attack Surface is the third Little Brother book

It aims to radicalize a generation of tech workers while entertaining its audience as a cracking, technologically rigorous thriller

The audiobook is amazing, read by the fantastic Amber Benson

If you pre-order through the Kickstarter:

You get a cheaper price than you’ll get anywhere else

You get a DRM- and EULA-free purchase

You’ll fight monopolies and support authorship

If you’ve ever enjoyed my work and wondered how you could pay me back: this is it. This is the thing. Do this, and you will help me artistically, professionally, politically, and (ofc) financially.

Thank you!

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

PS: Tell your friends!

Cory DoctorowAttack Surface Kickstarter Promo Excerpt!

This week’s podcast is a generous excerpt – 3 hours! – of the audiobook for Attack Surface, the third Little Brother book, which is available for pre-order today on my very first Kickstarter.

This Kickstarter is one of the most important moments in my professional career, an experiment to see if I can viably publish audiobooks without caving into Amazon’s monopolistic requirement that all Audible books be sold with DRM that locks it to Amazon’s corporate platform…forever. If you’ve ever wanted to thank me for this podcast or my other work, there has never been a better way than to order the audiobook (or ebook) (or both!).

Attack Surface is a standalone novel, meaning you can enjoy it without reading Little Brother or its sequel, Homeland. Please give this extended preview a listen and, if you enjoy it, back the Kickstarter and (this is very important): TELL YOUR FRIENDS.

Thank you, sincerely.

MP3

Kevin Rudd: SMH: Scott Morrison Is Yearning for a Donald Trump Victory

Published in The Sydney Morning Herald on 08 September 2020

The PM will be praying for a Republican win in the US to back up his inaction on climate and the Paris Agreement.

A year out from Barack Obama’s election in 2008, John Howard made a stunning admission: he thought Americans should be praying for a Republican victory. Ideologically this was unremarkable. But Howard said so publicly because he knew just how uncomfortable an Obama victory would be for him, given his refusal to withdraw our troops from Iraq.

Fast forward more than a decade, and Scott Morrison – even in the era of Donald Trump – will also be yearning desperately for a Republican victory come November. But this time it is the conservative recalcitrance on a very different issue that risks Australia being isolated on the world stage: climate change.

And as the next summer approaches, Australians will be reminded afresh of how climate change, and its impact on our country and economy, has not gone away.

Former vice-president Joe Biden has put a historic plan to fight climate change, both at home and abroad, at the centre of his campaign. He has promised to return the US to the Paris Agreement on his first day in office, and he recently unveiled an unprecedented $2 trillion green investment plan, including the complete decarbonisation of the domestic electricity system by 2035.

By contrast, Morrison remains hell-bent on Australia doing its best to disrupt global momentum to tackle the climate crisis, and on burying our heads in the sand when it comes to embracing the new economic opportunities that come with effective climate change action.

As a result, if Biden is elected this November, we will be on a collision course with our American ally in a number of areas.

First, Morrison remains recklessly determined to carry over so-called “credits” from the overachievement of our 2020 Kyoto target to help Australia meet its already lacklustre 2030 target under the new Paris regime.

No other government in the world is digging in its heels like this. None. It is nothing more than an accounting trick to allow Australia to do less. Perhaps the greatest irony is that this “overachievement” was itself in large part the result of the mitigation actions of our government.

That aside, these carbon credits also do nothing for the atmosphere. At worst, using them beyond 2020 could be considered illegal, and it only opens the back door for other countries to do less by following Morrison’s lead.

This will come to a head at the next UN climate talks in Glasgow next year. While Australia has thus far been able to dig in against objections by most of the rest of the world, a Biden victory would only strengthen the hand of the UK hosts to simply ride over the top of any further Australian intransigence. Morrison would be foolhardy to believe that Boris Johnson’s government will burn its political capital at home and abroad to defend the indefensible Australian position.

Second, unlike 114 countries around the world, Morrison remains hell-bent on ignoring the central promise of Paris: that all governments increase their 2030 targets by the time they get to Glasgow. That’s because even if all existing commitments were fully implemented, they would deliver only one-third of the reductions necessary to keep average temperature increases within 1.5 degrees by 2100, as the Paris Agreement requires. This is why governments agreed to increase their ambition every five years as technologies improved, costs fell and political momentum built.

In 2014, the Liberal government justified our existing Paris target on the basis that it was the same as what the Americans were doing. In reality, the Obama administration planned to achieve the same cut of 26 to 28 per cent on 2005 emissions by 2025 – not by 2030 as Australia pledged, a five-year difference the government sought to disguise.

So, based on the logic that what America does is this government’s benchmark for its global climate change commitments, if the US is prepared to increase its Paris target (as it will under Biden), so too should we. Biden himself has not just committed the US to the goal of net zero emissions by 2050; he has undertaken to embed it in legislation, as countries such as Britain and New Zealand have done, and to rally others to do the same.

Unsurprisingly, despite the decisions of 121 countries around the world, Morrison refuses even to identify a timeline for achieving the Paris Agreement’s long-term goal of net zero emissions. As the science tells us, this needs to happen by 2050 to have any chance of protecting the world’s most vulnerable populations – including in the Pacific – and saving Australia from a rolling apocalypse of weather-related disasters that would wreak havoc on our economy.

For our part, the government insists that it won’t “set a target without a plan”. But governments exist to do the hard work. And politically, this stance defies the breadth of domestic support for a net zero by 2050 goal, including from the peak business, industry and union groups, the top bodies for energy and agriculture (two sectors that together account for almost half of our emissions), our national airline, our two largest mining companies, every state and territory government, and even a majority of conservative voters.

As Tuvalu’s former prime minister Enele Sopoaga reminded us recently, the fact that Morrison himself looked Pacific island leaders in the eye last year and promised to develop such a long-term plan – a promise he reiterated at the G20 – also shows we risk being a country that does not do what it says. For those in the Pacific, this just rubs salt into the wound of Morrison’s decision to blindly follow Trump’s lead in halting payments to the Green Climate Fund (something Biden would also reverse), forcing them instead to navigate a bureaucratic maze of individual aid programs.

Finally, Biden has also undertaken to align trade and climate policy by imposing carbon tariffs on countries that fail to do their fair share in global greenhouse gas reductions. The EU is in the process of embracing the same approach. So if Morrison doesn’t act, he will put our entire export sector at risk of punitive tariffs, because the Liberals have so consistently failed to take climate change seriously.

Under Trump, Morrison has been given a giant leave pass for doing nothing on climate. But under Biden, he’ll be seen as nothing more than the climate change free-loader that he is. As he will be by the rest of the world. And our economy will be punished as a result.


Long Now: Time-Binding and the Music History Survey

Musicologist Phil Ford, co-host of the Weird Studies podcast, makes an eloquent argument for the preservation of the “Chants to Minimalism” Western Music History survey—the standard academic curriculum for musicology students, akin to the “fish, frogs, lizards, birds” evolutionary spiral taught in bio classes—in an age of exponential change and an increased emphasis on “relevance” over the remembrance of canonical works:

Perhaps paradoxically, the rate of cultural change increases in proportional measure to the increase in cultural memory. Writing and its successor media of prosthetic memory enact a contradiction: the easy preservation of cultural memory enables us to break with the past, to unbind time. At its furthest extremes, this is manifested in the familiar and dismal spectacle of fascist and communist regimes, impelled by intellectual notions permitted by the intensified time-binding of literacy, imagining utopias that will ‘wipe the slate clean’ and trying to force people to live in a world entirely divorced from the bound time of social/cultural tradition.

See, for instance, Mao Zedong’s crusade to destroy Tibetan Buddhism by erasing the context that held the Dalai Lama’s social role in place for fourteen generations. How is the culture to find a fifteenth Dalai Lama if no child can identify the relics of a prior one? Ironically, this speaks to larger questions of the agency of landscapes and materials, and of how it isn’t just our records as we understand them that help scaffold our identity, but that we are in some sense colonial organisms, inextricable from, made by, our environments — whether built or wild. As recounted in countless change-of-station stories, monarchs and other leaders, like whirlpools or sea anemones, dissolve when pulled out of the currents that support them.

That said, the current isn’t just stability but change. Both novelty and structure are required to bind time. By pushing to extremes, modernity self-undermines, imperils its own basis for existence; and those cultures that slam on the brakes and dig into conservative tradition risk self-suffocation, or being ripped asunder by the friction of collision with the moving edge of history:

Modernity is (among other things) the condition in which time-binding is threatened by its own exponential expansion, and yet where it’s not clear exactly how we are to slow its growth. Very modern people are reflexively opposed to anything that would slow down the acceleration: for them, the essence of the human is change. Reactionaries are reflexively opposed to anything that will speed up acceleration: for them, the essence of the human is continuity. Both are right! Each side, given the opportunity to realize its imagined utopia of change or continuity, would make a world no sensible person would be caught dead in.

Ultimately, therefore, a conservative-yet-innovative balance must be found: embracing new information technologies while using them to preserve historic repertoires. When riding a rocket into space, a look back at the whole Earth is essential to remembering where we come from:

The best argument for keeping Sederunt in the classroom is that it is one of the nearly-infinite forms of music that the human mind has contrived, and the memory of those forms — time-binding — is crucial not only to the craft of musicians but to our continued sense of what it is to be a human being.

This isn’t just future-shocked reactionary work but a necessary integrative practice that enables us to reach beyond:

To tell the story of who we are is to engage in the scholar’s highest mission. It is the gift that shamans give their tribe.