Planet Russell


Charles Stross: Countdown to Crazy

This is your official thread for discussing the upcoming US presidential and congressional election on November 3rd, along with its possible outcomes.

Do not chat about the US supreme court, congress, presidency, constitution, constitutional crises (possible), coup (possible), Donald Trump and his hellspawn offspring and associates, or anything about US politics in general on the Laundry Files book launch threads. If you do, your comments will be ruthlessly moderated into oblivion.

You are allowed and encouraged to discuss those topics in the comments below this topic.

(If you want to discuss "Dead Lies Dreaming" here I won't stop you, but there's plenty of other places for that!)

Charles Stross: The Laundry Files: an updated chronology

I've been writing Laundry Files stories since 1999, and there's now about 1.4 million words in that universe. That's a lot of stuff: a typical novel these days is 100,000 words, but these books trend long, and this count includes 11 novels (of which, #10 comes out later this month) and some shorter work. It occurs to me that while some of you have been following them from the beginning, a lot of people come to them cold in the shape of one story or another.

So below the fold I'm going to explain the Laundry Files time line, the various sub-series that share the setting, and give a running order for the series—including short stories as well as novels.

(The series title, "The Laundry Files", was pinned on me by editorial fiat at a previous publisher whose policy was that any group of 3 or more connected novels had to have a common name. It wasn't my idea: my editor at the time also published Jim Butcher, and Bob—my sole protagonist at that point in the series—worked for an organization disparagingly nicknamed "the Laundry", so the inevitable happened. Using a singular series title gives the impression that it has a singular theme, which would be like calling Terry Pratchett's Discworld books "the Unseen University series". Anyway ...)

TLDR version: If you just want to know where to start reading, pick one of: The Atrocity Archives, The Rhesus Chart, The Nightmare Stacks, or Dead Lies Dreaming. These are all safe starting points for the series that don't require prior familiarity. Other books might leave you confused if you dive straight in, so here's an exhaustive run-down of all the books and short stories.




Typographic conventions: story titles are rendered in italics (like this). Book titles are presented in boldface (thus).

Publication dates are presented like this: (pub: 2016). The year in which a story is set is presented like so: (set: 2005).

The list is sorted in story order rather than publication order.




The Atrocity Archive (set: 2002; pub: 2002-3)

  • The short novel which started it all. Originally published in an obscure Scottish SF digest-format magazine called Spectrum SF, it ran from 2002 to 2003, and introduced our protagonist Bob Howard, his (eventual) love interest Mo O'Brien, and a bunch of eccentric minor characters and tentacled horrors. Is a kinda-sorta tribute to spy thriller author Len Deighton.

The Concrete Jungle (set: 2003, pub: see below)

  • Novella, set a year after The Atrocity Archive, in which Bob is awakened in the middle of the night to go and count the concrete cows in Milton Keynes. Winner of the 2005 Hugo award for best SF/F novella.

The Atrocity Archives (set: 2002-03, pub: 2003 (hbk), 2006 (trade ppbk))

  • Start reading here! A smaller US publisher, Golden Gryphon, liked The Atrocity Archive and wanted to publish it, but considered it to be too short on its own. So The Concrete Jungle was written, and along with an afterword they were published together as a two-story collection/episodic novel, The Atrocity Archives (note the added 's' at the end). A couple of years later, Ace (part of Penguin group) picked up the US trade and mass market paperback rights and Orbit published it in the UK. (Having won a Hugo award in the meantime really didn't hurt; it's normally quite rare for a small press item such as TAA to get picked up and republished like this.)

The Jennifer Morgue (set: 2005, pub: 2007 (hbk), 2008 (trade ppbk))

  • Golden Gryphon asked for a sequel, hence the James Bond episode in what was now clearly going to be a trilogy of comedy Lovecraftian/spy books. Note that it's riffing off the Broccoli movie franchise version of Bond, not Ian Fleming's original psychopathic British government assassin. Orbit again took UK rights, while Ace picked up the paperbacks. Because I wanted to stick with the previous book's two-story format, I wrote an extra short story:

Pimpf (set: 2006, pub: collected in The Jennifer Morgue)

  • A short story set in what I think of as the Chibi-Laundry continuity; Bob ends up inside a computer running a Neverwinter Nights server (hey, this was before World of Warcraft got big). Chibi-Laundry stories are self-parodies and probably shouldn't be thought of as canonical. (Ahem: there's a big continuity blooper tucked away in this one what comes back to bite me in later books because I forgot about it.)

Down on the Farm (novelette: set 2007, pub. 2008, Tor.com)

  • Novelette: Bob has to investigate strange goings-on at a care home for Laundry agents whose minds have gone. Introduces Krantzberg Syndrome, which plays a major role later in the series.

Equoid (novella: set 2007, pub: 2013, Tor.com)

  • A novella set between The Jennifer Morgue and The Fuller Memorandum; Bob is married to Mo and working for Iris Carpenter. Bob learns why Unicorns are Bad News. Won the 2014 Hugo award for best SF/F novella. Also published as the hardback novella edition Equoid by Subterranean Press.

The Fuller Memorandum (set: 2008, pub: 2010 (US hbk/UK ppbk))

  • Third novel, first to be published in hardback by Ace, published in paperback in the UK by Orbit. The title is an intentional call-out to Adam Hall (aka Elleston Trevor), author of the Quiller series of spy thrillers—but it's actually an Anthony Price homage. This is where we begin to get a sense that there's an overall Laundry Files story arc, and where I realized I wasn't writing a trilogy. Didn't have a short story trailer or afterword because I flamed out while trying to come up with one before the deadline. Bob encounters skullduggery within the organization and has to get to the bottom of it before something really nasty happens: also, what and where is the misplaced "Teapot" that the KGB's London resident keeps asking him about?

Overtime (novelette: set 2009, pub 2009, Tor.com)

  • A heart-warming Christmas tale of Terror. Shortlisted for the Hugo award for best novelette, 2010.

Three Tales from the Laundry Files (ebook-only collection)

  • Collection consisting of Down on the Farm, Overtime, and Equoid, published by Tor.com as an ebook.

The Apocalypse Codex (set: 2010, pub: 2012 (US hbk/UK ppbk))

  • Fourth novel, and a tribute to the Modesty Blaise comic strip and books by Peter O'Donnell. A slick televangelist is getting much too cosy with the Prime Minister, and the Laundry—as a civil service agency—is forbidden from investigating. We learn about External Assets, and Bob gets the first inkling that he's being fast-tracked for promotion. Won the Locus Award for best fantasy novel in 2013.

A Conventional Boy (set: ~2011-12, not yet written)

  • Projected interstitial novella, introducing Derek the DM (The Nightmare Stacks) and Camp Sunshine (The Delirium Brief). Not yet written.

The Rhesus Chart (set: spring 2013, pub: 2014 (US hbk/UK hbk))

  • Fifth novel, a new series starting point if you want to bypass the early novels. First of a new cycle remixing contemporary fantasy sub-genres (I got bored with British spy thriller authors). Subject: Banking, Vampires, and what happens when an agile programming team inside a merchant bank develops PHANG syndrome. First to be published in hardcover in the UK by Orbit.

  • Note that the books are now set much closer together. This is a key point: the world of the Laundry Files has now developed its own parallel and gradually diverging history as the supernatural incursions become harder to cover up. Note also that Bob is powering up (the Bob of The Atrocity Archive wouldn't exactly be able to walk into a nest of vampires and escape with only minor damage to his dignity). This is why we don't see much of Bob in the next two novels.

The Annihilation Score (set: summer/autumn 2013, pub: 2015 (US hbk/UK ppbk))

  • Sixth novel, first with a non-Bob viewpoint protagonist—it's told by Mo, his wife, and contains spoilers for The Rhesus Chart. Deals with superheroes, mid-life crises, nervous breakdowns, and the King in Yellow. We're clearly deep into ahistorical territory here as we have a dress circle box for the very last Last Night of the Proms, and Orbit's lawyers made me very carefully describe the female Home Secretary as clearly not being one of her non-fictional predecessors, not even a little bit.

Escape from Puroland (set: March-April 2014, pub: summer 2021, forthcoming)

  • Interstitial novella, explaining why Bob wasn't around in the UK during the events described in The Nightmare Stacks. He was on an overseas liaison mission, nailing down the coffin lid on one of Angleton's left-over toxic waste sites—this time, it's near Tokyo.

The Nightmare Stacks (set: March-April 2014, pub: June 2016 (US hbk/UK ppbk))

  • Seventh novel, and another series starting point if you want to dive into the most recent books in the series. Viewpoint character: Alex the PHANG. Deals with, well ... the Laundry has been so obsessed by CASE NIGHTMARE GREEN that they're almost completely taken by surprise when CASE NIGHTMARE RED happens. Implicitly marks the end of the Masquerade. Features a Maniac Pixie Dream Girl and the return of Bob's Kettenkrad from The Atrocity Archive. Oh, and it also utterly destroys the major British city I grew up in, because revenge is a dish best eaten cold.

The Delirium Brief (set: May-June 2014, pub: June 2017 (US hbk/UK ppbk))

  • Eighth novel, primary viewpoint character: Bob again, but with an ensemble of other viewpoints cropping up in their own chapters. And unlike the earlier Bob books it no longer pastiches other works or genres. Deals with the aftermath of The Nightmare Stacks; opens with Bob being grilled live on Newsnight by Jeremy Paxman and goes rapidly downhill from there. (I'm guessing that if the events of the previous novel had just taken place, the BBC's leading current affairs news anchor might have deferred his retirement for a couple of months ...)

The Labyrinth Index (set: winter 2014/early 2015, pub: October 2018, (US hbk/UK ppbk))

  • Ninth novel, viewpoint character: Mhari, working for the New Management in the wake of the drastic governmental changes that took place at the end of "The Delirium Brief". The shit has well and truly hit the fan on a global scale, and the new Prime Minister holds unreasonable expectations ...

Dead Lies Dreaming (set: December 2016, pub: Oct 2020 (US hbk/UK hbk))

  • New spin-off series, new starting point! The marketing blurb describes it as "book 10 in the Laundry Files", but by the time this book is set—after CASE NIGHTMARE GREEN and the end of the main Laundry story arc (some time in 2015-16)—the Laundry no longer exists. We meet a cast of entirely new characters, civilians (with powers) living under the aegis of the New Management, ruled by his Dread Majesty, the Black Pharaoh. The start of a new trilogy, Dead Lies Dreaming riffs heavily off "Peter and Wendy", the original grimdark version of Peter Pan (before Walt Disney made him twee).

In His House (set: December 2016, pub: probably 2022)

  • Second book in the Dead Lies Dreaming trilogy: continues the story, riffs off Sweeney Todd and Mary Poppins—again: the latter was much darker than the Disney musical implies. (The book is written, but COVID19 has done weird things to publishers' schedules and it's provisionally in the queue behind Invisible Sun, the final Empire Games book, which is due out in September 2021.)

Bones and Nightmares (set: December 2016 and summer of 1820, pub: possibly 2023)

  • Third book in the Dead Lies Dreaming trilogy: finishes the story, riffs off The Prisoner and Jane Austen: also Kingsley's The Water Babies (with Deep Ones). In development.

Further novels are planned but not definite: there need to be 1-2 more books to finish the main Laundry Files story arc with Bob et al, filling in the time line before Dead Lies Dreaming, but the Laundry is a civil service security agency and the current political madness gripping the UK makes it really hard to satirize HMG, so I'm off on a side-quest following the tribulations of Imp, Eve, Wendy, and the gang (from Dead Lies Dreaming) until I figure out how to get back to the Laundry proper.




That's all for now. I'll attempt to update this entry as I write/publish more material.

Charles Stross: Upcoming Attractions!

As you know by now, my next novel, Dead Lies Dreaming, comes out next week—on Tuesday the 27th in the US and Thursday the 29th in the UK (because I've got different publishers in different territories).

Signed copies can be ordered from Transreal Fiction in Edinburgh via the Hive online mail order service.

(You can also order it via Big River co and all good bookshops, but they don't stock signed copies: Link to Amazon US: Link to Amazon UK. Ebooks are available too, and I gather the audiobook—again, there's a different version in the US, from Audible, and the UK, from Hachette Digital—should be released at the same time.)

COVID-19 has put a brake on any plans I might have had to promote the book in public, but I'm doing a number of webcast events over the next few weeks. Here are the highlights:

Outpost 2020 is a virtual SF convention taking place from Friday 23rd (tomorrow!) to Sunday 25th. I'm on a discussion panel on Saturday 24th at 4pm (UK time), on the subject of "Reborn from the Apocalypse": Both history and current events teach that a Biblical-proportioned apocalypse is not necessarily confined to the realms of fiction. How can we reinvent ourselves, and more importantly, will we? (Panelists: Charlie Stross, Gabriel Partida, David D. Perlmutter. Moderator: Mike Fatum.)

Orbit Live! As part of a series of Crowdcast events, at 8pm GMT on Thursday 27th RJ Barker is going to host myself and Luke Arnold in conversation about our new books: sign up for the crowdcast here.

Reddit AmA: No book launch is complete these days without an Ask me Anything on Reddit, which in my case is booked for Tuesday 3rd, starting at 5pm, UK time (9am on the US west coast, give or take an hour—the clocks change this weekend in the UK but I'm not sure when the US catches up).

The Nürnberg Digital Festival is a community-driven festival with about 20,000 attendees in Nuremberg, held to discuss the future, change, and everything that comes with it. Obviously this year it's an extra-digital (i.e. online-only) festival, which has the silver lining of enabling the organizers to invite guests to connect from a long way away. Which is why I'm doing an interview/keynote on Monday November 9th at 5pm (UK time). You can find out more about the festival here (as well as buy tickets for any or all days' events). It's titled "Are we in dystopian times?", which seems to be an ongoing theme of most of the events I'm being invited to these days, and probably gives you some idea of what my answer is likely to be ...

Anyway, that's all for now: I'll add to this post if new events show up.

Worse Than Failure: Error'd: Nothing for You!

"No, that's not the coupon code. They literally ran out of digital coupons," Steve wrote.

 

"Wow! Amazon is absolutely SLASHING their prices out of existence," wrote Brian.

 

Björn S. writes, "IKEA now employs what I'd call psychological torture kiosks. The text translates to 'Are you satisfied with your waiting time?' but the screen below displays an eternal spinner. Gaaah!"

 

Daniel O. writes, "I mean, I could change my password right now, but I'm kind of tempted to wait and see when it'll actually expire."

 

"This bank seems to offer its IT employees some nice perks! For example, this ATM is reserved strictly for its administrators," Oton R. wrote.

 


Sam Varghese: Australia pulls in new kids on the block for crucial Bledisloe Cup game

The focal point of the third Bledisloe Cup game in Sydney on Saturday will be the Australian back-line, where two rookies will be playing at fly-half and centre; that, incidentally, is the part of the field through which many opposition players slip when making a line-break.

Noah Lolesio and Irae Simone will be under a lot of scrutiny, and it may well be the game that establishes them. Both have come in because of injuries to the regulars in these positions, James O'Connor and Christian Lealiifano respectively. It will be a baptism of fire.

For the second time in as many years, Australia will be going into a Bledisloe Cup game against New Zealand with more Pacific Islanders in its ranks than Anglo-Saxons.

Of the 15 picked by the new coach, Dave Rennie, to take the field in Sydney on Saturday (31 October), eight will be Islanders. And on the bench, there will be another four from the same geographical area.

In the first game of 2019, former coach Michael Cheika picked nine Islanders, as well as one Aboriginal player, for the team that thrashed New Zealand 47-26 in Perth. And on the bench, half were again Islanders.

2019 game 1: Kurtley Beale, Reece Hodge, James O’Connor, Samu Kerevi, Marika Koroibete, Christian Lealiifano, Nic White, Isi Naisarani, Michael Hooper, Lukhan Salakaia-Loto, Rory Arnold, Izack Rodda, Allan Alaalatoa, Tolu Latu and Scott Sio.

Bench: Folau Fainga’a, James Slipper, Taniela Tupou, Adam Coleman, Luke Jones, Will Genia, Matt To’omua and Tom Banks.

2020 game 3: James Slipper, Brandon Paenga-Amosa, Allan Alaalatoa, Lukhan Salakaia-Loto, Matt Philip, Ned Hanigan, Michael Hooper, Harry Wilson, Nic White, Noah Lolesio, Marika Koroibete, Irae Simone, Jordan Petaia, Filipo Daugunu and Dane Haylett-Petty.

Bench: Jordan Uelese, Scott Sio, Taniela Tupou, Rob Simmons, Fraser McReight, Tate McDermott, Reece Hodge and Hunter Paisami.

Given Rennie is from New Zealand, his inclusion of this many Islanders is not a surprise. New Zealand has benefitted greatly from attracting players from the many islands in the Pacific, the late Jonah Lomu and former All Blacks captain Jonathan Falafesa "Tana" Umaga being two well-known names. Rennie knows the worth of these players.

But will this ensure a win for Australia to keep the series alive? One doubts that Rennie is basing his selection on that criterion; rather, like all rugby coaches of teams that have a chance of being number one in the world, he will be looking to identify the best 15 for the next Rugby World Cup which is in 2023.

New Zealand is also trying out a few new faces for the third game of the Bledisloe Cup, which is also the opening game of the Rugby Championship. [The latter contest will be a three-nation affair this year as South Africa has pulled out.]

Coach Ian Foster has had to call in Hoskins Sotutu to fill in for Ardie Savea who has taken paternity leave and Karl Tu’inukuafe will come in for Joe Moody who is under observation after being knocked out during the second game in Auckland. Sam Whitelock will return as lock, taking over from newcomer Tupou Vaa’i.


Planet Debian: Ulrike Uhlig: Better handling emergencies

We all know these situations when we receive an email asking Can you check the design of X, I need a reply by tonight. Or an instant message: My website went down, can you check? Another email: I canceled a plan at the hosting company, can you restore my website as fast as possible? A phone call: The TLS certificate didn’t get updated, and now we can’t access service Y. Yet another email: Our super important medical advice website is suddenly being censored in country Z, can you help?

Everyone knows those messages that have “URGENT” in capital letters in the email subject. It might be that some of them really are urgent. Others are the written signs of someone having a hard time properly planning their own work and passing their delays on to someone who comes later in the creation or production chain. And others again come from people who are overworked and try to delegate some of their tasks to a friendly soul who is likely to help.

How emergencies create more emergencies

In the past, my first reflex when I received an urgent request was to start rushing into solutions. This happened partly out of empathy, partly because I like to be challenged into solving problems, and I’m fairly good at that. This has proven to be unsustainable, and here is why.

Emergencies create unplanned work

The first issue is that emergencies create a lot of unplanned work, which in turn means not getting other, scheduled things done. This can create a backlog, and end up in working late or working on weekends.

Emergencies can create a permanent state of exception

Unplanned work can also create a lot of frustration, out of the feeling of not getting the things done that one planned to do. We might even get a feeling of being nonautonomous (in German I would say fremdbestimmt, which roughly translates to “being directed by others”).

In the long term, this can generate unsustainable situations: higher workloads and burnout. When working in a team of several people, A might have to take over the work of B because B does not have enough capacity. Then A gets overloaded in turn, and C and D have to take over A's work. Suddenly the team is stuck in a permanent state of exception. This state of exception will produce more backlog. The team might start to deprioritize social issues over getting technical things done. They might not be able to recruit new people anymore because they have no capacity left to onboard newcomers.

One emergency can result in a variety of emergencies for many people

The second issue produced by urgent requests is that if I cannot solve the initial emergency by myself, I might try to involve colleagues, other people who are skilled in the area, or people who work in another relevant organization to help with this. Suddenly, the initial emergency has become my emergency as well as the emergency of a whole bunch of other people.

A sidenote about working with friends

This might be less of an issue in a classical work setup than in a situation in which a bunch of freelancers work together, or in setups in which work and friendships are intertwined. This is a problem, because the boundaries between friend and worker role, and the expectations that go along with these roles, can get easily confused. If a colleague asks me to help with task X, I might say no; if a friend asks, I might be less likely to say no.

What I learnt about handling emergencies

I came up with some guidelines that help me to better handle emergencies.

Plan for unplanned work

It doesn’t matter and it doesn’t help to distinguish if urgent requests are legitimate or if they come from people who have not done their homework on time.

What matters is to make one's weekly todo list sustainable. After reading Making Work Visible by Dominica DeGrandis, I understood the need to add free slots for unplanned work into one's weekly schedule. Slots for unplanned work can take up to 25% of the total work time!

Take time to make plans

Now that there are some free slots to handle emergencies, one can take some time to think when an urgent request comes in. A German saying proposes to wait and have some tea ("abwarten und Tee trinken"). I think this is actually really good advice, and works for any non-obvious problem. Sit down and let the situation sink in. Have a tea, take a shower, go for a walk. It's never that urgent. Really, never. If possible, one can talk about the issue with another person, rubberduck style. Then one can make a plan on how to address the emergency properly; it could be that the solution is easier than first thought.

Affirming boundaries: Saying no

Is the emergency that I’m asked to solve really my problem? Or is someone trying to involve me because they know I’m likely to help? Take a deep breath and think about it. No? It’s not my job, not my role? I have no time for this right now? I don’t want to do it? Maybe I’m not even paid for it? A colleague is pushing my boundaries to get some task on their own todo list done? Then I might want to say no. I can’t help with this. or I can help you in two weeks. I don’t need to give a reason. No. is a sentence. And: Saying no doesn’t make me an arse.

Affirming boundaries: Clearly defining one’s role

Clearly defining one’s role is something that is often overlooked. In many discussions I have with friends it appears that this is a major cause of overwork and underpayment. Lots of people are skilled, intelligent, and curious, and easily get challenged into putting on their super hero dress. But they’re certainly not the only person that can help—even if an urgent request makes them think that at first.

To clearly define our role, we need to make clear which part of the job is our work, and which part needs to be done by other people. We should stop trying to accommodate people and their requests to the detriment of our own sanity. You're a language interpreter and are being asked to mediate a bilingual conflict between the people you are interpreting for? It's not your job. You're the graphic designer for a poster, but the text you've been given is not good enough? Send back a recommendation to change the text; don't do these changes yourself: it's not your job. But you can and want to do this yourself and it would make your client's life easier? Then ask to be paid for the extra time, and make sure to renegotiate your deadline!

Affirming boundaries: Defining expectations

Along with our role, we need to define expectations: in which timeframe am I willing to do the job? Under which contract, which agreement, which conditions? For which payment?

People who work in a salaried office job generally do have a work contract in which their role and the expectations that come with this role are clearly defined. Nevertheless, I hear from friends that their superiors regularly try to make them do tasks that are not part of their role definition. So, here too, role and expectations sometimes need to be renegotiated, and the boundaries of these roles need to be clearly affirmed.

Random conclusive thoughts

If you’ve read until here, you might have experienced similar things.

Or, on the contrary, maybe you’re already good at communicating your boundaries and people around you have learnt to respect them? Congratulations.

In any case, for improving one’s own approach to such requests, it can be useful to find out which inner dynamics are at play when we interact with other people. Additionally, it can be useful to understand the differences between Asker and Guesser culture:

when an Asker meets a Guesser, unpleasantness results. An Asker won’t think it’s rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor – or just an Asker, who’s assuming you might decline. If you’re a Guesser, you’ll hear it as an expectation.

Askers should also be aware that there might be Guessers in their team. It can help to define clear guidelines about making requests (when do I expect an answer, under which budget/contract/responsibility does the request fall, what other task can be put aside to handle the urgent task?). Last but not least, Making Work Visible has a lot of other proposals on how to make unplanned work visible and then deal with it.

Cryptogram: Tracking Users on Waze

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

Worse Than Failure: CodeSOD: Graceful Depredations

Cloud management consoles are, in most cases, targeted towards enterprise customers. This runs into Remy’s Law of Enterprise Software: if a piece of software is in any way described as being “enterprise”, it’s a piece of garbage.

Richard was recently poking around on one of those cloud provider’s sites. The software experience was about as luxurious as one expects, which is to say it was a pile of cryptically named buttons for the 57,000 various kinds of preconfigured services this particular service had on offer.

At the bottom of each page, there was a small video thumbnail, linking back to a YouTube video, presumably to provide marketing information, or support guidance for whatever the user was trying to do.

This was the code which generated them (whitespace added for readability):

<script>function lazyLoadThumb(e){var t='<img loading="lazy" 
  data-lazy-src="https://i.ytimg.com/vi/ID/hqdefault.jpg" alt="" 
  width="480" height="360">
  <noscript><img src="https://i.ytimg.com/vi/ID/hqdefault.jpg" 
  alt="" width="480" height="360"></noscript>'
  ,a='<div class="play"></div>';return t.replace("ID",e)+a}</script>

I appreciate the use of a <noscript> tag. Providing a meaningful fallback for browsers that, for whatever reason, aren’t executing JavaScript is a good thing. In the olden days, we called that “progressive enhancement” or “graceful degradation”: the page might work better with JavaScript turned on, but you can still get something out of it even if it’s not.

Which is why it’s too bad the <noscript> is being output by JavaScript.

And sure, I’ve got some serious questions with the data-lazy-src attribute, the fact that we’re dumping a div with a class="play" presumably to get wired up as our play button, and all the string mangling to get the correct video ID in there for the thumbnail, and just generating DOM elements by strings at all. But outputting <noscript>s from JavaScript is a new one on me.


Planet Debian: Norbert Preining: Deleting many files from an S3 bucket

So we found ourselves in the need to delete a considerable amount of files (around 500000, amounting to 1.6T) from an S3 bucket. With the list of files in hand my first shot was calling

aws s3 rm s3://BUCKET/FILE

for each file. That wasn't the best idea, I have to say: first of all, it makes 500000 requests, and then it takes a looong time. And this command does not allow passing in multiple files.

Fortunately there is aws s3api delete-objects which takes a json input and can delete multiple files:

aws s3api delete-objects --bucket BUCKET --delete '{"Objects": [ { "Key": "FILE1" }, { "Key": "FILE2" } ... ]}'

That did help, and with a bit of magic from bash (mapfile which can read in lines from stdin in batches) and jq, at the end it was a business of some 20min or so:

cat files-to-be-deleted |  while mapfile -t -n 500 ary && ((${#ary[@]})); do
        objdef=$(printf '%s\n' "${ary[@]}" | jq -nR '{Objects: (reduce inputs as $line ([]; . + [{"Key":$line}]))}')
        aws s3api --no-cli-pager  delete-objects --bucket BUCKET --delete "$objdef"
done

This reads 500 files at a time and reformats them using jq into the proper JSON format: reduce inputs is a jq filter that iterates over the input lines and does a map/reduce step. In this case we use an empty array as the start and add new key/filename pairs as we go. Finally, the whole bunch is sent to AWS with the above API call.

Puuuh, 500000 files and 1.6T less, in 20min.
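If the same cleanup had to live in a script rather than a one-off shell loop, the batching idea translates directly to the AWS SDK. The following is only a sketch using the AWS SDK for JavaScript v3; the bucket name, region, and key list are placeholders, and DeleteObjects still caps out at 1000 keys per request, so the batching stays.

// Sketch: batched deletion with @aws-sdk/client-s3 (placeholder bucket/region).
import { S3Client, DeleteObjectsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "eu-central-1" }); // assumed region

async function deleteKeysInBatches(bucket: string, keys: string[], batchSize = 500): Promise<void> {
  for (let i = 0; i < keys.length; i += batchSize) {
    const batch = keys.slice(i, i + batchSize);
    await client.send(
      new DeleteObjectsCommand({
        Bucket: bucket,
        Delete: {
          Objects: batch.map((Key) => ({ Key })), // same {"Key": ...} shape the jq filter builds
          Quiet: true, // only report per-key errors, not every successful delete
        },
      })
    );
  }
}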

Krebs on Security: FBI, DHS, HHS Warn of Imminent, Credible Ransomware Threat Against U.S. Hospitals

On Monday, Oct. 26, KrebsOnSecurity began following up on a tip from a reliable source that an aggressive Russian cybercriminal gang known for deploying ransomware was preparing to disrupt information technology systems at hundreds of hospitals, clinics and medical care facilities across the United States. Today, officials from the FBI and the U.S. Department of Homeland Security hastily assembled a conference call with healthcare industry executives warning about an “imminent cybercrime threat to U.S. hospitals and healthcare providers.”

The agencies on the conference call, which included the U.S. Department of Health and Human Services (HHS), warned participants about “credible information of an increased and imminent cybercrime threat to US hospitals and healthcare providers.”

The agencies said they were sharing the information “to provide warning to healthcare providers to ensure that they take timely and reasonable precautions to protect their networks from these threats.”

The warning came less than two days after this author received a tip from Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security. Holden said he saw online communications this week between cybercriminals affiliated with a Russian-speaking ransomware group known as Ryuk in which group members discussed plans to deploy ransomware at more than 400 healthcare facilities in the U.S.

One participant on the government conference call today said the agencies offered few concrete details of how healthcare organizations might better protect themselves against this threat actor or purported malware campaign.

“They didn’t share any IoCs [indicators of compromise], so it’s just been ‘patch your systems and report anything suspicious’,” said a healthcare industry veteran who sat in on the discussion.

However, others on the call said IoCs may be of little help for hospitals that have already been infiltrated by Ryuk. That’s because the malware infrastructure used by the Ryuk gang is often unique to each victim, including everything from the Microsoft Windows executable files that get dropped on the infected hosts to the so-called “command and control” servers used to transmit data between and among compromised systems.

Nevertheless, cybersecurity incident response firm Mandiant today released a list of domains and Internet addresses used by Ryuk in previous attacks throughout 2020 and up to the present day. Mandiant refers to the group by the threat actor classification “UNC1878,” and aired a webcast today detailing some of Ryuk’s latest exploitation tactics.

Charles Carmakal, senior vice president for Mandiant, told Reuters that UNC1878 is one of the most brazen, heartless, and disruptive threat actors he's observed over the course of his career.

“Multiple hospitals have already been significantly impacted by Ryuk ransomware and their networks have been taken offline,” Carmakal said.

One health industry veteran who participated in the call today and who spoke with KrebsOnSecurity on condition of anonymity said if there truly are hundreds of medical facilities at imminent risk here, that would seem to go beyond the scope of any one hospital group and may implicate some kind of electronic health record provider that integrates with many care facilities.

So far, however, nothing like hundreds of facilities have publicly reported ransomware incidents. But there have been a handful of hospitals dealing with ransomware attacks in the past few days.

Becker’s Hospital Review reported today that a ransomware attack hit Klamath Falls, Ore.-based Sky Lakes Medical Center’s computer systems.

WWNY’s Channel 7 News in New York reported yesterday that a Ryuk ransomware attack on St. Lawrence Health System led to computer infections at Caton-Potsdam, Messena and Gouverneur hospitals.

SWNewsMedia.com on Monday reported on “unidentified network activity” that caused disruption to certain operations at Ridgeview Medical Center in Waconia, Minn. SWNews says Ridgeview’s system includes Chaska’s Two Twelve Medical Center, three hospitals, clinics and other emergency and long-term care sites around the metro area.

NBC5 reports The University of Vermont Health Network is dealing with a “significant and ongoing system-wide network issue” that could be a malicious cyber attack.

This is a developing story. Stay tuned for further updates.

Update, 10:11 p.m. ET: The FBI, DHS and HHS just jointly issued an alert about this, available here.


Planet Debian: Daniel Lange: Git shared hosting quirk

Show https://github.com/torvalds/linux/blob/b4061a10fc29010a610ff2b5b20160d7335e69bf/drivers/hid/hid-samsung.c#L113-L118 to a friend.

Oops 'eh? Yep, Linux has been backdoored.

Well, or not.

Konstantin Ryabitsev explains it nicely in a cgit mailing list email:

It is common for git hosting environments to configure all forks of the same repo to use an "object storage" repository. For example, this is what allows git.kernel.org's 600+ forks of linux.git to take up only 10GB on disk as opposed to 800GB. One of the side-effects of this setup is that any object in the shared repository can be accessed from any of the forks, which periodically confuses people into believing that something terrible has happened.

The hack was discussed on Github in Dec 2018 when it was discovered. I forgot about it again but Konstantin's mail brought the memory back and I think it deserves more attention.

I'm sure putting some illegal content into a fork and sending a made-up "blob" URL to law enforcement would go quite far. Good luck explaining the issue. "Yes this is my repo" but "no, no that's not my data" ... "yes, it is my repo but not my data" ... "no we don't want that data either, really" ... "but, but there is nothing we can do, we host on github ..."[1].


  1. Actually there is something you can do. Making a repo private takes it out of the shared "object storage". You can make it public again afterwards. Seems to work at least for now. 

Kevin Rudd: Statement on the International Peace Institute

October 28, 2020

Mr Rudd has issued a statement regarding the International Peace Institute.

Click here to read the statement.


Krebs on Security: Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo

In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems.

The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually.

Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely.

Five months later, Gunnebo disclosed it had suffered a cyber attack targeting its IT systems that forced the shutdown of internal servers. Nevertheless, the company said its quick reaction prevented the intruders from spreading the ransomware throughout its systems, and that the overall lasting impact from the incident was minimal.

Earlier this week, Swedish news agency Dagens Nyheter confirmed that hackers recently published online at least 38,000 documents stolen from Gunnebo’s network. Linus Larsson, the journalist who broke the story, says the hacked material was uploaded to a public server during the second half of September, and it is not known how many people may have gained access to it.

Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure.

“I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.”

It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well.

After this author posted a request for contact from Gunnebo on Twitter, KrebsOnSecurity heard from Rasmus Jansson, an account manager at Gunnebo who specializes in protecting client systems from electromagnetic pulse (EMP) attacks or disruption, short bursts of energy that can damage electrical equipment.

Jansson said he relayed the stolen credentials to the company’s IT specialists, but that he does not know what actions the company took in response. Reached by phone today, Jansson said he quit the company in August, right around the time Gunnebo disclosed the thwarted ransomware attack. He declined to comment on the particulars of the extortion incident.

Ransomware attackers often spend weeks or months inside of a target’s network before attempting to deploy malware across the network that encrypts servers and desktop systems unless and until a ransom demand is met.

That’s because gaining the initial foothold is rarely the difficult part of the attack. In fact, many ransomware groups now have such an embarrassment of riches in this regard that they’ve taken to hiring external penetration testers to carry out the grunt work of escalating that initial foothold into complete control over the victim’s network and any data backup systems  — a process that can be hugely time consuming.

But prior to launching their ransomware, it has become common practice for these extortionists to offload as much sensitive and proprietary data as possible. In some cases, this allows the intruders to profit even if their malware somehow fails to do its job. In other instances, victims are asked to pay two extortion demands: One for a digital key to unlock encrypted systems, and another in exchange for a promise not to publish, auction or otherwise trade any stolen data.

While it may seem ironic when a physical security firm ends up having all of its secrets published online, the reality is that some of the biggest targets of ransomware groups continue to be companies which may not consider cybersecurity or information systems as their primary concern or business — regardless of how much may be riding on that technology.

Indeed, companies that persist in viewing cyber and physical security as somehow separate seem to be among the favorite targets of ransomware actors. Last week, a Russian journalist published a video on Youtube claiming to be an interview with the cybercriminals behind the REvil/Sodinokibi ransomware strain, which is the handiwork of a particularly aggressive criminal group that’s been behind some of the biggest and most costly ransom attacks in recent years.

In the video, the REvil representative stated that the most desirable targets for the group were agriculture companies, manufacturers, insurance firms, and law firms. The REvil actor claimed that on average roughly one in three of its victims agrees to pay an extortion fee.

Mark Arena, CEO of cybersecurity threat intelligence firm Intel 471, said while it might be tempting to believe that firms which specialize in information security typically have better cybersecurity practices than physical security firms, few organizations have a deep understanding of their adversaries. Intel 471 has published an analysis of the video here.

Arena said this is a particularly acute shortcoming with many managed service providers (MSPs), companies that provide outsourced security services to hundreds or thousands of clients who might not otherwise be able to afford to hire cybersecurity professionals.

“The harsh and unfortunate reality is the security of a number of security companies is shit,” Arena said. “Most companies tend to have a lack of ongoing and up to date understanding of the threat actors they face.”

Cryptogram: The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products

Senator Ron Wyden asked, and the NSA didn’t answer:

The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others.

These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications.

The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines.

[…]

The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws.

“At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.”

Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries.

The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. That backdoor was taken advantage of by an unnamed foreign adversary.

Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool here by altering Juniper’s version of Dual EC.

Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere.

Juniper has never identified the customer, and declined to comment for this story.

Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used.

Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security.

Planet Debian: Kentaro Hayashi: lltsv 0.6.1-2: applied background color patch for readability

The original version of lltsv doesn't specify a background color, so it is hard to read text on a dark background.

I've fixed this issue by specifying background color explicitly.

Here is a screenshot of the fixed version. Yay!


This fix was already forwarded upstream as a PR:

github.com

Worse Than Failure: CodeSOD: A Type of Useless

TypeScript offers certain advantages over JavaScript. Compile-time type-checking can catch a lot of errors, it can move faster than browsers, so it offers the latest standards (and the compiler handles the nasty details of shimming them into browsers), plus it has a layer of convenient syntactic sugar.

If you’re using TypeScript, you can use the compiler to find all sorts of ugly problems with your code, and all you need to do is turn the right flags on.

Or, you can be like Quintus’s co-worker, who checked in this… thing.

/**
 * Container holding definition information.
 *
 * @param String version
 * @param String date

 */
export class Definition {
  private id: string;
  private name: string;
  constructor(private version, private data) {}

  /**
   * get the definition version
   *
   * @return String version
   */
  getVersion() {
    return this.id;
  }

  /**
   * get the definition date
   *
   * @return String date
   */
  getDate() {
    return this.name;
  }
}

Now, if you were to try this on the TypeScript playground, you’d find that while it compiles and generates JavaScript, the compiler has a lot of reasonable complaints about it. However, if you were just to compile this with the command line tsc, it gleefully does the job without complaint, using the default settings.

So the code is bad, and the tool can tell you it’s bad, but you have to actually ask the tool to tell you that.

In any case, it’s easy to understand what happened with this bad code: this is clearly programming by copy/paste. They had a class that tied an id to a name. They copy/pasted to make one that mapped a version to a date, but got distracted halfway through and ended up with this incomplete dropping. And then they somehow checked it in, and nobody noticed it until Quintus was poking around.

Now, a little bit about this code. You’ll note that there are private id and name properties. The constructor defines two more properties (and wires the constructor params up to map to them) with its private version, private data params.

So if you call the constructor, you initialize two private members that have no accessors at all. There are accessors, but they point to id and name, which never get initialized in the constructor, and have no mutators.

Of course, TypeScript compiles down into JavaScript, so those private keywords don’t really matter. JavaScript doesn’t have private.

My suspicion is that this class ended up in the code base, but is never actually used. If it is used, I bet it’s used like:

let f = new Definition();
f.id = "1.0.1"
f.name = "28-OCT-2020"
…
let ver = f.getVersion();

That would work and do what the original developer expected. If they did that, the TypeScript compiler might complain, but as we saw, they don’t really care about what the compiler says.
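For contrast, here is a minimal sketch, not the original author's code, of what the copy/paste was presumably reaching for. With "strict": true (or at least noImplicitAny) in tsconfig.json, tsc would have flagged the half-finished original instead of waving it through.

/**
 * Container holding definition information (hypothetical corrected version).
 */
export class Definition {
  // Parameter properties declare and initialize the fields in one place.
  constructor(private version: string, private date: string) {}

  /** get the definition version */
  getVersion(): string {
    return this.version;
  }

  /** get the definition date */
  getDate(): string {
    return this.date;
  }
}

// Usage: values go in through the constructor, not by poking at properties.
const def = new Definition("1.0.1", "28-OCT-2020");
const ver = def.getVersion();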

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Cryptogram: Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition

Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names.

We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new.

EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls.

Planet Debian: Noah Meyerhans: Debian STS: Short Term Support

In another of my frequent late-night bouts with insomnia, I started thinking about the intersection of a number of different issues facing Debian today, both from a user point of view and a developer point of view.

Debian has a reputation for shipping “stale” software. Versions in the stable branch are often significantly behind the latest development upstream. Debian’s policy here has been that this is fine, our goal is to ship something stable, not something bleeding edge. Unofficially, our response to users is: If you need bleeding edge software, Debian may not be for you. Officially, we have no response to users who want fresher software.

Debian also has a problem with a lack of manpower. I believe that part of why we have a hard time attracting contributors is our reputation for stale software. It might be worth it for us to consider changes to our approach to releases.

What about running testing?

People who want newer software often look to Debian's testing branch as a possible solution. It's tempting, as it's a dynamically generated release based on unstable, so it should be quite current. In practice, it's not at all uncommon to find people running testing, and in fact I'm running it right now on the ThinkPad on which this is being typed. However, testing comes with a glaring issue: a lack of timely security support. Security updates must still propagate through unstable, and this can take some time. They can be held up by dependencies, library transitions, or other factors. Nearly every list of "best practices for computer security" lists keeping software up-to-date at or near the top of the most important steps to take to safely use a networked computer. Debian's testing branch makes this very difficult, especially when faced with a zero-day with potential for real-world exploit.

What about stable-backports?

Stable backports is both better and worse than testing. It’s better in that it allows you to run a system comprised mainly of packages from the stable branch, which receive updates from the security team in a timely manner. However, it’s worse in that the packages from the backports repository incur an additional delay. The expectation around backports is that a package migrates naturally from unstable to testing, and then requires a maintainer to upload a new package based on the version in testing specifically targeted at stable backports. The migration can potentially be bypassed, and we used to have a mechanism for announcing the availability of security updates for the stable backports archive, but it has gone unused for several years now. The documentation describes a workflow for posting security updates that involves creating a ticket in Debian’s RT system, which is going to be quite foreign to most people. News from mid 2019 suggests that this process might change, but nothing appears to have come of this in over a year, and we still haven’t seen a proper security advisory for stable backports in years.

Looking to LTS for ideas

The Long-Term Support project is an “alternative” branch of Debian, maintained outside the normal stable release infrastructure. It’s stable, and expected to behave that way, but it’s not supported by the stable security team or release team. LTS provides a framework for providing security updates via targeted uploads by a team of interested individuals working outside the structure of the existing stable releases. This project seems to be quite active (how much of this is because at least some members are being paid?), and as of this writing has actually published more security advisories in the past month than the stable security team has published for the current stable branch. This is also interesting in that the software in LTS is quite old, first appearing in a Debian stable release in 2017.

LTS is particularly interesting here as it’s an example an initiative within the Debian community taken specifically to address user needs. For some of our users, remaining on an old release is a perfectly valid thing for them to do, and we recognize this and support them in doing so.

Debian Short-Term Support

So, what would it take to create an “LTS-like” initiative in the other direction? Instead of providing ongoing support for ancient versions of software that previously comprised a stable release, could we build a distribution branch based on something that hasn’t yet reached stable? What would that look like? How would it fit in the existing unstable→testing migration process? What impact would it have on the size of the archive? Would we want a rolling release, or discrete releases? If the latter, how many would we want between proper stable releases?

The security tracker already tracks outstanding issues in unstable and testing, and can even show issues that have been fixed in unstable but haven’t yet propagated to testing.

If we want a rolling release, maybe we could just open up the testing-security repository more broadly? There was once a testing security team, which IIRC was chartered to publish updated packages directly to testing-security, along with an associated security advisory. Based on the mailing list history, that effort seems to have shut down around the time of the squeeze (Debian 6.0) release in early 2011. Would it be worth resurrecting it? We’ve probably got much of the infrastructure required in place already, since it previously existed.

Personally I’m not really a fan of a pure rolling release. I’d rather see a lightweight release: maybe a snapshot of testing that gets just a date, not a Toy Story codename. Probably skip building a dedicated installer for it; upgrade from stable or use a d-i snapshot from testing if needed. This mini release is supported until the next one comes out, maybe 6 or 8 months later. By supported, I mean that the “Short-Term Support” team is responsible for it. They can upload security or other critical bug fixes directly to a dedicated repository. When the next STS snapshot is released, packages in the updates repository are either archived, if they’re a lower version than the one in the new mini release, or rebuilt against the new mini release and preserved.

Using some of the same mechanisms as the LTS release, we’d need

  1. Something to take the place of oldstable, that is the base release against which updates are released. This could be something that effectively maps to a date snapshot served by http://snapshot.debian.org/. (Snapshot itself could not currently handle the increased load, as I understand it, but conceptually it’s similar.)

  2. Something to take the place of the dist/updates apt repository that hosts the packages that are updated.

In theory, if the infrastructure could support those things, then we could in effect generate a mini release at any time based on a snapshot. I wonder if this could start as something totally unofficial: mirror an arbitrary testing snapshot and provide a place for interested people to publish package updates.
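To make that slightly more concrete, here is a purely hypothetical sketch of what the apt configuration for such an unofficial mini release might look like. The snapshot timestamp is arbitrary and the second repository does not exist; it only illustrates the two pieces described above, a frozen base plus a small updates archive. (The check-valid-until option is needed because Release files on snapshot.debian.org eventually go stale.)

deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20201026T000000Z/ testing main
deb https://sts.example.org/debian sts-2020.10-updates main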

Not a proposal, nor a criticism

To be clear, I don’t really intend this as a proposal; It’s really half-baked. Maybe these ideas have already been considered and dismissed. I don’t know if people would be interested in working on such a project, and I’m not nearly familiar enough with the Debian archive tooling to even make a guess as to how hard it would be to implement much of it. I’m just posting some ideas that I came up with while pondering something that, from my perspective, is an area where Debian is clearly failing to meet the needs of some of our users. We know Debian is a popular and respected Linux distribution, and we know people value our stability. However, we also know that people like running Fedora and Ubuntu’s non-LTS releases. People like Arch Linux. Not just “end-users”, but also the people developing the software shipped by the distros themselves. There are a lot of potential contributors to Debian who are kept away by our unwillingness to provide a distro offering both fresh software and security support. I think that we could attract more people to the Debian community if we could provide a solution for these people, and that would ultimately be good for everybody.

Also, please don’t interpret this as being critical of the release team, the stable security team, or any other team or individual in Debian. I’m sharing this because I think there are opportunities for Debian to improve how we serve our users, not because I think anybody is doing anything wrong.

With all that said, though, let me know if you find the ideas interesting. If you think they’re crazy, you can tell me that, too. I’ll probably agree with you.

Charles StrossAll Glory to the New Management!

Dead Lies Dreaming - UK cover

Today is September 27th, 2020. On October 27th, Dead Lies Dreaming will be published in the USA and Canada: the British edition drops on October 29th. (Yes, there will be audio editions too, via the usual outlets.)

This book is being marketed as the tenth Laundry Files novel. That's not exactly true, though it's not entirely wrong, either: the tenth Laundry book, about the continuing tribulations of Bob Howard and his co-workers, hasn't been written yet. (Bob is a civil servant who by implication deals with political movers and shakers, and politics has turned so batshit crazy in the past three years that I just can't go there right now.)

There is a novella about Bob coming next summer. It's titled Escape from Puroland and Tor.com will be publishing it as an ebook and hardcover in the USA. (No UK publication is scheduled as yet, but we're working on it.) I've got one more novella planned, about Derek the DM, and then either one or two final books: I'm not certain how many it will take to wrap the main story arc yet, but rest assured that the tale of SOE's Q-Division, the Laundry, reaches its conclusion some time in 2015. Also rest assured that at least one of our protagonists survives ... as does the New Management.

All Glory to the Black Pharaoh! Long may he rule over this spectred isle!

(But what's this book about?)

Dead Lies Dreaming - US cover

Dead Lies Dreaming is the first book in a project I dreamed up in (our world's) 2017, with the working title Tales of the New Management. It came about due to an unhappy incident: I found out the hard way that writing productively while one of your parents is dying is rather difficult. The first time it happened, it took down a promising space opera project. I plan to pick it up and re-do it next year, but it was the kind of learning experience I could happily have done without. The second time it happened, I had to stop work on Invisible Sun, the third and final Empire Games novel—I just couldn't get into the right head-space. (Empire Games is now written and in the hands of the production folks at Tor. It will almost certainly be published next September, if the publishing industry survives the catastrophe novel we're all living through right now.)

Anyway, I was unable to work on a project with a fixed deadline, but I couldn't not write: so I gave myself license to doodle therapeutically. The therapeutic doodles somehow colonized the abandoned first third of a magical realist novel I pitched in 2014, and turned into an unexpected attack novel titled Lost Boys. (It was retitled Dead Lies Dreaming because a cult comedy movie from 1987 got remade for TV in 2020—unless you're a major bestseller you do not want your book title to clash with an unrelated movie—but it's still Lost Boys in my headcanon.)

Lost Boys—that is, Dead Lies Dreaming—riffs heavily off Peter and Wendy, the original taproot of Peter Pan, a stage play and novel by J. M. Barrie that predates the more familiar, twee, animated Disney version of Peter Pan from 1953 by some decades. (Actually Peter and Wendy recycled Barrie's character from an earlier work, The Little White Bird, from 1902, but let's not get into the J. M. Barrie arcana at this point.) Peter and Wendy can be downloaded from Project Gutenberg here. And if you only know Pan from Disney, you're in for a shock.

Barrie was writing in an era when antibiotics hadn't been discovered, and far fewer vaccines were available for childhood diseases. Almost 20% of children died before reaching their fifth birthday, and this was a huge improvement over the earlier decades of the 19th century: parents expected some of their babies to die, and furthermore, had to explain infant deaths to toddlers and pre-tweens. Disney's Peter is a child of the carefree first flowering of the antibiotic age, and thereby de-fanged, but the original Peter Pan isn't a twee fairy-like escapist fantasy. He's a narcissistic monster, a kidnapper and serial killer of infants who is so far detached from reality that his own shadow can't keep up. Barrie's story is a metaphor designed to introduce toddlers to the horror of a sibling's death. And I was looking at it in this light when I realized, "hey, what if Peter survived the teind of infant mortality, only to grow up under the dictatorship of the New Management?"

This led me down all sorts of rabbit holes, only some of which are explored in Dead Lies Dreaming. The nerdish world-building impulse took over: it turns out that civilian life under the rule of N'yar lat-Hotep, the Black Pharaoh (in his current incarnation as Fabian Everyman MP), is harrowing and gruesome in its own right—there's a Tzompantli on Marble Arch: indications that Lovecraft's Elder Gods were worshipped under other names by other cultures: oligarchs and private equity funds employ private armies: and Brexit is still happening—but nevertheless, ordinary life goes on. There are jobs for cycle couriers, administrative assistants, and ex-detective constables-turned-security guards. People still need supermarkets and high street banks and toy shops. The displays of severed heads on the traffic cameras on the M25 don't stop drivers trying to speed. Boys who never grew up are still looking for a purpose in life, at risk of their necks, while their big sisters try to save them. And so on.

Dead Lies Dreaming is the first of the Tales of the New Management, which are being positioned as a continuation of the Laundry Files (because Marketing). There will be more. A second novel, In His House, already exists in first draft. It's a continuation of the story, remixed with Sweeney Todd and Mary Poppins—who in her original form is, like Peter Pan, much more sinister than the Disney whitewash suggests. A third novel, Bones and Nightmares, is planned. (However, I can't give you a publication date, other than to say that In His House can't be published before late 2022: COVID19 has royally screwed up publishers' timetables.)

Anyway, you probably realized that instead of riffing off classic British spy thrillers or urban fantasy tropes, I'm now perverting beloved childhood icons for my own nefarious purposes—and I'm having a gas. Let's just hope that the December of 2016 in which Dead Lies Dreaming is set doesn't look impossibly utopian and optimistic by the time we get to the looming and very real December of 2020! I really hate it when reality front-runs my horror novels ...

Worse Than FailureCodeSOD: On the Creation

Understanding the Gang of Four design patterns is a valuable bit of knowledge for a programmer. Of course, instead of understanding them, it sometimes seems like most design pattern fans just… use them. Sometimes- often- overuse them. The Java Spring Framework infamously has classes with names like SimpleBeanFactoryAwareAspectInstanceFactory. Whether that's a proper use of patterns and naming conventions is a choice I leave to the reader, but boy do I hate looking at it.

The GoF patterns break down into three major categories: Creational, Structural, and Behavioral patterns (concurrency patterns came along later, in other catalogs). The Creational category, as the name implies, is all about code which can be used to create instances of objects, like that Factory class above. It is a useful collection of patterns for writing reusable, testable, and modular code. Most Dependency Injection/Inversion of Control frameworks are really just applied creational patterns.

It also means that some people decide that "directly invoking constructors is considered harmful". And that's why Emiko found this Java code:

/**
 * Creates an empty {@link MessageCType}.
 * @return {@link MessageCType}
 */
public static MessageCType createMessage() {
    MessageCType retVal = new MessageCType();
    return retVal;
}

This is just a representative method; the code was littered with piles of these. It'd be potentially forgivable if they also used a fluent interface with method chaining to initialize the object, buuut… they don't. Literally, this ends up getting used like:

MessageCType msg = MessageCType.createMessage();
msg.type = someMessageType;
msg.body = …

Emiko sums up:

At work, we apparently pride ourselves in using the most fancyful patterns available.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityGoogle Mending Another Crack in Widevine

For the second time in as many years, Google is working to fix a weakness in its Widevine digital rights management (DRM) technology used by online streaming sites like Disney, Hulu and Netflix to prevent their content from being pirated.

The latest cracks in Widevine concern the encryption technology’s protection for L3 streams, which is used only for lower-quality video and audio streams. Google says the weakness does not affect L1 and L2 streams, which cover higher-definition video and audio content.

“As code protection is always evolving to address new threats, we are currently working to update our Widevine software DRM with the latest advancements in code protection to address this issue,” Google said in a written statement provided to KrebsOnSecurity.

In January 2019, researcher David Buchanan tweeted about the L3 weakness he found, but didn’t release any proof-of-concept code that others could use to exploit it before Google fixed the problem.

This latest Widevine hack, however, has been made into an extension for Microsoft Windows users of the Google Chrome web browser and posted for download on the software development platform Github.

Tomer Hadad, the researcher who developed the browser extension, said his proof-of-concept code “was done to further show that code obfuscation, anti-debugging tricks, whitebox cryptography algorithms and other methods of security-by-obscurity will eventually be defeated anyway, and are, in a way, pointless.”

Google called the weakness a circumvention that would be fixed. But Hadad took issue with that characterization.

“It’s not a bug but an inevitable flaw because of the use of software, which is also why L3 does not offer the best quality,” Hadad wrote in an email. “L3 is usually used on desktops because of the lack of hardware trusted zones.”

Media companies that stream video online using Widevine can select different levels of protection for delivering their content, depending on the capabilities of the device requesting access. Most modern smartphones and mobile devices support much more robust L1 and L2 Widevine protections that do not rely on L3.

Further reading: Breaking Content Protection on Streaming Websites

Planet DebianMolly de Blanc: Digital Self

When we talk about the digital self, we are talking about the self as it exists within digital spaces. This holds differently for different people: some of us prefer to live within a pseudonymous or anonymous identity online, divested from our physical selves, while others consider the digital a more holistic identity that extends from the physical.

Your digital self is gestalt, in that it exists across whatever mediums, web sites, and services you use. These bits are pieced together to form a whole picture of what it means to be you, or some aspect of you. This may be carefully curated, or it may be an emergent property of who you are.

Just as your physical self has rights, so too does your digital self. Or, perhaps, it would be more accurate to say that your rights extend to your digital self. I do not personally consider there to be a separation between these selves when it comes to rights, as both are aspects of you and you have rights. I am explicitly not going to list what these rights are, because I have my own ideas about them and yours may differ. Instead, I will briefly talk about consent.

I think it is essential that we genuinely consent to how others interact with us to maintain the sanctity of our selves. Consent is necessary to the protection and expression of our rights, as it ensures we are able to rely on our rights and creates a space where we are able to express our rights in comfort and safety. We may classically think of consent as it relates to sex and sexual consent: only we have the right to determine what happens to our bodies; no one else has the right to that determination. We are able to give sexual consent, and we are able to revoke it. Sexual consent, in order to be in good faith, must be requested and given from a place of openness and transparency. For this, we discuss with our partners the things about ourselves that may impact their decision to consent: we are sober; we are not ill; we are using (or not) protection as we agree is appropriate; we are making this decision because it is a thing we desire, rather than a thing we feel we ought to do or are being forced to do; as well as other topics.

These things also all hold true for technology and the digital spaces in which we reside. Our digital autonomy is not the only thing at stake when we look at digital consent. The ways we interact in digital spaces impact our whole selves, and exploitation of our consent likewise impacts our whole selves. Private information appearing online can have material consequences — it can directly lead to safety issues, like stalking or threats, and it can lead to a loss of psychic safety and have a chilling effect. These are in addition to the threats posed to digital safety and well-being. Consent must be actively sought, what one is consenting to must be transparent, and the potential consequences must be known and understood.

In order to protect and empower the digital self, to treat everyone justly and with respect, we must hold the digital self be as sacrosanct as other aspects of the self and treat it accordingly.

LongNowHow Long-term Thinking Can Help Earth Now

Inside Finland’s Olkiluoto nuclear waste repository, 1,500 feet underground. Photo Credit: Peter Guenzel

With half-lives ranging from 30 years to 24,000 years, or even 16 million years, the radioactive elements in nuclear waste defy our typical operating time frames. The questions around nuclear waste storage — how to keep it safe from those who might wish to weaponize it, where to store it, by what methods, for how long, and with what markings, if any, to warn humans who might stumble upon it thousands of years in the future — require long-term thinking.

These questions brought the anthropologist Vincent Ialenti to Finland’s Olkiluoto nuclear waste repository in 02012, where he began more than two years of field work with a team of experts seeking to answer them.

Ialenti’s goal was to examine how these experts conceived of the future:

What sort of scientific ethos, I wondered, do Safety Case experts adopt in their daily dealings with seemingly unimaginable spans of time? Has their work affected how they understand the world and humanity’s place within it? If so, how? If not, why not?

Ialenti has crystallized his learnings in his new book, Deep Time Reckoning: How Future Thinking Can Help Earth Now. It is a book of extraordinary insight and erudition, thoughtful and lucid throughout its surprisingly-light 180 pages.

Long Now’s Director of Development, Nicholas Paul Brysiewicz, recently posed a few questions to Ialenti about the genesis of his book; the “deflation of expertise” in North America, Western Europe and beyond; conceiving long-term thinking as a kind of exercise for the mind; and more.


Vincent, thanks for sending over a copy of your new book. And thanks for making time to unpack some of those ideas with me here for The Long Now Foundation and our members around the globe.

I’m curious: what drew you to study long-term thinking in the wild?

Vincent Ialenti: In 02008, I was a Masters student in “Law, Anthropology, and Society” at the London School of Economics. I had a growing interest in long-term engineering projects like the Svalbard Global Seed Vault and the Clock of the Long Now. I decided to write my thesis (now published here) on the currently-defunct U.S. Yucca Mountain nuclear waste repository’s million-year federal licensing procedure.

The licensing procedure stretched legal adjudication’s foundational rubric of “fact, rule, and judge” into distant futures. The U.S. Department of Energy developed quantitative models of million-year futures as facts. The U.S. Environmental Protection Agency defined multi-millennial radiological dose-limit standards as rules. The U.S. Nuclear Regulatory Commission subsumed the DOE’s facts to the EPA’s rules to produce a judgment on whether the repository could accept waste. In media commentaries, the Yucca Mountain project was enchanted with aesthetics of high modernist innovation and sci-fi futurology. Yet its decision-making procedure was grounded on an ancient legal procedural bedrock — rubrics formulated as far back as the Roman Empire.

I grew fascinated by this temporal mashup. To delve deeper, though, I knew I’d have to conduct long-term fieldwork. I enrolled in a PhD program at Cornell University. With the help of a U.S. National Science Foundation grant, I spent 32 months among Finland’s Olkiluoto nuclear waste repository Safety Case experts from 02012–02014. These experts developed models of geological, hydrological, and ecological events that could occur in Western Finland over the coming tens of thousands — or even hundreds of thousands — of years. They reckoned with distant future glaciations, climate changes, earthquakes, floods, human and animal population changes, and more. I ended up recording 121 interviews. Those became the basis of my recent book.

Early on in the book you introduce and discuss “the deflation of expertise.” Could you tell us what you mean by this phrase and how you see it shaping long-termism?

We’re witnessing a troubling institutional erosion of expert authority in North America, Western Europe, and beyond. Vaccine science, stem cell research, polling data, climate change models, critical social theories, cell phone radiation studies, pandemic disease advisories, and human evolution research are routinely attacked in cable news free-for-alls and social media echo-chambers. Political power is increasingly gained through charismatic, populist performances that energize crowds by mocking expert virtues of cautious speech and detail-driven analysis. Expert voices are drowned out by Twitter mobs, dulled by corporate-bureaucratic “knowledge management” red tape, exhausted by universities’ adjunctification trends, warped by contingent “gig economy” research funding contracts, and rushed by publish-or-perish productivity pressures.

As enthusiasm for liberal arts education and scientific inquiry declines, societies enter into a state of futurelessness: they develop a manic fixation on the present moment that incessantly shoots down proposals for envisioning better worlds. My book argues that anthropological engagement with Finland’s nuclear waste experts can help us (a) widen our thinking’s time horizons during a moment of ecological crisis some call the Anthropocene and (b) resist the deflation of expertise by opening our ears to a uniquely longsighted form of expert inquiry.

I was heartened each time you pointed to the importance of multidisciplinarity, discursive diversity, and strategic redundancy for doing things at long timescales. Our experience has borne this out, as well. How did you arrive at an emphasis on these themes? What are some of the pitfalls of homogeneity? What about generational homogeneity?

The Safety Case project convened several teams of experts — each with different disciplinary backgrounds — to pursue what my informants called “multiple lines of reasoning.”

Some were systems analysts developing models of how different kinds of radionuclides could “migrate” through future underground water channels. Others were engineers reporting on the mechanical strength tests conducted on Finland’s copper nuclear waste canisters. Some were geologists making analogies that compared the Olkiluoto’s far future Ice Age conditions to those of a glacial ice sheet found in Greenland today. Others studied “archaeological analogues.” This meant comparing future repository components to ancient Roman iron nails found in Scotland, to a bronze cannon from the 17th century Swedish warship Kronan, and to a 2,100-year-old cadaver preserved in clay in China. Still others wrote prose scenarios with titles like The Evolution of the Repository System Beyond a Million Years in the Future.

The Safety Case encompassed multiple disciplinary sensibilities to ensure that one potentially inaccurate assumption doesn’t invalidate the wider range of forecasts. For me, this holistic ethos was a refreshing counterpoint to the reductionist, homogeneous scientism that led us to many of today’s ecological crises.

Certainly, it is important to hedge against generational homogeneity in thought too. Finland’s nuclear waste experts planned to release updated versions of the Safety Case throughout the coming century. They called these successive versions “iterations.” The first major report was 01985’s “Safety Analysis of Disposal of Spent Nuclear Fuel: Normal and Disturbed Evolution Scenarios.” The iteration I studied was the 02012 Construction License Safety Case. The next iteration will be the Operating License Safety Case, scheduled for submission to Finland’s nuclear regulator STUK in late 02021. Each iteration is, ideally, supposed to be more robust than the one before it. The Safety Case is an intergenerational work-in-progress.

Throughout the work you describe long-term thinking as an imaginative exercise — a “calisthenics for the mind.” This suggestion floored me when I read it. I just couldn’t agree more. And earlier this year, prior to reading your book, I even published an essay arguing for that same interpretation. What do you think makes exercise such an apt metaphor for understanding this phenomenon we’re discussing?

We all need to integrate a more vivid awareness of deep time into our everyday habits, actions, and intuitions. We need to override the shallow time discipline into which we’ve been enculturated. This requires self-discipline and ongoing practice. I believe putting aside time to do long-termist intellectual workouts or deep time mental exercise routines can help get us there.

Here’s an example. From 02017 to 02020, I was a researcher at George Washington University in Washington DC. In 01922, fossilized ~100,000-year-old bald cypress trees were found just twenty feet below the nearby city surface. Back then, America’s capital city was a literal swamp. Today, four bald cypresses, planted in the mid-1800s, grow in Lafayette Square right near the White House. I approached them as intellectual exercise equipment for stretching my mind across time. The cypresses provided me with tree imageries I could draw upon when re-imagining the U.S. capital as a prehistoric swamp.

Here’s another example. I sometimes headed west to hike in West Virginia. Hundreds of millions of years ago, Appalachia was home to much taller mountains. Some say their elevations rivaled those of today’s Rockies, Alps, or Himalayas. I tried to discipline my imagination, while hiking, into reimagining the hills in a wider temporal frame. I drew upon the images I had in my head of what taller mountain ranges look like today. This helped me stretch the momentary “now” of my hike by endowing it with a deeper history and future.

Anyone can do these long-termist exercises. A person in Bangladesh, New York, Rio de Janeiro, Osaka, or Shanghai could, for instance, try imagining their area submerged by, or fighting off, future sea level rise. But what inspired me to integrate these exercises into my own life?

Well, it was — again — my fieldwork among Finland’s nuclear waste experts. I modeled these exercises on the Safety Case’s natural and archaeological analogue studies. The key was to (a) make an analogical connection between one’s immediate surroundings and a dramatically long-term future or past and then (b) try to envision it as accurately as possible by drawing, analogically, from scientific information and imageries we already have in our heads of real-world locales out there today.

What was the most surprising thing you discovered while working on this book?

Early on, I decided to end each chapter with five or six takeaway lessons in long-termism. I call these lessons “reckonings.” As I wrote, however, I was surprised to discover that, even as I engaged with very alien far future Finlands, most of the “reckonings” I collected ended up pertaining to some of the most ordinary features of everyday experience. These include the power of analogy (Chapter 1), the power of pattern-making (Chapter 2), the power of shifting and re-shifting perspectives (Chapter 3), and the problem of human mortality (Chapter 4). I found that these familiarities can be useful. Their sheer relatability can serve as a launching-off point for the rest of us as we pursue long-termist learning. The analogical exercises I mentioned previously are a good example of this.

It’s been so exciting for us to see this next generation of long-term thinkers publishing excellent new books on the topic — from your penetrating work in Deep Time Reckoning to Long Now Seminar Speaker Bina Venkataraman’s encompassing work in The Optimist’s Telescope to Long Now Seminar Speaker (and author of the foreword to your book) Marcia Bjornerud’s geological work in Timefulness to Long Now Research Fellow Roman Krznaric’s philosophical work in The Good Ancestor. What role do you think books play in helping the world think long-term?

Those are important books! I’ll add a few more: David Farrier’s Footprints: In Search of Future Fossils, Richard Irvine’s An Anthropology of Deep Time, and Hugh Raffles’ Book of Unconformities: Speculations on Lost Time. These pose crucial questions about time, geology, and human (and non-human) imagination. An argument could be made that there’s sort of a diffuse, de-centralized, interdisciplinary “deep time literacy” movement coalescing (mostly on its own!).

This is urgent work. Earlier this year, the Trump Administration advanced a proposal to reform key National Environmental Policy Act regulations to read: “effects should not be considered significant if they are remote in time, geographically remote, or the product of a lengthy causal chain.” This is out of sync with our mission to become more mindful of far future worlds. I won’t speak for the others, but my hope is that these books can inch us closer to escaping the virulent short-termism that our current ecological woes, deflations of expertise, and political crises exploit and reinforce.

In closing, I’ll mention that, thanks to you, I now know there is no direct way for me to say “Someday we will have an in-person discussion about all this at The Interval” in Finnish because Finnish does not have a future tense. And yet here we are, discussing Finnish expertise at thinking about the future. What’s up with that?

Hah! Yes, that’s right. There’s no future tense in Finnish. Finns tend to use the present tense instead. There is something sensible about this: all visions of the future are, indeed, tethered to the present moment. Marcia Bjornerud cleverly linked this linguistic quirk to my book’s broader arguments:

“There is some irony in studying Finns as exemplars of future thinkers: as Ialenti points out, the Finnish language has no future tense. Instead, either present tense or conditional mode verbs are used, which seems a rather oblique way of speaking of times to come. But this linguistic treatment of the future may reflect a deep wisdom in Finnish culture that informs the philosophy of the Safety Case. Making declarative pronouncements about the future is imprudent; the best that can be done is to envisage a spectrum of possible futures and develop a sense for how likely each is to unfold.”

From all of us at and around The Long Now Foundation: thank you for your time and expertise, Vincent.

And thank you! I’ll get back to reading your essay on long-termist askēsis. Keep up the great work.


Learn More

  • Read Long Now Editor Ahmed Kabil’s 02017 feature on the nuclear waste storage problem.
  • Read Vincent Ialenti’s 02016 essay for Long Now, “Craters & Mudrock: Tools for Imagining Distant Finlands.”
  • Watch Ralph Cavanagh and Peter Schwartz’s 02006 Long Now Seminar on Nuclear Power, Climate Change, and the Next 10,000 Years.

Planet DebianReproducible Builds: Second Reproducible Builds IRC meeting

After the success of our previous IRC meeting, we are having our second IRC meeting today, Monday 26th October, at 18:00 UTC:

  • 11:00am San Francisco
  • 2:00pm New York
  • 6:00pm London
  • 7:00pm Paris/Berlin
  • 11:30pm Delhi
  • 2:00am Beijing (+1 day)

Please join us on the #reproducible-builds channel on irc.oftc.net — an agenda is available. As mentioned in our previous meeting announcement, due to the unprecedented events in 2020, there will be no in-person Reproducible Builds event this year, but we plan to run these IRC meetings every fortnight.

Worse Than FailureSerial Problems

If we presume there is a Hell for IT folks, we can only assume the eternal torment involves configuring or interfacing with printers. Perhaps Anabel K is already in Hell, because that describes her job.

Anabel's company sells point-of-sale tools, including receipt printers. Their customers are not technical, so a third-party installer handles configuring those printers in the field. To make the process easy and repeatable, Anabel maintains an app which automates the configuration process for the third party.

The basic flow is like this:

The printer gets shipped to the installer. At their facility, someone from the installer opens the box, connects the printer to power, and then to the network. They then run the app Anabel maintains, which connects to the printer's on-board web server and POSTs a few form-data requests. Assuming everything works, the app reports success. The printer goes back in the box and is ready to get installed at the client site at some point in the near future.

The whole flow was relatively painless until the printer manufacturer made a firmware change. Instead of the username/password being admin/admin, it was now admin/serial-number. No one was interested in having the installer techs key in the long serial number, but digging around in the documentation, Anabel found a simple fix.

In addition to the on-board web server, there was also a small service listening on a TCP port. If you connected to the port and sent the correct command, it would reply with the serial number.

Anabel made the appropriate changes. Now, her app would try to authenticate as admin/admin, and if that failed, it'd open a TCP connection, query the serial number, and then try again. Anabel grabbed a small pile of printers from storage, a mix of old and new firmware, loaded them up with receipt paper, and ran the full test suite to make sure everything still worked.
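In rough shell terms, the fallback flow might look something like the sketch below. Every specific in it is invented for illustration: the printer address, the form fields, the TCP port number, and the query string are placeholders, not the vendor's actual protocol.

PRINTER=192.168.0.50
# try the old default credentials first
if ! curl -fs -u admin:admin "http://$PRINTER/config" --data 'setting=value' > /dev/null; then
    # new firmware: ask the TCP service for the serial number, then retry with it
    SERIAL=$(printf 'GET SERIAL\r\n' | nc -w 2 "$PRINTER" 9100)
    curl -fs -u "admin:$SERIAL" "http://$PRINTER/config" --data 'setting=value'
fi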

Within minutes, they were all happily churning out test prints. Anabel released her changes to production, and off it went to the installer technicians.

A few weeks later, the techs called in through support, in an absolute panic. "The configuration app has stopped working. It doesn't work on any of the printers we received in the past few weeks."

There was a limited supply of the old version of printers, and dozens got shipped out every day. If this didn't get fixed ASAP, they would very quickly find themselves with a pile of printers the installers couldn't configure. Management got on conference calls, roped Anabel in on the middle of long email chains, and they all agreed: there must be something wrong with Anabel's changes.

It wasn't unreasonable to suspect, but Anabel had tested it thoroughly. Heck, she had a few of the new printers on her desk and couldn't replicate the failure. So she got on a call with a tech and started from square one. Is it plugged in. Is it plugged into the network. Are there any restrictions on the network, or on the machine running the app, that might prevent access to non-standard ports?

Over the next few days, while the stock of old printers kept dwindling, this escalated up to sending a router with a known configuration out to the technicians. It was just to ensure that there were no hidden firewalls or network policies preventing access to the TCP port. Even still, on its own dedicated network, nothing worked.

"Okay, let's double check the printer's network config," Anabel said on the call. "When it boots up, it should print out its network config- IP, subnet, gateway, DHCP config, all of that. What does that say?"

The tech replied, "Oh. We don't have paper in it. One sec." While rooting around in the box, they added, "We don't normally bother. It's just one more thing to unbox before putting it right back in the box."

The printer booted up, faithfully printed out its network config, which was what everyone expected. "Okay, I guess… try running the tool again?" Anabel suggested.

And it worked.

Anabel turned to one of the test printers she had been using, and pulled out the roll of receipt paper. She ran the configuration tool… and it failed.

The TCP service only worked when there was paper in the printer. Anabel reported it as a bug to the printer vendor, but if and when that gets fixed is anyone's guess. The techs didn't want to have to fully unbox the printers, including the paper, for every install, but that was an easy fix: with each shipment of printers Anabel's company just started shipping a few packs of receipt paper for the techs. They can just crack one open and use it to configure a bunch of printers before it runs out.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram IMSI-Catchers from Canada

Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments:

L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records. Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete.

The article goes on to talk about replacement surveillance systems from the Canadian company Octasic.

Octasic’s Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida’s state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G).

[…]

A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge information about its signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone.

Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has “R&D facilities in India,” as well as a “worldwide sales support network.” Nyxcell’s website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 20)

Here’s part twenty of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet DebianMarco d'Itri: RPKI validation with FORT Validator

This article documents how to install FORT Validator (an RPKI relying party software which also implements the RPKI to Router protocol in a single daemon) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings.

The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END

cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100

Package: fort-validator rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END

apt update

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.

echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
  | debconf-set-selections

Install the package as usual:

apt install fort-validator
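Once installed, the validator should be started automatically as a systemd service. As a quick sanity check (the unit name below is an assumption based on the package name; adjust it if your system names it differently):

systemctl status fort
journalctl -u fort -e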

You may also install rpki-client and gortr on Debian 10, or maybe cfrpki and gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the good packaging practices of Linux distributions.

Planet DebianMarco d'Itri: RPKI validation with OpenBSD's rpki-client and Cloudflare's gortr

This article documents how to install rpki-client (an RPKI relying party software, the actual validator) and gortr (which implements the RPKI to Router protocol) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings.

The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END

cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release bullseye
Pin-Priority: 100

Package: gortr rpki-client rpki-trust-anchors
Pin: release bullseye
Pin-Priority: 990
END

apt update

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked at installation time about it, either way is fine.

echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
  | debconf-set-selections

Install the packages as usual:

apt install rpki-client gortr

And then configure rpki-client to generate its output in the JSON format needed by gortr:

echo 'OPTIONS=-j' > /etc/default/rpki-client

You may manually start the service unit to immediately generate the data instead of waiting for the next timer run:

systemctl start rpki-client &

gortr too needs to be configured to use the JSON data generated by rpki-client:

echo 'GORTR_ARGS=-bind :323 -verify=false -checktime=false -cache /var/lib/rpki-client/json' > /etc/default/gortr

And then it needs to be restarted to use the new configuration:

systemctl restart gortr
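If you want to confirm that gortr is actually listening on the RTR port configured above, a quick check from the same host will do; routers can then be pointed at this machine on TCP port 323:

ss -tln | grep ':323 '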

You may also install FORT Validator on Debian 10, or maybe cfrpki with gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the packaging practices of Linux distributions.

,

Charles StrossIntroducing a new guest blogger: Sheila Williams

It's been ages since I last hosted a guest blogger here, but today I'd like to introduce you to Sheila Williams, who will be talking about her work next week.

Normally my guest bloggers are other SF/F authors, but Sheila is something different: she's the multiple Hugo Award-winning editor of Asimov's Science Fiction magazine. She is also the winner of the 2017 Kate Wilhelm Solstice Award for distinguished contributions to the science fiction and fantasy community.

Sheila started at Asimov's in June 1982 as the editorial assistant. Over the years, she was promoted to a number of different editorial positions at the magazine and she also served as the executive editor of Analog from 1998 until 2004. With Rick Wilber, she is also the co-founder of the Dell Magazines Award for Undergraduate Excellence in Science Fiction and Fantasy. This annual award has been bestowed on the best short story by an undergraduate student at the International Conference on the Fantastic since 1994.

In addition, Sheila is the editor or co-editor of twenty-six anthologies. Her newest anthology, Entanglements: Tomorrow's Lovers, Families, and Friends, is the 2020 volume of the Twelve Tomorrow series. The book is just out from MIT Press.

Charles StrossEntanglements!

Entanglements Cover.jpg

Many thanks to Charlie for giving me the chance to write about editing and my latest project. I'm very excited about the publication of Entanglements. The book has received a starred review from Publishers Weekly and terrific reviews in Lightspeed, Science, and the Financial Times. MIT Press has created a very nice "Pubpub" page about Entanglements, with information about the book and its various contributors. The "On the Stories" section has an essay by Nick Wolven about his amazing story, "Sparkly Bits," and a fun Zoom conversation with James Patrick Kelly, Nancy Kress, and Sam J. Miller. I think the site is well worth checking out, and here's the Pubpub description of the book:

Science fiction authors offer original tales of relationships in a future world of evolving technology.

In a future world dominated by the technological, people will still be entangled in relationships--in romances, friendships, and families. This volume in the Twelve Tomorrows series considers the effects that scientific and technological discoveries will have on the emotional bonds that hold us together.

The strange new worlds in these stories feature AI family therapy, floating fungitecture, and a futuristic love potion. A co-op of mothers attempts to raise a child together, lovers try to resolve their differences by employing a therapeutic sexbot, and a robot helps a woman dealing with Parkinson's disease. Contributions include Xia Jia's novelette set in a Buddhist monastery, translated by the Hugo Award-winning writer Ken Liu; a story by Nancy Kress, winner of six Hugos and two Nebulas; and a profile of Kress by Lisa Yaszek, Professor of Science Fiction Studies at Georgia Tech. Stunning artwork by Tatiana Plakhova--"infographic abstracts" of mixed media software--accompanies the texts.

,

Planet DebianDirk Eddelbuettel: digest 0.6.27: Build fix

Exactly one week after the previous release 0.6.26 of digest, a minor cleanup release 0.6.27 just arrived on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at one million monthly downloads, 282 direct reverse dependencies and 8068 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
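As a tiny illustration of the package in action, here is a one-liner that computes a blake3 digest from the shell (any string or R object works; blake3 requires digest 0.6.26 or later):

Rscript -e 'cat(digest::digest("hello, world", algo = "blake3"), "\n")'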

Release 0.6.26 brought support for the (nice, even cryptographic) blake3 hash algorithm. In the interest of broader buildability we had already (with a sad face) disabled a few very hardware-specific implementation aspects using intrinsic ops. But to our chagrin, we left one #error define that raised its head on everybody’s favourite CRAN build platform. Darn. So 0.6.27 cleans that up and also removes the check and #error as … all the actual code was already commented out. If you read this and tears start running down your cheeks, then by all means come and help us bring blake3 to its full (hardware-accelerated) potential. This (probably) only needs a little bit of patient work with the build options and configurations. You know where to find us…

My CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSandro Tosi: Multiple git configurations depending on the repository path

For my work on Debian, I want to use my debian.org email address, while for my personal projects I want to use my gmail.com address.

One way to change the user.email git config value is to run git config --local in every repo, but that's tedious, error-prone and doesn't scale very well with many repositories (and the chances of forgetting to set the right one on a new repo are ~100%).

The solution is to use the git-config ability to include extra configuration files, based on the repo path, by using includeIf:

Content of ~/.gitconfig:

[user]
name = Sandro Tosi
email = <personal.address>@gmail.com

[includeIf "gitdir:~/deb/"]
path = ~/.gitconfig-deb

Every time the git path is under ~/deb/ (which is where I keep all my Debian repos), the file ~/.gitconfig-deb will be included; its content:

[user]
email = morph@debian.org

That results in my personal address being used on all repos outside ~/deb/, and my Debian email address on everything under it. This approach can be extended to any other git configuration value.
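To double-check which identity a given repository ends up with (and which file supplied it), git can report the origin of the value. The repository paths below are just examples:

cd ~/deb/some-debian-package && git config --show-origin user.email
cd ~/projects/some-personal-repo && git config --show-origin user.email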

Sam VargheseAustralian sports writer’s predictions prove to be those of a false prophet

After the first match in the Bledisloe Cup series ended in a 16-all draw, Australian sports writers were on a giddy high, predicting that the dominance of the All Blacks had more or less ended and the big boys had been caught with their pants down.

Well before this hype began, at the end of the game, there was a gesture by the Australian team which showed that its mental state was still very fragile. When the final whistle blew, the ball was still live, so the referee let play proceed.

A thrilling nine minutes ensued, with first Australia, and then New Zealand, threatening to score. Strangely, though, neither team thought of attempting a drop-goal to win the game. After one of the New Zealand forays, the Australians regained the ball and fly-half James O’Connor kicked it into touch, ending the game.

Now O’Connor could have continued play, by running the ball from his own end. The All Blacks never took the option of ending the game when they got the ball during that nine-minute stretch. O’Connor’s gesture gave the game away: for Australia, a draw was as good as a win. It indicated the extent to which he ranked his team against the All Blacks, despite the heroics they had showed.

With that kind of mental attitude, it was only to be expected that Australia would lose at Eden Park the following week. As they did, by a 27-7 margin, at a venue where they have not beaten New Zealand since 1986.

During the week, there were several triumphal essays in the Australian press; one, by Jamie Pandaram, a senior sports writer at The Daily Telegraph, gives an insight into the type of shallow understanding that sports writers on this side of the Tasman have when it comes to rugby union, and the nationalistic fervour that surrounds sport (as it does everything else).

The headline gave an indication of the bombast that was to follow, reading: “Why All Blacks are finally vulnerable.” It started off saying that while facing a beaten All Blacks team was a dangerous exercise, “there’s a different feel about 2020”.

And he cited concerns about the coaching, selection, tactics and loss of seniority in All Black ranks as factors that had contributed to what he called a decline in their ranks.

He cited the absence of four players – Kieran Read, Brodie Retallick, Sonny Bill Williams and Ryan Crotty – as making the team unable to produce those moments of inspiration for which they have become famous. And he claimed that Ian Foster, who took over as coach from Steve Hansen after the 2019 World Cup, was not the best coach among those who could be chosen.

As evidence that stress was allegedly mounting on New Zealand, Pandaram cited the complaints made by the assistant coach John Plumtree about illegal tactics employed by Australia in taking out players.

The first game was unusual in that it was not refereed by a neutral referee – the first time this had happened in a long time, mainly due to the travel issues caused by the coronavirus pandemic.

The referee, New Zealand’s Paul Williams, had to tread a difficult path; he had to ensure that his rulings could not be criticised as being partial to his own country, and at the same time he had to police Australia’s thuggery properly without accusations of bias. Having watched the match twice, I can say with confidence that Williams only erred once, in not calling Rieko Ioane for stepping on the sideline boundary when he began what ended up as a try. That miss was the fault of the Australian Angus Gardner, who was the linesman on the side concerned.

“It’s a common Kiwi play; turn the referees’ and public’s attention to perceived cheating by their opposition – we’ve seen them previously call out the Wallabies’ scrumming and breakdown play – to take the spotlight away from their own,” wrote Pandaram, completely forgetting that this was exactly what former Wallabies coach Michael Cheika did after every game.

He opined that Foster would be under “intense scrutiny” during the second game as many people in New Zealand felt that the job of chief coach should have gone to Scott Robertson instead, the latter being one who has taken the Crusaders to four Super Rugby titles in his first four years as their coach.

And Pandaram went on and on, outlining what he perceived to be issues with the team, about playing this player and that in this position or that.

I haven’t seen anything he wrote after the second game, in which Australia was competitive for just one half and unable to score in the second. All those perceived “problems” he pointed out were gone.

One crucial factor that he forgot was that this was the first game for both teams this year. Normally, both Australia and New Zealand play two or three Tests before the annual games against each other, South Africa and Argentina – the games that make up the Bledisloe Cup and Rugby Championship – begin. Both teams were quite rusty.

In 2015, New Zealand lost more talent after the World Cup than they did in 2019; that time Daniel Carter, Richie McCaw, Ma’a Nonu, Conrad Smith, Tony Woodcock and Keven Mealamu all retired from international rugby. But the side picked up and carried on.

A lot of Pandaram’s moaning comes out of nationalism; Australian sports writers are the most one-sided I have seen. When one is shown up like this, they tend to lie low until the public forgets. As Pandaram is doing now.

Planet DebianJelmer Vernooij: Debian Janitor: Hosters used by Debian packages

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The Janitor knows how to talk to different hosting platforms. For each hosting platform, it needs to support the platform-specific API for creating and managing merge proposals. For each hoster it also needs to have credentials.

At the moment, it supports the GitHub API, Launchpad API and GitLab API. Both GitHub and Launchpad have only a single instance; the GitLab instances it supports are gitlab.com and salsa.debian.org.
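As a rough illustration of what such a platform-specific call looks like (this is not the Janitor's actual code, which abstracts over hosters; the project path, branch names and token below are made-up placeholders), opening a merge request against a GitLab instance such as salsa with python-gitlab boils down to:

import gitlab

# Hypothetical credentials and project; for illustration only.
gl = gitlab.Gitlab("https://salsa.debian.org", private_token="REDACTED")
project = gl.projects.get("debian/some-package")

mr = project.mergerequests.create({
    "source_branch": "lintian-fixes",
    "target_branch": "master",
    "title": "Fix some lintian issues",
    "description": "Automated changes proposed by a janitor-like bot.",
})
print(mr.web_url)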

Together, these platforms provide coverage for the vast majority of Debian packages that can be accessed using Git. More than 75% of all packages are available on salsa - although in some cases, the Vcs-Git header has not yet been updated.

Of the other 25%, the majority either does not declare where it is hosted using a Vcs-* header (10.5%), or has not yet migrated from alioth to another hosting platform (9.7%). A further 2.3% are hosted somewhere on GitHub (2%), Launchpad (0.18%) or GitLab.com (0.15%), in many cases in the same repository as the upstream code.

The remaining 1.6% are hosted on many other hosts, primarily people’s personal servers (which usually don’t have an API for creating pull requests).

Packages per hoster
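One way to reproduce a rough version of the numbers behind that chart is to tally the hosts in the Vcs-Git headers of a Sources index. A minimal sketch using python-debian (the file name and the heuristics are assumptions; the Janitor itself relies on vcswatch, as described below):

from collections import Counter
from urllib.parse import urlparse

from debian import deb822   # from the python3-debian package

hosts = Counter()
# Assumes a locally downloaded, uncompressed Sources index for sid.
with open("Sources") as f:
    for src in deb822.Sources.iter_paragraphs(f):
        vcs = src.get("Vcs-Git")
        if not vcs:
            hosts["no Vcs-Git header"] += 1
            continue
        url = vcs.split()[0]                 # drop "-b <branch>" suffixes
        hosts[urlparse(url).netloc or "unknown"] += 1

for host, count in hosts.most_common(10):
    print(f"{count:6}  {host}")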

Outdated Vcs-* headers

It is possible that the 20% of packages that do not have a Vcs-* header, or that have a Vcs-* header which says they are on alioth, are actually hosted elsewhere. However, it is hard to know where they are until a version with an updated Vcs-Git header is uploaded.

The Janitor primarily relies on vcswatch to find the correct locations of repositories. vcswatch looks at Vcs-* headers but has its own heuristics as well. For about 2,000 packages (6%) that still have Vcs-* headers that point to alioth, vcswatch successfully finds their new home on salsa.

Merge Proposals by Hoster

These proportions are also visible in the number of pull requests created by the Janitor on various hosters. The vast majority so far has been created on Salsa.

Hoster                Open   Merged & Applied   Closed
github.com              92                168        5
gitlab.com              12                  3        0
code.launchpad.net      24                 51        1
salsa.debian.org     1,360              5,657      126
Merge Proposal statistics

Here, “Open” means that the pull request has been created but likely nobody has looked at it yet. “Merged” means that the pull request has been marked as merged on the hoster, and “applied” means that the changes have ended up in the packaging branch but via a different route (e.g. cherry-picked or manually applied). “Closed” means that the pull request was closed without the changes being incorporated.

Note that this excludes ~5,600 direct pushes, all of which were to salsa-hosted repositories.

See also:

For more information about the Janitor's lintian-fixes efforts, see the landing page.

,

Planet DebianDirk Eddelbuettel: RcppSpdlog 0.0.3: New features and much more docs

A good month after the initial two releases, we are thrilled to announce release 0.0.3 of RcppSpdlog. This brings us release 1.8.1 of spdlog as well as a few local changes (more below).

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

This version of RcppSpdlog brings a new top-level function setLogLevel to control what events get logged, updates the main example to show this and to also make the R-aware logger the default logger, and adds both an extended vignette showing several key features and a new (external) package documentation site.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.3 (2020-10-23)

  • New function setLogLevel with R accessor in exampleRsink example

  • Updated exampleRsink to use default logger instance

  • Upgraded to upstream release 1.8.1 which contains finalised upstream use to switch to REprintf() if R compilation detected

  • Added new vignette with extensive usage examples, added compile-time logging switch example

  • A package documentation website was added

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBirger Schacht: An Analysis of 5 Million OpenPGP Keys

In July I finished my Bachelor’s Degree in IT Security at the University of Applied Sciences in St. Poelten. During the studies I did some elective courses, one of which was about Data Analysis using Python, Pandas and Jupyter Notebooks. I found it very interesting to do calculations on different data sets and to visualize them. Towards the end of the Bachelor I had to find a topic for my Bachelor Thesis and as a long time user of OpenPGP I thought it would be interesting to do an analysis of the collection of OpenPGP keys that are available on the keyservers of the SKS keyserver network.

So in June 2019 I fetched a copy of one of the key dumps of one of the keyservers (some keyservers publish these copies of their key database so people who want to join the SKS keyserver network can do an initial import). At that time the copy of the key database contained 5,499,675 keys and was around 12GB. Using the hockeypuck keyserver software I imported the keys into a PostgreSQL database. Hockeypuck uses a table called keys to store the keys and in there the column doc stores the OpenPGP keys in JSON format (always with a data field containing the original unparsed data).

For the thesis I split the analysis in three parts, first looking at the Public Key packets, then analysing the User ID packets and finally studying the Signature Packets. To analyse the respective packets I used SQL to export the data to CSV files and then used the pandas read_csv method to create a dataframe of the values. In a couple of cases I did some parsing before converting to a DataFrame to make the analysis step faster. The parsing was done using the pgpdump python library.
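A minimal sketch of the pandas side of that workflow (the CSV file and its column names are assumptions, not the exact ones used in the thesis):

import pandas as pd

# Hypothetical CSV export of public key packets with columns
# "algorithm" and "creation_time".
keys = pd.read_csv("public_keys.csv", parse_dates=["creation_time"])

# Count keys per algorithm and per year of creation, e.g. to see when
# one algorithm overtakes another.
per_year = (keys.assign(year=keys["creation_time"].dt.year)
                .groupby(["year", "algorithm"])
                .size()
                .unstack(fill_value=0))
print(per_year.tail())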

Together with my advisor I decided to submit the thesis to a journal, so we revised and compressed the whole paper, and the outcome was now

PUBLISHED

in the Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA).

I think the work gives some valuable insight into the development of the use of OpenPGP over the last 30 years. Looking at the public key packets we were able to compare the different public key algorithms and, for example, visualize how DSA was the most used algorithm until around 2010, when it was replaced by RSA. When looking at the less used algorithms, a trend towards ECC-based cryptography is visible.

What we also noticed was an increase of RSA keys with algorithm ID 3 (RSA Sign-Only), which are deprecated. When we took a deeper look at those keys we realized that most of those keys used a specific User ID string in the User ID packets which allowed us to attribute those keys to two software projects both using the Bouncy Castle Java Cryptographic API (resp. the Spongy Castle version for Android). We also stumbled over a tutorial on how to create RSA keys with Bouncycastle which also describes how to create RSA keys with code that produces RSA Sign-Only keys. In one of those projects, this was then fixed.

By looking at the User ID packets we did some statistics about the most used email providers used by OpenPGP users. One domain stood out, because it is not the domain of an email provider: tellfinder.com is a domain used in around 45,000 keys. Tellfinder is a Big Data analysis software and the UID of all but two of those keys is TellFinder Page Archiver- Signing Key <support@tellfinder.com>.

We also looked at the comments used in OpenPGP User ID fields. In 2013 Daniel Kahn Gillmor published a blog post titled OpenPGP User ID Comments considered harmful in which he pointed out that most of the comments in the User ID field of OpenPGP keys duplicate information that is already present somewhere in the User ID or the key itself. In our dataset, 3,133 comments were exactly the same as the name, 3,346 were the same as the domain, and 18,246 comments were similar to the local part of the email address.

Last but not least we looked at the signature subpackets and the development of some of the preferences (Preferred Symmetric Algorithm, Preferred Hash Algorithm) that are being published using signature packets.

Analysing this huge dataset of cryptographic keys of the last 20 to 30 years was very interesting and I learned a lot about the history of PGP and OpenPGP and the evolution of cryptography overall. I think it would be interesting to look at even more properties of OpenPGP keys and I also think it would be valuable for the OpenPGP ecosystem if these kinds of analyses could be done regularly. An approach like Tor Metrics could lead to interesting findings and could also help to back decisions regarding future developments of the OpenPGP standard.

Planet DebianEnrico Zini: Hetzner build machine

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

Building Qt5 takes a long time. The build server I was using had plenty of CPUs and RAM, but was very slow on I/O. I was very frustrated by that, and I started evaluating alternatives. I ended up setting up scripts to automatically provision a throwaway cloud server at Hetzner.

Initial setup

I got an API key from my customer's Hetzner account.

I installed hcloud-cli, currently only in testing and unstable:

apt install hcloud-cli

Then I configured hcloud with the API key:

hcloud context create

Spin up

I wrote a quick and dirty script to spin up a new machine, which grew a bit with little tweaks:

#!/bin/sh

# Create the server
hcloud server create --name buildqt --ssh-key … --start-after-create \
                     --type cpx51 --image debian-10 --datacenter …

# Query server IP
IP="$(hcloud server describe buildqt -o json | jq -r .public_net.ipv4.ip)"

# Update ansible host file
echo "buildqt ansible_user=root ansible_host=$IP" > hosts

# Remove old host key
ssh-keygen -f ~/.ssh/known_hosts -R "$IP"

# Update login script
echo "#!/bin/sh" > login
echo "ssh root@$IP" >> login
chmod 0755 login

I picked a datacenter in the same location as where we have other servers, to get quicker data transfers.

I like that CLI tools have JSON output that I can cleanly pick at with jq. Sadly, my ISP doesn't do IPv6 yet.

Since the server just got regenerated, I remove a possibly cached host key.

Provisioning the machine

One git server I need is behind HTTP authentication. Here's a quick hack to pass the relevant .netrc credentials to ansible before provisioning:

#!/usr/bin/python3

import subprocess
import netrc
import tempfile
import json

login, account, password = netrc.netrc().authenticators("…")

with tempfile.NamedTemporaryFile(mode="wt", suffix=".json") as fd:
    json.dump({
        "repo_user": login,
        "repo_password": password,
    }, fd)
    fd.flush()

    subprocess.run([
        "ansible-playbook",
        "-i", "hosts",
        "-l", "buildqt",
        "--extra-vars", f"@{fd.name}",
        "provision.yml",
        ], check=True)

And here's the ansible playbook:

#!/usr/bin/env ansible-playbook

- name: Install and configure buildqt
  hosts: all
  tasks:
   - name: Update apt cache
     apt:
        update_cache: yes
        cache_valid_time: 86400

   - name: Create build user
     user:
        name: build
        comment: QT5 Build User
        shell: /bin/bash

   - name: Create sources directory
     become: yes
     become_user: build
     file:
        path: ~/sources
        state: directory
        mode: 0755

   - name: Download sources
     become: yes
     become_user: build
     get_url:
        url: "https://…/{{item}}"
        dest: "~/sources/{{item}}"
        mode: 0644
     with_items:
      - "qt-everywhere-src-5.15.1.tar.xz"
      - "qt-creator-enterprise-src-4.13.2.tar.gz"

   - name: Populate home directory
     become: yes
     become_user: build
     copy:
        src: build
        dest: ~/
        mode: preserve

   - name: Write .netrc
     become: yes
     become_user: build
     copy:
        dest: ~/.netrc
        mode: 0600
        content: |
           machine …
           login {{repo_user}}
           password {{repo_password}}

   - name: Write .screenrc
     become: yes
     become_user: build
     copy:
        dest: ~/.screenrc
        mode: 0644
        content: |
           hardstatus alwayslastline
           hardstatus string '%{= cw}%-Lw%{= KW}%50>%n%f* %t%{= cw}%+Lw%< %{= kK}%-=%D %Y-%m-%d %c%{-}'
           startup_message off
           defutf8 on
           defscrollback 10240

   - name: Install base packages
     apt:
        name: git,mc,ncdu,neovim,eatmydata,devscripts,equivs,screen
        state: present

   - name: Clone git repo
     become: yes
     become_user: build
     git:
        repo: https://…@…/….git
        dest: ~/…

   - name: Copy Qt license
     become: yes
     become_user: build
     copy:
        src: qt-license.txt
        dest: ~/.qt-license
        mode: 0600

Now everything is ready for a 16-core, 32 GB RAM build on SSD storage.

Tear down

When done:

#!/bin/sh
hcloud server delete buildqt

The whole spin up plus provisioning takes around a minute, so I can do it when I start a work day, and take it down at the end. The build machine wasn't that expensive to begin with, and this way it will even be billed by the hour.

A first try on a CPX51 machine has just built the full Qt5 Everywhere Enterprise including QtWebEngine and all its frills, for amd64, in under 1 hour and 40 minutes.

Cryptogram New Report on Police Decryption Capabilities

There is a new report on police decryption capabilities: specifically, mobile device forensic tools (MDFTs). Short summary: it’s not just the FBI that can do it.

This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.

Lots of details in the report. And in this news article:

At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does.

[…]

The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan’s Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

Planet DebianMolly de Blanc: Endorsements

Transparency is essential to trusting a technology. Through transparency we can understand what we’re using and build trust. When we know what is actually going on, what processes are occurring and how it is made, we are able to decide whether interacting with it is something we actually want, and we’re able to trust it and use it with confidence.

This transparency could mean many things, though it most frequently refers to the technology itself: the code or, in the case of hardware, the designs. We could also apply it to the overall architecture of a system. We could think about the decision making, practices, and policies of whomever is designing and/or making the technology. These are all valuable in some of the same ways, including that they allow us to make a conscious choice about what we are supporting.

When we choose to use a piece of technology, we are supporting those who produce it. This could be because we are directly paying for it; however, our support is not limited to direct financial contributions. In some cases this is because of things hidden within a technology: tracking mechanisms or backdoors that could allow companies or governments access to what we’re doing. When creating different types of files on a computer, these files can contain metadata that says what software was used to make them. This is an implicit endorsement, and you can also explicitly endorse a technology by talking about it or how you use it. In this, you have a right (not just a duty) to be aware of what you’re supporting. This includes, for example, organizational practices and whether a given company relies on abusive labor policies, indentured servitude, or slave labor.

Endorsements inspire others to choose a piece of technology. Most of my technology is something I investigate purely for functionality, and the pieces I investigate are based on what people I know use. The people I trust in these cases are more inclined than most to do this kind of research, to perform technical interrogations, and to be aware of what producers of technology are up to.

This is how technology spreads and becomes common or the standard choice. In one sense, we all have the responsibility (one I am shirking) to investigate our technologies before we choose them. However, we must acknowledge that not everyone has the resources for this – the time, the skills, the knowledge – and this is where endorsements become even more important to recognize.

Those producing a technology have the responsibility of making all of these angles something one could investigate. Understanding cannot only be the realm of experts. It should not require an extensive background in research and investigative journalism to find out whether a company punishes employees who try to unionize or pays non-living wages. Instead, these must be easy activities to carry out. It should be standard for a company (or other technology producer) to be open and share with people using their technology what makes them function. It should be considered shameful and shady not to do so. Not only does this empower those making choices about what technologies to use, but it empowers others down the line, who rely on those choices. It also respects the people involved in the processes of making these technologies. By acknowledging their role in bringing our tools to life, we are respecting their labor. By holding companies accountable for their practices and policies, we are respecting their lives.

Worse Than FailureError'd: Errors by the Pound

"I can understand selling swiss cheese by the slice, but copier paper by the pound?" Dave P. wrote.

 

Amanda R. writes, "Ok, that's fine, but can the 1% correctly spell 'people'?"

 

"In this form, language is quite variable as is when you are able to cancel your reservation ...which are in fact, actual variables," wrote Jean-Pierre M.

 

Barry M. wrote, "Hey, Royal Caribbean, you know what? I'll take the win-win: total control AND save $7!"

 

"Oh wow! The secret on how to write good articles is out!" writes Barry L.

 


,

Krebs on SecurityThe Now-Defunct Firms Behind 8chan, QAnon

Some of the world’s largest Internet firms have taken steps to crack down on disinformation spread by QAnon conspiracy theorists and the hate-filled anonymous message board 8chan. But according to a California-based security researcher, those seeking to de-platform these communities may have overlooked a simple legal solution to that end: Both the Nevada-based web hosting company owned by 8chan’s current figurehead and the California firm that provides its sole connection to the Internet are defunct businesses in the eyes of their respective state regulators.

In practical terms, what this means is that the legal contracts which granted these companies temporary control over large swaths of Internet address space are now null and void, and American Internet regulators would be well within their rights to cancel those contracts and reclaim the space.

The IP address ranges in the upper-left portion of this map of QAnon and 8kun-related sites — some 21,000 IP addresses beginning in “206.” and “207.” — are assigned to N.T. Technology Inc. Image source: twitter.com/Redrum_of_Crows

That idea was floated by Ron Guilmette, a longtime anti-spam crusader who recently turned his attention to disrupting the online presence of QAnon and 8chan (recently renamed “8kun”).

On Sunday, 8chan and a host of other sites related to QAnon conspiracy theories were briefly knocked offline after Guilmette called 8chan’s anti-DDoS provider and convinced them to stop protecting the site from crippling online attacks (8Chan is now protected by an anti-DDoS provider in St. Petersburg, Russia).

The public face of 8chan is Jim Watkins, a pig farmer in the Philippines who many experts believe is also the person behind the shadowy persona of “Q” at the center of the conspiracy theory movement.

Watkins owns and operates a Reno, Nev.-based hosting firm called N.T. Technology Inc. That company has a legal contract with the American Registry for Internet Numbers (ARIN), the non-profit which administers IP addresses for entities based in North America.

ARIN’s contract with N.T. Technology gives the latter the right to use more than 21,500 IP addresses. But as Guilmette discovered recently, N.T. Technology is listed in Nevada Secretary of State records as under an “administrative hold,” which according to Nevada statute is a “terminated” status indicator meaning the company no longer has the right to transact business in the state.

N.T. Technology’s listing in the Nevada Secretary of State records. Click to Enlarge.

The same is true for Centauri Communications, a Fremont, Calif.-based Internet Service Provider that serves as N.T. Technology’s colocation provider and sole connection to the larger Internet. Centauri was granted more than 4,000 IPv4 addresses by ARIN more than a decade ago.

According to the California Secretary of State, Centauri’s status as a business in the state is “suspended.” It appears that Centauri hasn’t filed any business records with the state since 2009, and the state subsequently suspended the company’s license to do business in Aug. 2012. Separately, the California State Franchise Tax Board (FTB) suspended this company as of April 1, 2014.

Centauri Communications’ listing with the California Secretary of State’s office.

Neither Centauri Communications nor N.T. Technology responded to repeated requests for comment.

KrebsOnSecurity shared Guilmette’s findings with ARIN, which said it would investigate the matter.

“ARIN has received a fraud report from you and is evaluating it,” a spokesperson for ARIN said. “We do not comment on such reports publicly.”

Guilmette said apart from reclaiming the Internet address space from Centauri and NT Technology, ARIN could simply remove each company’s listings from the global WHOIS routing records. Such a move, he said, would likely result in most ISPs blocking access to those IP addresses.

“If ARIN were to remove these records from the WHOIS database, it would serve to de-legitimize the use of these IP blocks by the parties involved,” he said. “And globally, it would make it more difficult for the parties to find people willing to route packets to and from those blocks of addresses.”

Planet DebianBastian Blank: Salsa updated to GitLab 13.5

Today, GitLab released the version 13.5 with several new features. Also Salsa got some changes applied to it.

GitLab 13.5

GitLab 13.5 includes several new features. See the upstream release post for a full list.

Shared runner builds on larger instances

It's been way over two years since we started to use Google Compute Engine (GCE) for Salsa. Since then, all the jobs running on the shared runners run within a n1-standard-1 instance, providing a fresh set of one vCPU and 3.75GB of RAM for each and every build.

GCE supports several newer instance types, featuring better and faster CPUs, including current AMD EPYCs. However, as it turns out, GCE does not support any single-vCPU instances for any of those types. So jobs will, for the time being, use n2d-standard-2, providing two vCPUs and 8GB of RAM.

Builds run with IPv6 enabled

All builds run with IPv6 enabled in the Docker environment. This means the lo network device got the IPv6 loopback address ::1 assigned. So tests that need minimal IPv6 support can succeed. It however does not include any external IPv6 connectivity.
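A minimal sketch of what that enables inside a job (purely illustrative; it only checks the loopback address, not external connectivity):

import socket

# Succeeds only if the IPv6 loopback address ::1 is configured on lo.
with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
    s.bind(("::1", 0))
    print("IPv6 loopback available, bound to port", s.getsockname()[1])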

Planet DebianVincent Fourmond: QSoas tips and tricks: generating smooth curves from a fit

Often, one would want to generate smooth data from a fit over a small number of data points. For an example, take the data in the following file. It contains (fake) experimental data points that obey to Michaelis-Menten kinetics: $$v = \frac{v_m}{1 + K_m/s}$$ in which \(v\) is the measured rate (the y values of the data), \(s\) the concentration of substrate (the x values of the data), \(v_m\) the maximal rate and \(K_m\) the Michaelis constant. To fit this equation to the data, just use the fit-arb fit:
QSoas> l michaelis.dat
QSoas> fit-arb vm/(1+km/x)
After running the fit, the window should look like this:
Now, with the fit, we have reasonable values for \(v_m\) (vm) and \(K_m\) (km). But, for publication, one would want to generate a "smooth" curve going through the points... Saving the curve from "Data.../Save all" doesn't help, since the data has as many points as the original data and looks very "jaggy" (like on the screenshot above)... So one needs a curve with more data points.

Maybe the most natural solution is simply to use generate-buffer together with apply-formula using the formula and the values of km and vm obtained from the fit, like:

QSoas> generate-buffer 0 20
QSoas> apply-formula y=3.51742/(1+3.69767/x)
By default, generate-buffer generates 1000 evenly spaced x values, but you can change their number using the /samples option. The two above commands can be combined into just one call to generate-buffer:
QSoas> generate-buffer 0 20 3.51742/(1+3.69767/x)
This works, but it is quite cumbersome and it is not going to work well for complex formulas or the results of differential equations or kinetic systems...

This is why to each fit- command corresponds a sim- command that computes the result of the fit using a "saved parameters" file (here, michaelis.params, but you can also save it yourself) and buffers as "models" for X values:

QSoas> generate-buffer 0 20
QSoas> sim-arb vm/(1+km/x) michaelis.params 0
This strategy works with every single fit ! As an added benefit, you even get the fit parameters as meta-data, which are displayed by the show command:
QSoas> show 0
Dataset generated_fit_arb.dat: 2 cols, 1000 rows, 1 segments, #0
Flags: 
Meta-data:	commands =	 sim-arb vm/(1+km/x) michaelis.params 0	fit =	 arb (formula: vm/(1+km/x))	km =	 3.69767
	vm =	 3.5174
They also get saved as comments if you save the data.

Important note: the sim-arb command will be available only in the 3.0 release, although you can already enjoy it if you use the github version.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

Planet DebianSteinar H. Gunderson: plocate in testing

plocate hit testing today, so it's officially on its way to bullseye :-) I'd love to add a backport to stable, but bpo policy says only to backport packages with a “notable userbase”, and I guess 19 installations in popcon isn't that :-) It's also hit Arch Linux, obviously Ubuntu universe, and seemingly also other distributions like Manjaro. No Fedora yet, but hopefully, some Fedora maintainer will pick it up. :-)

Also, pabs pointed out another possible use case, although this is just a proof-of-concept:

pannekake:~/dev/plocate/obj> time apt-file search bin/updatedb                 
locate: /usr/bin/updatedb.findutils       
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
apt-file search bin/updatedb  1,19s user 0,58s system 163% cpu 1,083 total

pannekake:~/dev/plocate/obj> time ./plocate -d apt-file.plocate.db bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
./plocate -d apt-file.plocate.db bin/updatedb  0,00s user 0,01s system 79% cpu 0,012 total

Things will probably be quieting down now; there's just not that many more logical features to add.

Worse Than FailureCodeSOD: Query Elegance

It’s generally hard to do worse than a SQL injection vulnerability. Data access is fundamental to pretty much every application, and every programming environment has some set of rich tools that make it easy to write powerful, flexible queries without leaving yourself open to SQL injection attacks.

And yet, and yet, they’re practically a standard feature of bad code. I suppose that’s what makes it bad code.
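For contrast, here is a minimal sketch of what those tools look like, shown in Python's sqlite3 purely for illustration (the application in question is PHP, where PDO prepared statements play the same role); the table and column names are made up:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, name TEXT, lastModified TEXT)")

# The driver binds the value; user input never becomes part of the SQL text.
cutoff = "2020-10-01"
rows = conn.execute(
    "SELECT id, name FROM t1 WHERE lastModified > ?",
    (cutoff,),
).fetchall()
print(rows)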

Gidget W inherited a PHP application which, unsurprisingly, is rife with SQL injection vulnerabilities. But, unusually, it doesn’t leverage string concatenation to get there. Gidget’s predecessor threw a little twist on it.

$fields = "t1.id, t1.name, UNIX_TIMESTAMP(t1.date) as stamp, ";
$fields .= "t2.idT1, t2.otherDate, t2.otherId";
$join = "table1 as t1 join table2 as t2 on t1.id=t2.idT1";
$where = "where t1.lastModified > $val && t2.lastModified = '$val2'";
$query = "select $field from $join $where";

This pattern appears all through the code. Because it leverages string interpolation, the same core structure shows up again and again, almost copy/pasted, with one line repeated each time.

$query = "select $field from $join $where";

What goes into $field and $join and $where may change each time, but "select $field from $join $where" is eternal, unchanging, and omnipresent. Every database query is constructed this way.

It’s downright elegant, in its badness. It simultaneously shows an understanding of how to break up a pattern into reusable code, but also no understanding of why all of this is a bad idea.

But we shouldn’t let that distract us from the little nuances of the specific query that highlight more WTFs.

t1.lastModified > $val && t2.lastModified = '$val2'

lastModified in both of these tables is a date, as one would expect. Which raises the question: why does one of these conditions get quotes and why does the other one not? It implies that $val probably has the quotes baked in?

Gidget also asks: “Why is the WHERE keyword part of the $where variable instead of inline in the query, but that isn’t the case for SELECT or FROM?”

That, at least, I can answer. Not every query has a filter condition. Since you can’t have WHERE followed by nothing, just make the $where variable contain that.

See? Elegant in its badness.


,

LongNowThe Data of Long-lived Institutions

The following transcript has been edited for length and clarity. 

I want to lead you through some of the research that I’ve been doing on a meta-level around long-lived institutions, as well as some observations of the ways various systems have lasted for hundreds of thousands of years. 

Long Now as a Long-lived Institution

This is one of the early projects I worked with Stewart Brand on at Long Now. We were trying to define our problem space and explore the ways we think on different timescales. Generally, companies are working in the “nowadays,” although that’s been shortening to some extent, with more quarterly thinking than decade-level thinking.

It was Peter Schwartz who suggested this 10,000 year timeframe. Danny Hillis’ original idea for what would ultimately become The 10,000 Year Clock was that it would be a Millennium Clock: it would tick once a year, bong once a century, and the cuckoo would come out once a millennium. He didn’t really have an end date. 

We use the 10,000 year time frame to orient our efforts at Long Now because global civilization arose when the last glacial period ended, roughly 10,000 years ago. It was only then, around 8,000 BC, that we had the emergence of agriculture and the first cities. If we can look back that far, we should be able to look forward that far. Thinking about ourselves as in the middle of a 20,000 year story is very different than thinking about ourselves as at the end of a 10,000 year story. 

This pace layers diagram is the very first thing I worked on at Long Now. The notion of pace layers came out of a discussion between Stewart and Long Now co-founder Brian Eno. They were trying to tease apart these layers of human time. 

Institutions can be mapped across the pace layers diagram as well. Take Apple Computer, for example. They’re coming out with new iPhones every six months, which is the fashion layer. The commerce layer is Apple selling these devices. The infrastructure layer is the cell phone networks and chip fabs that it’s all built on. The governance layer—and note that it is governance, not government; they’re mostly working with governments, but they also have to work with general governing systems. Some of these companies are hitting walls against different types of governments who have different ideas of privacy, different ideas of commercialization, and they’re now having to shape their companies around that. And then obviously, culture is moving slower underneath all of this, but Apple is starting to affect culture. And then there’s the last pace layer, nature, moving the slowest. At some point, Apple is going to have to come to terms with the level of environmental damage and problems that are happening on the nature pace layer if it is going to be a company that lasts for hundreds or a thousand years. So we could imagine any large institution mapped across this and I think it’s a useful tool for that. 

Also very early on in Long Now’s history, in 01997, Kees van der Heijden, who wrote the book on scenario planning, came to a charrette that Long Now organized to come up with business ideas for our organization. He formulated a business plan that was strangely prophetic:

The squares are areas where we have core competencies. The dotted lines indicate temporary competencies, like the founders. The other items indicate all the things we hadn’t really gotten to yet or figured out: we didn’t have a way of funding ourselves; we didn’t have a membership program; we didn’t have a large community of donors; we didn’t have an endowment; and we didn’t have people willing to give their estates to us. We still don’t have an endowment or people willing to give us our estates, but we’ve achieved the rest. And now that we’ve been around for 22 years, we can imagine how those two items are going to start to happen next.  

I also want to point out the cyclical nature of this diagram. There’s no system in the world that I’ve found that is linear that has lasted on these timescales. You need to have a cyclical business model, not a linear business model.

The Longest-Lived Institutions in the World

I’ve been collecting data on all of the longest lived institutions in the world. As you look at these, there’s a few things that stick out. Notice: brewery, brewery, winery, hotel, bar, pub, right? And also notice that a lot of them are in Japan. There’s been a rough system of government there for over 2,000 years (the Royal Family) that’s held together enough to enable something like the Royal Confectioner of Japan to be one of the oldest companies in the world. But there’s also temple builders and things like that.

In the West, most of the companies that have survived for a very long time are basically service companies. It’s a lot easier to reinvent yourself as a service-oriented company than it is as a commodity company when that particular commodity goes out of use.

Colgate Palmolive (founded 01806) and DuPont (founded 01802) are commodity companies that are broad enough to change the kinds of products they sell over time. I’m interested in learning more about all these companies, as they probably all have some kind of special sauce in their stories of longevity. 

Something else that came out of this research is the fact that the length of companies' lives is shrinking by almost one year per year. In 01950, the average company on the Fortune 500 had been around for 61 years. Now it's 18 years. Companies' lives are getting shorter.

As I mentioned, most of the oldest companies in the world are in Japan. In a survey of 5,500 companies over 200 years old, 3,100 are based in Japan. The rest are in some of the older countries in Europe. 

But—and this was a fact I found curious, and one that speaks to the cyclical nature of things—90% of the companies that are over 200 years old have 300 employees or less; they’re not mega companies. 

In surveying 1,000 companies over 300 years old, you find a huge amount of disparity concerning which industries they’re a part of. But there were a few big groupings that I found interesting. 23% are in the alcohol industry, and this doesn’t even include pubs and restaurants and hotels that may sell alcohol. 

Patrick McGovern, a biomolecular archeologist who I talked to when we were building The Interval, has done DNA analysis on vines, which are a clonal species. From that analysis, we know that civilization started cultivating wine around 8,000 BC. McGovern supposes that it's not at all clear whether civilization stopped being nomadic in order to ferment things, or stopped being nomadic because they started fermenting things. It's an intriguing correlation, and notable that some of this overwhelmingly large segment of the oldest companies in the world deal in alcohol.

Long-term Thinking is Not Inherently Good

A quick word about values: long-term thinking, and aspiring to be a long-term institution, is not inherently good. At Long Now, we’ve always emphasized the importance of long-term thinking without trying to ascribe a lot of values to it. But I don’t think that’s intellectually honest. We have to ask ourselves what we’re trying to perpetuate. We have to step back far enough and ensure that the kinds of things we’re perpetuating are generally good for society. 

How to Build Things That Last

One way that things have lasted for a really long time is to just take a really long time to build them. Cathedrals are a famous example of this. The most dangerous time for anything that’s lasting is really just one generation after it was built. It’s no longer a new, cool thing; it’s the thing that your parents did, and it’s not cool anymore. And it’s not until another couple generations later where everyone values it, largely because it is old and venerable and has become a kind of cultural icon. 

And we already see this with this cathedral: the Sagrada Familia in Barcelona.  It’s still under construction, 125 years into its build process, and it’s already a UNESCO World Heritage Site. 

The other way things last for a really long time, and this is the Japanese model, is that they’re just extremely well-maintained. 

At about 1,400 years old, these are the two oldest continuously standing wooden structures in the world. And they’ve replaced a lot of parts of them. They keep the roofs on them, and even in a totally humid and raining environment, the central timbers of these buildings have stayed true. Interestingly, this temple was also the place where, over a thousand years ago, a Japanese princess had a vision that she needed to send a particular prayer out to the world to make sure that it survived into the future. And so she had, literally, a million wooden pagodas made with the prayer put inside them, and distributed these little pagodas as far and wide as she could. You can still buy these on eBay right now. It’s an early example of the philosophy of “Lots of Copies Keep Stuff Safe” (LOCKSS). 

Another Japanese example that uses a totally different strategy is this Shinto shrine.

Shinto is an animist religion whose adherents believe that spirits are in everything, unlike Buddhism, which came to Japan later. In the Shinto belief system, temples have this renewing technology, if you will, where they're rebuilt on a site right next to each other in different periodicities. This one, which is the most famous in Japan, is the Ise Shrine, which is rebuilt every 20 years. A few years ago, I was fortunate enough to attend the rebuilding ceremony. (One of the oldest companies in the world, I should add, is the Japanese temple building company that builds these temples.)

The emphasis here is not on maintenance, but renewal. These temples made of thatch and wood—totally ephemeral materials—have lasted for 2,000 years. They have documented evidence of exact rebuilding for 1,600 years, but this site was founded in 4 AD—also by a visionary Japanese princess. And every 20 years, with the Japanese princess in attendance, they move the treasures from one from one temple to the other. And the kami—the spirits—follow that. And then they deconstruct the temple, the old one, and they send all those parts out to various Shinto temples in Japan as religious artifacts.

I think the most important thing about this particular example is that each generation gets to have this moment where the older master teaches the apprentice the craft. So you get this handing off of knowledge that’s extremely physical and real and has a deliverable. It’s got to ship, right? “The princess is coming to move the stuff; we have to do this.” It’s an eight year process with tons of ritual that goes into rebuilding these temples.

I think an interesting counterexample to things lasting a very long time is when they are tied to certain ideologies. And I think it's curious: one of our longest-lived institutions is the Catholic Church, and the ideology behind something like the Buddhas of Bamiyan has lasted, but a lot of the artifacts become targets for people who don't believe in that ideology. The Taliban spent weeks dynamiting and using artillery to destroy these Buddhas. You would think that Buddhism, a relatively innocuous religion, is unthreatening—but not so much to the Taliban.

This is the University of Bologna, which is largely credited as the earliest university in the world. It’s almost 1000 years old at this point. Oxford was shortly behind it. And there’s another 40 or so universities over 500 years old.

Universities have this ability to do a kind of continual refresh where every four years, especially in undergraduate programs, you have a whole new set of people. And so they have to sell themselves to a new generation every single year. Their customer is a whole class. And we see universities now struggle when they aren’t teaching relevant things to people and they have to adjust. And that has kept them around as some of the longest lived institutions in the world. 

I think the idea of communities of practice is a really interesting one. In these communities, knowledge of practice is handed down from generation to generation. Such is the case with martial arts, which we have evidence for dating back at least 2,000 or 3,000 years.

There’s several strategies in nature that allow systems to last for thousands of years. There’s clonal strategies like the Aspen tree. We’ve measured Mesquite rings in the desert where they die and then they grow up in a ring from the root structure that indicates that a Mesquite ring has the same DNA, effectively for 50,000 years. And these clonal forests have definitely been around for thousands of years, even though each individual will only last a few years in some cases.

In other cases, things are cultivated. Going back to the wine example, where we know we have effectively the DNA of clonal species like these grapes from ancient Rome where we have taken a clipping and cultivated it from generation to generation. So there’s been this kind of interplay between humans and the natural world and we also see this in a lot of tree-caring practices. 

The bristlecone exemplifies how an existential crisis gives you practice in terms of how you’re going to survive. The bristlecone is the oldest continuously living single organism that we know of in the world. And the funny thing about the bristlecone is it was not discovered by coring to be the oldest living species in the world; it was postulated because a particular tree scientist had cored other pine species, and as they did that, they found that all the ones in the worst environments were the oldest. And he said: “If you can find the pine species that is living in the absolute worst environment, you will find the oldest species of pine in the world.” And he coined this term: adversity breeds longevity. And so then people went to go find the pines in the worst environment and up at the top of the White Mountains and in the Snake Range in Nevada, and some in Colorado as well, they found three different species of bristlecone, which have been dated to over 5,000 years at this point.

Taking the Future into Account

If any of us are to build an institution that’s going to last for the next few hundred or 1,000 years, the changes in demographics and the climate are a big part of it. This is the projected impact of climate change on agricultural yields until the 02080s. And as you can see, agricultural yields in the global south are going to be going down. In the global north and the much further north, more like Canada and Russia, they’re going to be getting a lot better. And this is going to change the world markets, world populations, and what we’re warring over for the next 100 years.

In all natural systems, you have these sigmoid curves where things eventually go down. We always assume that things like our population and our economies will keep going up, but that is not the way the world works; it has never been that way, and we always have these kinds of corrections. In this case, a predator follows the prey as a lower sigmoid. Once its prey runs out, then the predators start dying off. 

How do we get good at failing, but not totally dying out? The lynx never dies out totally, but companies that do one thing are bad at recovering when that one thing is no longer the big commodity. It wasn’t record companies that invented iTunes; it was an outsider company. Record companies were adept at selling plastic circles and when there were no plastic circles to sell music on, they didn’t know how to adjust for that. The crux of anything that’s going to last for a long time is: how do you get good at reinvention and retooling?

There's no scenario that I've seen where the world population doesn't start going down within a hundred years from now, if not within the next 50. 

So even the median projection, that red line in the middle, tapers off. But this data is a couple of years old, and it’s now starting to increasingly look a lot more like that dotted blue line at the bottom. And the world has really never lived through a time, except for a few short plague seasons, where the world population was going down—and, by extension, where the customer base was going down.

Even more dangerous than the population going down is that the population is changing. The red line here is the number of 15 to 64 year olds. And the blue line is the zero to 14 year olds. If the world is made up largely of older people who hoard wealth, don’t work hard, and don’t make huge contributions of creativity to the world the way 20 year olds do, that world is a world that I don’t think we’re prepared to live in right now.

We’re seeing this now happening in a lot of the developed world and most notably in Japan. Those of you who remember the 01980s recall that there was no scenario where Japan was not an absolute dominant part of the economy of the world. And now they’re struggling just to be relevant in a lot of ways, and it’s largely because this population change happened and the young people were not there. They wouldn’t allow any immigration, and that creativity, and that thrust of civilization, went out of a country that was a dominant world economic power.

Watch the video of Alexander Rose’s talk on the Data of Long-lived Institutions:

Planet DebianChristian Kastner: RStudio is a refreshingly intuitive IDE

I currently need to dabble with R for a smallish thing. I have previously dabbled with R only once, for an afternoon, and that was about a decade ago, so I had no prior experience to speak of regarding the language and its surrounding ecosystem.

Somebody recommended that I try out RStudio, a popular IDE for R. I was happy to see that an open-source community edition exists, in the form of a .deb package no less, so I installed it and gave it a try.

It's remarkable how intuitive this IDE is. My first guess at doing something has so far been correct every. single. time. I didn't have to open the help, or search the web, for any solutions, either -- they just seem to offer themselves up.

And it's not just my inputs; it's the output, too. The RStudio window has multiple tiles, and each tile has multiple tabs. I found this quite confusing and intimidating on first impression, but once I started doing some work, I was surprised to see that whenever I did something that produced output in one or more of the tabs, it was (again) always in an intuitive manner. There's a fine line between informing with relevant context and distracting with irrelevant context, but RStudio seems to have placed itself on the right side of it.

This, and many other features that pop up here and there, like the live-rendering of LaTeX equations, contributed to what has to be one of the most positive experiences with an IDE that I've had so far.

LongNowA Long Now Drive-in Double Feature at Fort Mason

Join the Long Now Community for a night of films that inspire long-term thinking. On October 27, 02020, we’ll screen Samsara followed by 2001: A Space Odyssey at Fort Mason.

SAMSARA

Drive-in Screening on Tuesday October 27, 02020 at 6:00pm PT

SAMSARA is a Sanskrit word that means “the ever turning wheel of life” and is the point of departure for the filmmakers as they search for the elusive current of interconnection that runs through our lives.  SAMSARA transports us to sacred grounds, disaster zones, industrial sites, global gatherings and natural wonders. By dispensing with dialogue and descriptive text, the film subverts our expectations of a traditional documentary, instead encouraging our own inner interpretations inspired by images and music that infuses the ancient with the modern. 

Filmed over five years in twenty-five countries, SAMSARA (02011) is a non-verbal documentary from filmmakers Ron Fricke and Mark Magidson, the creators of BARAKA. It is one of only a handful of films shot on 70mm in the last forty years. Through powerful images, the film illuminates the links between humanity and the rest of nature, showing how our life cycle mirrors the rhythm of the planet.

2001: A Space Odyssey

Drive-in Screening on Tuesday October 27, 02020 at 8:45pm PT

The genius is not in how much Stanley Kubrick does in “2001: A Space Odyssey,” but in how little. This is the work of an artist so sublimely confident that he doesn’t include a single shot simply to keep our attention. He reduces each scene to its essence, and leaves it on screen long enough for us to contemplate it, to inhabit it in our imaginations. Alone among science-fiction movies, “2001″ is not concerned with thrilling us, but with inspiring our awe. 

What Kubrick had actually done was make a philosophical statement about man’s place in the universe, using images as those before him had used words, music or prayer. And he had made it in a way that invited us to contemplate it — not to experience it vicariously as entertainment, as we might in a good conventional science-fiction film, but to stand outside it as a philosopher might, and think about it.

 Roger Ebert

Ticket & Event Information:

  • Tickets are $30 per vehicle for members, General Public Tickets are $60 per vehicle.
  • Separate tickets must be purchased for each of the screenings.
  • Parking opens at 5:00pm for the 6:00pm showing, and 7:45pm for the 8:45pm showing.
  • Please have your ticket printed out or on your phone so we can check you in.
  • Parking location will be chosen by the venue to ensure that everyone can best see the screen.
  • The film audio will be through your FM radio receiver. 
  • There will be concessions available at the event! The Interval will be open to purchase to-go drinks, there will be a Food Truck and popcorn, candy and other snacks for sale.

COVID-19 Safety Information:

  • This is a socially distant event. Please do not attend if you are experiencing any symptoms of COVID-19. 
  • Bathrooms will be cleaned throughout the evening.
  • Masks are required when outside of your vehicle. Masks with exhalation valves are not allowed.
  • Attendees must remain inside their vehicles except to use the restroom facilities or pickup concessions.
  • Each vehicle may only be occupied by members of a “pod” who have already been in close contact with each other.
  • Attendees who fail to follow safe distancing at the request of staff will be subject to ejection from the event. No refund will be given.

About FORT MASON FLIX  

With drive-in theaters experiencing a renaissance around the country, Fort Mason Center for Arts & Culture (FMCAC) announces FORT MASON FLIX, a pop-up drive-in theater launching September 18, 2020. Housed on FMCAC’s historic waterfront campus, FORT MASON FLIX will present a cornucopia of film programming, from family favorites and cult classics to blockbusters and arthouse cinema.

Cryptogram NSA Advisory on Chinese Government Hacking

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.6


A new release, now at version 0.1.6, of RcppZiggurat is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where the Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release brings a corrected seed setter and getter which now correctly take care of all four state variables, and not just one. It also corrects a few typos in the vignette. Both were fixed quite a while back, but we somehow managed to not ship this to CRAN for two years.

The NEWS file entry below lists all changes.

Changes in version 0.1.6 (2020-10-18)

  • Several typos were corrected in the vignette (Blagoje Ivanovic in #9).

  • New getters and setters for internal state were added to resume simulations (Dirk in #11 fixing #10).

  • Minor updates to cleanup script and Travis CI setup (Dirk).

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppZiggurat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: Debian donation for Peertube development

The Debian project is happy to announce a donation of 10,000 € to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming.

This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while it was a resounding success, it made clear to the project the need for a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us.

We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future.

Debian thanks its numerous donors and DebConf sponsors for their commitment, particularly all those who contributed to the success of the online DebConf20 (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform.

The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen.

This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone.

The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia.

Worse Than FailureCodeSOD: Delete This

About three years ago, Consuela inherited a giant .NET project. It was… not good. To communicate how “not good” it was, Consuela had a lot of possible submissions. Sending the worst code might be the obvious choice, but it wouldn’t give a good sense of just how bad the whole thing was, so they opted instead to find something that could roughly be called the “median” quality.

This is a stored procedure that is roughly the median sample of the overall code. Half of it is better, but half of it gets much, much worse.

CREATE proc [dbo].[usermgt_DeleteUser]
  (
    @ssoid uniqueidentifier
  )
AS
  begin
    declare @username nvarchar(64)
    select @username = Username from Users where SSOID = @ssoid
    if (not exists(select * from ssodata where ssoid = @ssoid))
      begin
        insert into ssodata (SSOID, UserName, email, givenName, sn)
        values (@ssoid, @username, 'Email@email.email', 'Firstname', 'Lastname')
        delete from ssodata where ssoid = @ssoid
      end
    else begin
      RAISERROR ('This user still exists in sso', 10, 1)
    end

Let’s talk a little bit about names. As you can see, they’re using an “internal” schema naming convention: usermgt clearly defines a role for a whole class of stored procedures. Already, that’s annoying, but what does this procedure promise to do? DeleteUser.

But what exactly does it do?

Well, first, it checks to see if the user exists. If the user does exist… it raises an error? That’s an odd choice for deleting. But what does it do if the user doesn’t exist?

It creates a user with that ID, then deletes it.

Not only is this method terribly misnamed, it also seems to be utterly useless. At best, I think they’re trying to route around some trigger nonsense, where certain things happen ON INSERT and then different things happen ON DELETE. That’d be a WTF on its own, but that’s possibly giving this more credit than it deserves, because that assumes there’s a reason why the code is this way.

Consuela adds a promise, which hopefully means some follow-ups:

If you had access to the complete codebase, you would not EVER run out of new material for codesod. It’s basically a huge collection of “How Not To” on all possible layers, from single lines of code up to the complete architecture itself.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianReproducible Builds: Supporter spotlight: Civil Infrastructure Platform

The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do.

This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project.


Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical?

A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source ‘base layer’ of industrial-grade software that acts as building blocks in civil infrastructure projects. End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society’s lifelines, and CIP aims to contribute to and support these important pillars of modern society.


Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today?

A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the effort spent on those systems is now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system.


Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals?

A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes.


Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific ‘success stories’ that the CIP is particularly proud of?

A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of the necessity to establish an open source framework and software foundation that delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems.

In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard.


Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency?

A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts, or efforts that focus on narrower domains or markets.


Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. How much do you feel being a part of the broader free software community helps you achieve your aims?

A: Collaboration with other projects is an essential part of how CIP operates — we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an ‘upstream first’ policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them.


Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context?

A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure.


Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look?

A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter.



For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.10.1.0.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 786 other packages on CRAN.

A little while ago, Conrad released version 10.1.0 of Armadillo, a new major release. As before, given his initial heads-up we ran two full reverse-depends checks, and as a consequence contacted four package authors (two by email, two via PR) about a minuscule required change (as Armadillo now defaults to C++11, an old existing setting of avoiding C++11 led to an error). Our thanks to those who promptly updated their packages—truly appreciated. As it turns out, Conrad also softened the error by the time the release came around.

But despite our best efforts, the release was delayed considerably by CRAN. We had made several Windows test builds but luck had it that on the uploaded package CRAN got itself a (completely spurious) segfault—which can happen on a busy machine building many things at once. Sadly it took three or four days for CRAN to reply to our email. After which it took another number of days for them to ponder the behaviour of a few new ‘deprecated’ messages tickled by at most ten or so (out of 786) packages. Oh well. So here we are, eleven days after I emailed the rcpp-devel list about the new package being on CRAN but possibly delayed (due to that segfault). But during all that time the package was of course available via the Rcpp drat.

The changes in this release are summarized below as usual, and are mostly upstream along with an improved Travis CI setup using the bspm package for binaries at Travis.

Changes in RcppArmadillo version 0.10.1.0.0 (2020-10-09)

  • Upgraded to Armadillo release 10.1.0 (Orchid Ambush)

    • C++11 is now the minimum required C++ standard

    • faster handling of compound expressions by trimatu() and trimatl()

    • faster sparse matrix addition, subtraction and element-wise multiplication

    • expanded sparse submatrix views to handle the non-contiguous form of X.cols(vector_of_column_indices)

    • expanded eigs_sym() and eigs_gen() with optional fine-grained parameters (subspace dimension, number of iterations, eigenvalues closest to specified value)

    • deprecated form of reshape() removed from Cube and SpMat classes

    • ignore and warn on use of the ARMA_DONT_USE_CXX11 macro

  • Switch Travis CI testing to focal and BSPM

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianPetter Reinholdtsen: Buster based Bokmål edition of Debian Administrator's Handbook

I am happy to report that we finally made it! Norwegian Bokmål became the first translation published on paper of the new Buster based edition of "The Debian Administrator's Handbook". The print proof reading copy arrived some days ago, and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can also be read online.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux-related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge on Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianSteve Kemp: Offsite-monitoring, from my desktop.

For the past few years I've had a bunch of virtual machines hosting websites, services, and servers. Of course I want them to be available - especially since I charge people money to access some of them (for example my dns-hosting service) - and that means I want to know when they're not.

The way I've gone about this is to have a bunch of machines running stuff, and then dedicate an entirely separate machine solely for monitoring and alerting. Sure you can run local monitoring, testing that services are available, the root-disk isn't full, and that kind of thing. But only by testing externally can you see if the machine is actually available to end-users, customers, or friends.

A local-agent might decide "I'm fine", but if the hosting-company goes dark due to a fibre cut you're screwed.

I've been hosting my services with Hetzner (cloud) recently, and their service is generally pretty good. Unfortunately I've started to see an increasing number of false-alarms. I'd have a server in Germany, with the monitoring machine in Helsinki (coincidentally where I live!). For the past month I've started to get pinged with a failure every three/four days on average, "service down - dns failed", or "service down - timeout". When the notice would wake me up I'd go check, and it would be fine; it was a very transient failure.

To be honest the reason for this is that my monitoring is just too damn aggressive; I like to be alerted immediately in case something is wrong. That means if a single test fails I get an alert, rather than only alerting after something more reasonable, like three or more consecutive failures.

I'm experimenting with monitoring in a less aggressive fashion, from my home desktop. Since my monitoring tool is a single self-contained golang binary, and it is already packaged as a docker-based container, deployment was trivial. I did a little work writing an agent to receive failure-notices, and ping me via telegram - instead of the previous approach where I had an online status-page which I could view via my mobile, and alerts via pushover.

So far it looks good. I've tweaked the monitoring to use a timeout of 15 seconds, instead of 5, and I've configured it to only alert me if there is an outage which lasts for >= 2 consecutive failures. I guess the TLDR is I now do offsite monitoring .. from my house, rather than from a different region.

The only real reason to write this post was mostly to say that the process of writing a trivial "notify me" gateway to interface with telegram was nice and straightforward, and to remind myself that transient failures are way more common than we expect.

I'll leave things alone for a moment, but it was a fun experiment. I'll keep the two systems in parallel for a while, but I guess I can already predict the outcome:

  • The desktop monitoring will report transient outages now and again, because home broadband isn't 100% available.
  • The heztner-based monitoring, in a different region, will report transient problems, because even hosting companies are not 100% available.
    • Especially at the cheap prices I'm paying.
  • The way to avoid being woken up by transient outages/errors is to be less aggressive.
    • I think my paying users will be OK if I find out a service is offline after 5 minutes, rather than after 30 seconds.
    • If they're not we'll have to talk about budgets ..

Worse Than FailureCodeSOD: Extended Time

The C# "extension method" feature lets you implement static methods which "magically" act like they're instance methods. It's a neat feature which the .NET Framework uses extensively. It's also a great way to implement some convenience functions.

Brandt found some "convenience" functions which were exploiting this feature.

public static bool IsLessThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) < 0;
public static bool IsGreaterThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) > 0;
public static bool And(this bool a, bool b) => a && b;
public static bool Or(this bool a, bool b) => a || b;
public static bool IsBetweensOrEqual<T>(this T a, T b, T c) where T : IComparable<T> =>
    a.IsGreaterThen(b).Or(a.Equals(b)).And(a.IsLessThen(c).Or(a.Equals(c)));

Here, we observe someone who maybe heard the term "functional programming" and decided to wedge it into their programming style however it would fit. We replace common operators and expressions with extension method versions. Instead of the cryptic a || b, we can now write a.Or(b), which is… better?

I almost don't hate the IsLessThan/IsGreaterThan methods, as that is (arguably) more readable. But wait, I have to correct myself: IsLessThen and IsGreaterThen. So they almost got something that was (arguably) more readable, but a minor typo made it all more confusing instead.

All this, though, to solve their actual problem: the TimeSpan data type doesn't have a "between" comparator, and at one point in their code (one point!) they need to perform that check. It's also worth noting that C# supports operator overloading, and TimeSpan does have an overload for all your basic comparison operators, so they could have just done that.

Brandt adds:

While you can argue that there is no out of the box 'Between' functionality, the colleague who programmed this ignored an already existing extension method that was in the same file, that offered exactly this functionality in a better way.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Charles StrossEditorial Entanglements

A young editor once asked me what was the biggest secret to editing a fiction magazine. My answer was "confidence." I have to be confident that the stories I choose will fit together, that people will read them and enjoy them, and most importantly, that each month I'll receive enough publishable material to fill the pages of the magazine.

Asimov's Science Fiction comes out as combined monthly issues six times a year. A typical issue contains ten to twelve stories. That means I buy about 65 stories a year. Roughly speaking, I need to buy five to six stories per month--although I may actually buy two one month and ten the next. That I will receive these stories should seem inevitable. I get to choose them from about eight hundred submissions per month. Yet, since I know that I will have to reject over 99 percent of the stories that wing their way to me, there is always a slight concern that someday 100 percent of the submissions won't be right for the magazine.

Luckily, this anxiety is strongly offset by a lifetime of experience. For sixteen years as the editor-in-chief, and far longer as a staff member, I've seen that each issue of the magazine has been filled with wonderful stories. Asimov's tales are balanced: they are long and short, amusing and tragic, near- and distant-future explorations of hard SF, far-flung space opera, time travel, surreal tales and a little fantasy. They're by well-known names and brand new authors. I have confidence these stories will show up and that I'll know them when I see them.

I have edited or co-edited more than two-dozen reprint anthologies. These books consisted of stories that previously appeared in genre magazines. Pulling them together mostly required sifting through years and years of published fiction. The tales have been united by a common theme such as Robots or Ghosts or The Solar System.

Editing my first original anthology was not like editing these earlier books or like editing an issue of the magazine. Entanglements: Tomorrow's Lovers, Families, and Friends, which I edited as part of the Twelve Tomorrows series, has just come out from MIT Press. The tales are connected by a theme--the effect of emerging technologies on relationships--but the stories are brand new. Instead of waiting for eight hundred stories to come to me, I asked specific authors for their tales. I approached prominent authors like Nancy Kress (who is also profiled in the book by Lisa Yaszek), Annalee Newitz, James Patrick Kelly, and Mary Robinette Kowal, as well as up and coming authors like Sam J. Miller, Cadwell Turnbull, and Rich Larson. I was working with some writers for the first time. Others, like Suzanne Palmer and Nick Wolven, were people I'd published on several occasions.

I deliberately chose authors who I felt were capable of writing the sort of hard science fiction that the Twelve Tomorrows series is famous for. I was also pretty sure that I was contacting people who were good at making deadlines! I knew I enjoyed the work of Chinese author Xia Jia and I was delighted to have an opportunity to work with her translator, Ken Liu. I was also thrilled to get artwork from Tatiana Plakhova.

Once I commissioned the stories, I had to wait with fingers crossed. What if an author went off in the wrong direction? What if an author failed to get inspired? What if they all missed their deadlines? It turned out that I had no need to worry. Each author came through with a story that perfectly fit the anthology's theme. The material was diverse, with stories ranging from tales about lovers and mentors and friends to stories populated with children and grandparents. The book includes charming and amusing tales, heart-rending stories, and exciting thrillers.

I learned so much from editing Entanglements. The next time I edit an original anthology, I expect to approach it with a self-assurance akin to the confidence I feel when I read through a month of submissions to Asimov's.

Planet DebianReproducible Builds (diffoscope): diffoscope 161 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 161. This version includes the following changes:

[ Chris Lamb ]
* Fix failing testsuite: (Closes: #972518)
  - Update testsuite to support OCaml 4.11.1. (Closes: #972518)
  - Reapply Black and bump minimum version to 20.8b1.
* Move the OCaml tests to the assert_diff helper.

[ Jean-Romain Garnier ]
* Add support for radare2 as a disassembler.

[ Paul Spooren ]
* Automatically deploy Docker images in the continuous integration pipeline.

You can find out more by visiting the project homepage.

,

Planet DebianAntoine Beaupré: SSH 2FA with Google Authenticator and Yubikey

About a lifetime ago (5 years), I wrote a tutorial on how to configure my Yubikey for OpenPGP signing, SSH authentication and SSH 2FA. In there, I used the libpam-oath PAM plugin for authentication, but it turns out that had too many problems: users couldn't edit their own 2FA tokens and I had to patch it to avoid forcing 2FA on all users. The latter was merged in the Debian package, but never upstream, and the former was never fixed at all. So I started looking at alternatives and found the Google Authenticator libpam plugin. A priori, it's designed to work with phones and the Google Authenticator app, but there's no reason why it shouldn't work with hardware tokens like the Yubikey. Both use the standard HOTP protocol so it should "just work".
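
As an aside, the HOTP algorithm from RFC 4226 that both sides implement is tiny: an HMAC-SHA1 over an 8-byte big-endian counter, followed by a "dynamic truncation" to a short decimal code. The following Python sketch is purely illustrative and plays no role in the setup described below:

import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 gives "755224"
print(hotp(b"12345678901234567890", 0))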

After some fiddling, it turns out I was right and you can authenticate with a Yubikey over SSH. Here's that procedure so you don't have to second-guess it yourself.

Installation

On Debian, the PAM module is shipped in the google-authenticator source package:

apt install libpam-google-authenticator

Then you need to add the module in your PAM stack somewhere. Since I only use it for SSH, I added this line on top of /etc/pam.d/sshd:

auth required pam_google_authenticator.so nullok

I also used the no_increment_hotp and debug options while debugging, to avoid having to renew the token all the time and to have more information about failures in the logs.

Then reload ssh (not sure that's actually necessary):

service ssh reload

Creating or replacing tokens

To create a new key, run this command on the server:

google-authenticator -c

This will prompt you for a bunch of questions. To get them all right, I prefer to just call the right ones on the commandline directly:

google-authenticator --counter-based --qr-mode=NONE --rate-limit=1 --rate-time=30 --emergency-codes=1 --window-size=3

Those are actually the defaults, if my memory serves me right, except for the --qr-mode and --emergency-codes (which can't be disabled so I only print one). I disable the QR code display because I won't be using the codes on my phone, but you would obviously keep it if you want to use the app.

Converting to a Yubikey-compatible secret

Unfortunately, the encoding (base32) produced by the google-authenticator command is not compatible with the token expected by the ykpersonalize command used to configure the Yubikey (base16 AKA "hexadecimal", with a fixed 20-byte length). So you need a way to convert between the two. I wrote a program called oath-convert which basically does this:

read base32
add padding
convert to hex
print

Or, in Python:

import base64
import binascii

def convert_b32_b16(data_b32):
    remainder = len(data_b32) % 8
    if remainder > 0:
        # XXX: assume 6 chars are missing, the actual padding may vary:
        # https://tools.ietf.org/html/rfc3548#section-5
        data_b32 += "======"
    data_b16 = base64.b32decode(data_b32)
    if len(data_b16) < 20:
        # pad to 20 bytes
        data_b16 += b"\x00" * (20 - len(data_b16))
    return binascii.hexlify(data_b16).decode("ascii")
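
For illustration, a minimal command-line wrapper around convert_b32_b16 could look like the sketch below; this is my own reconstruction, not necessarily how the actual oath-convert script is written:

import sys

def main():
    # the secret is the first line of ~/.google_authenticator, fed to us on stdin
    secret_b32 = sys.stdin.readline().strip()
    print(convert_b32_b16(secret_b32))

if __name__ == "__main__":
    main()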

Note that the code assumes a certain token length and will not work correctly for other sizes. To use the program, simply call it with:

head -1 .google_authenticator | oath-convert

Then you paste the output in the prompt:

$ ykpersonalize -1 -o oath-hotp -o append-cr -a
Firmware version 3.4.3 Touch level 1541 Program sequence 2
 HMAC key, 20 bytes (40 characters hex) : [SECRET GOES HERE]

Configuration data to be written to key configuration 1:

fixed: m:
uid: n/a
key: h:[SECRET REDACTED]
acc_code: h:000000000000
OATH IMF: h:0
ticket_flags: APPEND_CR|OATH_HOTP
config_flags: 
extended_flags: 

Commit? (y/n) [n]: y

Note that you must NOT pass the -o oath-hotp8 parameter to the ykpersonalize commandline, which we used to do in the Yubikey howto. That is because Google Authenticator tokens are shorter: it's less secure, but it's an acceptable tradeoff considering the plugin is actually maintained. There's actually a feature request to support 8-digit codes so that limitation might eventually be fixed as well.

Thanks to the Google Authenticator people and Yubikey people for their support in establishing this procedure.

Planet DebianRussell Coker: Video Decoding

I’ve had a saga of getting 4K monitors to work well. My latest issue has been video playing, the dreaded mplayer error about the system being too slow. My previous post about 4K was about using DisplayPort to get more than 30Hz scan rate at 4K [1]. I now have a nice 60Hz scan rate which makes WW2 documentaries display nicely among other things.

But when running a 4K monitor on a 3.3GHz i5-2500 quad-core CPU I can’t get a FullHD video to display properly. Part of the process of decoding the video and scaling it to 4K resolution is too slow, so action scenes in movies lag. When running a 2560*1440 monitor on a 2.4GHz E5-2440 hex-core CPU with the mplayer option “-lavdopts threads=3” everything is great (but it fails if mplayer is run with no parameters). In doing tests with apparent performance it seemed that the E5-2440 CPU gains more from the threaded mplayer code than the i5-2500, maybe the E5-2440 is more designed for server use (it’s in a Dell PowerEdge T320 while the i5-2500 is in a random white-box system) or maybe it’s just because it’s newer. I haven’t tested whether the i5-2500 system could perform adequately at 2560*1440 resolution.

The E5-2440 system has an ATI HD 6570 video card which is old, slow, and only does PCIe 2.1, which gives 5GT/s per lane or 8GB/s for a x16 slot. The i5-2500 system has a newer ATI video card that is capable of PCIe 3.0, but “lspci -vv” as root says “LnkCap: Port #0, Speed 8GT/s, Width x16” and “LnkSta: Speed 5GT/s (downgraded), Width x16 (ok)”. So for reasons unknown to me the system with a faster PCIe 3.0 video card is being downgraded to PCIe 2.1 speed. A quick check of the web site for my local computer store shows that all ATI video cards costing less than $300 have PCIe 3.0 interfaces and the sole ATI card with PCIe 4.0 (which gives double the PCIe speed if the motherboard supports it) costs almost $500. I’m not inclined to spend $500 on a new video card and then a greater amount of money on a motherboard supporting PCIe 4.0 and CPU and RAM to go in it.

According to my calculations 3840*2160 resolution at 24bpp (probably 32bpp data transfers) at 30 frames/sec means 3840*2160*4*30/1024/1024=950MB/s. PCIe 2.1 can do 8GB/s so that probably isn’t a significant problem.
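
The same back-of-the-envelope calculation as a tiny Python snippet, for anyone who wants to tweak the numbers (the resolution, 32bpp transfers and 30 frames/sec are the assumptions stated above):

width, height, bytes_per_pixel, fps = 3840, 2160, 4, 30
mb_per_sec = width * height * bytes_per_pixel * fps / 1024 / 1024
print(f"{mb_per_sec:.0f} MB/s")  # roughly 950 MB/s, well within PCIe 2.1's 8GB/s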

I’d been planning on buying a new video card for the E5-2440 system, but due to some combination of having a better CPU and lower screen resolution it is working well for video playing so I can save my money.

As an aside the T320 is a server class system that had been running for years in a corporate DC. When I replaced the high speed SAS disks with SATA SSDs it became quiet enough for a home workstation. It works very well at that task but the BIOS is quite determined to keep the motherboard video running due to the remote console support. So swapping monitors around was more pain than I felt like going through, I just got it working and left it. I ordered a special GPU power cable, but before it arrived I found that the older video card (which doesn’t need an extra power cable) performs adequately.

Here is a table comparing the systems.

              2560*1440 works well    3840*2160 goes slow
System        Dell PowerEdge T320     White Box PC from rubbish
CPU           2.4GHz E5-2440          3.3GHz i5-2500
Video Card    ATI Radeon HD 6570      ATI Radeon R7 260X
PCIe Speed    PCIe 2.1 – 8GB/s        PCIe 3.0 downgraded to PCIe 2.1 – 8GB/s

Conclusion

The ATI Radeon HD 6570 video card is one that I had previously tested and found inadequate for 4K support, I can’t remember if it didn’t work at that resolution or didn’t support more than 30Hz scan rate. If the 2560*1440 monitor dies then it wouldn’t make sense to buy anything less than a 4K monitor to replace it which means that I’d need to get a new video card to match. But for the moment 2560*1440 is working well enough so I won’t upgrade it any time soon. I’ve already got the special power cable (specified as being for a Dell PowerEdge R610 for anyone else who wants to buy one) so it will be easy to install a powerful video card in a hurry.

Worse Than FailureCodeSOD: Don't Not Be Negative

One of my favorite illusions is the progress bar. Even the worst, most inaccurate progress bar will make an application feel faster. The simple feedback which promises "something is happening" alters the users' sense of time.

So, let's say you're implementing a JavaScript progress bar. You need to decide if you are "in progress" or not. So you need to check: if there is a progress value, and the progress value is less than 100, you're still in progress.

Mehdi's co-worker decided to implement that check as… the opposite.

const isInProgress = progress => !(!progress || (progress && progress > 100))

This is one of those lines of code where you can just see the developer's process, encoded in each choice made. "Okay, we're not in progress if progress doesn't have a value: !(!progress). Or we're not in progress if progress has a value and that value is over 100."

There's nothing explicitly wrong with this code. It's just the most awkward, backwards possible way to express that check. I suspect that part of its tortured logic arises from the fact that the developer wanted to return false if the value was null or undefined, and this was the way they figured out to do that.

Of course, a more straightforward way to write that might be (progress) => (progress && progress <= 100) || false. This will have "unexpected" behavior if the progress value is negative, but then again, so will the original code.
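
If you want to convince yourself the two forms really do agree (negative values included), a quick transliteration makes it easy to check; the sketch below uses Python, with Python's truthiness standing in for JavaScript's, so it is only an approximation of the original:

def in_progress_original(progress):
    # direct transliteration of the double-negative check
    return not (not progress or (progress and progress > 100))

def in_progress_simple(progress):
    # the straightforward positive formulation
    return bool(progress) and progress <= 100

for value in [None, 0, 1, 50, 100, 101, -5]:
    assert in_progress_original(value) == in_progress_simple(value)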

In the end, this is just a story of a double negative. I definitely won't say you should never not use a double negative. Don't not avoid them.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Krebs on SecurityQAnon/8Chan Sites Briefly Knocked Offline

A phone call to an Internet provider in Oregon on Sunday evening was all it took to briefly sideline multiple websites related to 8chan/8kun — a controversial online image board linked to several mass shootings — and QAnon, the far-right conspiracy theory which holds that a cabal of Satanic pedophiles is running a global child sex-trafficking ring and plotting against President Donald Trump. Following a brief disruption, the sites have come back online with the help of an Internet company based in St. Petersburg, Russia.

The IP address range in the upper-right portion of this map of QAnon and 8kun-related sites — 203.28.246.0/24 — is assigned to VanwaTech and briefly went offline this evening. Source: twitter.com/Redrum_of_Crows.

A large number of 8kun and QAnon-related sites (see map above) are connected to the Web via a single Internet provider in Vancouver, Wash. called VanwaTech (a.k.a. “OrcaTech”). Previous appeals to VanwaTech to disconnect these sites have fallen on deaf ears, as the company’s owner Nick Lim reportedly has been working with 8kun’s administrators to keep the sites online in the name of protecting free speech.

But VanwaTech also had a single point of failure on its end: The swath of Internet addresses serving the various 8kun/QAnon sites were being protected from otherwise crippling and incessant distributed-denial-of-service (DDoS) attacks by Hillsboro, Ore. based CNServers LLC.

On Sunday evening, security researcher Ron Guilmette placed a phone call to CNServers’ owner, who professed to be shocked by revelations that his company was helping QAnon and 8kun keep the lights on.

Within minutes of that call, CNServers told its customer — Spartan Host Ltd., which is registered in Belfast, Northern Ireland — that it would no longer be providing DDoS protection for the set of 254 Internet addresses that Spartan Host was routing on behalf of VanwaTech.

Contacted by KrebsOnSecurity, the person who answered the phone at CNServers asked not to be named in this story for fear of possible reprisals from the 8kun/QAnon crowd. But they confirmed that CNServers had indeed terminated its service with Spartan Host. That person added they weren’t a fan of either 8kun or QAnon, and said they would not self-describe as a Trump supporter.

CNServers said that shortly after it withdrew its DDoS protection services, Spartan Host changed its settings so that VanwaTech’s Internet addresses were protected from attacks by ddos-guard[.]net, a company based in St. Petersburg, Russia.

Spartan Host’s founder, 25-year-old Ryan McCully, confirmed CNServers’ report. McCully declined to say for how long VanwaTech had been a customer, or whether Spartan Host had experienced any attacks as a result of CNServers’ action.

McCully said while he personally doesn’t subscribe to the beliefs espoused by QAnon or 8kun, he intends to keep VanwaTech as a customer going forward.

“We follow the ‘law of the land’ when deciding what we allow to be hosted with us, with some exceptions to things that may cause resource issues etc.,” McCully said in a conversation over instant message. “Just because we host something, it doesn’t say anything about we do and don’t support, our opinions don’t come into hosted content decisions.”

But according to Guilmette, Spartan Host’s relationship with VanwaTech wasn’t widely known previously because Spartan Host had set up what’s known as a “private peering” agreement with VanwaTech. That is to say, the two companies had a confidential business arrangement by which their mutual connections were not explicitly stated or obvious to other Internet providers on the global Internet.

Guilmette said private peering relationships often play a significant role in a good deal of behind-the-scenes-mischief when the parties involved do not want anyone else to know about their relationship.

“These arrangements are business agreements that are confidential between two parties, and no one knows about them, unless you start asking questions,” Guilmette said. “It certainly appears that a private peering arrangement was used in this instance in order to hide the direct involvement of Spartan Host in providing connectivity to VanwaTech and thus to 8kun. Perhaps Mr. McCully was not eager to have his involvement known.”

8chan, which rebranded last year as 8kun, has been linked to white supremacism, neo-Nazism, antisemitism, multiple mass shootings, and is known for hosting child pornography. After three mass shootings in 2019 revealed the perpetrators had spread their manifestos on 8chan and even streamed their killings live there, 8chan was ostracized by one Internet provider after another.

The FBI last year identified QAnon as a potential domestic terror threat, noting that some of its followers have been linked to violent incidents motivated by fringe beliefs.

Further reading:

What Is QAnon?

QAnon: A Timeline of Violence Linked to the Conspiracy Theory

Planet DebianLouis-Philippe Véronneau: Musings on long-term software support and economic incentives

Although I still read a lot, during my college sophomore years my reading habits shifted from novels to more academic works. Indeed, reading dry textbooks and economic papers for classes often kept me from reading anything else substantial. Nowadays, I tend to binge read novels: I won't touch a book for months on end, and suddenly, I'll read 10 novels back to back1.

At the start of a novel binge, I always follow the same ritual: I take out my e-reader from its storage box, marvel at the fact the battery is still pretty full, turn on the WiFi and check if there are OS updates. And I have to admit, Kobo Inc. (now Rakuten Kobo) has done a stellar job of keeping my e-reader up to date. I've owned this model (a Kobo Aura 1st generation) for 7 years now and I'm still running the latest version of Kobo's Linux-based OS.

Having recently had trouble updating my Nexus 5 (also manufactured 7 years ago) to Android 102, I asked myself:

Why is my e-reader still getting regular OS updates, while Google stopped issuing security patches for my smartphone four years ago?

To try to answer this, let us turn to economic incentives theory.

Although not the be-all and end-all some think it is3, incentives theory is not a bad tool to analyse this particular problem. Executives at Google most likely followed a very business-centric logic when they decided to drop support for the Nexus 5. Likewise, Rakuten Kobo's decision to continue updating older devices certainly had very little to do with ethics or loyalty to their user base.

So, what are the incentives that keep Kobo updating devices and why are they different than smartphone manufacturers'?

A portrait of the current long-term software support offerings for smartphones and e-readers

Before delving deeper in economic theory, let's talk data. I'll be focusing on 2 brands of e-readers, Amazon's Kindle and Rakuten's Kobo. Although the e-reader market is highly segmented and differs a lot based on geography, Amazon was in 2015 the clear worldwide leader with 53% of the worldwide e-reader sales, followed by Rakuten Kobo at 13%4.

On the smartphone side, I'll be differentiating between Apple's iPhones and Android devices, taking Google as the barometer for that ecosystem. As mentioned below, Google is sadly the leader in long-term Android software support.

Rakuten Kobo

According to their website and to this Wikipedia table, the only e-readers Kobo has deprecated are the original Kobo eReader and the Kobo WiFi N289, both released in 2010. This makes their oldest still supported device the Kobo Touch, released in 2011. In my book, that's a pretty good track record. Long-term software support does not seem to be advertised or to be a clear selling point in their marketing.

Amazon

According to their website, Amazon has dropped support for all 8 devices produced before the Kindle Paperwhite 2nd generation, first sold in 2013. To put things in perspective, the first Kindle came out in 2007, 3 years before Kobo started selling devices. Like Rakuten Kobo, Amazon does not make promises of long-term software support as part of their marketing.

Apple

Apple has a very clear software support policy for all their devices:

Owners of iPhone, iPad, iPod or Mac products may obtain a service and parts from Apple or Apple service providers for five years after the product is no longer sold – or longer, where required by law.

This means in the worst-case scenario of buying an iPhone model just as it is discontinued, one would get a minimum of 5 years of software support.

Android

Google's policy for their Android devices is to provide software support for 3 years after the launch date. If you buy a Pixel device just before the new one launches, you could theoretically only get 2 years of support. In 2018, Google decided OEMs would have to provide security updates for at least 2 years after launch, threatening not to license Google Apps and the Play Store if they didn't comply.

A question of cost structure

From the previous section, we can conclude that in general, e-readers seem to be supported longer than smartphones, and that Apple does a better job than Android OEMs, providing support for about twice as long.

Even Fairphone, whose entire business is to build phones designed to last and to be repaired, was not able to keep the Fairphone 1 (2013) updated for more than a couple of years and seems to be struggling to keep the Fairphone 2 (2015) running an up-to-date version of Android.

Anyone who has ever worked in IT will tell you: maintaining software over time is hard work and hard work by specialised workers is expensive. Most commercial electronic devices are sold and developed by for-profit enterprises and software support all comes down to a question of cost structure. If companies like Google or Fairphone are to be expected to provide long-term support for the devices they manufacture, they have to be able to fund their work somehow.

In a perfect world, people would be paying for the cost of said long-term support, as it would likely be cheaper than buying new devices every few years and would certainly be better for the planet. Problem is, manufacturers aren't making them pay for it.

Economists call this type of problem externalities: things that should be part of the cost of a good, but aren't for one reason or another. A classic example of an externality is pollution. Clearly pollution is bad and leads to horrendous consequences, like climate change. Sane people agree we should drastically cut our greenhouse gas emissions, and yet, we aren't.

Neo-classical economic theory argues the way to fix externalities like pollution is to internalise these costs, in other words, to make people pay for the "real price" of the goods they buy. In the case of climate change and pollution, neo-classical economic theory is plain wrong (spoiler alert: it often is), but this is where band-aids like the carbon tax come from.

Still, coming back to long-term software support, let's see what would happen if we were to try to internalise software maintenance costs. We can do this multiple ways.

1 - Include the price of software maintenance in the cost of the device

This is the choice Fairphone makes. This might somewhat work out for them since they are a very small company, but it cannot scale for the following reasons:

  1. This strategy relies on you giving your money to an enterprise now, and trusting them to "Do the right thing" years later. As the years go by, they will eventually look at their books, see how much ongoing maintenance is costing them, drop support for the device, apologise and move on. That is to say, enterprises have a clear economic incentive to promise long-term support and not deliver. One could argue a company's reputation would suffer from this kind of behaviour. Maybe sometimes it does, but most often people forget. Political promises are a great example of this.

  2. Enterprises go bankrupt all the time. Even if company X promises 15 years of software support for their devices, if they cease to exist, your device will stop getting updates. The internet is full of stories of IoT devices getting bricked when the parent company goes bankrupt and their servers disappear. This is related to point number 1: to some degree, you have a disincentive to pay for long-term support in advance, as the future is uncertain and there are chances you won't get the support you paid for.

  3. Selling your devices at a higher price to cover maintenance costs does not necessarily mean you will make more money overall — raising more money to fund maintenance costs being the goal here. To a certain point, smartphone models are substitute goods and prices higher than market prices will tend to drive consumers to buy cheaper ones. There is thus a disincentive to include the price of software maintenance in the cost of the device.

  4. People tend to be bad at rationalising the total cost of ownership over a long period of time. Economists call this phenomenon hyperbolic discounting. In our case, it means people are far more likely to buy a 500$ phone every 3 years than a 1000$ phone every 10 years. Again, this means OEMs have a clear disincentive to include the price of long-term software maintenance in their devices.

Clearly, life is more complex than how I portrayed it: enterprises are not perfect rational agents, altruism exists, not all enterprises aim solely for profit maximisation, etc. Still, in a capitalist economy, enterprises wanting to charge for software maintenance upfront have to overcome these hurdles one way or another if they want to avoid failing.

2 - The subscription model

Another way companies can try to internalise support costs is to rely on a subscription-based revenue model. This has multiple advantages over the previous option, mainly:

  1. It does not affect the initial purchase price of the device, making it easier to sell them at a competitive price.

  2. It provides a stable source of income, something that is very valuable to enterprises, as it reduces overall risks. This in return creates an incentive to continue providing software support as long as people are paying.

If this model is so interesting from an economic incentives point of view, why isn't any smartphone manufacturer offering that kind of program? The answer is, they are, but not explicitly5.

Apple and Google can fund part of their smartphone software support via the 30% cut they take out of their respective app stores. A report from Sensor Tower shows that in 2019, Apple made an estimated US$ 16 billion from the App Store, while Google raked in US$ 9 billion from the Google Play Store. Although the Fortune 500 ranking tells us this respectively is "only" 5.6% and 6.5% of their gross annual revenue for 2019, the profit margins in this category are certainly higher than any of their other products.

This means Google and Apple have an important incentive to keep your device updated for some time: if your device works well and is updated, you are more likely to keep buying apps from their store. When software support for a device stops, there is a risk paying customers will buy a competitor device and leave their ecosystem.

This also explains why OEMs who don't own app stores tend not to provide software support for very long periods of time. Most of them only make money when you buy a new phone. Providing long-term software support thus becomes a disincentive, as it directly reduces their sale revenues.

Same goes for Kindles and Kobos: the longer your device works, the more money they make with their electronic book stores. In my opinion, it's likely Amazon and Rakuten Kobo produce quarterly cost-benefit reports to decide when to drop support for older devices, based on ongoing support costs and the recurring revenues these devices bring in.

Rakuten Kobo is also in a more precarious situation than Amazon is: considering Amazon's very important market share, if your device stops getting new updates, there is a greater chance people will replace their old Kobo with a Kindle. Again, they have an important economic incentive to keep devices running as long as they are profitable.

Can Free Software fix this?

Yes and no. Free Software certainly isn't a magic wand one can wave to make everything better, but does provide major advantages in terms of security, user freedom and sometimes costs. The last piece of the puzzle explaining why Rakuten Kobo's software support is better than Google's is technological choices.

Smartphones are incredibly complex devices and have become the main computing platform of many. Similar to the web, there is a race for features and complexity that tends to create bloat and make older devices slow and painful to use. On the other hand, e-readers are simpler devices built for a single task: display electronic books.

Control over the platform is also a key aspect of the cost structure of providing software updates. Whereas Apple controls both the software and hardware side of iPhones, Android is a sad mess of drivers and SoCs, all providing different levels of support over time6.

If you take a look at the platforms the Kindle and Kobo are built on, you'll quickly see they both use Freescale I.MX SoCs. These processors are well known for their excellent upstream support in the Linux kernel and their relative longevity, chips being produced for either 10 or 15 years. This in turn makes updates much easier and less expensive to provide.

So clearly, open architectures, free drivers and open hardware helps tremendously, but aren't enough on their own. One of the lessons we must learn from the (amazing) LineageOS project is how lack of funding hurts everyone.

If there is no one to do the volunteer work required to maintain a version of LOS for your device, it won't be supported. Worse, when purchasing a new device, users cannot know in advance how many years of LOS support they will get. This makes buying new devices a frustrating hit-and-miss experience. If you are lucky, you will get many years of support. Otherwise, you risk your device becoming an expensive insecure paperweight.

So how do we fix this? Anyone with a brain understands throwing away perfectly good devices every 2 years is not sustainable. Government regulations enforcing a minimum support life would be a step in the right direction, but at the end of the day, Capitalism is to blame. Like the aforementioned carbon tax, band-aid solutions can make things somewhat better, but won't fix our current economic system's underlying problems.

For now though, I'll leave fixing the problem of Capitalism to someone else.


  1. My most recent novel binge has been focused on re-reading the Dune franchise. I first read the 6 novels written by Frank Herbert when I was 13 years old and only had vague and pleasant memories of his work. Great stuff. 

  2. I'm back on LineageOS! Nice folks released an unofficial LOS 17.1 port for the Nexus 5 last January and have kept it updated since then. If you are to use it, I would also recommend updating TWRP to this version specifically patched for the Nexus 5. 

  3. Very few serious economists actually believe neo-classical rational agent theory is a satisfactory explanation of human behavior. In my opinion, it's merely a (mostly flawed) lens to try to interpret certain behaviors, a tool amongst others that needs to be used carefully, preferably as part of a pluralism of approaches

  4. Good data on the e-reader market is hard to come by and is mainly produced by specialised market research companies selling their findings at very high prices. Those particular statistics come from a MarketWatch analysis. 

  5. If they were to tell people: You need to pay us 5$/month if you want to receive software updates, I'm sure most people would not pay. Would you? 

  6. Coming back to Fairphones, if they had so much problems providing an Android 9 build for the Fairphone 2, it's because Qualcomm never provided Android 7+ support for the Snapdragon 801 SoC it uses. 

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 19)

Here’s part nineteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:


Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the schedule for the Attack Surface lectures


Here’s the list of schools and other institutions in need of donated copies of Attack Surface.

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Planet DebianAntoine Beaupré: CDPATH replacements

After reading this post I figured I might as well bite the bullet and improve on my CDPATH-related setup, especially because it does not work with Emacs. So I looked around for autojump-related alternatives that do.

What I use now

I currently have this in my .shenv (sourced by .bashrc):

export CDPATH=".:~:~/src:~/dist:~/wikis:~/go/src:~/src/tor"

This allows me to quickly jump into projects from my home dir, or the "source code" (~/src), "work" (~/src/tor), or wiki checkouts (~/wikis) directories. It works well from the shell, but unfortunately it's very static: if I want a new directory, I need to edit my config file, restart shells, etc. It also doesn't work from my text editor.
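
For example, with that CDPATH a bare cd argument is looked up in each entry in turn, and bash prints the resolved path when it matches a non-local entry. A quick hypothetical session (the paths are made up for illustration):

$ pwd
/tmp
$ cd wikis    # no ./wikis here, so the ~ entry matches and we land in ~/wikis
/home/user/wikis
$ cd arti     # resolved against ~/src/tor, assuming such a checkout exists there
/home/user/src/tor/arti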

Shell jumpers

Those are commandline tools that can be used from a shell, generally with built-in shell integration so that a shell alias will find the right directory magically, usually by keeping track of the directories visited with cd.
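
The core idea behind these tools, stripped of their frequency/recency scoring and shell completion, can be sketched in a few lines of bash (an illustration of the concept, not the actual code of any of them):

# record every directory visited...
cd() {
    builtin cd "$@" || return
    pwd >> ~/.cd_history
}

# ...and jump to the most recently visited directory matching a pattern
# (no error handling if nothing matches)
j() {
    builtin cd "$(grep -- "$1" ~/.cd_history | tail -n 1)"
}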

Some of those may or may not have integration in Emacs.

autojump

fasd

z

fzf

Emacs plugins not integrated with the shell

Those projects can be used to track files inside a project or find files around directories, but do not offer the equivalent functionality in the shell.

projectile

elpy

  • home page
  • elpy has a notion of projects, so, by default, will find files in the current "project" with C-c C-f which is useful

bookmarks.el

  • built-in
  • home page
  • "Bookmarks record locations so you can return to them later"

recentf

  • built-in
  • home page
  • "builds a list of recently opened files. This list is is automatically saved across sessions on exiting Emacs - you can then access this list through a command or the menu"

References

https://www.emacswiki.org/emacs/LocateFilesAnywhere

Planet DebianIustin Pop: Serendipity

To start off, let me say it again: I hate light pollution. I really, really hate it. I love the night sky where you look up and see thousands of stars, and constellations besides Ursa Major. As somebody said once, “You haven’t lived until you’ve seen your shadow by the light of the Milky Way”.

But, ahem, I live in a large city, and despite my attempts using star trackers, special filters, etc. you simply can’t escape it. So, whenever we go on vacation in the mountains, I’m trying to think if I can do a bit of astro-photography (not that I’m good at it).

Which brings me to our recent vacation up in the mountains. I was looking forward to it, until the week before, when the weather forecast kept switching between snow, rain and overcast for the entire week. No actual day or night with clear skies, so… I didn’t take a tripod, I didn’t take a wide lens, and put night photography out of my mind.

Vacation itself was good, especially the quietness of the place, so I usually went to bed early-ish and didn’t look outside. The weather was as forecasted - no new snow (but there was enough up in the mountains), but heavy clouds all the time, and the sun only showed itself for a few minutes at a time.

One night I was up a bit longer than usual, working on the laptop and being very annoyed by a buzzing sound. At first I thought maybe I was imagining it, but from time to time it was stopping briefly, so it was a real noise; I started hunting for the source. Not my laptop, not the fridge, not the TV… but it was getting stronger near the window. I open the door to the balcony, and… bam! Very loud noise, from the hotel nearby, where — at midnight — the pool was being cleaned. I look at the people doing the work, trying to estimate how long it’ll be until they finish, but it was looking like a long time.

Fortunately with the door closed the noise was not bad enough to impact my sleep, so I debate getting angry or just resigned, and since it was late, I just sigh, roll my eyes — not metaphorically, but actually roll my eyes and look up, and I can’t believe my eyes. Completely clear sky, no trace of clouds anywhere, and… stars. Lots of stars. I sit there, looking at the sky and enjoying the view, and I think to myself that it won’t look that nice on the camera, for sure. Especially without a real tripod, and without a fast lens.

Nevertheless, I grab my camera and — just for kicks — take one handheld picture. To my surprise (and almost disbelief), blurry pixels aside, the photo does look like what I was seeing, so I grab my tiny tripod that I carried along, and (with only a 24-70 zoom lens), grab a photo. And another, and another and then I realise that if I can make the composition work, and find a good shutter speed, this can turn out a good picture.

I didn’t have a remote release, the tripod was not very stable and it cannot point the camera upwards (it’s basically an emergency tripod), so it was quite sub-optimal; still, I try multiple shots (different compositions, different shutter speeds); they look pretty good on the camera screen and on the phone, so just for safety I take a few more, and, very happy, go to bed.

Coming back from vacation, on the large monitor, it turns out that the first 28 out of the 30 pictures were either blurry or not well focused (as I was focusing manually), and the 29th was almost OK but still not very good. Only the last, the really last picture, was technically good and also composition-wise OK. Luck? Foresight? Don’t know, but it was worth deleting 28 pictures to get this one. One of my best night shots, despite being so unprepared.

Stars! Lots of stars! And mountains…

Of course, compared to other people’s pictures, this is not special. But for me, it will be a keepsake of what a real night sky should look like.

If you want to zoom in, higher resolution on flickr.

Technically, the challenges for the picture were two-fold:

  • fighting the shutter speed; the light was not the problem, but rather the tripod and lack of remote release: a short shutter speed will magnify tripod issues/movement from the release (although I was using delayed release on the camera), but will prevent star trails, and a long shutter speed will do the exact opposite; in the end, at the focal length I was using, I settled on a 5 second shutter speed.
  • composition: due to the presence of the mountains (which I couldn’t avoid by tilting the camera fully up), this was for me a difficult thing, since it’s more on the artistic side, which is… very subjective; in the end, this turned out fine (I think), but mostly because I took pictures from many different perspectives.

Next time when travelling by car, I’ll surely take a proper tripod ☺

Until next time, clear and dark skies…

Cory DoctorowStop Techno Dystopia with SRSLY WRONG

SRSLY WRONG is a leftist/futuristic podcast incorporating sketches in long-form episodes; I became aware of them last year when Michael Pulsford recommended their series on “library socialism”, an idea I was so stricken by that it made its way into The Lost Cause, a novel I’m writing now. The Wrong Boys invited me on for an episode (Stop Techno Dystopia!) (MP3) as part of the Attack Surface tour and it came out so, so good! Thanks, Wrong Boys!

Cory DoctorowMy appearance on the Judge John Hodgman podcast!

I’ve been a fan of the Judge John Hodgman podcast for so many years, and often threaten my wife with bringing a case before the judge whenever we have a petty disagreement. I was so pleased to appear on the JJHO podcast (MP3) this week as part of the podcast tour for Attack Surface!

Cory DoctorowTalking writing with the Writing Excuses crew

A million years ago, I set sail on the Writing Excuses Cruise, a writing workshop at sea. As part of that workshop, I sat down with the Writing Excuses podcast team (Mary Robinette Kowal, Piper J Drake, and Howard Taylor) and recorded a series of short episodes explaining my approach to writing. I had clean forgotten that they saved one to coincide with the release of Attack Surface, until this week’s episode went live (MP3). Listening to it today, I discovered that it was incredibly entertaining!

Planet DebianFrançois Marier: Using a Let's Encrypt TLS certificate with Asterisk 16.2

In order to fix the following error after setting up SIP TLS in Asterisk 16.2:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

I created a Let's Encrypt certificate using certbot:

apt install certbot
certbot certonly --standalone -d hostname.example.com

To enable the asterisk user to load the certificate successfully (it doesn't have permission to access the certificates under /etc/letsencrypt/), I copied it to the right directory:

cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key

Then I set the following variables in /etc/asterisk/sip.conf:

tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key

Automatic renewal

The machine on which I run asterisk has a tricky Apache setup:

  • a webserver is running on port 80
  • port 80 is restricted to the local network

This meant that the certbot domain ownership checks would get blocked by the firewall, and I couldn't open that port without exposing the private webserver to the Internet.

So I ended up disabling the built-in certbot renewal mechanism:

systemctl disable certbot.timer certbot.service
systemctl stop certbot.timer certbot.service

and then writing my own script in /etc/cron.daily/certbot-francois:

#!/bin/bash
TEMPFILE=`mktemp`

# Stop Apache and backup firewall.
/bin/systemctl stop apache2.service
/usr/sbin/iptables-save > $TEMPFILE

# Open up port 80 to the whole world.
/usr/sbin/iptables -D INPUT -j LOGDROP
/usr/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables -A INPUT -j LOGDROP

# Renew all certs.
/usr/bin/certbot renew --quiet

# Restore firewall and restart Apache.
/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables-restore < $TEMPFILE
/bin/systemctl start apache2.service

# Copy certificate into asterisk.
cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
/bin/systemctl restart asterisk.service

# Commit changes to etckeeper.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt asterisk
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs."
    echo "$DIFFSTAT"
fi
popd > /dev/null

,

Planet DebianDirk Eddelbuettel: digest 0.6.26: Blake3 and Tuning

And a new version of digest is now on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 896k monthly downloads, 279 direct reverse dependencies and 8057 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release brings two nice contributed updates. Dirk Schumacher added support for blake3 (though we could probably push this a little harder for performance, help welcome). Winston Chang benchmarked and tuned some of the key base R parts of the package. Last but not least, I flipped the vignette to the lovely minidown, updated the Travis CI setup using bspm (as previously blogged about in r4 #30), and added a package website using Material for MkDocs.

My CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJunichi Uekawa: Troubleshooting your audio input.

Troubleshooting your audio input. When doing video conferencing, sometimes I hear that the remote end isn't doing very well. Especially when your friend tells you they bought a new mic and it didn't sound good: they might be using the wrong configuration on the OS and actually using the other mic, or they might have a constant noise source in the room that affects the video conferencing noise-cancelling algorithms. Yes, noise-cancelling algorithms aren't perfect, because detecting what is noise is heuristic, so it's better to have a low level of noise to begin with. Here is the app. I have a video to demonstrate.

,

Cryptogram Cybersecurity Visuals

The Hewlett Foundation just announced its top five ideas in its Cybersecurity Visuals Challenge. The problem Hewlett is trying to solve is the dearth of good visuals for cybersecurity. A Google Images Search demonstrates the problem: locks, fingerprints, hands on laptops, scary looking hackers in black hoodies. Hewlett wanted to go beyond those tropes.

I really liked the idea, but find the results underwhelming. It’s a hard problem.

Hewlett press release.

Cryptogram Split-Second Phantom Images Fool Autopilots

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

[…]

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.

The paper:

Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Telsa Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.

Planet DebianYves-Alexis Perez: iOS 14 USB tethering broken on Linux: looking for documentation and contact at Apple

It's a bit of a long shot, but maybe someone on Planet Debian or elsewhere can help us reach the right people at Apple.

Starting with iOS 14, something apparently changed in the way USB tethering (also called Personal Hotspot) is set up, which broke it for people using Linux. The driver in use is ipheth, developed in 2009 and included in the Linux kernel in 2010.

The kernel driver negotiates over USB with the iOS device in order to set up the link. The protocol used by both parties to communicate doesn't really seem to be documented publicly; it has evolved over time and across iOS versions, and the Linux driver hasn't been kept up to date. On macOS and Windows the driver apparently comes with iTunes, and Apple engineers obviously know how to communicate with iOS devices, so iOS 14 is supported just fine.

There's an open bug on libimobiledevice (the set of userland tools used to communicate with iOS devices, although the update should be done in the kernel), with some debugging and communication logs between Windows and an iOS device, but so far no real progress has been made. The link is enabled, the host gets an IP from the device, can ping the device IP and can even resolve names using the device's DNS resolver, but IP forwarding seems disabled: no packet goes farther than the device itself.
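
To illustrate the symptom, here's a rough way to check whether you are affected (interface names and addresses vary; iOS personal hotspots usually hand out addresses in 172.20.10.0/28 with the device at .1):

ip addr                # the iPhone shows up as an extra USB ethernet interface with an address
ping -c 1 172.20.10.1  # the device itself answers
ping -c 1 8.8.8.8      # on iOS 14 with the current ipheth driver, nothing beyond the device does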

That means a lot of people upgrading to iOS 14 will suddenly lose USB tethering. While Wi-Fi and Bluetooth connection sharing still works, it's still suboptimal, so it'd be nice to fix the kernel driver and support the latest protocol used in iOS 14.

If someone knows the right contact (or the right way to contact them) at Apple so we can have access to some kind of documentation on the protocol and the state machine to use, please reach out (either on the libimobiledevice bug or at my email address below).

Thanks!

Worse Than FailureError'd: Try a Different Address

"Aldi doesn't deliver to a ...computer memory address!? Ok, that sounds fair," wrote Oisin.

 

"When you order pictures from this photography studio, you find that what sets them apart from other studios is group photography and not web site design," Stephen D. wrote.

 

John H. writes, "To be honest, I'm a little bit surprised. I figured for sure that it would have been a 50/50 split between meat and booze."

 

"I was just trying to perform a trivial merge using GVim, but then ṕ̵̡̛̃̍̊͊̌̉h̸̢̦̭̟̿̉̔̔͛̋̓̑̉́͌͊̒̕͜'̸̛͎̹̦͔̳͎̹͎̥̻̺̦̳͈̞̃̿̐́́͑̒͛̀̋̑̕͠n̸̨̛͉̥̻̘̣̞̉́͂̌͋́̾͗͠g̵̣̫̯̈̈́̕l̷͍̳͚̻̺̀̊͆̂͒͐̀̔́͘͜͝͝͝ừ̷̯̬̜̱̭̳̞͇̠̄̎͗͝į̴̗̺͓̣̘͚̯̳͕̠̗̮̄͛͌̍̈́͌͊͌̓̈́͌̌͝ ̵̼̼̥͍̙̋̇͂͋͐͐͂m̶̥̭͍͕͍̑̏g̸̡͖̜̜̭͈͖͇̦͍͉͚̎͜ͅl̵̡͉̜̀̂̎́͐̑̑͑̍̚ŵ̴̡͚̝̣̬̭̥̙͎̻̽͝'̸̼̩̑́̄͆̾͆̿́̕ṅ̶̛̯͍̰͒́̂̎ǎ̶͇̬̲͕͍̻̞̫̻͕͈͔̍͌͆͌̈̓͛̀̌̿̚͝͠f̸͎͖̰̖̫̪͎̄̈̓̏́̐̄̈̒́̈̾̔̕ḩ̶̠̺̯̦̪͓̜͙̬͎̳̭͊̀̌͂͆́̾̑̾͝ ̶̣̘̟͊̈́͊͗͒͋̊͊̀͛̉͝Ģ̶̛̙̜̗̖̼̺͓͐̀͐͒V̸̢̙̰̟̙͚̗͖̆̈̇̓͘͠͝͝i̶̛̹̳̍͂́͝͝͝m̶̛͈͈̹̯͕̗͐̂͊̇̃̃͌̌̓̄̔̆͘ ̸̢̛̣͓̪͚̘͚̰͖͐́͜R̸͍͈͚̻͕̗̻͉͙͆͌͐͒̓͒̐̓̑̊͒͝'̴̧͚͔̫̼̺͔͎̖͈̞͙͆͆l̷͍̠̄͂̆̒͊̔ẏ̶̢̮͓̄͆́̉̈́̑͗̉͠ͅè̴̡̧̝͙̖̹͓̤̼̻͓̬͚̰̅̋́̑̾͘͜h̷̳͇̖͔̤͇̦̹̮̐̏͛̊͐͒͐ ̷̧͚̔̉̍͆͌̓͗̓̐̐͘w̸̤̝̪͎͇̩̤̳͒̏̒̎̈́̉͗̈́̚̚͝m̴̼̦̩͉̜̳͓͔̟̭͇̜̰̬̋̎͗͆̒̀̍̍̇̔͋̕͝͝ͅê̸̢̡̢̝͓̟̭̞͍̞͈̖̠̲̬̒̐̎̿̇̍́̂͒̏̉͘͠r̸̬̉̈́͆̌͆̈̉ǧ̴̢̙͚͓͇͖͔̩̣͕̞̚e̸̡̢̩̠͙͖̺̥͉̦̟̩͐͘'̵̡̢̧̟͇̭̲̳͕̻̜͇̘̬̙̈̀̓́̄̒̔̒̕͘͠n̶̢̛͎̹̮̻̼̳̜͖̲̂̇̅̾̄̐̊̇̓̒̒̍͘a̸̟̞̗̻͕̘̳͔̿̌͑̌̎̈̊̓͊̊͋͜g̷̢̮̟̠͖̞̤͖̘̻̞̀̄̓͌̾́̉̏͑͐͜l̵̪̫̝̐̃ ̸̮̤̱͇͂̔̂̀̾̀̚f̵̥͌̂̒̐̚h̷̡̤̮͇̥̖̼̙́̌̕ͅţ̴̛̞̦̩͚̝̦̮͎͕̹̖̰̀̋̋͐̍̅̀͛̕̕̚͘͝͠ȁ̸̖̝͈͎̤͇̽͛́̄͆͒̃̓̏͐͊͒̔̌g̸̜̠͉̝̱̳͔̭̦͇̱̘̺͋ͅͅn̶̦̥͈̻͍̠̂̔̊́̑͆̉̈́͝," wrote Ivan.

 

Robert M. writes, "Look, considering how 2020 is going, 'Invalid Date' could be really any day coming up. Listen - I just want to know, is this before or after Christmas, because like it or not, I still have to do holiday shopping."

 


,

Krebs on SecurityBreach at Dickey’s BBQ Smokes 3M Cards

One of the digital underground’s most popular stores for peddling stolen credit card information began selling a batch of more than three million new card records this week. KrebsOnSecurity has learned the data was stolen in a lengthy data breach at more than 100 Dickey’s Barbeque Restaurant locations around the country.

An ad on the popular carding site Joker’s Stash for “BlazingSun,” which fraud experts have traced back to a card breach at Dickey’s BBQ.

On Monday, the carding bazaar Joker’s Stash debuted “BlazingSun,” a new batch of more than three million stolen card records, advertising “valid rates” of between 90-100 percent. This is typically an indicator that the breached merchant is either unaware of the compromise or has only just begun responding to it.

Multiple companies that track the sale of stolen payment card data say they have confirmed with card-issuing financial institutions that the accounts for sale in the BlazingSun batch have one common theme: All were used at various Dickey’s BBQ locations over the past 13-15 months.

KrebsOnSecurity first contacted Dallas-based Dickey’s on Oct. 13. Today, the company shared a statement saying it was aware of a possible payment card security incident at some of its eateries:

“We received a report indicating that a payment card security incident may have occurred. We are taking this incident very seriously and immediately initiated our response protocol and an investigation is underway. We are currently focused on determining the locations affected and time frames involved. We are utilizing the experience of third parties who have helped other restaurants address similar issues and also working with the FBI and payment card networks. We understand that payment card network rules generally provide that individuals who timely report unauthorized charges to the bank that issued their card are not responsible for those charges.”

The confirmations came from Miami-based Q6 Cyber and Gemini Advisory in New York City.

Q6Cyber CEO Eli Dominitz said the breach appears to extend from May 2019 through September 2020.

“The financial institutions we’ve been working with have already seen a significant amount of fraud related to these cards,” Dominitz said.

Gemini says its data indicated some 156 Dickey’s locations across 30 states likely had payment systems compromised by card-stealing malware, with the highest exposure in California and Arizona. Gemini puts the exposure window between July 2019 and August 2020.

“Low-and-slow” aptly describes the card breach at Dickey’s, which persisted for at least 13 months.

With the threat from ransomware attacks grabbing all the headlines, it may be tempting to assume plain old credit card thieves have moved on to more lucrative endeavors. Alas, cybercrime bazaars like Joker’s Stash have continued plying their trade, undeterred by a push from the credit card associations to encourage more merchants to install credit card readers that require more secure chip-based payment cards.

That’s because there are countless restaurants — usually franchise locations of an established eatery chain — that are left to decide for themselves whether and how quickly they should make the upgrades necessary to dip the chip versus swipe the stripe.

“Dickey’s operates on a franchise model, which often allows each location to dictate the type of point-of-sale (POS) device and processors that they utilize,” Gemini wrote in a blog post about the incident. “However, given the widespread nature of the breach, the exposure may be linked to a breach of the single central processor, which was leveraged by over a quarter of all Dickey’s locations.”

While there have been sporadic reports about criminals compromising chip-based payment systems used by merchants in the U.S., the vast majority of the payment card data for sale in the cybercrime underground is stolen from merchants who are still swiping chip-based cards.

This isn’t conjecture; relatively recent data from the stolen card shops themselves bear this out. In July, KrebsOnSecurity wrote about an analysis by researchers at New York University, which looked at patterns surrounding more than 19 million stolen payment cards that were exposed after the hacking of BriansClub, a top competitor to the Joker’s Stash carding shop.

The NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

Visa and MasterCard instituted new rules in October 2015 that put retailers on the hook for all of the losses associated with counterfeit card fraud tied to breaches if they haven’t implemented chip-based card readers and enforced the dipping of the chip when a customer presents a chip-based card.

Dominitz said he never imagined back in 2015 when he founded Q6Cyber that we would still be seeing so many merchants dealing with magstripe-based data breaches.

“Five years ago I did not expect we would be in this position today with card fraud,” he said. “You’d think the industry in general would have made a bigger dent in this underground economy a while ago.”

Tired of having your credit card re-issued and updating your payment records at countless e-commerce sites every time some restaurant you frequent has a breach? Here’s a radical idea: Next time you visit an eatery (okay, if that ever happens again post-COVID, etc), ask them if they use chip-based card readers. If not, consider taking your business elsewhere.

Planet DebianJelmer Vernooij: Debian Janitor: How to Contribute Lintian-Brush Fixers

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

lintian-brush can currently fix about 150 different issues that lintian can report, but that's still a small fraction of the more than a thousand different types of issues that lintian can detect.

If you're interested in contributing a fixer script to lintian-brush, there is now a guide that describes all steps of the process:

  1. how to identify lintian tags that are good candidates for automated fixing
  2. creating test cases
  3. writing the actual fixer

For more information about the Janitor's lintian-fixes efforts, see the landing page.

Planet DebianGunnar Wolf: I am who I am and that's all that I am

Mexico was one of the first countries in the world to set up a national population registry in the late 1850s, as part of the church-state separation that was for many years one of the national sources of pride.

Forty four years ago, when I was born, keeping track of the population was still mostly a manual task. When my parents registered me, my data was stored in page 161 of book 22, year 1976, of the 20th Civil Registration office in Mexico City. Faithful to the legal tradition, everything is handwritten and specified in full. Because, why would they write 1976.04.27 (or even 27 de abril de 1976) when they could spell out día veintisiete de abril de mil novecientos setenta y seis? Numbers seem to appear only for addresses.

So, the State had record of a child being born, and we knew where to look if we came to need this information. But, many years later, a very sensible modernization happened: all records (after a certain date, I guess) were digitized. Great news! I can now get my birth certificate without moving from my desk, for a quite reasonable fee (~US$4). What’s there not to like?

Digitally certified and all! So great! But… But… Oh, there’s a problem.

Of course… Making sense of the handwriting as you can see is somewhat prone to failure. And I cannot blame anybody for failing to understand the details of my record.

So, my mother’s first family name is Iszaevich. It was digitized as Iszaerich. Fortunately, they do acknowledge some errors could have made it into the process, and there is a process to report and correct errors.

What’s there not to like?

Oh — That they do their best to emulate a public office using online tools. I followed some links from that page to get the contact address, and last night I sent them the needed documents. Quite immediately, I got an answer that… I must share with the world:

Yes, the mailing contact is in the @gmail.com domain. I could complain about them not using a @….gob.mx address, but I’ll let it slip. The mail I got says (uppercase and all):

GOOD EVENING,

WE INFORM YOU THAT THE RECEPTION OF E-MAILS FOR REQUESTING
CORRECTIONS IN CERTIFICATES IS ONLY ACTIVE MONDAY THROUGH FRIDAY,
8:00 TO 15:00.

*IN CASE YOU SENT A MAIL OUTSIDE THE WORKING HOURS, IT WILL BE
AUTOMATICALLY DELETED BY THE SERVER*

CORDIAL GREETINGS,

I would only be half-surprised if they were paying the salary of somebody to spend the wee hours of the night receiving and deleting mails from their GMail account.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.

Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


Worse Than FailureCodeSOD: A New Generation

Mapping between types can create some interesting challenges. Michal has one of those scenarios. The code comes to us heavily anonymized, but let’s see what we can do to understand the problem and the solution.

There is a type called ItemA. ItemA is a model object on one side of a boundary, and the consuming code doesn’t get to touch ItemA objects directly, it instead consumes one of two different types: ItemB, or SimpleItemB.

The key difference between ItemB and SimpleItemB is that they have different validation rules. It’s entirely possible that an instance of ItemA may be a valid SimpleItemB and an invalid ItemB. If an ItemA contains exactly five required fields importantDataPieces, and everything else is null, it should turn into a SimpleItemB. Otherwise, it should turn into an ItemB.

Michal adds: “Also noteworthy is the fact that ItemA is a class generated from XML schemas.”

The Java class was generated, but not the conversion method.

public ItemB doConvert(ItemA itemA) {
  final ItemA emptyItemA = new ItemA();
  emptyItemA.setId(itemA.getId());
  emptyItemA.setIndex(itemA.getIndex());
  emptyItemA.setOp(itemA.getOp());

  final String importantDataPiece1 = itemA.importantDataPiece1();
  final String importantDataPiece2 = itemA.importantDataPiece2();
  final String importantDataPiece3 = itemA.importantDataPiece3();
  final String importantDataPiece4 = itemA.importantDataPiece4();
  final String importantDataPiece5 = itemA.importantDataPiece5();

  itemA.withImportantDataPiece1(null)
          .withImportantDataPiece2(null)
          .withImportantDataPiece3(null)
          .withImportantDataPiece4(null)
          .withImportantDataPiece5(null);

  final boolean isSimpleItem = itemA.equals(emptyItemA) 
&& importantDataPiece1 != null && importantDataPiece2 != null && importantDataPiece3 != null && importantDataPiece4 != null;

  itemA.withimportantDataPiece1(importantDataPiece1)
          .withImportantDataPiece2(importantDataPiece2)
          .withImportantDataPiece3(importantDataPiece3)
          .withImportantDataPiece4(importantDataPiece4)
          .withImportantDataPiece5(importantDataPiece5);

  if (isSimpleItem) {
      return simpleItemConverter.convert(itemA);
  } else {
      return itemConverter.convert(itemA);
  }
}

We start by making a new instance of ItemA, emptyItemA, and copy a few values over to it. Then we clear out the five required fields (after caching the values in local variables). We rely on .equals, generated off that XML schema, to see if this newly created item is the same as our recently cleared out input item. If they are, and none of the required fields are null, we know this will be a SimpleItemB. We’ll put the required fields back into the input object, and then call the appropriate conversion methods.

Let’s restate the goal of this method, to understand how ugly it is: if an object has five required values and nothing else, it’s SimpleItemB, otherwise it’s a regular ItemB. The way this developer decided to perform this check wasn’t by examining the fields (which, in their defense, are being generated, so you might need reflection to do the inspection), but by this unusual choice of equality test. Create an empty object, copy a few ID related elements into it, and your default constructor should handle nulling out all the things which should be null, right?

Or, as Michal sums it up:

The intention of the above snippet appears to be checking whether itemA contains all fields mandatory for SimpleItemB and none of the other ones. Why the original author started by copying some fields to his ‘template item’ but switched to the ‘rip the innards of the original object, check if the ravaged carcass is equal to the barebones template, and then stuff the guts back in and pretend nothing ever happened’ approach halfway through? I hope I never find out what it’s like to be in a mental state when any part of this approach seems like a good idea.

Ugly, yes, but still, this code worked… until it didn’t. Specifically, a new nullable Boolean field was added to ItemA which was used by ItemB, but had no impact on SimpleItemB. This should have continued to work, except that the original developer defaulted the new field to false in the constructor, but didn’t update the doConvert method, so equals started deciding that our input item and our “empty” copy no longer matched. Downstream code started getting invalid ItemB objects when it should have been getting valid SimpleItemB objects, which triggered many hours of debugging to try and understand why this small change had such cascading effects.

Michal refactored this code, but was not the first person to touch it recently:

A cherry on top is the fact that importantDataPiece5 came about a few years after the original implementation. Someone saw this code, contributed to the monstrosity, and happily kept on trucking.


Planet DebianDirk Eddelbuettel: dang 0.0.12: Two new functions

A new release of the dang package is now on CRAN, roughly one year after the last release. The dang package regroups a few functions of mine that had no other home: lsos() from a StackOverflow question from 2009 (!!) is one; this overbought/oversold price band plotter from an older blog post is another. More recently added were helpers for data.table to xts conversion and a git repo root finder.

This release adds two functions. One was mentioned just days ago in a tweet by Nathan and is a reworked version of something Colin tweeted about a few weeks ago: a little data wrangling on top of the kewl rtweet package to find maximally spammy accounts per search topic, in other words those who include more than ‘N’ hashtags for a given search term. The other is something I, if memory serves, picked up a while back on one of the lists: a base R function to identify non-ASCII characters in a file. It is a C function that is not directly exported by R and hence not accessible, so we put it here (with credits, of course). I mentioned it yesterday when announcing tidyCpp, as this C function was the starting point for the new tidyCpp wrapper around some of the C API of R.

The (very short) NEWS entry follows.

Changes in version 0.0.12 (2020-10-14)

  • New functions muteTweets and checkPackageAsciiCode.

  • Updated CI setup.

Courtesy of CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianMichael Stapelberg: Linux package managers are slow

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance. The machine is located in Zürich and connected to the Internet with a 1 Gigabit fiber connection, so the expected top download speed is ≈115 MB/s.

See Appendix C for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               114 MB   33s                3.4 MB/s
Debian         apt                16 MB   10s                1.6 MB/s
NixOS          Nix                15 MB    5s                3.0 MB/s
Arch Linux     pacman            6.5 MB    3s                2.1 MB/s
Alpine         apk                10 MB    1s               10.0 MB/s

qemu (large C program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               226 MB   4m37s              1.2 MB/s
Debian         apt               224 MB   1m35s              2.3 MB/s
Arch Linux     pacman            142 MB   44s                3.2 MB/s
NixOS          Nix               180 MB   34s                5.2 MB/s
Alpine         apk                26 MB   2.4s              10.8 MB/s


(Looking for older measurements? See Appendix B (2019).)

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.
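
You can check the compressed index sizes yourself without downloading them; for example, for the Alpine index mentioned above (the exact number drifts as the repository changes):

curl -sI http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz | grep -i '^content-length'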

Of course the extra metadata which Fedora provides helps some use cases, otherwise they hopefully would have removed it altogether. The amount of metadata seems excessive for the use case of installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence the execution of arbitrary package-maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy; see the sketch after this list)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes
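
To make the first point concrete, here is a minimal sketch of what an image-based "installation" could look like, assuming a hypothetical per-package SquashFS image (an illustration of the idea, not how any of the tools listed below work):

# mount the package image read-only instead of extracting an archive
mkdir -p /ro/ack-3.4.0
mount -t squashfs -o ro,loop ack-3.4.0.squashfs /ro/ack-3.4.0

# no extraction step: the files are usable immediately
export PATH=/ro/ack-3.4.0/bin:$PATH
ack --version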

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name         scope    package file format                 hooks/triggers
AppImage     apps     image: ISO9660, SquashFS            no
snappy       apps     image: SquashFS                     yes: hooks
FlatPak      apps     archive: OSTree                     no
0install     apps     archive: tar.bz2                    no
nix, guix    distro   archive: nar.{bz2,xz}               activation script
dpkg         distro   archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm          distro   archive: cpio.{bz2,lz,xz}           scriptlets
pacman       distro   archive: tar.xz                     install
slackware    distro   archive: tar.{gz,xz}                yes: doinst.sh
apk          distro   archive: tar.gz                     yes: .post-install
Entropy      distro   archive: tar.bz2                    yes
ipkg, opkg   distro   archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution".

There are a couple of recent developments going in the same direction:

Appendix C: measurement details (2020)

ack

You can expand each of these:

Fedora’s dnf takes almost 33 seconds to fetch and unpack 114 MB.

% docker run -t -i fedora /bin/bash
[root@62d3cae2e2f9 /]# time dnf install -y ack
Fedora 32 openh264 (From Cisco) - x86_64     1.9 kB/s | 2.5 kB     00:01
Fedora Modular 32 - x86_64                   6.8 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         5.6 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 9.9 MB/s |  23 MB     00:02
Fedora 32 - x86_64                            39 MB/s |  70 MB     00:01
[…]
real	0m32.898s
user	0m25.121s
sys	0m1.408s

NixOS’s Nix takes a little over 5s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.ack'
unpacking channels...
created 1 symlinks in user environment
installing 'perl5.32.0-ack-3.3.1'
these paths will be fetched (15.55 MiB download, 85.51 MiB unpacked):
  /nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man
  /nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31
  /nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18
  /nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10
  /nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53
  /nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0
  /nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31
  /nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0
  /nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48
  /nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1
copying path '/nix/store/34l8jdg76kmwl1nbbq84r2gka0kw6rc8-perl5.32.0-ack-3.3.1-man' from 'https://cache.nixos.org'...
copying path '/nix/store/czc3c1apx55s37qx4vadqhn3fhikchxi-libunistring-0.9.10' from 'https://cache.nixos.org'...
copying path '/nix/store/9fd4pjaxpjyyxvvmxy43y392l7yvcwy1-perl5.32.0-File-Next-1.18' from 'https://cache.nixos.org'...
copying path '/nix/store/xim9l8hym4iga6d4azam4m0k0p1nw2rm-libidn2-2.3.0' from 'https://cache.nixos.org'...
copying path '/nix/store/9df65igwjmf2wbw0gbrrgair6piqjgmi-glibc-2.31' from 'https://cache.nixos.org'...
copying path '/nix/store/y7i47qjmf10i1ngpnsavv88zjagypycd-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/dj6n505iqrk7srn96a27jfp3i0zgwa1l-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/w9wc0d31p4z93cbgxijws03j5s2c4gyf-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/ifayp0kvijq0n4x0bv51iqrb0yzyz77g-perl-5.32.0' from 'https://cache.nixos.org'...
copying path '/nix/store/z45mp61h51ksxz28gds5110rf3wmqpdc-perl5.32.0-ack-3.3.1' from 'https://cache.nixos.org'...
building '/nix/store/m0rl62grplq7w7k3zqhlcz2hs99y332l-user-environment.drv'...
created 49 symlinks in user environment
real	0m 5.60s
user	0m 3.21s
sys	0m 1.66s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@1996bb94a2d1:/# time (apt update && apt install -y ack-grep)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (8088 kB/s)
[…]
The following NEW packages will be installed:
  ack libfile-next-perl libgdbm-compat4 libgdbm6 libperl5.30 netbase perl perl-modules-5.30
0 upgraded, 8 newly installed, 0 to remove and 23 not upgraded.
Need to get 7341 kB of archives.
After this operation, 46.7 MB of additional disk space will be used.
[…]
real	0m9.544s
user	0m2.839s
sys	0m0.775s

Arch Linux’s pacman takes a little under 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9f6672688a64 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            130.8 KiB  1090 KiB/s 00:00
 extra          1655.8 KiB  3.48 MiB/s 00:00
 community         5.2 MiB  6.11 MiB/s 00:01
resolving dependencies...
looking for conflicting packages...

Packages (2) perl-file-next-1.18-2  ack-3.4.0-1

Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m2.936s
user	0m0.375s
sys	0m0.160s

Alpine’s apk takes a little over 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/4) Installing libbz2 (1.0.8-r1)
(2/4) Installing perl (5.30.3-r0)
(3/4) Installing perl-file-next (1.18-r0)
(4/4) Installing ack (3.3.1-r0)
Executing busybox-1.31.1-r16.trigger
OK: 43 MiB in 18 packages
real	0m 1.24s
user	0m 0.40s
sys	0m 0.15s

qemu

You can expand each of these:

Fedora’s dnf takes over 4 minutes to fetch and unpack 226 MB.

% docker run -t -i fedora /bin/bash
[root@6a52ecfc3afa /]# time dnf install -y qemu
Fedora 32 openh264 (From Cisco) - x86_64     3.1 kB/s | 2.5 kB     00:00
Fedora Modular 32 - x86_64                   6.3 MB/s | 4.9 MB     00:00
Fedora Modular 32 - x86_64 - Updates         6.0 MB/s | 3.7 MB     00:00
Fedora 32 - x86_64 - Updates                 334 kB/s |  23 MB     01:10
Fedora 32 - x86_64                            33 MB/s |  70 MB     00:02
[…]

Total download size: 181 M
Downloading Packages:
[…]

real	4m37.652s
user	0m38.239s
sys	0m6.321s

NixOS’s Nix takes almost 34s to fetch and unpack 180 MB.

% docker run -t -i nixos/nix
83971cf79f7e:/# time sh -c 'nix-channel --update && nix-env -iA nixpkgs.qemu'
unpacking channels...
created 1 symlinks in user environment
installing 'qemu-5.1.0'
these paths will be fetched (180.70 MiB download, 1146.92 MiB unpacked):
[…]
real	0m 33.64s
user	0m 16.96s
sys	0m 3.05s

Debian’s apt takes over 95 seconds to fetch and unpack 224 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://deb.debian.org/debian sid InRelease [146 kB]
Get:2 http://deb.debian.org/debian sid/main amd64 Packages [8400 kB]
Fetched 8546 kB in 1s (5998 kB/s)
[…]
Fetched 216 MB in 43s (5006 kB/s)
[…]
real	1m25.375s
user	0m29.163s
sys	0m12.835s

Arch Linux’s pacman takes almost 44s to fetch and unpack 142 MB.

% docker run -t -i archlinux/base
[root@58c78bda08e8 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core          130.8 KiB  1055 KiB/s 00:00
 extra        1655.8 KiB  3.70 MiB/s 00:00
 community       5.2 MiB  7.89 MiB/s 00:01
[…]
Total Download Size:   135.46 MiB
Total Installed Size:  661.05 MiB
[…]
real	0m43.901s
user	0m4.980s
sys	0m2.615s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Appendix B: measurement details (2019)

ack

You can expand each of these:

Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu

You can expand each of these:

Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet DebianMichael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research Linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but rather my personal playground project for researching Linux distribution development, i.e. for trying out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

I frequently noticed a large gap between the actual speed of an operation (e.g. doing an update) and the speed that seemed possible based on back-of-the-envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or that what the system does is just too complex. While I remember seeing some low-hanging fruit¹, my work on distri was meant to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

¹ Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.
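
As a rough illustration of the mechanics (a sketch using standard SquashFS tools rather than distri’s own tooling; the package and directory names are just examples), an image can be built from a staged directory and then mounted read-only:

mksquashfs ./destdir zsh.squashfs              # pack the build result into a single image file
mkdir -p mnt
mount -t squashfs -o loop,ro zsh.squashfs mnt  # the mounted contents cannot be modified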

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.
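
For an autotools-based package, building into such a fully-qualified prefix might look roughly like this (a sketch; the package name and revision are only examples, and distri’s real builder automates these steps):

./configure --prefix=/ro/zsh-amd64-5.6.2-3/out   # bake the fully-qualified location into the build
make
make DESTDIR=$PWD/destdir install                # stage the files, to be packed into the package image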

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share sub directory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.
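
To make the two flavors more concrete, here is a hypothetical shell session (package names, revisions and file listings are examples only):

ls /ro/share/man/man1/                             # global: the union of every package's share/man/man1
ls /ro/zsh-amd64-5.6.2-3/out/share/man/man1/       # the same manpages, via the package's own hierarchy
ls /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules/  # a per-package exchange directory for irssi plugins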

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.
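
As an illustration, search paths can simply be composed of exchange directories, whereas a value derived from the package prefix needs a package-specific switch (a sketch; the /ro/bin name and the configure flag below are assumptions made for illustration, and the real flag depends on the package):

export PATH=/ro/bin:$PATH                # executables from all packages (exchange directory name assumed)
export MANPATH=/ro/share/man:$MANPATH    # manpages from all packages
./configure --prefix=/ro/zsh-amd64-5.6.2-3/out --enable-fndir=/ro/share/zsh/functions   # hypothetical flag pointing zsh's FPATH at an exchange directory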

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will still be fresh in their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.
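
Put together, an install step reduces to little more than a sequential download followed by an atomic rename. A simplified sketch (the repository URL is made up, and the real package manager downloads packages concurrently):

STORE=/roimg   # the local package store, described in the "Package stores" section below
curl -sfo "$STORE/.tmp.zsh-amd64-5.6.2-3.squashfs" https://repo.example.net/zsh-amd64-5.6.2-3.squashfs  # sequential write into the store
mv "$STORE/.tmp.zsh-amd64-5.6.2-3.squashfs" "$STORE/zsh-amd64-5.6.2-3.squashfs"                         # rename(2) publishes the image atomically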

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make a package store available by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.
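
A mirror setup could therefore be sketched as follows (the exact distri invocations are assumptions based on the description above, and the mirror path is made up):

distri update                         # pull new package images and metadata from the upstream store
distri export                         # serve the local package store to your network
rsync -a /roimg/ /srv/mirror/roimg/   # the store is just files, so plain copies or offline media work as well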

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet DebianSven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live-session music with youtube-dl.

Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.

Planet DebianThomas Goirand: The Gnocchi package in Debian

This is a follow-up to Russell’s blog post, as seen here: https://etbe.coker.com.au/2020/10/13/first-try-gnocchi-statsd/. There are a number of things he wrote which I unfortunately must say are inaccurate, and sometimes even completely wrong. It is my point of view that none of the reported bugs are helpful for anyone who understands Gnocchi and how to set it up. It was, however, a terrible experience that Russell had, and I do understand why (and why it’s not his fault). I’m very much open to fixing this at the packaging level, though some things aren’t IMO fixable. Here are the details.

1/ The daemon startups

First of all, the most surprising thing is Russell’s claim that there are no startup scripts for the Gnocchi daemons. In fact, they all come with both systemd and sysv-rc support:

# ls /lib/systemd/system/gnocchi-api.service
/lib/systemd/system/gnocchi-api.service
# ls /etc/init.d/gnocchi-api
/etc/init.d/gnocchi-api

Russell then tried to start gnocchi-api without the options that are set in the Debian scripts, and not surprisingly, this failed. Russell attempted to do what was in the upstream documentation, which isn’t adapted to what we have in Debian (the upstream documentation is probably completely outdated, as Gnocchi is unfortunately not very well maintained upstream).
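
In other words, on Debian the supported way to run the daemon is through the shipped units rather than invoking the binary by hand, for example (a minimal sketch):

systemctl enable --now gnocchi-api.service
systemctl status gnocchi-api.service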

Bug #972087 is therefore, IMO, not valid.

2/ The database setup

By default for all things OpenStack in Debian, there are debconf helpers using dbconfig-common to help users set up databases for their services. This is clearly for beginners, but that doesn’t prevent you from attempting to understand what you’re doing. More specifically for Gnocchi, there are 2 databases: one for Gnocchi itself, and one for the indexer, which does not necessarily use the same backend. The Debian package already sets up one database, but one has to do it manually for the indexer. I’m sorry this isn’t documented well enough.
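
For example, creating the second database by hand might look like this (a hedged sketch: the database name, user and password are placeholders, and the configuration option is, to the best of my knowledge, url in the [indexer] section):

sudo mysql <<'SQL'
CREATE DATABASE gnocchi;
CREATE USER 'gnocchi'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost';
SQL
# then point the indexer at it in /etc/gnocchi/gnocchi.conf, e.g.
#   [indexer]
#   url = mysql+pymysql://gnocchi:secret@localhost/gnocchi
# and initialize the schema:
gnocchi-upgrade   # run it with the same configuration and privileges as the service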

Now, while some packages support SQLite as a backend (since most things in OpenStack use SQLAlchemy), it looks like Gnocchi doesn’t right now. This is IMO a bug upstream, rather than a bug in the package. However, I don’t think the Debian packages are to blame here, as they simply offer a unified interface, and it’s up to the users to know what they are doing. SQLite is in any case not a production-ready backend. I’m not sure if I should close #971996 without any action, or just try to disable the SQLite backend option of this package because it may be confusing.

3/ The metrics UUID

Russell then thinks the UUID should be set by default. This is probably right in a single-server setup; however, it wouldn’t work when setting up a cluster, which is probably what most Gnocchi users will do. In that type of environment, the metrics UUID must be the same on the 3 servers, and setting up a random (and therefore different) UUID on the 3 servers wouldn’t work. So I’m also tempted to just close #972092 without any action on my side.

4/ The coordination URL

Since Gnocchi is supposed to be set up with more than one server (as in OpenStack, an HA setup is very common), a backend for coordination (i.e. sharing the workload) must be set. This is done by setting a URL that tooz understands. The best coordinator being Zookeeper, something like this should be set by hand:

coordination_url=zookeeper://192.168.101.2:2181/

Here again, I don’t think the Debian package is to be blamed for not providing the automation. I would however accept contributions to fix this and provide the choice using debconf; users would still need to understand what’s going on, and set up something like Zookeeper (or Redis, memcached, or any other backend supported by tooz) to act as coordinator.

5/ The Debconf interface cannot replace good documentation

… and there’s not much I can do about this at the package-maintainer level.

Russell, I’m really sorry for the bad user experience you had with Gnocchi. Now that you know a little bit more about it, maybe you can have another go? Sure, the OpenStack telemetry system isn’t an easy-to-understand beast, but it’s IMO worth trying. And the recent versions can scale horizontally…

Cryptogram Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Worse Than FailureCodeSOD: Nothing But Garbage

Janell found herself on a project where most of her team were offshore developers she only interacted with via email, plus a tech lead who had never programmed on the .NET Framework, but had done some VB6 “back in the day”, and thus “knew VB”.

The team dynamic rapidly became a scenario where the tech lead issued an edict, the offshore team blindly applied it, and then Janell was left staring at the code wondering how to move forward with this.

These decrees weren’t all bad. For example: “Avoid repeated code by re-factoring into methods” isn’t the worst advice. It’s not complete advice (there are a lot of gotchas in it), but it fits on a bumper sticker and generally leads to better code.

There were other rules that… well:

To improve the performance of the garbage collector, all variables must be set to nothing (null) at the end of each method.

Any time someone says something like “to improve the performance of the garbage collector,” you know you’re probably in for a bad time. This is no exception. Now, in old versions of VB, there’s a lot of “stuff” about whether or not you need to do this. It was considered a standard practice in a lot of places, though the real WTF was clearly VB in this case.

In .NET, this is absolutely unnecessary, and there are much better approaches if you need to do some sort of cleanup, like implementing the IDisposable interface. But, since this was the rule, the offshore team followed the rule. And, since we have repeated code, the offshore team followed that rule too.

Thus:

Public Sub SetToNothing(ByVal object As Object)
    object = Nothing
End Sub

If there is a platonic ideal of bad code, this is very close to that. This method attempts to solve a non-problem, regarding garbage collection. It also attempts to be “DRY”, by replacing one repeated line with… a different repeated line. And, most important: it doesn’t do what it claims.

The key here is that the parameter is ByVal. The copy of the reference in this method is set to nothing, but the original in the calling code is left unchanged in this example.

Oh, but remember when I said, “they were attempting to be DRY”? I lied. I’ll let Janell explain:

I found this gem in one of the developers classes. It turned out that he had copy and pasted it into every class he had worked on for good measure.


Planet DebianFrançois Marier: Making an Apache website available as a Tor Onion Service

As part of the #MoreOnionsPorFavor campaign, I decided to follow brave.com's lead and make my homepage available as a Tor onion service.

Tor daemon setup

I started by installing the Tor daemon locally:

apt install tor

and then setting the following in /etc/tor/torrc:

SocksPort 0
SocksPolicy reject *
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 [2600:3c04::f03c:91ff:fe8c:61ac]:80
HiddenServicePort 443 [2600:3c04::f03c:91ff:fe8c:61ac]:443
HiddenServiceVersion 3
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1

in order to create a version 3 onion service without actually running a Tor relay.

Note that since I am making a public website available over Tor, I do not need the location of the website to be hidden and so I used the same settings as Cloudflare in their public Tor proxy.

Also, I explicitly used the external IPv6 address of my server in the configuration in order to prevent localhost bypasses.

After restarting the Tor daemon to reload the configuration file:

systemctl restart tor.service

I looked for the address of my onion service:

$ cat /var/lib/tor/hidden_service/hostname 
ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion

Apache configuration

Next, I enabled a few required Apache modules:

a2enmod mpm_event
a2enmod http2
a2enmod headers

and configured my Apache vhosts in /etc/apache2/sites-enabled/www.conf:

<VirtualHost *:443>
    ServerName fmarier.org
    ServerAlias ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion

    Protocols h2, http/1.1
    Header set Onion-Location "http://ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion:443"; ma=315360000; persist=1'
    Header add Strict-Transport-Security: "max-age=63072000"

    Include /etc/fmarier-org/www-common.include

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/fmarier.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/fmarier.org/privkey.pem
</VirtualHost>

<VirtualHost *:80>
    ServerName fmarier.org
    Redirect permanent / https://fmarier.org/
</VirtualHost>

<VirtualHost *:80>
    ServerName ixrdj3iwwhkuau5tby5jh3a536a2rdhpbdbu6ldhng43r47kim7a3lid.onion
    Include /etc/fmarier-org/www-common.include
</VirtualHost>

Note that /etc/fmarier-org/www-common.include contains all of the configuration options that are common to both the HTTP and the HTTPS sites (e.g. document root, caching headers, aliases, etc.).

Finally, I restarted Apache:

apache2ctl configtest
systemctl restart apache2.service

Testing

In order to test that my website is correctly available at its .onion address, I opened the following URLs in a Brave Tor window:

I also checked that the main URL (https://fmarier.org/) exposes a working Onion-Location header which triggers the display of a button in the URL bar (recently merged and available in Brave Nightly).
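
A quick way to confirm that these headers are being served is from the command line (a sketch; the output will of course show your own onion address):

curl -sI https://fmarier.org/ | grep -iE 'onion-location|alt-svc|strict-transport-security'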

Testing that the Alt-Svc header is actually working required using the Tor Browser, since that’s not yet supported in Brave:

  1. Open https://fmarier.org.
  2. Wait 30 seconds.
  3. Reload the page.

On the server side, I saw the following:

2a0b:f4c2:2::1 - - [14/Oct/2020:02:42:20 +0000] "GET / HTTP/2.0" 200 2696 "-" "Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0"
2600:3c04::f03c:91ff:fe8c:61ac - - [14/Oct/2020:02:42:53 +0000] "GET / HTTP/2.0" 200 2696 "-" "Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0"

That first IP address is from a Tor exit node:

$ whois 2a0b:f4c2:2::1
...
inet6num:       2a0b:f4c2::/40
netname:        MK-TOR-EXIT
remarks:        -----------------------------------
remarks:        This network is used for Tor Exits.
remarks:        We do not have any logs at all.
remarks:        For more information please visit:
remarks:        https://www.torproject.org

which indicates that the first request was not using the .onion address.

The second IP address is the one for my server:

$ dig +short -x 2600:3c04::f03c:91ff:fe8c:61ac
hafnarfjordur.fmarier.org.

which indicates that the second request to Apache came from the Tor relay running on my server, hence using the .onion address.

Planet DebianDirk Eddelbuettel: tidyCpp 0.0.1: New package

A new package arrived on CRAN a few days ago. It offers a few header files which wrap (parts of) the C API for R, but in a form that may be a little easier to use for C++ programmers. I have always liked how in Rcpp we offer good parts of the standalone R Math library in a namespace R::. While working recently with a particular C routine (for checking non-ASCII characters, which will be part of the next version of the dang package collecting various goodies in one place), I realized there may be value in collecting a few more such wrappers. So I started with a few simple ones, drawn from simple examples.

Currently we have five headers defines.h, globals.h, internals.h, math.h, and shield.h. The first four each correspond to an R header file of the same or similar name, and the last one brings a simple yet effective alternative to PROTECT and UNPROTECT from Rcpp (in a slightly simplified way). None of the headers are “complete”; for internals.h in particular a lot more could be added (as I noticed today when experimenting with another source file that may be converted). All of the headers can be accessed with a simple #include <tidyCpp> (which, following another C++ convention, does not have a .h or .hpp suffix). And as the package ships these headers, packages desiring to use them only need LinkingTo: tidyCpp.

As usage examples, we (right now) have four files in the snippets/ directory of the package. Two of these, convolveExample.cpp and dimnamesExample.cpp, illustrate how one could change example code from Writing R Extensions. Then there are also a very simple defineExample.cpp and a shieldExample.cpp illustrating how much easier Shield() is compared to PROTECT and UNPROTECT.

Finally, there is a nice vignette discussing the package motivation with two detailed side-by-side ‘before’ and ‘after’ examples that are the aforementioned convolution and dimnames examples.

Over time, I expect to add more definitions and wrappers. Feedback would be welcome—it seems to hit a nerve already as it currently has more stars than commits even though (prior to this post) I had yet to tweet or blog about it. Please post comments and suggestions at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Cryptogram Swiss-Swedish Diplomatic Row Over Crypto AG

Previously I have written about the Swedish-owned Swiss-based cryptographic hardware company: Crypto AG. It was a CIA-owned Cold War operation for decades. Today it is called Crypto International, still based in Switzerland but owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a meeting with her Swiss counterpart Ignazio Cassis slated for this month after Switzerland placed an export ban on Crypto International, a Swiss-based and Swedish-owned cybersecurity company.

The ban was imposed while Swiss authorities examine long-running and explosive claims that a previous incarnation of Crypto International, Crypto AG, was little more than a front for U.S. intelligence-gathering during the Cold War.

Linde said the Swiss ban was stopping “goods” — which experts suggest could include cybersecurity upgrades or other IT support needed by Swedish state agencies — from reaching Sweden.

She told public broadcaster SVT that the meeting with Cassis was “not appropriate right now until we have fully understood the Swiss actions.”

EDITED TO ADD (10/13): Lots of information on Crypto AG.

Krebs on SecurityMicrosoft Patch Tuesday, October 2020 Edition

It’s Cybersecurity Awareness Month! In keeping with that theme, if you (ab)use Microsoft Windows computers you should be aware the company shipped a bevy of software updates today to fix at least 87 security problems in Windows and programs that run on top of the operating system. That means it’s once again time to backup and patch up.

Eleven of the vulnerabilities earned Microsoft’s most-dire “critical” rating, which means bad guys or malware could use them to gain complete control over an unpatched system with little or no help from users.

Worst in terms of outright scariness is probably CVE-2020-16898, which is a nasty bug in Windows 10 and Windows Server 2019 that could be abused to install malware just by sending a malformed packet of data at a vulnerable system. CVE-2020-16898 earned a CVSS Score of 9.8 (10 is the most awful).

Security vendor McAfee has dubbed the flaw “Bad Neighbor,” and in a blog post about it said a proof-of-concept exploit shared by Microsoft with its partners appears to be “both extremely simple and perfectly reliable,” noting that this sucker is eminently “wormable” — i.e. capable of being weaponized into a threat that spreads very quickly within networks.

“It results in an immediate BSOD (Blue Screen of Death), but more so, indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations,” McAfee’s Steve Povolny wrote. “The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable.”

Trend Micro’s Zero Day Initiative (ZDI) calls special attention to another critical bug quashed in this month’s patch batch: CVE-2020-16947, which is a problem with Microsoft Outlook that could result in malware being loaded onto a system just by previewing a malicious email in Outlook.

“The Preview Pane is an attack vector here, so you don’t even need to open the mail to be impacted,” said ZDI’s Dustin Childs.

While there don’t appear to be any zero-day flaws in October’s release from Microsoft, Todd Schell from Ivanti points out that a half-dozen of these flaws were publicly disclosed prior to today, meaning bad guys have had a jump start on being able to research and engineer working exploits.

Other patches released today tackle problems in Exchange Server, Visual Studio, .NET Framework, and a whole mess of other core Windows components.

For any of you who’ve been pining for a Flash Player patch from Adobe, your days of waiting are over. After several months of depriving us of Flash fixes, Adobe’s shipped an update that fixes a single — albeit critical — flaw in the program that crooks could use to install bad stuff on your computer just by getting you to visit a hacked or malicious website.

Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Mercifully, Adobe is slated to retire Flash Player later this year, and Microsoft has said it plans to ship updates at the end of the year that will remove Flash from Windows machines.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any chinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Planet DebianJonathan Dowland: The Cure — Pornography

picture of a vinyl record

Last weekend, Tim Burgess’s twitter listening party covered The Cure’s short, dark 1982 album “Pornography”. I realised I’d never actually played the record, which I picked up a couple of years ago from a shop in the Grainger Market which is sadly no longer there. It was quite a wallet-threatening shop so perhaps it’s a good thing it’s gone.

Monday was a dreary, rainy day which seemed the perfect excuse to put it on. It’s been long enough since I last listened to my CD copy of the album that there were a few nice surprises to rediscover. The closing title track sounded quite different to how I remembered it, with Robert Smith’s vocals buried deeper in the mix, but my memory might be mixing up a different session take.

Truly a fitting closing lyric for our current times: I must fight this sickness / Find a cure

Cryptogram US Cyber Command and Microsoft Are Both Disrupting TrickBot

Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[…]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of trademark law.

Cryptogram 2020 Workshop on Economics of Information Security

The Workshop on Economics of Information Security will be online this year. Register here.

Cory DoctorowAttack Surface is out!

Today is the US/Canada release-date for Attack Surface, the third Little Brother book. It’s been a long time coming (Homeland, the second book, came out in 2013)!

It’s the fourth book I’ve published in 2020, and it’s my last book of the year.

https://us.macmillan.com/books/9781250757531

When the lockdown hit in March, I started thinking about what I’d do if my US/Canada/UK/India events were all canceled, but I still treated it as a distant contingency. But well, here we are!

My US publisher, Tor Books, has put together a series of 8 ticketed events, each with a pair of brilliant, fascinating guests, to break down all the themes in the book. Each is sponsored by a different bookstore and each comes with a copy of the book.

https://read.macmillan.com/torforge/cory-doctorow-virtual-lecture-series/

We kick off the series TONIGHT at 5PM Pacific/8PM Eastern with “Politics and Protest,” sponsored by The Strand NYC, with guests Eva Galperin (EFF Threat Lab) and Ron Deibert (U of T Citizen Lab).

https://www.strandbooks.com/events/event93?title=cory_doctorow_attack_surface

There will be video releases of these events eventually, but if you want to attend more than one and don’t need more than one copy of the book, you can donate your copy to a school, prison, library, etc. Here’s a list of institutions seeking copies:

https://craphound.com/littlebrother/2020/10/05/as-freebies/

And if you are affiliated with an organization or institution that would like to put your name down for a freebie, here’s the form. I’m checking it several times/day and adding new entries to the list:

https://docs.google.com/forms/d/1uIXO0iV2RyVN6AtDYd1C7ImxS6V0jHWnMNFK179g6H4/

I got a fantastic surprise this morning: a review by Paul Di Filippo in the Washington Post:

https://www.washingtonpost.com/entertainment/books/cory-doctorows-attack-surface-is-a-riveting-techno-thriller/2020/10/13/a3a178d0-0cb9-11eb-8074-0e943a91bf08_story.html

He starts by calling me “among the best of the current practitioners of near-future sf,” and, incredibly, the review only gets better after that!

Di Filippo says the book is a “political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance,” whose hero, Masha, is “a protagonist worth rooting for, whose inner conflicts and cognitive dissonances propel her to surprising, heroic actions.”

He closes by saying that my work “charts the universal currents of the human heart and soul with precision.”

I mean, wow.

If you’d prefer an audiobook of Attack Surface; you’re in luck! I produced my own audio edition of the book, with Amber Benson narrating, and it’s amazing!

Those of you who backed the audio on Kickstarter will be getting your emails from BackerKit shortly (I’ve got an email in to them and will post an update to the KS as soon as they get back to me).

If you missed the presale, you can still get the audio, everywhere EXCEPT Audible, who refuse to carry my work as it’s DRM-free (that’s why I had to fund the audiobook; publishers aren’t interested in the rights to audio that can’t be sold on the dominant platform).

Here’s some of the stores carrying the book today:

Libro.fm:
https://libro.fm/audiobooks/9781664913257-attack-surface

Supporting Cast (Audiobooks from Slate):
https://attacksurface.supportingcast.fm/

Bandcamp:
https://corydoctorow.bandcamp.com/album/attack-surface

I expect that both Downpour and Google Play should have copies for sale any minute now (both have the book in their systems but haven’t made it live yet).

And of course, you can get it direct from me, along with all my other ebooks and audiobooks:

https://craphound.com/shop

Worse Than FailureSlow Load

LED traffic light on red

After years spent supporting an enterprisey desktop application with a huge codebase full of WTFs, Sammy thought he had seen all there was to be seen. He was about to find out how endlessly deep the bottom of the WTF barrel truly was.

During development, Sammy frequently had to restart said application: a surprisingly onerous process, as it took about 30 seconds each and every time to return to a usable state. Eventually, a mix of curiosity and annoyance spurred him into examining just why it took so long to start.

He began by profiling the performance. When the application first initialized, it performed 10 seconds of heavy processing. Then the CPU load dropped to 0% for a full 16 seconds. After that, it increased, pegging out one of the eight cores on Sammy's machine for 4 seconds. Finally, the application was ready to accept user input. Sammy knew that, for at least some of the time, the application was calling out to and waiting for a response from a server. That angle would have to be investigated as well.

Further digging and hair-pulling led Sammy to a buried bit of code, a Very Old Mechanism for configuring the visibility of items on the main menu. While some menu items were hard-coded, others were dynamically added by extension modules. The application administrator had the ability to hide or show any of them by switching checkboxes in a separate window.

When the application first started up, it retrieved the user's latest configuration from the server, applied it to their main menu, then sent the resulting configuration back to the server. The server, in turn, called DELETE * FROM MENU_CFG; INSERT INTO MENU_CFG (…); INSERT INTO MENU_CFG (…); …

Sammy didn't know what was the worst thing about all this. Was it the fact that the call to the server was performed synchronously for no reason? Or, that after multiple DELETE / INSERT cycles, the table of a mere 400 rows weighed more than 16 MB? When the users all came in to work at 9:00 AM and started up the application at roughly the same time, their concurrent transactions caused quite the bottleneck—and none of it was necessary. The Very Old Mechanism had been replaced with role-based configuration years earlier.

All Sammy could do was write up a change request to put in JIRA. Speaking of bottlenecks ...


,

Planet DebianDirk Eddelbuettel: GitHub Streak: Round Seven

Six years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And five years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And four years ago we had a followup at 1096 days

github activity october 2015 to october 2016

And three years ago we had another one marking 1461 days

github activity october 2016 to october 2017

And two years ago another one for 1826 days

github activity october 2017 to october 2018

And last year another one bringing it to 2191 days

github activity october 2018 to october 2019

And as today is October 12, here is the newest one from 2019 to 2020 with a new total of 2557 days:

github activity october 2019 to october 2020

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 18)

Here’s part eighteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

Content warning for domestic abuse and sexual violence.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:


Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the schedule for the Attack Surface lectures


Here’s the list of schools and other institutions in need of donated copies of Attack Surface.

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Cryptogram Google Responds to Warrants for “About” Searches

One of the things we learned from the Snowden documents is that the NSA conducts “about” searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An about search would be something like “show me anyone who has used this particular name in a communication,” or “show me anyone who was at this particular location within this time frame.” These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be about data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man ­ his stepfather ­ sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Sociological ImagesSociology IRL

One of the goals of this blog is to help get sociology to the public by offering short, interesting comments on what our discipline looks like out in the world.

A sociologist can unpack this!
Photo Credit: Mario A. P., Flickr CC

We live sociology every day, because it is the science of relationships among people and groups. But because the name of our discipline is kind of a buzzword itself, I often find excellent examples of books in the nonfiction world that are deeply sociological, even if that isn’t how their authors or publishers would describe them.

Last year, I had the good fortune to help a friend as he was working on one of these books. Now that the release date is coming up, I want to tell our readers about the project because I think it is an excellent example of what happens when ideas from our discipline make it out into the “real” world beyond academia. In fact, the book is about breaking down that idea of the “real world” itself. It is called IRL: Finding realness, meaning, and belonging in our digital lives, by Chris Stedman.

In IRL, Chris tackles big questions about what it means to be authentic in a world where so much of our social interaction is now taking place online. The book goes to deep places, but it doesn’t burden the reader with an overly-serious tone. Instead, Chris brings a lightness by blending memoir, interviews, and social science, all arranged in vignettes so that reading feels like scrolling through a carefully curated Instagram feed.

What makes this book stand out to me is that Chris really brings the sociology here. In the pages of IRL I spotted Zeynep Tufekci’s Twitter and Tear Gas, Mario Small’s Someone to Talk To, Nathan Jurgenson’s work on digital dualism, Jacqui Frost’s work on navigating uncertainty, Paul McClure on technology and religion, and a nod to some work with yours truly about nonreligious folks. To see Chris citing so many sociologists, among the other essayists and philosophers that inform his work, really gives you a sense of the intellectual grounding here and what it looks like to put our field’s ideas into practice.

Above all, I think the book is worth your time because it is a glowing example of what it means to think relationally about our own lives and the lives of others. That makes Chris’ writing a model for the kind of reflections many of us have in mind when we assign personal essays to our students in Sociology 101—not because it is basic, but because it is willing to deeply consider how we navigate our relationships today and how those relationships shape us, in turn.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityMicrosoft Uses Trademark Law to Disrupt Trickbot Botnet

Microsoft Corp. has executed a coordinated legal sneak attack in a bid to disrupt the malware-as-a-service botnet Trickbot, a global menace that has infected millions of computers and is used to spread ransomware. A court in Virginia granted Microsoft control over many Internet servers Trickbot uses to plunder infected systems, based on novel claims that the crime machine abused the software giant’s trademarks. However, it appears the operation has not completely disabled the botnet.

A spam email containing a Trickbot-infected attachment that was sent earlier this year. Image: Microsoft.

“We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world,” wrote Tom Burt, corporate vice president of customer security and trust at Microsoft, in a blog post this morning about the legal maneuver. “We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.”

Microsoft’s action comes just days after the U.S. military’s Cyber Command carried out its own attack that sent all infected Trickbot systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control them. The roughly 10-day operation by Cyber Command also stuffed millions of bogus records about new victims into the Trickbot database in a bid to confuse the botnet’s operators.

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

From the civil complaint Microsoft filed on October 6 with the U.S. District Court for the Eastern District of Virginia:

“However, they still bear the Microsoft and Windows trademarks. This is obviously meant to and does mislead Microsoft’s customers, and it causes extreme damage to Microsoft’s brands and trademarks.”

“Users subject to the negative effects of these malicious applications incorrectly believe that Microsoft and Windows are the source of their computing device problems. There is great risk that users may attribute this problem to Microsoft and associate these problems with Microsoft’s Windows products, thereby diluting and tarnishing the value of the Microsoft and Windows trademarks and brands.”

Microsoft said it will leverage the seized Trickbot servers to identify and assist Windows users impacted by the Trickbot malware in cleaning the malware off of their systems.

Trickbot has been used to steal passwords from millions of infected computers, and reportedly to hijack access to well more than 250 million email accounts from which new copies of the malware are sent to the victim’s contacts.

Trickbot’s malware-as-a-service feature has made it a reliable vehicle for deploying various strains of ransomware, locking up infected systems on a corporate network unless and until the company agrees to make an extortion payment.

A particularly destructive ransomware strain that is closely associated with Trickbot — known as “Ryuk” or “Conti” — has been responsible for costly attacks on countless organizations over the past year, including healthcare providers, medical research centers and hospitals.

One recent Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption caused some of the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Microsoft said it did not expect its action to permanently disrupt Trickbot, noting that the crooks behind the botnet will likely make efforts to revive their operations. But so far it’s not clear whether Microsoft succeeded in commandeering all of Trickbot’s control servers, or when exactly the coordinated seizure of those servers occurred.

As the company noted in its legal filings, the set of Internet addresses used as Trickbot controllers is dynamic, making attempts to disable the botnet more challenging.

Indeed, according to real-time information posted by Feodo Tracker, a Swiss security site that tracks Internet servers used as controllers for Trickbot and other botnets, nearly two dozen Trickbot control servers — some of which first went active at the beginning of this month — are still live and responding to requests at the time of this publication.

Trickbot control servers that are currently online. Source: Feodotracker.abuse.ch

Cyber intelligence firm Intel 471 says fully taking down Trickbot would require an unprecedented level of collaboration among parties and countries that most likely would not cooperate anyway. That’s partly because Trickbot’s primary command and control mechanism supports communication over The Onion Router (TOR) — a distributed anonymity service that is wholly separate from the regular Internet.

“As a result, it is highly likely a takedown of the Trickbot infrastructure would have little medium- to long-term impact on the operation of Trickbot,” Intel 471 wrote in an analysis of Microsoft’s action.

What’s more, Trickbot has a fallback communications method that uses a decentralized domain name system called EmerDNS, which allows people to create and use domains that cannot be altered, revoked or suspended by any authority. The highly popular cybercrime store Joker’s Stash — which sells millions of stolen credit cards — also uses this setup.

From the Intel 471 report [malicious links and IP address defanged with brackets]:

“In the event all Trickbot infrastructure is taken down, the cybercriminals behind Trickbot will need to rebuild their servers and change their EmerDNS domain to point at their new servers. Compromised systems then should be able to connect to the new Trickbot infrastructure. Trickbot’s EmerDNS fall-back domain safetrust[.]bazar recently resolved to the IP address 195.123.237[.]156. Not coincidentally, this network neighborhood also hosts Bazar malware control servers.”

“Researchers previously attributed the development of the Bazar malware family to the same group behind Trickbot, due to code similarities with the Anchor malware family and its methods of operation, such as shared infrastructure between Anchor and Bazar. On Oct. 12, 2020 the fall-back domain resolved to the IP address 23.92.93[.]233, which was confirmed by Intel 471 Malware Intelligence systems to be a Trickbot controller URL in May 2019. This suggests the fall-back domain is still controlled by the Trickbot operators at the time of this report.”

Intel 471 concluded that the Microsoft action has so far done little to disrupt the botnet’s activity.

“At the time of this report, Intel 471 has not seen any significant impact on Trickbot’s infrastructure and ability to communicate with Trickbot-infected systems,” the company wrote.

The legal filings from Microsoft are available here.

Update, 9:51 a.m. ET: Feodo Tracker now lists just six Trickbot controllers as responding. All six were first seen online in the past 48 hours. Also added perspective from Intel 471.

Kevin RuddABC TV: Murdoch Royal Commission

E&OE TRANSCRIPT
TV INTERVIEW
ABC NEWS BREAKFAST
12 OCTOBER 2020

The post ABC TV: Murdoch Royal Commission appeared first on Kevin Rudd.

Kevin RuddABC RN: Murdoch Royal Commission

E&OE TRANSCRIPT
RADIO INTERVIEW

ABC RN BREAKFAST
12 OCTOBER 2020

Topics: #MurdochRoyalCommission petition

Fran Kelly
Former prime minister Kevin Rudd has escalated his war with the Murdoch media empire, which includes the Australian newspaper and a number of major selling daily tabloids. Kevin Rudd has set up a parliamentary petition, which calls for a royal commission into News Corp’s domination of the Australian media landscape, which he has branded, quote, a cancer on democracy. More than 73,000 signatures have been gathered in the 48 hours since the petition went live on the parliamentary website on Saturday morning. In the meantime, Rupert Murdoch’s son James Murdoch, has told The New York Times that he left his family’s company over concerns it was disguising facts and spreading disinformation. Kevin Rudd joins us from Brisbane. Kevin Rudd, welcome back to Breakfast.

Kevin Rudd
Thanks for having me on the programme, Fran.

Fran Kelly
You want a royal commission into what you generalise as, quote, the threats to media diversity, and that includes Nine’s takeover of the Melbourne Age and the Sydney Morning Herald. But when you drill down, you are going after News Corp, aren’t you? You are out to get Rupert Murdoch. Everyone else would just be collateral damage.

Kevin Rudd
The bottom line here is, as a former prime minister of the country, Fran, I’m passionate about the future of our democracy and a free independent and balanced media is the lifeblood of democracy. And I think most people would agree, I think most of your listeners, in the last decade or so this has just eroded. It’s the further concentration of Murdoch’s control over print media. 70% of the print readership in this country is owned by Murdoch. In my home state of Queensland virtually every single newspaper up and down the coast from Cairns to the Gold Coast is owned by Murdoch. And the ABC’s funding is under assault. So for those reasons, I think it’s high time that we had an independent, dispassionate examination of the impact of media monopoly on Australian democratic life and alternative models aimed at diversifying media ownership into the future.

Fran Kelly
And in the petition, you refer to, quote, how Australia’s print media is overwhelmingly controlled by News Corp, as you just said, with two thirds of daily newspaper readership, and that this power is routinely used to attack opponents in business and politics by blending editorial opinion with news reporting. What’s the example that you want to give us to show how the blending of editorial opinion with news reporting has actually become a cancer and harmed Australia?

Kevin Rudd
Well I could give you dozens of examples of this across the nation’s tabloid newspapers and with the national daily, which is the Australian. If you were simply to go back to, for example, the 2013 election, the front page of I think the Sunday Telegraph was ‘we need Tony’, that was it. ‘We need Tony’ and a huge photograph of Tony Abbott. Now, I would suggest to you, Fran, that by any person’s definition, that’s a blending of an editorial view with news coverage. If on the front page of the same paper, it has me and Anthony Albanese, or The Daily Telegraph, dressed in Nazi uniforms, I would say that’s a blending of editorial opinion with news.

Fran Kelly
It’s also transparent, isn’t it, Kevin Rudd? I mean, it’s transparent what they’re doing. The readers presumably know what they’re doing, and they buy those papers knowing what they’re doing. It’s not as if newspaper circulation is, you know, indicating that they’re necessarily the leader of, you know, opinion in the media anymore. I mean, things are changing. We’ve got news websites, people access news from all over the world.

Kevin Rudd
So why does Murdoch choose to continue to invest in essentially loss-making newspapers, Fran?

Fran Kelly
For influence.

Kevin Rudd
Yeah, the reason is political influence and power. So if you’re in my home state of Queensland, and as I said before, you can’t go anywhere else in terms of print. Your daily paper in southeast Queensland is the Courier-Mail. And then it’s the Gold Coast Bulletin or the Sunshine Coast Daily, the Bundaberg News-Mail, the Mackay Mercury, the Rockhampton Morning Bulletin, the Townsville Bulletin, the Cairns Post and others. And the reason is –

Fran Kelly
But it’s not like people only get their news from reading the Townsville Bulletin.

Kevin Rudd
No, but the reason they do it, and I’ve been in many many regional radio bureaux right around the country where, what flops onto the desk of the regional newsroom? It’s that day’s Murdoch print media because they know what happens is it helps set the agenda and the tone and the framing and the parameters of the national political debate. That’s why they do it. And so therefore, my question simply to a royal commission would be: is this poisoning our democracy? I believe over time, it is. And, for example, today we have this stunning statement by James Murdoch that the reason he left the board of News Corporation was because it was participating in the legitimisation of disinformation. Yet in today’s Australian Murdoch media, not a single reference to what James Murdoch has said; James, having helped run the organisation for decades. So this is monumental news for those of us concerned about whether in fact, we’ve got balanced news reporting or the slanting of editorial opinion.

Fran Kelly
Indeed, the comments from James Murdoch are startling and do dovetail pretty closely with some of the things you say in your petition. Have you had any discussions with him as you’ve been formulating your moves against News Corp?

Kevin Rudd
No. Not at all. I think I’ve met James Murdoch twice in my life. I just observe what he has said as someone who has been at the absolute heart of the operation. You see, people have often asked me: why now? Why are you calling for this now? The trend has got worse over the last decade. In Queensland, the ACCC’s, in my view, wrong decision to allow Murdoch to purchase Australian Provincial Newspapers delivered Queensland, the state which swings so many federal elections, and its print media, almost completely into Murdoch’s hands. So that’s a big change, the assault on Australian Associated Press, an institution in this country since the 1930s, where Murdoch is de-funding and seeking to set up a rival institution. And on top of that, look, 18 out of the last 18 federal and state elections, the Murdoch print media, in its daily newspapers campaigning viciously for one side of politics and viciously against the other in the news coverage. Look, you get to a point where enough is enough. And what I find stunning, Fran, is that in less than 48 hours, despite an inhospitable Parliament House website, 75,000 people have gone online to register already their names on this petition.

Fran Kelly
Sure, a lot of people agree with you, but you didn’t agree with this necessarily. You didn’t have a problem when News Corp backed you in the Kevin07 election, did you?

Kevin Rudd
If you were leader of the Labor Party in 2007 and in elections prior to that, and to some extent subsequent to that, you would do everything you could to reduce the negative coverage from the Murdoch media against the Australian Labor Party. That’s the responsibility of the parliamentary leader. What I’m saying is, prior to 2010, News Corp ran a somewhat more balanced approach to their overall news coverage.

Fran Kelly
Did they or is balance in the eye of the beholder?

Kevin Rudd
No, no. Balance means everyone getting a fair go in the news coverage. That’s what it means. And since then, I think any objective observer would say, in 18 out of 18 federal and state elections on the run, Tasmania, South Australia, Victoria, New South Wales, Queensland, federally, what you have seen is News Corporation running a vicious editorial line reflected in their news coverage for one side of politics and against the other, when they control 70% of the show. All I’m saying to you, Fran, is that enough is enough. And when the same party is campaigning to kill your budget at the ABC, which is the other arm of a balanced media environment in Australia, I think our democracy is under threat. And that’s why I’ve launched this petition.

Fran Kelly
I know but you’re calling for a royal commission, so it’s a serious call. And it would, you would also need to take note of the fact that in that 10 years Labor’s had a leadership merry-go-round, it’s had clear policy failings, a split over climate policy that can’t seem to be resolved. I mean, Labor’s woes are not all about News Corp, are they?

Kevin Rudd
Of course not. And if you’ve seen everything I’ve written on this subject, we own full responsibility for problems of our own making. Since I changed the rules of the Labor Party, we’ve had two leaders in the last seven or eight years. Contrast that with the other side of politics, where we’ve had a revolving door of serving prime ministers. So let’s not be distracted by the question of whether one side of politics commits political errors or not. We have and the Coalition have. What I’m talking about is simply whether or not we have a balanced media in this country, which is capable of giving what I would describe as objective coverage to the news, facts-based news. And if you want to look at the way in which this is trending in Australia, look no further than Fox News in the United States, again a Murdoch operation, which has been the lifeblood of the Trump Administration and Trump campaign for the last four years. That’s where we’re headed unless we choose to stop the rot here in Australia.

Fran Kelly
So then, are you disappointed that Anthony Albanese is not joining you in this call for a royal commission? Or do you accept that a current political leader would be crazy to endorse a royal commission into a major media organisation?

Kevin Rudd
Well, I didn’t speak to Albo before I launched this petition, nor have I spoken to Mr Morrison before launching this petition for a royal commission. This is a time for the Australian people including all your listeners to decide whether it’s worth putting their name to it or not. And it’s not just targeted at News Corporation. But as you rightly say, they’re the dominant players so they come in for the primary attention. Fairfax Media has been taken over by Nine, whose chairman is Peter Costello. Your budget, the ABC, is under challenge and under attack. And then we have the question of the emerging monopolies in terms of Google and Facebook. These are all legitimate questions to be examined. My interest as a former prime minister, is having the lifeblood of the democracy kept alive and flowing with diverse media platforms. We are no longer getting that. Okay.

Fran Kelly
Kevin Rudd, thank you very much for joining us.

Kevin Rudd
Thanks very much, Fran.

 

Image: World Economic Forum

The post ABC RN: Murdoch Royal Commission appeared first on Kevin Rudd.

Charles StrossBooks I Will Not Write #8: The Year of the Conspiracy

Global viral pandemics, insane right-wing dictator-wannabes trying to set fire to the planet, and climate change aside, I'm officially declaring 2020 to be the Year of the Conspiracy Theory.

This was the year when QAnon, a frankly puerile rehashing of antisemitic conspiracy theories going back to the infamous Tsarist secret police fabrication The Protocols of the Elders of Zion, went viral: its true number of followers is unclear but in the tens of thousands, and they've begun showing up in US politics as Republican candidates capable of displacing the merely crazy, such as Tea Partiers, who at least were identifiably a political movement (backed by Koch brothers lobbying money).

Nothing about the toxic farrago of memes stewing in the Qanon midden should come as a surprise to anyone who read the Illuminatus! trilogy back in the 1970s, except possibly the fact that this craziness has leached into mainstream politics. But I think it's worryingly indicative of the way our post-1995, internet-enabled media environment is messing with the collective subconscious: conspiratorial thinking is now mainstream.

Anyway. When life hands you lemons it's time to make lemonade. How could I (if I had more energy and fewer plans) monetize this trend, without sacrificing my dignity, sanity, and sense of integrity along the way?

I'm calling it time for the revival of the big fat 1960s-1980s cold war spy/conspiracy thriller. A doozy of a plot downloaded itself into my head yesterday, and I have neither the time nor the marketing stance to write it, so here it is. (Marketing: I'm positioned as an SF author, not a thriller/men's adventure author, so I'd be selling to a different editorial and marketing department and the book advances for starting out again wouldn't be great.)

So, some background for a Richard Condon style comedy spy/conspiracy thriller:

The USA is an Imperial hegemonic power, and is structured as such internally (even though its foundational myth--plucky colonials rebelling against an empire--is problematically at odds with the reality of what it has become nearly 250 years later). In particular, it has an imperial-scale bureaucracy with an annual budget measured in the trillions of dollars, and baroque excrescences everywhere. Nowhere is this more evident than in the intelligence sector.

The USA has spies and analysts and cryptographers and spooks coming out of its metaphorical ears. For example, the CIA, the best-known US espionage agency, is a sprawling bureaucracy with an estimated 21,000 employees and a budget of $15Bn/year. But it's by no means the largest or most expensive agency: the NRO (the folks who run spy satellites) used to have a bigger budget than NASA. And the mere existence and name of the National Reconnaissance Office were classified secrets until 1992.

It has come to light that about 80% of the people who work in the intelligence sector in the US are not actual government officials or civil servants, but private sector contractors, mostly employed by service corporations who are cleared to handle state secrets. By some estimates there are two million security-cleared civilians working in the United States--more than the number of uniformed service personnel.

Keeping track of this baroque empire of espionage is such a headache that there's an entire government agency devoted to it: the United States Intelligence Community, established in 1981 by an executive order issued by Ronald Reagan. Per wikipedia:

The Washington Post reported in 2010 that there were 1,271 government organizations and 1,931 private companies in 10,000 locations in the United States that were working on counterterrorism, homeland security, and intelligence, and that the intelligence community as a whole includes 854,000 people holding top-secret clearances. According to a 2008 study by the ODNI, private contractors make up 29% of the workforce in the U.S. intelligence community and account for 49% of their personnel budgets.

USIC has 17 declared member agencies and a budget that hit $53.9Bn in 2012, up from $40.9Bn in 2006. Obviously this is a growth industry.

Furthermore, I find it hard to believe--even bearing in mind that this decade's normalization of conspiratorial thinking predisposes one towards such an attitude--that there aren't even more agencies out there which, like the NRO prior to 1992, remain under cover of a top secret classification. I'd expect some such agencies to focus on obvious tasks such as deniable electronic espionage (like the Russian government's APT29/Cozy Bear hacking group), weaponization of memes in pursuit of national strategic objectives (the British Army's 77th Brigade media influencers), forensic analysis of offshore money laundering paper trails--the importance of which should be glaringly obvious to anyone with even a passing interest in world politics over the past two decades--or trying to identify foreign assets by analyzing the cornucopia of social data available from the likes of Facebook and Twitter's social graphs. (For example: it used to be the case that applicants for security clearance jobs with federal agencies were required not to have Facebook, Twitter, or similar social media accounts. They were also required not to have travelled overseas, with very limited exceptions, not to have criminal records, and so on. Bear in mind that Facebook maintains "ghost" accounts for everyone who doesn't already have a Facebook account, populated with data derived from their contacts who do. If you have access to FB's social graph you can in principle filter out all ghost accounts of the correct age and demographic (educational background, etc), cross-reference against twitter and other social media, and with a bit more effort find out if they've ever travelled abroad or had a criminal conviction. The result doesn't confirm that they're a security-cleared government employee but it's highly suggestive.)

But I digress.

The 1271 government organizations and 1931 private companies in 2010 have almost certainly mushroomed since then, during the global war on terror. Per Snowden, the proportion who are private contractors rather than civil servants, has also exploded. And, due to regulatory capture, it has become the norm for outsourcing contracts to be administered by former employees in the industries to which the contracts are awarded. There's a revolving door between civil service management and senior management in the contractor companies, simply because you need to understand the workload in order to allocate it to contractors, and because if you're a contractor knowing how the bidding process works from the inside gives you a huge advantage.

Let us posit a small group of golfing buddies in DC in the 2000s who are deeply disillusioned and cynical about the way things are developing, and conspire to milk the system. (They don't think of themselves as a conspiracy: they're golfing buddies, what could be more natural than helping your mates out?) They've all got a background in intelligence, either working in middle-to-senior management as government agency officers, or in senior management in a corporate contractor. They know the ropes: they know how the game is played.

Two of them take early retirement and invest their pensions in a pair of new startups: one of them--call them "A"--remains in place in, say, whatever department of the Defense Clandestine Service is the successor to the Counterintelligence Field Activity. They raise a slightly sketchy proposal for a domestic operation targeting proxies for Advanced Persistent Threats operating on US soil. For example: imagine QAnon is a fabrication of APT29. We know QAnon followers have carried out domestic terrorism attacks; if Q is actually a foreign intelligence service, is it plausible that they might also be using it to radicalize and recruit agents within other US intelligence services?

One of our retirees, "B", has established a small corporation that just happens to specialize in searching for signs of radicalization at home and by some magical coincidence fits the exact bill of requirements that our insider is looking for in a contractor.

Our other retiree, "C", has established a small corporation that produces Artificial Reality Games. As has been noted elsewhere Qanon bears a striking resemblance to a huge Artificial Reality Game. One of their products is not unlike "Spooks", from my 2007 novel Halting State; it's a game that encourages the players to carry out real world tasks on behalf of a shadowy national counter-espionage agency. In the novel, the players are unaware that they're working for a real national counter-espionage agency. In this scenario, the game is just a game ... but it's designed to make the players look plausibly similar to actual HUMINT assets working in a climate of surveillance capitalism and so reverting to classic tradecraft techniques in order to avoid being located by their dead letter drop's bluetooth pairing ID. But because they're actually gamers, on close examination they prove to not be actual spies. In other words, C generates lots of interesting false leads for B to explore and A to report on, but they never quite pan out.

So far, so plausible. But where's the story?

The clockwork powering the novel is simple: A runs his own pet counter-espionage project within the bigger agency and arranges to outsource the leg work to B's contractors on a cost-plus basis. Meanwhile, C's ARG designers create a perfectly balanced honeypot for B's agents. (The boots on the ground are all ignorant of the true relationship.) B is a major investor in C via a couple of offshore trusts and cut-outs: B also funnels money into A's offshore retirement fund. It's a nice little earner for everybody, bilking the federal government out of a few million bucks a year on an activity nobody expects to succeed but which might bear fruit one day and which meanwhile burnishes the status of the parent organization because it's clearly conducting innovative and proactive counter-espionage activity.

Then the wheels fall off.

C's team are running a handful of ARGs (because to run only one would be kind of suspicious). They are approached by the FBI to set up a honeypot for whichever radical group is the target of the day, be it Boogaloo Boys, Antifa, Al Qaida, Y'all Qaida, or whoever. And it turns out the FBI expect them to do something with the half-ton of ANFO they've conveniently provided (as an arrest pretext).

B's team meanwhile discover a scarily real-looking conspiracy who are planning to start some sort of war over the purity of their precious bodily fluids. A countdown is running and A's expected to actually make progress and arrest a ring of radicals before they blow anything up.

And A gradually comes to the realization that he and his golfing buddies are not the first people to have had this idea: they're not even the biggest. In fact, it begins to come to light that an entire top level division of the Department of Homeland Security (founded 2003! 240,000 employees! budget $51.67Bn!) is running plan B and funneling money to prop up various adversarial sock-puppets, one of which appears to have accidentally stolen half a dozen W76 physics packages (which, just like the good ole' days with the Minuteman III stockpile, have a permissive action link code of "00000000"). The nukes are now in the possession of a bunch of nutters led by a preacher man who insists Jesus is coming, and his return will be heralded by a nuclear attack on the USA. But the folks behind the DHS grift can't do anything about it for obvious reasons (involving orange jumpsuits and long jail terms, or maybe actual risk of execution).

A can't expose this grift without attracting unwelcome attention, and a likely lifetime vacation in Club Fed along with his buddies B and C. But he doesn't want to be anywhere close to DC when the nukes go off. So what's a grifter to do?

C gets to give his ARG designers a new task: to set up a game targeting a very specific set of customers--conspiratorially-minded millenarian believers who are already up to their eyes in one plot and who need to be gently weaned onto a more potent brew of lies, right under the nose of the rival APT agents who have radicalized them ...

Climax: the nukes are defused, the idiot conspiracy believers are arrested, then the FBI turn up and arrest A, B, and C. On their way out the door it becomes apparent that they've been set up: they, themselves, were suckered into setting up their scam via another conspiracy ring to generate arrests for the FBI ...

(Fade to black.)

Worse Than FailureCodeSOD: Integral to Dating

Date representations are one of those long-term problems for programmers, which all ties into the problem that "dates are hard". Nowadays, we have all these fancy date-time types that worry about things like time zones and leap years, and all of that stuff for us. Pretty much everything, at some level, relies on the Unix Epoch. But there was a time before that was the standard.

In the mainframe era, not all the numeric representations worked the same way that we're used to, and it was common to simply define the number of digits you wanted to store. So, if you wanted to store a date, you could define an 8-digit field, and store the date as 20201012: October 12th, 2020.

This is a pretty great date format, for that era. Relatively compact (and yes, the whole Y2K thing means that you might have defined that as a six digit field), inherently sortable, and it's not too bad to slice it back up into date parts, when you need it. And like anything else which was a good idea a long time ago, you still see it lurking around today. Which does become a problem when you inherit code written by people who didn't understand why things worked that way.
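Before getting to that code: as a rough illustration of why the packed format is so easy to slice apart (a hedged sketch of my own, not code from the article; the helper names are invented), the whole trick is multiplication, integer division, and modulo:

using System;

static class DateCodec
{
    // Pack a date into a YYYYMMDD integer (hypothetical helper, not from the article).
    public static int Pack(DateTime d) => d.Year * 10000 + d.Month * 100 + d.Day;

    // Slice the packed value back into its parts with integer division and modulo.
    public static (int Year, int Month, int Day) Slice(int packed) =>
        (packed / 10000, packed / 100 % 100, packed % 100);
}

// DateCodec.Pack(new DateTime(2020, 10, 12)) == 20201012, and because the most
// significant digits are the year, the packed integers sort chronologically.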

Virginia N inherited some C# code which meets those criteria. And the awkward date handling isn't even the WTF. There's a lot to unpack in this particular sample, so let's start with… the unpack function.

public static DateTime UnpackDateC(DateTime dateI, long sDate)
{
    if (sDate.ToString().Length != 8)
        return dateI;
    try
    {
        return new DateTime(Convert.ToInt16(sDate.ToString().Substring(0, 4)),
                            Convert.ToInt16(sDate.ToString().Substring(4, 2)),
                            Convert.ToInt16(sDate.ToString().Substring(6, 2)));
    }
    catch
    {
        return dateI;
    }
}

sDate is our integer date: 20201012. Instead of converting it to a string once, and then validating and slicing it, we call ToString four times. We also reconvert each date part back into an integer so we can pass them to DateTime, and of course, DateTime.ParseExact is just sitting there in the documentation, shaking its head at all of this.
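For comparison, here is a minimal sketch of the same unpacking done with the built-in parser (assuming the "fall back to dateI on bad input" behavior really is what's wanted; the method name is mine):

using System;
using System.Globalization;

public static DateTime UnpackDate(DateTime fallback, long packed)
{
    // "yyyyMMdd" matches the 8-digit packed form; anything that doesn't parse
    // falls back quietly, mirroring the original's behavior.
    return DateTime.TryParseExact(packed.ToString(), "yyyyMMdd",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out var result)
        ? result
        : fallback;
}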

The really weird choice to me, though, is that we pass in dateI, which appears to be the "fallback" date value. That… worries me.

Well, let's take a peek deep in the body of a method called GetMC, because that's where this unpack function is called.

while (oDataReader.Read())
{
    //...
    DataRow oRow = oDataTable.NewRow();
    if (oDataReader["DEB"] != DBNull.Value)
    {
        DateTime dt = DateTime.Today;
        dt = UnpackDateC(dt, Convert.ToInt64(oDataReader["DEB"]));
    }
    else
    {
        oRow["DEB"] = DBNull.Value;
    }
    //...
}

It's hard to know for certain, based on the code provided, but I don't think UnpackDateC is actually doing anything. We can see that the default dateI value is DateTime.Today. So perhaps the desired behavior is that every invalid/unknown date is today? Seems problematic, but maybe that jibes with the requirements.

But note the logic. If the database value is null, we store a null in oRow["DEB"]- our output data. If it isn't null, we unpack the date and store it in… dt. Also, if you trace the type conversions, we convert an integer in the database into an integer in our program (which it already would have been) so that we can convert that integer into a string so that we can split the string and convert each portion into integers so we can convert it into a date.
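If the intent really was to populate the output row, the branch presumably should have looked something more like this (a guess at the intent, not a confirmed fix):

if (oDataReader["DEB"] != DBNull.Value)
{
    // Presumably the unpacked date belongs in the output row, not in a discarded local.
    oRow["DEB"] = UnpackDateC(DateTime.Today, Convert.ToInt64(oDataReader["DEB"]));
}
else
{
    oRow["DEB"] = DBNull.Value;
}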

How do I know that the field is an integer in the database? Well, I don't know for sure, but let's look at the query which drives that loop.

public static void GetMC(string sConnectionString, ref DataTable dtToReturn, string sOrgafi, string valid, string exe, int iOperateur, string sColTri, bool Asc, bool DBLink, string alias) // iOperateur 0, 1, 2
{
    sSql = " select * from (select ENTLCOD as ENTAFI, MARKYEAR as EXE, nvl(to_char(MARKNUM),'')||'-'||nvl(MARKNUMLOT,'') as COD, to_char(MARKNUM) as NUM, MARKOBJ1 as COM, MARKSTARTDATE as DEB, MARKENDDATE as FIN, MARKNUMLOT as NUMLOT, MARKVALIDDATE, TIERNUM as FOUR, MARTBASTIT as TYP " +
           " from SOMEEXTERNALVIEW" + (DBLink ? alias + " " : " ") + " WHERE 1=1";
    if (valid != null && valid.Length > 0)
        sSql += " and (MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)";
    if (exe != null && exe.Length > 0)
        sSql += " and TRIM( MARKYEAR ) ='" + exe.Trim() + "' ";
    sSql += " ) where 1=1";
    //...

We can see that MARKSTARTDATE is the database field we call DEB. We can also see some conditional string concatenation to build our query, so hello possible SQL injection attacks. Now, I don't know that MARKSTARTDATE is an integer, but I can see that a similar field, MARKVALIDDATE is. Note the lack of quotes in the query string: "…(MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)"

So MARKVALIDDATE is numeric in the database, which is great because the variable valid is passed in as a string, so we're just all over the place with types.
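That string concatenation is also exactly where the injection risk comes from. Here is a hedged sketch of the parameterized alternative, written against the provider-agnostic ADO.NET types because the code never shows which driver is in use (the ":valid" placeholder syntax is an Oracle-style assumption; SQL Server would use "@valid"):

using System.Data.Common;

static void AddValidFilter(DbCommand cmd, ref string sql, long valid)
{
    // Bind the value instead of splicing it into the SQL text.
    sql += " and (MARKVALIDDATE >= :valid or MARKVALIDDATE = 0 or MARKVALIDDATE is null)";

    DbParameter p = cmd.CreateParameter();
    p.ParameterName = "valid";   // provider-specific prefix rules apply
    p.Value = valid;             // passed as a number, matching the column's type
    cmd.Parameters.Add(p);
}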

The structure of this query also adds on an extra layer of unnecessary complexity, as for some reason, we wrap the actual query up as a subquery, but the outer query is just SELECT * FROM (subquery) WHERE 1=1, so there is literally no reason to do that.

To finish this off, let's look at where GetMC is actually invoked, a method called CallWSM.

private void CallWSM(ref DataTable oDataTable, string sCode, string sNom, string sFourn, int iOperateur) // iOperateur 0, 1, 2
{
    try
    {
        m_bError = false;
        string sColTri = m_Grid_SortRequest.FieldName;
        SortOperator oDirection = m_Grid_SortRequest.SortDirection;
        m_sAnnee = ctlRecherche.GetValueFilterItem(1);
        string svalid = "";
        string sdt = ctlRecherche.GetValueDateItem(0);
        if (sdt.Length > 0)
        {
            DateTime dtvalid = DateTime.Parse(sdt);
            long ldt = dtvalid.Year * 10000 + dtvalid.Month * 100 + dtvalid.Day;
            svalid = ldt.ToString();
        }
        m_AppLogic.GetMC(sColTri, oDirection, m_sAdresseWS, ref oDataTable, svalid, m_sAnnee, iOperateur, PageCurrent, m_PageSize);
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.Timeout)
        {
            frmMessageBox.Show(ML.GetLibelle(4137), CONST.AppTITLE, MessageBoxButtons.OK, MessageBoxIcon.Error);
            m_bError = true;
        }
        else
        {
            frmMessageBox.Show(ex.Message);
            m_bError = true;
        }
    }
    catch (Exception ex)
    {
        frmMessageBox.Show(ex.Message);
        m_bError = true;
    }
}

Now, I'm reading between the lines a bit, and maybe making some assumptions that I shouldn't be, but this method is called CallWSM, and one of the parameters we pass to GetMC is stored in a variable called m_sAdresseWS, and GetMC can apparently throw a WebException.

Are… are we building a query and then passing it off to a web service to execute? And then wrapping the response in a data reader? Because that would be terrible. But if we're not, does that mean that we're also calling a web service in some of the code Virginia didn't supply? Query the DB and call the web service in the same method? Or are we catching an exception that just could never happen, and all the WS stuff has nothing to do with web services?

Any one of those options would be a WTF.

Virginia adds, "I had the job to make a small change in the call. ...I'm used to a good amount of Daily WTF-erry in our code." After reading through the code though, Virginia had some second thoughts about changing the code. "At this point I decided not to change anything, because it hurts my head."

You and me both.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cory DoctorowDanish Little Brother, now a free, CC-licensed download

Science Fiction Cirklen is a member-funded co-op of Danish science fiction fans; they raise money to produce print translations of sf novels that Danes would otherwise have to read in English. They work together to translate the work, commission art, and pay to have the book printed and distributed to bookstores in order to get it into Danish hands.

The SFC folks just released their Danish edition of Little Brother — translated by Lea Thume — as a Creative Commons licensed epub file, including the cover art they produced for their edition.


I’m so delighted by this! My sincere thanks to the SFC people for bringing my work to their country, and I hope someday we can toast each other in Copenhagen.

Krebs on SecurityReport: U.S. Cyber Command Behind Trickbot Tricks

A week ago, KrebsOnSecurity broke the news that someone was attempting to disrupt the Trickbot botnet, a malware crime machine that has infected millions of computers and is often used to spread ransomware. A new report Friday says the coordinated attack was part of an operation carried out by the U.S. military’s Cyber Command.

Image: Shutterstock.

On October 2, KrebsOnSecurity reported that twice in the preceding ten days, an unknown entity that had inside access to the Trickbot botnet sent all infected systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control compromised Microsoft Windows computers.

On top of that, someone had stuffed millions of bogus records about new victims into the Trickbot database — apparently to confuse or stymie the botnet’s operators.

In a story published Oct. 9, The Washington Post reported that four U.S. officials who spoke on condition of anonymity said the Trickbot disruption was the work of U.S. Cyber Command, a branch of the Department of Defense headed by the director of the National Security Agency (NSA).

The Post report suggested the action was a bid to prevent Trickbot from being used to somehow interfere with the upcoming presidential election, noting that Cyber Command was instrumental in disrupting the Internet access of Russian online troll farms during the 2018 midterm elections.

The Post said U.S. officials recognized their operation would not permanently dismantle Trickbot, describing it rather as “one way to distract them for at least a while as they seek to restore their operations.”

Alex Holden, chief information security officer and president of Milwaukee-based Hold Security, has been monitoring Trickbot activity before and after the 10-day operation. Holden said while the attack on Trickbot appears to have cut its operators off from a large number of victim computers, the bad guys still have passwords, financial data and reams of other sensitive information stolen from more than 2.7 million systems around the world.

Holden said the Trickbot operators have begun rebuilding their botnet, and continue to engage in deploying ransomware at new targets.

“They are running normally and their ransomware operations are pretty much back in full swing,” Holden said. “They are not slowing down because they still have a great deal of stolen data.”

Holden added that since news of the disruption first broke a week ago, the Russian-speaking cybercriminals behind Trickbot have been discussing how to recoup their losses, and have been toying with the idea of massively increasing the amount of money demanded from future ransomware victims.

“There is a conversation happening in the back channels,” Holden said. “Normally, they will ask for [a ransom amount] that is something like 10 percent of the victim company’s annual revenues. Now, some of the guys involved are talking about increasing that to 100 percent or 150 percent.”

,

Cryptogram Hacking Apple for Profit

Five researchers hacked Apple Computer’s networks — not their products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Worse Than FailureError'd: Math is HARD!

"Boy oh boy! You can't beat that free shipping for the low, low price of $4!" Zenith wrote.

 

Brendan wrote, "So, as of September 24th, my rent is both changing and not changing, in the future, but also in the past?"

 

"Does waiting for null people take forever or no time at all?" Pascal writes.

 

"Whoever at McDonalds is responsible for calculating their cheeseburger deals needs to lay off of the special sauce," Valts S. wrote.

 

"So, assuming I were to buy 0.9 of a figure, what's the missing 10%?" writes Zenith.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram Friday Squid Blogging: Squid-like Nebula

Pretty astronomical photo.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Saving the Humboldt Squid

Genetic research finds the Humboldt squid is vulnerable to overfishing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Chinese Squid Fishing Near the Galapagos

The Chinese have been illegally squid fishing near the Galapagos Islands.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecurityAmid an Embarrassment of Riches, Ransom Gangs Increasingly Outsource Their Work

There’s an old adage in information security: “Every company gets penetration tested, whether or not they pay someone for the pleasure.” Many organizations that do hire professionals to test their network security posture unfortunately tend to focus on fixing vulnerabilities hackers could use to break in. But judging from the proliferation of help-wanted ads for offensive pentesters in the cybercrime underground, today’s attackers have exactly zero trouble gaining that initial intrusion: The real challenge seems to be hiring enough people to help everyone profit from the access already gained.

One of the most common ways such access is monetized these days is through ransomware, which holds a victim’s data and/or computers hostage unless and until an extortion payment is made. But in most cases, there is a yawning gap of days, weeks or months between the initial intrusion and the deployment of ransomware within a victim organization.

That’s because it usually takes time and a good deal of effort for intruders to get from a single infected PC to seizing control over enough resources within the victim organization where it makes sense to launch the ransomware.

This includes pivoting from or converting a single compromised Microsoft Windows user account to an administrator account with greater privileges on the target network; the ability to sidestep and/or disable any security software; and gaining the access needed to disrupt or corrupt any data backup systems the victim firm may have.

Each day, millions of malware-laced emails are blasted out containing booby-trapped attachments. If the attachment is opened, the malicious document proceeds to quietly download additional malware and hacking tools to the victim machine (here’s one video example of a malicious Microsoft Office attachment from the malware sandbox service any.run). From there, the infected system will report home to a malware control server operated by the spammers who sent the missive.

At that point, control over the victim machine may be transferred or sold multiple times between different cybercriminals who specialize in exploiting such access. These folks are very often contractors who work with established ransomware groups, and who are paid a set percentage of any eventual ransom payments made by a victim company.

THE DOCTOR IS IN

Enter subcontractors like “Dr. Samuil,” a cybercriminal who has maintained a presence on more than a dozen top Russian-language cybercrime forums over the past 15 years. In a series of recent advertisements, Dr. Samuil says he’s eagerly hiring experienced people who are familiar with tools used by legitimate pentesters for exploiting access once inside of a target company — specifically, post-exploit frameworks like the closely-guarded Cobalt Strike.

“You will be regularly provided select accesses which were audited (these are about 10-15 accesses out of 100) and are worth a try,” Dr. Samuil wrote in one such help-wanted ad. “This helps everyone involved to save time. We also have private software that bypasses protection and provides for smooth performance.”

From other classified ads he posted in August and September 2020, it seems clear Dr. Samuil’s team has some kind of privileged access to financial data on targeted companies that gives them a better idea of how much cash the victim firm may have on hand to pay a ransom demand. To wit:

“There is huge insider information on the companies which we target, including information if there are tape drives and clouds (for example, Datto that is built to last, etc.), which significantly affects the scale of the conversion rate.

Requirements:
– experience with cloud storage, ESXi.
– experience with Active Directory.
– privilege escalation on accounts with limited rights.

* Serious level of insider information on the companies with which we work. There are proofs of large payments, but only for verified LEADs.
* There is also a private MEGA INSIDE , which I will not write about here in public, and it is only for experienced LEADs with their teams.
* We do not look at REVENUE / NET INCOME / Accountant reports, this is our MEGA INSIDE, in which we know exactly how much to confidently squeeze to the maximum in total.

According to cybersecurity firm Intel 471, Dr. Samuil’s ad is hardly unique, and there are several other seasoned cybercriminals who are customers of popular ransomware-as-a-service offerings that are hiring sub-contractors to farm out some of the grunt work.

“Within the cybercriminal underground, compromised accesses to organizations are readily bought, sold and traded,” Intel 471 CEO Mark Arena said. “A number of security professionals have previously sought to downplay the business impact cybercriminals can have to their organizations.”

“But because of the rapidly growing market for compromised accesses and the fact that these could be sold to anyone, organizations need to focus more on efforts to understand, detect and quickly respond to network compromises,” Arena continued. “That covers faster patching of the vulnerabilities that matter, ongoing detection and monitoring for criminal malware, and understanding the malware you are seeing in your environment, how it got there, and what it has or could have dropped subsequently.”

WHO IS DR. SAMUIL?

In conducting research for this story, KrebsOnSecurity learned that Dr. Samuil is the handle used by the proprietor of multi-vpn[.]biz, a long-running virtual private networking (VPN) service marketed to cybercriminals who are looking to anonymize and encrypt their online traffic by bouncing it through multiple servers around the globe.

Have a Coke and a Molotov cocktail. Image: twitter.com/multivpn

MultiVPN is the product of a company called Ruskod Networks Solutions (a.k.a. ruskod[.]net), which variously claims to be based in the offshore company havens of Belize and the Seychelles, but which appears to be run by a guy living in Russia.

The domain registration records for ruskod[.]net were long ago hidden by WHOIS privacy services. But according to Domaintools.com [an advertiser on this site], the original WHOIS records for the site from the mid-2000s indicate the domain was registered by a Sergey Rakityansky.

This is not an uncommon name in Russia or in many surrounding Eastern European nations. But a former business partner of MultiVPN who had a rather public falling out with Dr. Samuil in the cybercrime underground told KrebsOnSecurity that Rakityansky is indeed Dr. Samuil’s real surname, and that he is a 32- or 33-year-old currently living in Bryansk, a city located approximately 200 miles southwest of Moscow.

Neither Dr. Samuil nor MultiVPN have responded to requests for comment.

Worse Than FailureTranslation by Column

Content management systems are… an interesting domain in software development. On one hand, they’re one of the most basic types of CRUD applications, at least at their core. On the other, it’s the sort of problem domain where you can get 90% of what you need really easily, but the remaining 10% are the cases that are going to make you pull your hair out.

Which is why pretty much every large CMS project supports some kind of plugin architecture, and usually some sort of marketplace to curate those plugins. That kind of curation is hard: writing plugins tends to be more about “getting the feature I need” and less about “releasing a reliable product”, and thus the available plugins for most CMSes tend to be of wildly varying quality.

Paul recently installed a plugin for Joomla, and started receiving this error:

Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs

Paul, like a good developer, checked with the plugin documentation to see if he could fix that error. The documentation had this to say:

you may have too many languages installed

This left Paul scratching his head. Sure, his CMS had eight different language packs installed, but how, exactly, was that “too many”? It wasn’t until he checked into the database schema that he understood what was going on:

A screencap of the database schema, which shows that EVERY text field has a column created for EVERY language, e.g. 'alias_ru', 'alias_fr', 'alias_en'

This plugin’s approach to handling multiple languages was simply to take every text field it needed to track and add a column to the table for every language. The result was a table with 231 columns, most of them language-specific duplicates.

Now, there are a few possible reasons why it is this way. It may simply be that whoever wrote the plugin didn’t know or care about setting up a meaningful lookup table. Maybe they were thinking about performance, and thought that “well, if I denormalize I can get more data with a single query”. Maybe they weren’t thinking at all. Or maybe there’s some quirk about creating your own tables as a Joomla plugin that made the developer not want to create more tables than absolutely necessary.

Regardless, it’s definitely one of the worst schemas you could come up with to handle localization.
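For contrast, here is a sketch of the more conventional normalized shape, one row per content item per language instead of one column per language (the class and property names are invented for illustration, not taken from Joomla or the plugin):

using System.Collections.Generic;

// Hypothetical entities sketching a translation lookup table.
public class ContentItem
{
    public int Id { get; set; }
    public List<ContentTranslation> Translations { get; set; } = new List<ContentTranslation>();
}

public class ContentTranslation
{
    public int ContentItemId { get; set; }
    public string LanguageCode { get; set; }  // "en", "fr", "ru", ...
    public string Alias { get; set; }
    public string Title { get; set; }
}

// Adding a ninth language is just more rows, not another batch of columns.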

Paul adds: “I vote the developer for Best Database Designer of 2020.”

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityPromising Infusions of Cash, Fake Investor John Bernard Walked Away With $30M

September featured two stories on a phony tech investor named John Bernard, a pseudonym used by a convicted thief named John Clifton Davies who’s fleeced dozens of technology companies out of an estimated $30 million with the promise of lucrative investments. Those stories prompted a flood of tips from Davies’ victims that paints a much clearer picture of this serial con man and his cohorts, including allegations of hacking, smuggling, bank fraud and murder.

KrebsOnSecurity interviewed more than a dozen of Davies’ victims over the past five years, none of whom wished to be quoted here out of fear of reprisals from a man they say runs with mercenaries and has connections to organized crime.

As described in Part II of this series, John Bernard is in fact John Clifton Davies, a 59-year-old U.K. citizen who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his third wife on their honeymoon in India.

The scam artist John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

After eluding justice in the U.K., Davies reinvented himself as The Private Office of John Bernard, pretending to be a billionaire Swiss investor who made his fortunes in the dot-com boom 20 years ago and who was seeking investment opportunities.

In case after case, Bernard would promise to invest millions in tech startups, and then insist that companies pay tens of thousands of dollars worth of due diligence fees up front. However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money.

Bernard found a constant stream of new marks by offering extraordinarily generous finders fees to investment brokers who could introduce him to companies seeking an infusion of cash. When it came time for companies to sign legal documents, Bernard’s victims interacted with a 40-something Inside Knowledge employee named “Katherine Miller,” who claimed to be his lawyer.

It turns out that Katherine Miller is a onetime Moldovan attorney who was previously known as Ecaterina “Katya” Dudorenko. She is listed as a Romanian lawyer in the U.K. Companies House records for several companies tied to John Bernard, including Inside Knowledge Solutions Ltd., Docklands Enterprise Ltd., and Secure Swiss Data Ltd (more on Secure Swiss data in a moment).

Another of Bernard’s associates listed as a director at Docklands Enterprise Ltd. is Sergey Valentinov Pankov. This is notable because in 2018, Pankov and Dudorenko were convicted of cigarette smuggling in the United Kingdom.

Sergey Pankov and Ecaterina Dudorenco, in undated photos. Source: Mynewsdesk.com

According to the Organized Crime and Corruption Reporting Project, “illicit trafficking of tobacco is a multibillion-dollar business today, fueling organized crime and corruption [and] robbing governments of needed tax money. So profitable is the trade that tobacco is the world’s most widely smuggled legal substance. This booming business now stretches from counterfeiters in China and renegade factories in Russia to Indian reservations in New York and warlords in Pakistan and North Africa.”

Like their erstwhile boss Mr. Davies, both Pankov and Dudorenko disappeared before their convictions in the U.K. They were sentenced in absentia to two and a half years in prison.

Incidentally, Davies was detained by Ukrainian authorities in 2018, although he is not mentioned by name in this story from the Ukrainian daily Pravda. The story notes that the suspect moved to Kiev in 2014 and lived in a rented apartment with his Ukrainian wife.

John’s fourth wife, Iryna Davies, is listed as a director of one of the insolvency consulting businesses in the U.K. that was part of John Davies’ 2015 fraud conviction. Pravda reported that in order to confuse the Ukrainian police and hide from them, Mr. Davies constantly changed their place of residence.

John Clifton Davies, a.k.a. John Bernard. Image: Ukrainian National Police.

The Pravda story says Ukrainian authorities were working with the U.K. government to secure Davies’ extradition, but he appears to have slipped away once again. That’s according to one investment broker who’s been tracking Davies’ trail of fraud since 2015.

According to that source — who we’ll call “Ben” — Inside Knowledge and The Private Office of John Bernard have fleeced dozens of companies out of nearly USD $30 million in due diligence fees over the years, with one company reportedly paying over $1 million.

Ben said he figured out that Bernard was Davies through a random occurrence. Ben said he’d been told by a reliable source that Bernard traveled everywhere in Kiev with several armed guards, and that his entourage rode in a convoy that escorted Davies’ high-end Bentley. Ben said Davies’ crew was even able to stop traffic in the downtown area in what was described as a quasi military maneuver so that Davies’ vehicle could proceed unobstructed (and presumably without someone following his car).

Ben said he’s spoken to several victims of Bernard who, shortly after cutting off contact with Bernard and his team, saw phony invoices for payments to banks in Eastern Europe that appeared to come from people within their own organizations.

While Ben allowed that these invoices could have come from another source, it’s worth noting that by virtue of participating in the due diligence process, the companies targeted by these schemes would have already given Bernard’s office detailed information about their finances, bank accounts and security processes.

In some cases, the victims had agreed to use Bernard’s Secure Swiss Data software and services to store documents for the due diligence process. Secure Swiss Data is one of several firms founded by Davies/Inside Knowledge and run by Dudorenko, and it advertised itself as a Swiss company that provides encrypted email and data storage services. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG.

Shortly after the first story on John Bernard was published here, virtually all of the employee profiles tied to Bernard’s office removed him from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether. Also, John Bernard’s main website — the-private-office.ch — replaced the content on its homepage with a note saying it was closing up shop.

Incredibly, even after the first two stories ran, Bernard/Davies and his crew continued to ply their scam with companies that had already agreed to make due diligence payments, or that had made one or all of several installment payments.

One of those firms actually issued a press release in August saying it had been promised an infusion of millions in cash from John Bernard’s Private Office. They declined to be quoted here, and continue to hold onto hope that Mr. Bernard is not the crook that he plainly is.

Worse Than FailureCodeSOD: A Long Time to Master Lambdas

At an old job, I did a significant amount of VB.Net work. I didn’t hate VB.Net. Sure, the syntax was clunky, but autocomplete mostly solved that, and it was more or less feature-matched to C# (and, as someone who needed to handle XML, the fact that VB.Net had XML literals was handy).

Every major feature in C# had a VB.Net equivalent, including lambdas. And hey, lambdas are great! What a wonderful way to express a filter condition.

Well, Eric O sends us this filter lambda. Originally sent to us as a single line, I’m adding line breaks for readability, because I care about this more than the original developer did.

Function(row, index) index <> 0 AND
(row(0).ToString().Equals("DIV10106") OR
row(0).ToString().Equals("326570") OR
row(0).ToString().Equals("301100") OR
row(0).ToString().Equals("305622") OR
row(0).ToString().Equals("305623") OR
row(0).ToString().Equals("317017") OR
row(0).ToString().Equals("323487") OR
row(0).ToString().Equals("323488") OR
row(0).ToString().Equals("324044") OR
row(0).ToString().Equals("317016") OR
row(0).ToString().Equals("316875") OR
row(0).ToString().Equals("323976") OR
row(0).ToString().Equals("324813") OR
row(0).ToString().Equals("147000") OR
row(0).ToString().Equals("326984") OR
row(0).ToString().Equals("326634") OR
row(0).ToString().Equals("306039") OR
row(0).ToString().Equals("307021") OR
row(0).ToString().Equals("307050") OR
row(0).ToString().Equals("307603") OR
row(0).ToString().Equals("307604") OR
row(0).ToString().Equals("307632") OR
row(0).ToString().Equals("307704") OR
row(0).ToString().Equals("308184") OR
row(0).ToString().Equals("308531") OR
row(0).ToString().Equals("309930") OR
row(0).ToString().Equals("104253") OR
row(0).ToString().Equals("104532") OR
row(0).ToString().Equals("104794") OR
row(0).ToString().Equals("104943") OR
row(0).ToString().Equals("105123") OR
row(0).ToString().Equals("105755") OR
row(0).ToString().Equals("106075") OR
row(0).ToString().Equals("108062") OR
row(0).ToString().Equals("108417") OR
row(0).ToString().Equals("108616") OR
row(0).ToString().Equals("108625") OR
row(0).ToString().Equals("108689") OR
row(0).ToString().Equals("108851") OR
row(0).ToString().Equals("108997") OR
row(0).ToString().Equals("109358") OR
row(0).ToString().Equals("109551") OR
row(0).ToString().Equals("110081") OR
row(0).ToString().Equals("111501") OR
row(0).ToString().Equals("111987") OR
row(0).ToString().Equals("112136") OR
row(0).ToString().Equals("11229") OR
row(0).ToString().Equals("112261") OR
row(0).ToString().Equals("113127") OR
row(0).ToString().Equals("113266") OR
row(0).ToString().Equals("114981") OR
row(0).ToString().Equals("116527") OR
row(0).ToString().Equals("121139") OR
row(0).ToString().Equals("121469") OR
row(0).ToString().Equals("142449") OR
row(0).ToString().Equals("144034") OR
row(0).ToString().Equals("144693") OR
row(0).ToString().Equals("144900") OR
row(0).ToString().Equals("150089") OR
row(0).ToString().Equals("194340") OR
row(0).ToString().Equals("214950") OR
row(0).ToString().Equals("215321") OR
row(0).ToString().Equals("215908") OR
row(0).ToString().Equals("216531") OR
row(0).ToString().Equals("217151") OR
row(0).ToString().Equals("220710") OR
row(0).ToString().Equals("221265") OR
row(0).ToString().Equals("221387") OR
row(0).ToString().Equals("300011") OR
row(0).ToString().Equals("300013") OR
row(0).ToString().Equals("300020") OR
row(0).ToString().Equals("300022") OR
row(0).ToString().Equals("300024") OR
row(0).ToString().Equals("300026") OR
row(0).ToString().Equals("300027") OR
row(0).ToString().Equals("300050") OR
row(0).ToString().Equals("300059") OR
row(0).ToString().Equals("300060") OR
row(0).ToString().Equals("300059") OR
row(0).ToString().Equals("300125") OR
row(0).ToString().Equals("300139") OR
row(0).ToString().Equals("300275") OR
row(0).ToString().Equals("300330") OR
row(0).ToString().Equals("300342") OR
row(0).ToString().Equals("300349") OR
row(0).ToString().Equals("300355") OR
row(0).ToString().Equals("300363") OR
row(0).ToString().Equals("300413") OR
row(0).ToString().Equals("301359") OR
row(0).ToString().Equals("302131") OR
row(0).ToString().Equals("302595") OR
row(0).ToString().Equals("302621") OR
row(0).ToString().Equals("302649") OR
row(0).ToString().Equals("302909") OR
row(0).ToString().Equals("302955") OR
row(0).ToString().Equals("302986") OR
row(0).ToString().Equals("303096") OR
row(0).ToString().Equals("303249") OR
row(0).ToString().Equals("303753") OR
row(0).ToString().Equals("304010") OR
row(0).ToString().Equals("304016") OR
row(0).ToString().Equals("304047") OR
row(0).ToString().Equals("304566") OR
row(0).ToString().Equals("305347") OR
row(0).ToString().Equals("305486") OR
row(0).ToString().Equals("305487") OR
row(0).ToString().Equals("305489") OR
row(0).ToString().Equals("305526") OR
row(0).ToString().Equals("305568") OR
row(0).ToString().Equals("305769") OR
row(0).ToString().Equals("305773") OR
row(0).ToString().Equals("305824") OR
row(0).ToString().Equals("305998") OR
row(0).ToString().Equals("306039") OR
row(0).ToString().Equals("307021") OR
row(0).ToString().Equals("307050") OR
row(0).ToString().Equals("307603") OR
row(0).ToString().Equals("307604") OR
row(0).ToString().Equals("307632") OR
row(0).ToString().Equals("307704") OR
row(0).ToString().Equals("308184") OR
row(0).ToString().Equals("308531") OR
row(0).ToString().Equals("309930") OR
row(0).ToString().Equals("322228") OR
row(0).ToString().Equals("121081") OR
row(0).ToString().Equals("321879") OR
row(0).ToString().Equals("327391") OR
row(0).ToString().Equals("328933") OR
row(0).ToString().Equals("325038")) AND DateTime.ParseExact(row(2).ToString(), "dd.MM.yyyy", System.Globalization.CultureInfo.InvariantCulture).CompareTo(DateTime.Today.AddDays(-14)) <= 0

That is 5,090 characters of lambda right there, clearly copy/pasted with modifications on each line. At no point did the original developer pause to wonder whether this was really the way to achieve their goal?

If you’re wondering about those numeric values, I’ll let Eric explain:

The magic numbers are all customer object references, except the first one, which I have no idea what is.
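
For what it’s worth, the entire wall of ORs collapses into a set lookup. Here is a minimal sketch of the same filter in C# (the article notes VB.Net is more or less feature-matched, and its HashSet works the same way); the method name, the object[] row type, and the truncated ID list are my assumptions, not the original code:

using System;
using System.Collections.Generic;
using System.Globalization;

static class RowFilter
{
    // The customer references are data, so keep them in a data structure with O(1) lookups.
    private static readonly HashSet<string> CustomerRefs = new HashSet<string>
    {
        "DIV10106", "326570", "301100", "305622" // ... and the rest of the list
    };

    // Mirrors the original lambda: skip the header row, match the reference,
    // and keep rows whose date is at least 14 days in the past.
    public static bool Matches(object[] row, int index)
    {
        if (index == 0 || !CustomerRefs.Contains(row[0].ToString()))
            return false;

        var rowDate = DateTime.ParseExact(row[2].ToString(), "dd.MM.yyyy",
                                          CultureInfo.InvariantCulture);
        return rowDate <= DateTime.Today.AddDays(-14);
    }
}

Better still, that ID list could live in configuration or a database table, which is presumably where "customer object references" belong in the first place.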

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Kevin RuddThe Guardian: Under the cover of Covid, Morrison wants to scrap my government’s protections against predatory lending

Published in The Guardian on 5 October 2020.

Pardon me for being just a little suspicious, but when I see an avalanche of enthusiasm from such reputable institutions as the Morrison government, the Murdoch media and the Australian Banking Association (anyone remember the Hayne royal commission?) about the proposed “reform” of the National Consumer Credit Protection Act, I smell a very large rodent. “Reform” here is effectively code language for repeal. And it means the repeal of major legislation introduced by my government to bring about uniform national laws to protect Australian consumers from unregulated and often predatory lending practices.

The banks of course have been ecstatic at Morrison’s announcement, chiming in with the government’s political claim that allowing the nation’s lenders once again to just let it rip was now essential for national economic recovery. Westpac, whose reputation was shredded during the royal commission, was out of the blocks first in welcoming the changes: CEO Peter King said they would “reduce red tape”, “speed up the process for customers to obtain approval”, and help small businesses access credit to invest and grow.

And right on cue, Westpac shares were catapulted 7.2% to $17.54 just before midday on the day of announcement. National Australia Bank was up more than 6% at $18.26, ANZ up more than 5% at $17.76, and Commonwealth Bank was trading almost 3.5% higher at $66.49. The popping of champagne corks could be heard right around the country as the banks, once again, saw the balance of market power swing in their direction and away from consumers. And that means more profit and less consumer protection.

A little bit of history first. Back in the middle of the global financial crisis, when the banks came on bended knee to our government seeking sovereign guarantees to preserve their financial lifelines to international lines of credit, we acted decisively to protect them, and their depositors, and to underpin the stability of the Australian financial system. And despite a hail of abuse from both the Liberal party and the Murdoch media at the time, we succeeded. Not only did we keep Australia out of the global recession then wreaking havoc across all developed economies, we also prevented any single financial institution from falling over and protected every single Australian’s savings deposits. Not bad, given the circumstances.

But we were also crystal-clear with the banks and other lenders at the time that we would be acting to protect working families from future predatory lending practices. And we did so. The national consumer credit protection bill 2009 (credit bill) and the national consumer credit protection (fees) bill 2009 (fees bill) collectively made up the consumer credit protection reform package. It included:

  • A uniform and comprehensive national licensing regime for the first time across the country for those engaging in credit activities via a new Australian credit licence administered by the Australian Securities and Investments Commission as the sole regulator;
  • industry-wide responsible lending conduct requirements for licensees;
  • improved sanctions and enhanced enforcement powers for the regulator; and
  • enhanced consumer protection through dispute resolution mechanisms, court arrangements and remedies.

This reform was not dreamed up overnight. It gave effect to the Council of Australian Governments’ agreements of 26 March and 3 July 2008 to transfer responsibility for regulation of consumer credit, and a related cluster of additional financial services, to the commonwealth. It also implemented the first phase of a two-phase implementation plan to transfer credit regulation to the commonwealth endorsed by Coag on 2 October 2008. It was the product of much detailed work over more than 12 months.

Scott Morrison’s formal argument to turn all this on its head is that “as Australia continues to recover from the Covid-19 pandemic, it is more important than ever that there are no unnecessary barriers to the flow of credit to households and small businesses”.

But hang on. Where is Morrison’s evidence that there is any problem with the flow of credit at the moment? And if there were a problem, where is Morrison’s evidence that the proposed emasculation of our consumer credit protection law is the only means to improve credit flow? Neither he nor the treasurer, Josh Frydenberg, has provided us with so much as a skerrick. Indeed, the Commonwealth Bank said recently that the flow of new lending is now a little above pre-Covid levels and that, in annual terms, lending is growing at a strong pace.

More importantly, we should turn to volume VI of royal commissioner Kenneth Hayne’s final report into the Australian banking industry. Hayne, citing the commonwealth Treasury as his authority, explicitly concluded that the National Consumer Credit Protection Act has not in fact hindered the flow of credit but instead had provided system stability. As Hayne states: “I think it important to refer to a number of aspects of Treasury’s submissions in response to the commission’s interim report. Treasury indicated that ‘there is little evidence to suggest that the recent tightening in credit standards, including through Apra’s prudential measures or the actions taken by Asic in respect of [responsible lending obligations], has materially affected the overall availability of credit’.”

So once again, we find the emperor has no clothes. The truth is this attack on yet another of my government’s reforms has nothing to do with the macro-economy. It has everything to do with a Morrison government bereft of intellectual talent and policy ideas in dealing with the real challenge of national economic recovery. Just as it has everything to do with Frydenberg’s spineless yielding to the narrow interests of the banking lobby, using the Covid-19 crisis as political cover in order to lift the profit levels of the banks while throwing borrowers’ interests to the wind.

This latest flailing in the wind by Morrison et al is part of a broader pattern of failed policy responses by the government to the economic impact of the crisis. Morrison had to be dragged kicking and screaming into accepting the reality of stimulus, having rejected RBA advice last year to do precisely the same – and that was before Covid hit. And despite a decade of baseless statements about my government’s “unsustainable” levels of public debt and budget deficit, Morrison is now on track to deliver five times the level of debt and six times the level of our budget deficit. But it doesn’t stop there. Having destroyed a giant swathe of the Australian manufacturing industry by destroying our motor vehicle industry out of pure ideology, Morrison now has the temerity to make yet another announcement about the urgent need now for a new national industry policy for Australia. Hypocrisy thy name is Liberal.

Notwithstanding these stellar examples of policy negligence, incompetence and hypocrisy, there is a further pattern to Morrison’s response to the Covid crisis: to use it as political cover to justify the introduction of a number of regressive measures that will hurt working families. They’ve used Covid cover to begin dismantling successive Labor government reforms for our national superannuation scheme, the net effect of which will be to destroy the retirement income of millions of wage and salary earners. They are also on the cusp of introducing tax changes, the bulk of which will be delivered to those who do not need them, while further eroding the national revenue base at a time when all fiscal discipline appears to have been thrown out the window altogether. And that leaves to one side what they are also threatening to do – again under the cover of Covid – to undermine wages and conditions for working families under proposed changes to our industrial relations laws.

The bottom line is that Morrison’s “reform” of the National Consumer Credit Law forms part of a wider pattern of behaviour: this is a government that is slow to act, flailing around in desperate search for a substantive economic agenda to lead the nation out of recession, while using Covid to further load the dice against the interests of working families for the future.

The post The Guardian: Under the cover of Covid, Morrison wants to scrap my government’s protections against predatory lending appeared first on Kevin Rudd.

Worse Than FailureNews Roundup: Excellent Data Gathering

In a global health crisis, like say, a pandemic, accurate and complete data about its spread is a "must have". Which is why, in the UK, there's a great deal of head-scratching over how the government "lost" thousands of cases.

Oops.

Normally, we don't dig too deeply into current events on this site, but the circumstances here are too "WTF" to allow them to pass without comment.

From the BBC, we know that this system was used to notify individuals if they have tested positive for COVID-19, and notify their close contacts that they have been exposed. That last bit is important. Disease spread can quickly turn exponential, and even though COVID-19 has a low probability of causing fatalities, the law of large numbers means that a lot of people will die anyway on that exponential curve. If you can track exposure, get exposed individuals tested and isolated before they spread the disease, you can significantly cut down its spread.

People are rightfully pretty upset about this mistake. Fortunately, the BBC has a followup article discussing the investigation, where an analyst explores what actually happened, and as it turns out, we're looking at an abuse of everyone's favorite data analytics tool: Microsoft Excel.

The companies administering the tests compile their data into plain text files, which appear to be CSVs. No real surprise there. Each test created multiple rows within the CSV file. Then, the people working for Public Health England imported that data into Excel… as .xls files.

.xls is the old Excel format, dating back into far-off years, and retained for backwards compatibility. While modern .xlsx files can support a little over a million rows, the much older format caps out at 65,536.

So: these clerks imported the CSV file, hit "save as…" and made a .xls, and ended up truncating the results. Since these input datasets had multiple rows per test, "in practice it meant that each template was limited to about 1,400 cases."

Again, "oops".
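
Not that catching this required anything sophisticated: a trivial row-count guard in front of the conversion step would have flagged the truncation before any cases went missing. Here is a minimal, purely hypothetical sketch in C# (PHE's actual pipeline is unknown to me), just to make the arithmetic concrete:

using System;
using System.IO;
using System.Linq;

class XlsRowGuard
{
    // Legacy .xls worksheets cap out at 65,536 rows; modern .xlsx allows 1,048,576.
    private const int XlsRowLimit = 65536;

    static void Main(string[] args)
    {
        string csvPath = args[0];

        // Count every line in the incoming CSV (header included) before anyone hits "save as… .xls".
        int rowCount = File.ReadLines(csvPath).Count();

        if (rowCount > XlsRowLimit)
            Console.Error.WriteLine($"WARNING: {csvPath} has {rowCount} rows; saving as .xls will silently drop {rowCount - XlsRowLimit} of them.");
        else
            Console.WriteLine($"{csvPath}: {rowCount} rows, fits in a legacy .xls sheet.");
    }
}

A check like that belongs in whatever script or tool sits between the labs' CSVs and the clerks' spreadsheets.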

I've discussed how much users really want to do everything in Excel, and this is clearly what happened here. The users had one tool, Excel, and it looked like a hammer to them. Arcane technical details like how many rows different versions of Excel may or may not support aren't things it's fair to expect your average data entry clerk to know.

On another level, this is a clear failing of the IT services. Excel was not the right tool for this job, but in the middle of a pandemic, no one was entirely sure what they needed. Excel becomes a tempting tool, because pretty much any end user can look at complicated data and slice/shape/chart/analyze it however they like. There's a good reason why they want to use Excel for everything: it's empowering to the users. When they have an idea for a new report, they don't need to go through six levels of IT management, file requests in triplicate, and have a testing and approval cycle to ensure the report meets the specifications. They just… make it.

There are packaged tools that offer similar, purpose built functionality but still give users all the flexibility they could want for slicing data. But they're expensive, and many organizations (especially government offices) will be stingy about who gets a license. They may or may not be easy to use. And of course, the time to procure such a thing was in the years before a massive virus outbreak. Excel is there, on everyone's computer already, and does what they need.

Still, they made the mistake, they saw the consequences, so now we know, they will definitely start taking steps to correct the problem, right? They know that Excel isn't fit-for-purpose, so they're switching tools, right?

From the BBC:

To handle the problem, PHE is now breaking down the data into smaller batches to create a larger number of Excel templates in order to make sure none hit their cap.

Oops.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Cory DoctorowFree copies of Attack Surface for institutions (schools, libraries, classrooms, etc)



Figuring out how to tour a book in the lockdown age is hard. Many authors have opted to do a handful of essentially identical events with a couple of stores as a way of spreading out the times so that readers with different work-schedules, etc can make it.

But not me. My next novel, Attack Surface (the third Little Brother book), comes out in the US/Canada on Oct 13, and it touches on so many burning contemporary issues that I rounded up 16 guests for 8 different themed “Attack Surface Lectures.”

This has many advantages: it allows me to really explore a wide variety of subjects without trying to cram them all into a single event and it allows me to spread out the love to eight fantastic booksellers.

Half of those bookstores (the ones WITH asterisks in the listing) have opted to have ME fulfil their orders, meaning that Tor is shipping me all their copies, and I'm going to sign, personalize and mail them from home the day after each event!

(The other half will be sending out books with adhesive-backed bookplates I’ve signed for them)

All of that is obviously really cool, but there is a huge fly in the ointment: given that all these events are different, what if you want to attend more than one?

This is where things get broken. Each of these booksellers is under severe strain from the pandemic (and the whole sector was under severe strain even before the pandemic), and they’re allocating resources – payrolled staff – to after-hours events that cost them real money.

So each of them has a “you have to buy a book to attend” policy. These were pretty common in pre-pandemic times, too, because so many attendees showed up at indie stores that were being destroyed by Amazon, having bought the books on Amazon.

What’s more, booksellers with in-person events at least got the possibility that attendees would buy another book while in the store, and/or that people would discover their store through the event and come back – stuff that’s not gonna happen with virtual events.

There is, frankly, no good answer to this: no one in the chain has the resources to create and deploy a season’s pass system (let alone agree on how the money from it should be divided among booksellers) – and no reader has any use for 8 (or even 2!) copies of the book.

It was my stupid mistake, as I explain here.

After I posted, several readers suggested one small way I could make this better: let readers who want to attend more than one event donate their extra copies to schools, libraries and other institutions.

Which I am now doing. If you WANT to attend more than one event and you are seeking to gift a copy to an institution, I have a list for you! It’s beneath this explanatory text.

And if you are affiliated with an institution and you want to put yourself on this list, please complete this form.

If you want to attend more than one event and you want to donate your copy of the book to one of these organizations, choose one from the list and fill its name in on the ticket-purchase page, then email me so I can cross it off the list: doctorow@craphound.com.

I know this isn’t great, and I apologize. We’re all figuring out this book-launch-in-a-pandemic business here, and it was really my mistake. I can’t promise I won’t make different mistakes if I have to do another virtual tour, but I can promise I won’t make this one again.

Covenant House – Oakland
200 Harrison St
Oakland
CA
94603

Madison Public Library Central Branch
201 W Mifflin St,
madison
wi
53703

Monona Public Library
1000 Nichols Rd
Madison
wi
53716

Madison Public Library Pinney Branch
516 Cottage Grove Rd
Madison
Wi
53716

La Cueva High School
Michael Sanchez
7801 Wilshire NE
Albuquerque
New Mexico
87122

YouthCare
800-495-7802
2500 NE 54th St
Seattle
WA
98105

Nuestro Mundo Community School
Hollis Rudiger
902 Nichols Rd, Monona
Monona
Wi
53716

New Horizons Young Adult Shelter
info@nhmin.org
2709 3rd Avenue
Seattle
WA
98121

Salem Community College Library
Jennifer L. Pierce
460 Hollywood Avenue
Carneys Point
NJ
08094

Sumas Branch of the Whatcom County Library System
Laurie Dawson – Youth Focus Librarian
461 2nd Street
Sumas
Washington
98295

Riverview Learning Center
Kris Rodger
32302 NE 50th Street
Carnation
WA
98014

Cranston Public Library
Ed Garcia
140 Sockanosset Cross Rd
Cranston
RI
02920

Paideia School Library
Anna Watkins
1509 Ponce de Leon Ave
Atlanta
GA
30307

Westfield Community School
Stephen Fodor
2100 sleepy hollow
algonquin
Illinois
60102

Worldbuilders Nonprofit
Gray Miller, Executive Director
1200 3rd St
Stevens Point
WI
54481

Northampton Community College
Marshal Miller
3835 Green Pond Road
Bethlehem
PA
18020

Metropolitan Business Academy Magnet High School
Steve Staysniak
115 Water Street
New Haven
CT
06511

New Haven Free Public Library
Meghan Currey
133 Elm Street
New Haven
CT
06510

New Haven Free Public Library – Mitchell Branch
Marian Huggins
37 Harrison Street
New Haven
CT
06515

New Haven Free Public Library – Wilson Branch
Luis Chavez-Brumell
303 Washington Ave.
New Haven
CT
06519

New Haven Free Public Library – Stetson Branch
Diane Brown
200 Dixwell Ave.
New Haven
CT
06511

New Haven Free Public Library – Fair Haven Branch
Kirk Morrison
182 Grand Ave.
New Haven
CT
06513

University of Chicago
Acquisitions Room 170
1100 E. 57th St
Chicago
IL
60637

Greenburgh Public Library
John Sexton, Director
300 Tarrytown Rd
Elmsford
NY
10523

Red Rocks Community College Library
Karen Neville
13300 W Sixth Avenue
Lakewood
CO
80228

Biola University Library
Chuck Koontz
13800 Biola Ave.
La Mirada
CA
90639

Otto Bruyns Public Library
Aubrey Hiers
241 W Mill Rd
Northfield
NJ
08225

California Rehabilitation Center Library
William Swafford
5th Street and Western Ave.
Norco
CA
92860

Hastings High School
Rachel Haider
200 General Sieben Drive
Hastings
MN
55033

Ballard High School Library
TuesD Chambers
1418 NW 65th St.
Seattle
WA
98117

Southwest Georgia Regional Library System
Catherine Vanstone
301 South Monroe Street
Bainbridge
GA
39819

Los Angeles Center For Enriched Studies (LACES) Library
Rustum Jacob
5931 W. 18th St
Los Angeles
CA
90018

SOUTH SIDE HACKERSPACE: CHICAGO
Dmitriy Vysotskiy or Shawn Coyle
1048 W. 37th St. Suite 105
Chicago
Illinois
60609

Rising Tide Charter Public School
Katie Klein
59 Armstrong
Plymouth
MA
02360

Doolen Middle School
Carmen Coulter
2400 N Country Club Rd
Tucson AZ
85716

Fruchthendler Elementary School
Jessica Carter
7470 E Cloud Rd
Tucson
AZ
85750

Brandon Branch Library
Mary Erickson
305 S. Splitrock Blvd.
Brandon South Dakota
57005

Alternative Family Education of the Santa Cruz City Schools
Dorothee Ledbetter
185 Benito Av
Santa Cruz
CA
95062

Helios School
Elizabeth Wallace
597 Central Ave.
Sunnyvale
CA
94086

Eden Prairie High School
Jenn Nelson
17185 Valley View Rd
Eden Prairie
MN
55346

Flatbush Commons mutual aid library
Joshua Wilkerson
101 Kenilworth Pl
Brooklyn
NY
11210

Cryptogram On Risk-Based Authentication

Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication”:

Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well.

We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance.

Paper’s website. I’ve blogged about risk-based authentication before.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 17)

Here’s part seventeen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:

Here’s the form for getting a free Little Brother story, “Force Multiplier” by pre-ordering the print edition of Attack Surface (US/Canada only)

Here’s the Kickstarter for the Attack Surface audiobook, where every backer gets Force Multiplier.

Here’s the schedule for the Attack Surface lectures

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Sam VargheseSomething fishy about Trump’s taxes? Did nobody suspect it all along?

Recently, the New York Times ran an article about Donald Trump having paid no federal income taxes for 10 of the last 15 years; many claimed it was a blockbuster story and that it would have far-reaching effects on the forthcoming presidential election.

If this was the first time Trump’s tax evasion was being mentioned, then, sure, it would have been a big deal.

But right from the time he first refused to make his tax returns public — before he was elected — this question has been hanging over Trump's head. And the only logical conclusion has been that if he was making excuses, then there must be something fishy about it. Five years later, we find that he was highly adept at claiming exemptions from paying income tax for various reasons.

Surprised? Some people would assume one has to be. But this was only to be expected, given all the excuses that Trump has trotted out over the years to avoid releasing his tax returns.

When I posted a tweet pointing out that Trump had been doing exactly what the big tech companies — Apple, Google, Microsoft, Amazon and Facebook, among others — do, others were prone to try and portray Trump’s actions as somehow different. But nobody specified the difference.

The question that struck me was: why is the New York Times publishing this story at this time? The answer is obvious: it would like to influence the election in favour of the Democrat candidate, Joe Biden.

The newspaper is not the only institution or individual trying to carry water for Biden: in recent days, we have seen the release of a two-part TV series The Comey Rule, based on a book written by James Comey, the former FBI director, about the run-up to the 2016 election. It is extremely one-sided.

Also out is a two-part documentary made by Alex Gibney, titled Agents of Chaos, which depends mostly on government sources to spread the myth that the Russians were responsible for Trump's election.

Another documentary, Totally Under Control, about the Trump administration’s response to the coronavirus pandemic, will be coming out on October 13. Again, Gibney is the director and producer.

On the Republican side, there has been nothing of this kind. Whether that says something about the level of confidence in the Trump camp is open to question.

With less than a month to go for the election, it should only be expected that the propaganda war will intensify as both sides try to push their man into the White House.

Worse Than FailureCodeSOD: Switched Requirements

Code changes over time. Sometimes, it feels like gremlins sweep through the codebase and change it for us. Usually, though, we have changes to requirements, which drives changes to the code.

Thibaut was looking at some third party code to implement tooling to integrate with it, and found this C# switch statement:

if (effectID != 10)
{
    switch (effectID)
    {
        case 21:
        case 24:
        case 27:
        case 28:
        case 29:
            return true;
        case 22:
        case 23:
        case 25:
        case 26:
            break;
        default:
            switch (effectID)
            {
                case 49:
                case 50:
                    return true;
            }
            break;
    }
    return false;
}
return true;

I'm sure this statement didn't start this way. And I'm sure that, to each of the many developers who swung through to add their own little case to the switch, their action made some sort of sense: maybe they knew they should refactor, but they needed to get this functionality in, and they needed it now, and code cleanliness could wait. And wait. And wait.

Until Thibaut comes through, and replaces it with this:

switch (effectID)
{
    case 10:
    case 21:
    case 24:
    case 27:
    case 28:
    case 29:
    case 49:
    case 50:
        return true;
}
return false;
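
And if the list of effect IDs keeps growing, it arguably isn't control flow at all but data. A small alternative sketch (hypothetical names, same IDs) that makes the membership test explicit:

using System.Collections.Generic;

static class EffectCheck
{
    // Store the IDs as data; adding a new effect no longer means editing a switch.
    private static readonly HashSet<int> ReturnTrueEffects = new HashSet<int>
    {
        10, 21, 24, 27, 28, 29, 49, 50
    };

    public static bool IsHandled(int effectID) => ReturnTrueEffects.Contains(effectID);
}
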
[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Cryptogram COVID-19 and Acedia

Note: This isn’t my usual essay topic. Still, I want to put it on my blog.

Six months into the pandemic with no end in sight, many of us have been feeling a sense of unease that goes beyond anxiety or distress. It’s a nameless feeling that somehow makes it hard to go on with even the nice things we regularly do.

What’s blocking our everyday routines is not the anxiety of lockdown adjustments, or the worries about ourselves and our loved ones — real though those worries are. It isn’t even the sense that, if we’re really honest with ourselves, much of what we do is pretty self-indulgent when held up against the urgency of a global pandemic.

It is something more troubling and harder to name: an uncertainty about why we would go on doing much of what for years we’d taken for granted as inherently valuable.

What we are confronting is something many writers in the pandemic have approached from varying angles: a restless distraction that stems not just from not knowing when it will all end, but also from not knowing what that end will look like. Perhaps the sharpest insight into this feeling has come from Jonathan Zecher, a historian of religion, who linked it to the forgotten Christian term: acedia.

Acedia was a malady that apparently plagued many medieval monks. It’s a sense of no longer caring about caring, not because one had become apathetic, but because somehow the whole structure of care had become jammed up.

What could this particular form of melancholy mean in an urgent global crisis? On the face of it, all of us care very much about the health risks to those we know and don’t know. Yet lurking alongside such immediate cares is a sense of dislocation that somehow interferes with how we care.

The answer can be found in an extreme thought experiment about death. In 2013, philosopher Samuel Scheffler explored a core assumption about death. We all assume that there will be a future world that survives our particular life, a world populated by people roughly like us, including some who are related to us or known to us. Though we rarely acknowledge it, this presumed future world is the horizon towards which everything we do in the present is oriented.

But what, Scheffler asked, if we lose that assumed future world — because, say, we are told that human life will end on a fixed date not far after our own death? Then the things we value would start to lose their value. Our sense of why things matter today is built on the presumption that they will continue to matter in the future, even when we ourselves are no longer around to value them.

Our present relations to people and things are, in this deep way, future-oriented. Symphonies are written, buildings built, children conceived in the present, but always with a future in mind. What happens to our ethical bearings when we start to lose our grip on that future?

It’s here, moving back to the particular features of the global pandemic, that we see more clearly what drives the restlessness and dislocation so many have been feeling. The source of our current acedia is not the literal loss of a future; even the most pessimistic scenarios surrounding COVID-19 have our species surviving. The dislocation is more subtle: a disruption in pretty much every future frame of reference on which just going on in the present relies.

Moving around is what we do as creatures, and for that we need horizons. COVID-19 has erased many of the spatial and temporal horizons we rely on, even if we don’t notice them very often. We don’t know how the economy will look, how social life will go on, how our home routines will be changed, how work will be organized, how universities or the arts or local commerce will survive.

What unsettles us is not only fear of change. It’s that, if we can no longer trust in the future, many things become irrelevant, retrospectively pointless. And by that we mean from the perspective of a future whose basic shape we can no longer take for granted. This fundamentally disrupts how we weigh the value of what we are doing right now. It becomes especially hard under these conditions to hold on to the value in activities that, by their very nature, are future-directed, such as education or institution-building.

That’s what many of us are feeling. That’s today’s acedia.

Naming this malaise may seem more trouble than it’s worth, but the opposite is true. Perhaps the worst thing about medieval acedia was that monks struggled with its dislocation in isolation. But today’s disruption of our sense of a future must be a shared challenge. Because what’s disrupted is the structure of care that sustains why we go on doing things together, and this can only be repaired through renewed solidarity.

Such solidarity, however, has one precondition: that we openly discuss the problem of acedia, and how it prevents us from facing our deepest future uncertainties. Once we have done that, we can recognize it as a problem we choose to face together — across political and cultural lines — as families, communities, nations and a global humanity. Which means doing so in acceptance of our shared vulnerability, rather than suffering each on our own.

This essay was written with Nick Couldry, and previously appeared on CNN.com.

Krebs on SecurityAttacks Aimed at Disrupting the Trickbot Botnet

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

A text snippet from one of the bogus Trickbot configuration updates. Source: Intel 471

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

It’s not known how many Trickbot-infected systems received the phony update, but it seems clear this wasn’t just a mistake by Trickbot’s overlords. Intel 471 found that it happened yet again on Oct. 1, suggesting someone with access to the inner workings of the botnet was trying to disrupt its operations.

“Shortly after the bogus configs were pushed out, all Trickbot controllers stopped responding correctly to bot requests,” Intel 471 wrote in a note to its customers. “This possibly means central Trickbot controller infrastructure was disrupted. The close timing of both events suggested an intentional disruption of Trickbot botnet operations.”

Intel 471 CEO Mark Arena said it’s anyone’s guess at this point who is responsible.

“Obviously, someone is trying to attack Trickbot,” Arena said. “It could be someone in the security research community, a government, a disgruntled insider, or a rival cybercrime group. We just don’t know at this point.”

Arena said it’s unclear how successful these bogus configuration file updates will be given that the Trickbot authors built a fail-safe recovery system into their malware. Specifically, Trickbot has a backup control mechanism: A domain name registered on EmerDNS, a decentralized domain name system.

“This domain should still be in control of the Trickbot operators and could potentially be used to recover bots,” Intel 471 wrote.

But whoever is screwing with the Trickbot purveyors appears to have adopted a multi-pronged approach: Around the same time as the second bogus configuration file update was pushed on Oct. 1, someone stuffed the control networks that the Trickbot operators use to keep track of data on infected systems with millions of new records.

Alex Holden is chief technology officer and founder of Hold Security, a Milwaukee-based cyber intelligence firm that helps recover stolen data. Holden said at the end of September Trickbot held passwords and financial data stolen from more than 2.7 million Windows PCs.

By October 1, Holden said, that number had magically grown to more than seven million.

“Someone is flooding the Trickbot system with fake data,” Holden said. “Whoever is doing this is generating records that include machine names indicating these are infected systems in a broad range of organizations, including the Department of Defense, U.S. Bank, JP Morgan Chase, PNC and Citigroup, to name a few.”

Holden said the flood of new, apparently bogus, records appears to be an attempt by someone to dilute the Trickbot database and confuse or stymie the Trickbot operators. But so far, Holden said, the impact has been mainly to annoy and aggravate the criminals in charge of Trickbot.

“Our monitoring found at least one statement from one of the ransomware groups that relies on Trickbot saying this pisses them off, and they’re going to double the ransom they’re asking for from a victim,” Holden said. “We haven’t been able to confirm whether they actually followed through with that, but these attacks are definitely interfering with their business.”

Intel 471’s Arena said this could be part of an ongoing campaign to dismantle or wrest control over the Trickbot botnet. Such an effort would hardly be unprecedented. In 2014, for example, U.S. and international law enforcement agencies teamed up with multiple security firms and private researchers to commandeer the Gameover Zeus Botnet, a particularly aggressive and sophisticated malware strain that had enslaved up to 1 million Windows PCs globally.

Trickbot would be an attractive target for such a takeover effort because it is widely viewed as a platform used to find potential ransomware victims. Intel 471 describes Trickbot as “a malware-as-a-service platform that caters to a relatively small number of top-tier cybercriminals.”

One of the top ransomware gangs in operation today, which deploys ransomware strains known variously as “Ryuk” and “Conti,” is known to be closely associated with Trickbot infections. Both ransomware families have been used in some of the most damaging and costly malware incidents to date.

The latest Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption has reportedly caused the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Cryptogram Friday Squid Blogging: After Squidnight

Review of a squid-related children’s book.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: COVID-19 Found on Chinese Squid Packaging

I thought the virus doesn’t survive well on food packaging:

Authorities in China’s northeastern Jilin province have found the novel coronavirus on the packaging of imported squid, health authorities in the city of Fuyu said on Sunday, urging anyone who may have bought it to get themselves tested.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: No Escape

"Last night&#39;s dinner was delicious!" Drew W. writes.

 

"Gee, thanks Microsoft! It's great to finally see progress on this long-awaited enhancement. That bot is certainly earning its keep," writes Tim R.

 

Derek wrote, "Well, I guess the Verizon app isn't totally useless when it comes to telling me how many notifications I have. I mean, one out of three isn't THAT bad."

 

"Perhaps Ubisoft UK are trying to reach bilingual gamers?" writes Craig.

 

Jeremy H. writes, "Whether this is a listing for a pair of pet training underwear or a partial human torso, I'm not sure if I'm comfortable with that 'metal clicker'."

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityRansomware Victims That Pay Up Could Incur Steep Fines from Uncle Sam

Companies victimized by ransomware and firms that facilitate negotiations with ransomware extortionists could face steep fines from the U.S. federal government if the crooks who profit from the attack are already under economic sanctions, the Treasury Department warned today.

Image: Shutterstock

In its advisory (PDF), the Treasury’s Office of Foreign Assets Control (OFAC) said “companies that facilitate ransomware payments to cyber actors on behalf of victims, including financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response, not only encourage future ransomware payment demands but also may risk violating OFAC regulations.”

As financial losses from cybercrime activity and ransomware attacks in particular have skyrocketed in recent years, the Treasury Department has imposed economic sanctions on several cybercriminals and cybercrime groups, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

A number of those sanctioned have been closely tied with ransomware and malware attacks, including the North Korean Lazarus Group; two Iranians thought to be tied to the SamSam ransomware attacks; Evgeniy Bogachev, the developer of Cryptolocker; and Evil Corp, a Russian cybercriminal syndicate that has used malware to extract more than $100 million from victim businesses.

Those that run afoul of OFAC sanctions without a special dispensation or “license” from Treasury can face several legal repercussions, including fines of up to $20 million.

The Federal Bureau of Investigation (FBI) and other law enforcement agencies have tried to discourage businesses hit by ransomware from paying their extortionists, noting that doing so only helps bankroll further attacks.

But in practice, a fair number of victims find paying up is the fastest way to resume business as usual. In addition, insurance providers often help facilitate the payments because the amount demanded ends up being less than what the insurer might have to pay to cover the cost of the affected business being sidelined for days or weeks at a time.

While it may seem unlikely that companies victimized by ransomware might somehow be able to know whether their extortionists are currently being sanctioned by the U.S. government, they still can be fined either way, said Ginger Faulk, a partner in the Washington, D.C. office of the law firm Eversheds Sutherland.

Faulk said OFAC may impose civil penalties for sanctions violations based on “strict liability,” meaning that a person subject to U.S. jurisdiction may be held civilly liable even if it did not know or have reason to know it was engaging in a transaction with a person that is prohibited under sanctions laws and regulations administered by OFAC.

“In other words, in order to be held liable as a civil (administrative) matter (as opposed to criminal), no mens rea or even ‘reason to know’ that the person is sanctioned is necessary under OFAC regulations,” Faulk said.

But Fabian Wosar, chief technology officer at computer security firm Emsisoft, said Treasury’s policies here are nothing new, and that they mainly constitute a warning for individual victim firms who may not already be working with law enforcement and/or third-party security firms.

Wosar said companies that help ransomware victims negotiate lower payments and facilitate the financial exchange are already aware of the legal risks from OFAC violations, and will generally refuse clients who get hit by certain ransomware strains.

“In my experience, OFAC and cyber insurance with their contracted negotiators are in constant communication,” he said. “There are often even clearing processes in place to ascertain the risk of certain payments violating OFAC.”

Along those lines, OFAC said the degree of a person/company’s awareness of the conduct at issue is a factor the agency may consider in assessing civil penalties. OFAC said it would consider “a company’s self-initiated, timely, and complete report of a ransomware attack to law enforcement to be a significant mitigating factor in determining an appropriate enforcement outcome if the situation is later determined to have a sanctions nexus.”

Kevin RuddCNBC: On US-China tensions, growing export restrictions

E&OE TRANSCRIPT
TV INTERVIEW
CNBC WORLDWIDE EXCHANGE
28 SEPTEMBER 2020

 

 

The post CNBC: On US-China tensions, growing export restrictions appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Imploded Code

Cassi’s co-worker (previously) was writing some more PHP. This code needed to take a handful of arguments, like id and label and value, and generate HTML text inputs.

Well, that seems like a perfect use case for PHP. I can’t possibly see how this could go wrong.

echo $sep."<SCRIPT>CreateField(" . implode(', ', array_map('json_encode', $args)) . ");</SCRIPT>\n";

Now, PHP's array_map is a beast of a function, and its documentation has some pretty atrocious examples. It's not just a map, but also potentially an n-ary zip, but if you use it that way, then the function you're passing needs to be able to handle the right number of arguments, and– sorry. Lost my train of thought there when checking the PHP docs. Back to the code.

In this case, we use array_map to make all our fields JavaScript-safe by json_encode-ing them. Then we use implode to mash them together into a comma-separated string. Then we concatenate all that together into a CreateField call.

CreateField is a JavaScript function. Cassi didn’t supply the implementation, but lets us know that it uses document.write to output the input tag into the document body.

So, this is server side code which generates calls to client side code, where the client side code runs at page load to document.write HTML elements into the document… elements which the server side code could have easily supplied.

I’ll let Cassi summarize:

I spent 5 minutes staring at this trying to figure out what to say about it. I just… don’t know. What’s the worst part? json_encoding individual arguments and then imploding them? Capital HTML tags? A $sep variable that doesn’t need to exist? (Trust me on this one.) Maybe it’s that it’s a line of PHP outputting a Javascript function call that in turn uses document.write to output a text HTML input field? Yeah, it’s probably that one.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram Detecting Deep Fakes with a Heartbeat

Researchers can detect deep fakes because they don’t convincingly mimic human blood circulation in the face:

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months.


Cory DoctorowAttack Surface on the MMT Podcast

I was incredibly happy to appear on the MMT Podcast again this week, talking about economics, science fiction, interoperability, tech workers and tech ethics, and my new novel ATTACK SURFACE, which comes out in the UK tomorrow (Oct 13 US/Canada):

https://pileusmmt.libsyn.com/68-cory-doctorow-digital-rights-surveillance-capitalism-interoperable-socks

We also delved into my new nonfiction book, HOW TO DESTROY SURVEILLANCE CAPITALISM, and how it looks at the problem of misinformation, and how that same point of view is weaponized by Masha, the protagonist and antihero of Attack Surface.

https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59

It’s no coincidence that both of these books came out at the same time, as they both pursue the same questions from different angles:

  • What does technology do?

  • Who does it do it TO and who does it do it FOR?

  • How can we change how technology works?

And:

  • Why do people make tools of oppression, and what will it take to make them stop?

Here’s the episode’s MP3:

http://traffic.libsyn.com/preview/pileusmmt/The_MMT_Podcast_ep_68_Cory_Doctorow_v2.mp3

and here’s their feed:

http://pileusmmt.libsyn.com/rss

Sam VargheseOne feels sorry for Emma Alberici, but that does not mask the fact that she was incompetent

Last month, the Australian Broadcasting Corporation, a taxpayer-funded entity, made several people redundant due to a cut in funding by the Federal Government. Among them was Emma Alberici, a presenter who has been lionised a great deal as someone with great talents, but who is actually a mediocre hack.

What marked Alberici out was that she held the glorious title of chief economics correspondent at the ABC, yet was never seen on any TV show giving her opinion on anything to do with economics. Over the last two years, China and the US have been engaged in an almighty stoush; Australia, as a country that considers the US its main ally and has China as its major trading partner, has naturally taken a keen interest.

But the ABC always put forward people like Peter Ryan, a senior business correspondent, or Ian Verrender, the business editor, when there was a need for someone to appear during a news bulletin and provide a little insight into these matters. Alberici, it seemed, was persona non grata.

The reason she was kept behind the curtains, it turns out, was that she did not really know much about economics. This was made plain in April 2018 when she wrote what columnist Joe Aston of the Australian Financial Review described as “an undergraduate yarn touting ‘exclusive analysis’ of one publicly available document from which she derived that ‘one in five of Australia’s top companies has paid zero tax for the past three years’ and that ‘Australia’s largest companies haven’t paid corporate tax in 10 years’.”

This article drew much criticism from many people, including politicians, who complained to the ABC. Her magnum opus has now disappeared from the web – an archived version is here. The fourth paragraph read: “Exclusive analysis released by ABC today reveals one in five of Australia’s top companies has paid zero tax for the past three years.” A Twitter teaser said: “Australia’s largest companies haven’t paid corporate tax in 10 years.”

As Aston pointed out in withering tones: “Both premises fatally expose their author’s innumeracy. The first is demonstrably false. Freely available data produced by the Australian Taxation Office show that 32 of Australia’s 50 largest companies paid $19.33 billion in company tax in FY16 (FY17 figures are not yet available). The other 18 paid nothing. Why? They lost money, or were carrying over previous losses.

“Company tax is paid on profits, so when companies make losses instead of profits, they don’t pay it. Amazing, huh? And since 1989, the tax system has allowed losses in previous years to be carried forward – thus companies pay tax on the rolling average of their profits and losses. This is stuff you learn in high school. Except, obviously, if your dream by then was to join the socialist collective at Ultimo, to be a superstar in the cafes of Haberfield.”

As expected, Alberici protested and even called in her lawyer when the ABC took down the article. It was put up again with several corrections. But the damage had been done and the great economics wizard had been unmasked. She did not take it well, protesting that 17 years ago she was nominated for a Walkley Award on tax minimisation.

Underlining her lack of knowledge of economics, Alberici, who was paid a handsome $189,000 per annum by the ABC, exposed her ignorance again in May the same year. In an interview with Labor finance spokesman Jim Chalmers, she kept pushing him on what he would do with the higher surpluses he proposed to run were a Labor government to be elected.

Aston has his own inimitable style, so let me use his words: “This time, Alberici’s utter non-comprehension of public sector accounting is laid bare in three unwitting confessions in her studio interview with Labor’s finance spokesman Jim Chalmers after Tuesday’s Budget.

“[Shadow Treasurer] Chris Bowen on the weekend told my colleague Barrie Cassidy that you want to run higher surpluses than the Government. How much higher than the Government and what would you do with that money?”

“Wearing that unmistakable WTF expression on his dial, Chalmers was careful to evade her illogical query.

“Undeterred, Alberici pressed again. ‘And what will you do with those surpluses?’ A second time, Chalmers dissembled.

“A third time, the cock crowed (this really was as inevitable as the betrayal of Jesus by St Peter). ‘Sorry, no, you said you would run higher surpluses, so what happens to that money?’

“Hanged [sic] by her own persistence. Chalmers, at this point, put her out of her blissful ignorance – or at least tried! ‘Surpluses go towards paying down debt’.

“Bingo. C’est magnifique! Hey, we majored in French. After 25 years in business and finance reporting, Alberici apparently thinks a budget surplus is an account with actual money in it, not a figure reached by deducting the Commonwealth’s expenditure from its revenue.”

Alberici has continued to insist that her knowledge of economics is adequate. She has been extremely annoyed when such criticism comes from any Murdoch publication. But the fact is that she is largely ignorant of the basics of economics.

Her incompetence is not limited to this one field alone. As I have pointed out, there have been occasions in the past when she has shown that her knowledge of foreign affairs is as good as her knowledge of economics.

After she was made redundant, Alberici published a series of tweets which she later removed. An archive of that is here.

Alberici was the host of a program called Business Breakfast some years ago. It failed and had to be taken off the air. Then she was made the main host of the ABC’s flagship late news program, Lateline. That program was taken off the air last year due to budget cuts. However, Alberici’s performance did not set the Yarra on fire, to put it mildly.

Now she has joined an insurance comparison website, Compare The Market. That company has a bit of a dodgy reputation, as the Australian news website The New Daily pointed out in 2014. As reporter George Lekakis wrote: “The website is owned by a leading global insurer that markets many of its own products on the site. While comparethemarket.com.au offers a broad range of products for life insurance and travel cover, most of its general insurance offerings are underwritten by an insurer known as Auto & General.

“Auto & General is a subsidiary of global financial services group Budget Holdings Limited and is the ultimate owner of comparethemarket.com.au. An investigation by The New Daily of the brands marketed by the website for auto and home insurance reveals a disproportionate weighting to A&G products.”

Once again, the AFR, this time through columnist Myriam Robin, was not exactly complimentary about Alberici’s new billet. But perhaps it is a better fit for Alberici than the ABC, where she was really a square peg in a round hole. She is writing a book in which, no doubt, she will again try to put her case forward and prove she was a brilliant journalist who lost her job due to other people politicking against her. But the facts say otherwise.

Worse Than FailureCodeSOD: A Poor Facsimile

For every leftpad debacle, there are a thousand “utility belt” libraries for JavaScript. Whether it’s the “venerable” jQuery, or lodash, or maybe Google’s Closure library, each is a pile of things that would otherwise end up in a 50,000-line util.js file, packaged up a little more cleanly and made available for use.

Dan B had a co-worker who really wanted to start using Closure. But they also wanted to be “abstract”, and “loosely coupled”, so that they could swap out implementations of these utility functions in the future.

This meant that they wrote a lot of code like:

initech.util.clamp = function(value, min, max) {
  return goog.math.clamp(value, min, max);
}

But it wasn’t enough to just write loads of functions like that. This co-worker “knew” they’d need to provide their own implementations for some methods, reflecting how the utility library couldn’t always meet their specific business needs.

Unfortunately, Dan noticed some of those “reimplementations” didn’t always behave correctly, for example:

initech.util.isDefAndNotNull(null); //returns true

Let’s look at the Google implementations of some methods, annotated by Dan:

goog.isDef = function(val) {
  return val !== void 0; // tests for undefined
}
goog.isDefAndNotNull = function(val) {
  return val != null; // Also catches undefined since ==/!= instead of ===/!== allows for implicit type conversion to satisfy the condition.
}

Setting aside the problems of coercing vs. non-coercing comparison operators even being a thing, these both do the job they’re supposed to do. But Dan’s co-worker needed to reinvent this particular wheel.

initech.util.isDef = function (val) {
  return val !== void 0; 
}

This code is useless, since we already have it in our library, but whatever. It’s fine and correct.

initech.util.isDefAndNotNull = initech.util.isDef; 

This developer is the kid who copied someone else’s homework and still somehow managed to get a failing grade. They had a working implementation they could have referenced to see what they needed to do. The names of the methods themselves should have probably provided a hint that they needed to be mildly different. There was no reason to write any of this code, and they still couldn’t get that right.
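For contrast, the correct reimplementation is a one-liner. This is a sketch, with only the initech namespace taken from the code above:

// What the wrapper presumably should have been: reject both undefined and
// null, and pass everything else through.
initech.util.isDefAndNotNull = function (val) {
  return val !== void 0 && val !== null;
};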

Dan adds:

Thankfully a lot of this [functionality] was recreated in C# for another project…. I’m looking to bring all that stuff back over… and just destroy this monstrosity we created. Goodbye JavaScript!



Krebs on SecurityWho’s Behind Monday’s 14-State 911 Outage?

Emergency 911 systems were down for more than an hour on Monday in towns and cities across 14 U.S. states. The outages led many news outlets to speculate the problem was related to Microsoft’s Azure web services platform, which also was struggling with a widespread outage at the time. However, multiple sources tell KrebsOnSecurity the 911 issues stemmed from some kind of technical snafu involving Intrado and Lumen, two companies that together handle 911 calls for a broad swath of the United States.

Image: West.com

On the afternoon of Monday, Sept. 28, several states including Arizona, California, Colorado, Delaware, Florida, Illinois, Indiana, Minnesota, Nevada, North Carolina, North Dakota, Ohio, Pennsylvania and Washington reported 911 outages in various cities and localities.

Multiple news reports suggested the outages might have been related to an ongoing service disruption at Microsoft. But a spokesperson for the software giant told KrebsOnSecurity, “we’ve seen no indication that the multi-state 911 outage was a result of yesterday’s Azure service disruption.”

Inquiries made with emergency dispatch centers at several of the towns and cities hit by the 911 outage pointed to a different source: Omaha, Neb.-based Intrado — until last year known as West Safety Communications — a provider of 911 and emergency communications infrastructure, systems and services to telecommunications companies and public safety agencies throughout the country.

Intrado did not respond to multiple requests for comment. But according to officials in Henderson County, NC, which experienced its own 911 failures yesterday, Intrado said the outage was the result of a problem with an unspecified service provider.

“On September 28, 2020, at 4:30pm MT, our 911 Service Provider observed conditions internal to their network that resulted in impacts to 911 call delivery,” reads a statement Intrado provided to county officials. “The impact was mitigated, and service was restored and confirmed to be functional by 5:47PM MT.  Our service provider is currently working to determine root cause.”

The service provider referenced in Intrado’s statement appears to be Lumen, a communications firm and 911 provider that until very recently was known as CenturyLink Inc. A look at the company’s status page indicates multiple Lumen systems experienced total or partial service disruptions on Monday, including its private and internal cloud networks and its control systems network.

Lumen’s status page indicates the company’s private and internal cloud and control system networks had outages or service disruptions on Monday.

In a statement provided to KrebsOnSecurity, Lumen blamed the issue on Intrado.

“At approximately 4:30 p.m. MT, some Lumen customers were affected by a vendor partner event that impacted 911 services in AZ, CO, NC, ND, MN, SD, and UT,” the statement reads. “Service was restored in less than an hour and all 911 traffic is routing properly at this time. The vendor partner is in the process of investigating the event.”

It may be no accident that both of these companies are now operating under new names, as this would hardly be the first time a problem between the two of them has disrupted 911 access for a large number of Americans.

In 2019, Intrado/West and CenturyLink agreed to pay $575,000 to settle an investigation by the Federal Communications Commission (FCC) into an Aug. 2018 outage that lasted 65 minutes. The FCC found that incident was the result of a West Safety technician bungling a configuration change to the company’s 911 routing network.

On April 9, 2014, some 11 million people across the United States were disconnected from 911 services for eight hours thanks to an “entirely preventable” software error tied to Intrado’s systems. The incident affected 81 call dispatch centers, rendering emergency services inoperable in all of Washington and parts of North Carolina, South Carolina, Pennsylvania, California, Minnesota and Florida.

According to a 2014 Washington Post story about a subsequent investigation and report released by the FCC, that issue involved a problem with the way Intrado’s automated system assigns a unique identifying code to each incoming call before passing it on to the appropriate “public safety answering point,” or PSAP.

“On April 9, the software responsible for assigning the codes maxed out at a pre-set limit,” The Post explained. “The counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure.”
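To make that failure mode concrete, here is a toy sketch in JavaScript, not Intrado’s actual code, of an ID counter with a hard-coded cap that silently stops assigning IDs once it is reached:

// Toy illustration only: a call-ID counter with a pre-set limit. Once the cap
// is hit, new calls silently get no ID and routing quietly grinds to a halt.
const MAX_CALLS = 40000000; // the pre-set limit cited in the FCC report
let counter = 0;

function assignCallId() {
  if (counter >= MAX_CALLS) {
    return null; // no alarm, no rollover; the call cannot be routed to a PSAP
  }
  counter += 1;
  return counter;
}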

Compounding the length of the 2014 outage, the FCC found, was that the Intrado server responsible for categorizing and keeping track of service interruptions classified them as “low level” incidents that were never flagged for manual review by human beings.

The FCC ultimately fined Intrado and CenturyLink $17.4 million for the multi-state 2014 outage. An FCC spokesperson declined to comment on Monday’s outage, but said the agency was investigating the incident.

Kevin RuddThe Market: China’s fears about America

E&OE TRANSCRIPT
PAPER INTERVIEW
THE MARKET
29 SEPTEMBER 2020

Few Western observers know China better than The Honorable Kevin Rudd. As a young diplomat, the Australian, who speaks fluent Mandarin, was stationed in Beijing in the 1980s. As Australia’s Prime Minister and then Foreign Minister from 2007 to 2012, he led his country through the delicate tension between its most important alliance partner (the USA) and its largest trading partner (China).

Today Mr. Rudd is President of the Asia Society Policy Institute in New York. In an in-depth conversation via Zoom, he explains why a fundamental competition has begun between the two great powers. He would not rule out a hot war: «We know from history that it is easy to start a conflict, but it is bloody hard to end it», he warns.

Mr. Rudd, the conflict between the U.S. and China has escalated significantly over the past three years. To what extent has that escalation been driven by the presence of two strongmen, i.e. Donald Trump and Xi Jinping?

The strategic competition between the two countries is the product of both structural and leadership factors. The structural factors are pretty plain, and that is the continuing change in the balance of power in military, economic, and technological terms. This has an impact on China’s perception of its ability to be more assertive in the region and the world against America. The second dynamic is Xi Jinping’s leadership style, which is more assertive and aggressive than any of his post-Mao predecessors, Deng Xiaoping, Jiang Zemin and Hu Jintao. The third factor is Donald Trump, who obsesses about particular parts of the economy, namely trade and to a lesser degree technology.

Would America’s position be different if someone other than Trump were President?

No. The structural factors about the changing balance of power, as well as Xi Jinping’s leadership style, have caused China to rub up against American interests and values very sharply. Indeed, China is rubbing up against the interests and values of most other Western countries and some Asian democracies as well. Had Hillary Clinton won in 2016, her response would have been very robust. Trump has for the most part been superficially robust, principally on trade and technology. He was only triggered into more comprehensive robustness by the Covid-19 crisis threatening his reelection. If the next President of the U.S. is a Democrat, my judgement would be that the new Administration will be equally but more systematically hard-line in their reaction to China.

Has a new Cold War started?

I don’t like to embrace the language of a Cold War 2.0, because we should not forget that the Cold War of the 20th century had three big defining characteristics: One, the Soviets and the Americans threatened each other with nuclear Armageddon for forty years; two, they fought more than twenty proxy wars around the world; and three, they had zero economic engagement with each other. The current conflict between the U.S. and China on the other hand is characterised by two things: One, an economic decoupling in areas such as trade, supply chains, foreign direct investment, capital markets, technology, and talent. At the same time, it is also an increasingly sharp ideological war on values. The Chinese authoritarian capitalist model has asserted itself beyond China and challenges America.

How do you see that economic decoupling playing out?

The three formal instruments of power in the U.S. to enforce decoupling are entity listing, the new export control regime, and thirdly, the new powers given to the Committee on Foreign Investment in the United States, CFIUS. Those are powerful instruments which potentially affect third countries as well, through sanctions imposed under the entity list. You can take the example of semiconductors, where the recent changes to the entity list virtually bar the export of semiconductors from anywhere in the world to a defined list of Chinese enterprises, as long as those semiconductors are based on American intellectual property.

These measures have cut off Chinese companies like Huawei or SMIC from acquiring high-end semiconductor technology anywhere in the world. The reaction in Beijing has been muted so far, with no direct retaliation. Why?

In China there is a division of opinion on the question of how to respond. The hawks have an «eye for an eye» posture, that’s driven both by a perception of strategy, but also with an eye on domestic sentiment. The America doves within the leadership – and they do exist – argue a different proposition. They think China is not yet ready for a complete decoupling. If it’s going to happen, they at least try to slow it down. Plus, they want to keep their powder dry until they see the outcome of the election and what the next Administration will do. That’s the reason why we have seen only muted responses so far.

Isn’t it the case that both sides would lose if they drive decoupling too far? And given that, could it be that there won’t be any further decoupling?

We are past that point. Whoever wins the election, America is resolved to decouple in a number of defined areas. First and foremost in those global supply chains where the products are too crucial for the U.S. to depend on Chinese supply. Think medical supplies or pharmaceuticals. The second area is in defined critical technologies. The Splinternet is not just a term, it’s becoming a reality. Thirdly, you will see a partial decoupling on the global supply of semiconductors to China. Not just those relevant to 5G and Artificial Intelligence, but semiconductors in general. The centrality of microchips to computing power for all purposes, and the spectrum of application in the civilian and military economy, is huge. Fourth, I think foreign direct investment in both directions will shrink to zero. The fifth area of decoupling is happening in talent markets. The hostility towards Chinese students in the U.S. is reaching ridiculous proportions.

Do you see a world divided into two technology spheres, one with American standards and one with Chinese standards?

This is the logical consequence. Assume Huawei 5G systems are rolled out across the 75 countries that take part in the Belt and Road Initiative; what follows from that is a series of industry standards that become accepted and compatible within those countries, as opposed to those that rely on American systems. But then another set of questions arises: Let’s say China is effectively banned from purchasing semiconductors based on American technology and is dependent on domestic supply. Chinese semiconductors are slower than their American counterparts, and are likely to remain so for the decade ahead. Do the BRI countries accept a slower microprocessor product standard for being part of the Chinese technological ecosystem? I don’t know the answer to that, but I think your prognosis of two technology spheres is correct.

China throws huge amounts of money into the project of building up its semiconductor capabilities. Are they still so far behind?

Despite their efforts to buy, borrow or steal intellectual property, they consistently remain three to seven years behind the U.S., Taiwan and South Korea, i.e. behind the likes of Intel, TSMC and Samsung. It’s obviously hard to reverse engineer semiconductors, as opposed to a Tupolev, which, as the Soviets found out, can be reverse engineered over a weekend.

Wouldn’t America hurt itself too much if it cut off China from its semiconductor industry altogether?

There is an argument that 50% of the profits of the US semiconductor industry come from their clients in China. That money funds their R&D in order to keep three to seven years ahead of China. The U.S. Department of Defense knows that, which is why the Pentagon doesn’t necessarily side with the anti-China hawks on this issue. So the debate between the US semiconductor industry’s interests and the perceived national security interest has yet to be resolved. It has been resolved in terms of AI chips, and Huawei is the first big casualty of that. But for semiconductors in general the question is not settled yet.

Will countries in Europe or Southeast Asia be forced to decide which technology sphere they want to belong to?

Until the 5G revolution, they have managed to straddle both worlds. But now China has banked on the strategy of being the technology leader in certain categories, and the category where it currently leads is 5G technology and systems. If you look at the next five years, and if you look at the success of China in the other technology categories in its Made in China 2025 Strategy, then it becomes clear that we are increasingly going to end up in a binary technology world. Policy makers in various nations will have to answer questions around the relative sophistication of the technology, industry standards, concerns on privacy and data governance, and of course a very big question: What are our points of exposure to the U.S. or China? What will we lose in our China relationship by joining the American sphere in certain fields, and vice versa? Those variables will impact the decision making processes everywhere from Bern to Berlin to Bangkok.

But in the end, they will have to choose?

Yes. India for example has done it in the field of 5G. For India, that was a big call, given the size of its market and China’s desire to bring India slowly but surely towards its side of the Splinternet.

The third field of conflict after trade and technology lies in financial markets. We know that Washington can weaponise the Dollar if it wants to. So far, this front has been rather quiet. What are your expectations?

Two measures have been taken so far by the Trump Administration. One, the direction to U.S. government pension funds not to invest in Chinese listed companies; and two, the direction to the New York Stock Exchange not to sustain the listing of Chinese firms unless they conform to global accounting standards. I regard these as two simple shots across the bow.

With more to follow?

Like on most other things including technology, the U.S. is divided in terms of where its interests lie. Just like Silicon Valley, Wall Street has a big interest in not having too harsh restrictions on financial markets. Just look at the volume of business. Financial market collaboration between the Chinese and American financial systems in terms of investment flows for Treasuries, bonds and equities is a $5 trillion per year business. This is not small. I presume the reason we have only seen two rather small warning shots so far is that the Administration is deeply sensitive to Wall Street interests, led by Secretary of the Treasury Steven Mnuchin. Make no mistake: Given the serious Dollars at stake in financial markets, an escalation there will make the trade war look like child’s play.

Which way will it resolve?

The risk I see is the Chinese cracking down further in Hong Kong. If there is an eruption of protests resulting in violence, we should not be surprised by the possibility of Washington deciding to de-link the U.S. financial system from the Hong Kong Dollar and the Hong Kong financial market. That would be a huge step.

How probable is it that Washington would choose to weaponise the Dollar?

We don’t know. The Democrats in my judgement do not have a policy on that at present. Perhaps the best way to respond to your question is to say this: There are three things that China still fears about America. The US military, its semiconductor industry, and the Dollar. If you are attentive to the internals of the Chinese national economic self-sufficiency debate at present, it often is expressed in terms of «Let’s not allow to happen to us in financial markets what is happening in technology markets». But if the U.S. goes into hardline mode against China for general strategic purposes, then the only thing that would deter Washington is the amount of self-harm it would inflict on Wall Street if it is forced to decouple from the Chinese market. If that were to happen, it would place Frankfurt, Zurich, Paris or the City of London in a pretty interesting position.

Would you say that there is a form of mutually assured destruction, MAD, in financial markets, which would prevent the U.S. from going into full hardline mode?

If the Americans wanted to send a huge warning shot to the Chinese, they are probably more disposed towards using sectoral measures, like the one I outlined for Hong Kong, and not comprehensive measures. But never forget: American political elites, Republicans and Democrats, have concluded that Xi Jinping’s China is not a status quo power, but that it wishes to replace the U.S. in its position of global leadership. Therefore, the inherent rationality or irrationality of individual measures is no longer necessarily self-evident against the general strategic question. The voices in America to prevent a financial decoupling from China are strong at present, but that does not necessarily mean they will prevail.

China’s strategy, meanwhile, is to welcome U.S. banks with open arms. Is it working?

The general strategy of China is that the more economic enmeshment occurs – not just with the U.S., but also with Europe, Japan and the like – the less likely countries are to take a hard-line policy against Beijing. China is a master in using its economic leverage to secure foreign policy and national security policy ends. They know this tool very well. The more friends you have, be it JPMorgan, Morgan Stanley or the big technology firms of Silicon Valley, the more it complicates the decision making process in the West. China knows that. I’m sure you’ve heard it a thousand times from Swiss companies as well: How can we grow a global business and ignore the Chinese market? Every company in the world is asking that question.

You wrote an article in Foreign Affairs in August, warning of a potential military conflict triggered by events in the South China Sea or Taiwan. Do you really see the danger of a hot war?

I don’t mean to be alarmist, not at all. But I was talking to too many people both in Washington and Beijing that were engaged in scenario planning, to believe that this was any longer just theoretical. My simple thesis is this: These things can happen pretty easily once you have whipped up nationalist narratives on both sides and then have a major incident that goes wrong. A conflict is easy to start, but history tells us they are bloody hard to stop.

Of course the main argument against that is that there is too much at stake on both sides, which will prevent an escalation into a hot war.

You see, within that argument lies the perceived triumph of European rationality over East Asian reality. All that European rationality worked really well in 1914, when nobody thought that war was inevitable. The title of my article, Beware the Guns of August, referred to the period between the assassination in Sarajevo at the end of June and the failure of diplomacy in July, when miscommunication, poor signalling and the dynamics of mobilisation led to a situation that neither the Kaiser nor the Czar could stop. Nationalism is as poisonous today as it was in Europe for centuries. It’s just that you’ve all killed each other twice before you found out it was a bad idea. Remember, in East Asia we have the rolling problem of our own version of Alsace-Lorraine: it’s called Taiwan.

Influential voices in Washington say that the time of ambiguity is over. The U.S. should make its support for Taiwan explicit. Do you agree?

I don’t. If you change your declaratory policy on Taiwan, then there is a real danger that you accidentally cross a red line in Chinese official perception, and you bring on the very crisis you are seeking to avoid. It’s far better to simply have an operational strategy which aims to maximally enhance Taiwan’s ability to deter a Chinese attack.

Over the past years, the Chinese Communist Party has morphed into the Party of Xi. How do you see the internal dynamics within the CCP playing out over the coming years?

Xi Jinping’s position as Paramount Leader makes him objectively the most powerful Chinese leader since Mao. During the days of Deng Xiaoping, there were counterweighting voices to Deng, represented at the most senior levels, and there was a debate of economic and strategic policy between them. The dynamics of collective leadership applied then, they applied under Jiang Zemin, they certainly applied under Hu Jintao. They now no longer apply.

What will that mean for the future?

In the seven years he’s been in power so far, China has moved to the left on domestic politics, giving a greater role to the Party. In economic policy, we’ve seen it giving less headroom to the private sector. China has become more nationalist and, as a consequence, more internationally assertive. There are however opposing voices among the top leadership, and the open question is whether these voices can have any coalescence in the lead-up to the 20th Party Congress in 2022, which will decide on whether Xi Jinping’s term is extended. If it is extended, you can say he then becomes leader for life. That will be a seminal decision for the Party.

What’s your prediction?

For a range of internal political reasons, which have to do with power politics, plus disagreements on economic policy and some disagreements on foreign policy, the internal political debate in China will become sharper than we have seen so far. If I was a betting man, at this stage, I would say it is likely that Xi will prevail.


The post The Market: China’s fears about America appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Taking Your Chances

A few months ago, Sean had some tasks around building a front-end for some dashboards. Someone on the team picked a UI library for managing their widgets. It had lovely features like handling flexible grid layouts, collapsing/expanding components, making components full screen and even drag-and-drop widget rearrangement.

It was great, until one day it wasn’t. As more and more people started using the front end, they kept getting more and more reports about broken dashboards, misrendering, duplicated widgets, and all sorts of other difficult-to-explain bugs. These were bugs which Sean and the other developers had a hard time replicating, and even on the rare occasions that they did replicate one, they couldn’t do it twice in a row.

Stumped, Sean took a peek at the code. Each on-screen widget was assigned a unique ID. Except at those times when the screen broke: the IDs weren’t unique. So clearly something went wrong with the ID generation, which seemed unlikely. All the IDs were random junk, like 7902699be4, so there shouldn’t be a collision, right?

generateID() { return sha256(Math.floor(Math.random() * 100)). substring(1, 10); }

Good idea: feeding random values into a hash function to generate unique IDs. Bad idea: constraining your range of random values to integers between 0 and 99.

SHA256 will take inputs like 0, and output nice long, complex strings like "5feceb66ffc86f38d952786c6d696c79c2dbc239dd4e91b46729d73a27fb57e9". And a mildly different input will generate a wildly different output. This would be good for IDs, if you had a reasonable expectation that the values you're seeding are themselves unique. For this use case, Math.random() is probably good enough. Even after trimming to the first ten characters of the hash, I only hit two collisions for every million IDs in some quick tests. But Math.floor(Math.random() * 100) is going to give you a collision a lot more frequently. Even if you have a small number of widgets on the screen, a collision is probable. Just rare enough to be hard to replicate, but common enough to be a serious problem.
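To see just how bad it is, here is a quick simulation; a Node.js sketch using the built-in crypto module that mirrors the snippet above, not the library’s actual code:

// Reproduce the ID scheme above and count duplicates on a 20-widget dashboard.
const crypto = require('crypto');

function generateID() {
  const seed = String(Math.floor(Math.random() * 100)); // only 100 possible seeds
  return crypto.createHash('sha256').update(seed).digest('hex').substring(1, 10);
}

const seen = new Set();
let collisions = 0;
for (let i = 0; i < 20; i++) {
  const id = generateID();
  if (seen.has(id)) collisions++;
  seen.add(id);
}
console.log('duplicate IDs among 20 widgets: ' + collisions);
// With only 100 possible seeds, 20 draws produce at least one duplicate most
// of the time -- the birthday problem in miniature.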

Sean did not share what they did with this library. Based on my experience, it was probably "nothing, we've already adopted it" and someone ginned up an ugly hack that patches the library at runtime to replace the method. Despite the library being open source, nobody in the organization wants to maintain a fork, and legal needs to get involved if you want to contribute back upstream, so just runtime patch the object to replace the method with one that works.
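A runtime patch of that sort might look something like the following sketch; the library object name here is hypothetical, since Sean didn’t name it:

// Monkey-patch the (hypothetical) widget library's ID generator at startup
// with something actually unique, without forking the library.
const widgetLib = window.widgetLib; // assumption: the library hangs off window

widgetLib.generateID = function () {
  return crypto.randomUUID(); // built into modern browsers and recent Node versions
};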

At least, that's a thing I've had to do in the past. No, I'm not bitter.



Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 16)

Here’s part sixteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Cryptogram Negotiating with Ransomware Gangs

Really interesting conversation with someone who negotiates with ransomware gangs:

For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others) – and even if payment is arguably unlawful, seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand seems less of a legal, and more of a practical, determination – almost like a cost-benefit analysis.

The arguments for rendering a ransomware payment include:

  • Payment is the least costly option;
  • Payment is in the best interest of stakeholders (e.g. a hospital patient in desperate need of an immediate operation whose records are locked up);
  • Payment can avoid being fined for losing important data;
  • Payment means not losing highly confidential information; and
  • Payment may mean not going public with the data breach.

The arguments against rendering a ransomware payment include:

  • Payment does not guarantee that the right encryption keys with the proper decryption algorithms will be provided;
  • Payment further funds additional criminal pursuits of the attacker, enabling a cycle of ransomware crime;
  • Payment can do damage to a corporate brand;
  • Payment may not stop the ransomware attacker from returning;
  • If victims stopped making ransomware payments, the ransomware revenue stream would stop and ransomware attackers would have to move on to perpetrating another scheme; and
  • Using Bitcoin to pay a ransomware attacker can put organizations at risk. Most victims must buy Bitcoin on entirely unregulated and free-wheeling exchanges that can also be hacked, leaving buyers’ bank account information stored on these exchanges vulnerable.

When confronted with a ransomware attack, the options all seem bleak. Pay the hackers – and the victim may not only prompt future attacks, but there is also no guarantee that the hackers will restore a victim’s dataset. Ignore the hackers – and the victim may incur significant financial damage or even find themselves out of business. The only guarantees during a ransomware attack are the fear, uncertainty and dread inevitably experienced by the victim.

Cryptogram Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — and possibly other appliances made by Smarter — to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

Kevin RuddBBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge

E&OE TRANSCRIPT
TV INTERVIEW
BBC World
26 SEPTEMBER 2020

 

Mike Embley
Let’s speak to a man who has a lot of experience with China. Kevin Rudd, former Prime Minister of Australia, now president of the Asia Society Policy Institute. Mr. Rudd, that is a headline-grabbing thing to say at the United Nations General Assembly, isn’t it? Do you think, in a very directed economy with a population that’s given very little individual choice, it is possible? And if so, how?

Kevin Rudd
I think it is possible. And the reason I say so is because China does have a state planning system. And on top of that, China has concluded, I think, that it’s in its own national interest to bring about a radical reduction in carbon over time. So the two big announcements coming out of Xi Jinping at the UN General Assembly have been, A, the one you’ve just referred to, which is to achieve carbon neutrality before 2060, and also for China to reach what’s called peak greenhouse gas emissions before 2030. The international community has been pressuring them to bring those dates forward as much as possible: we hope to closer to 2050 for carbon neutrality, and closer to 2025 for peak emissions. But we’ll have to wait and see until China produces its 14th five-year plan.

Mike Embley
I guess we can probably assume that President Trump wasn’t impressed, he’s made his position pretty clear on these matters. How do you think other countries will respond? Or will they just say, well, that’s China, China can do that we really can’t?

Kevin Rudd
Well, I think you’re right to characterize President Trump’s reaction as dismissive, because ultimately, he does not seem to be persuaded at all by the climate change science. And the United States under his presidency has been completely missing in action, in terms of global work on climate change, mitigation activity. But when you look at the other big player on global climate change actions, the European Union, and I think positively speaking, both the European Union and working as they did, through their virtual discussions with the Chinese leadership last week, had been encouraging directly the Chinese to take these sorts of actions. I think the Europeans will be pleased by what they have heard. But there’s a caveat to this as well. One of the other things that Europeans asked is for China, not just to make concrete its commitments in terms of peak emissions and carbon neutrality, but also not to offshore China’s responsibilities by building a lot of carbon based or fossil fuel based power generation plants in the Belt and Road Initiative countries beyond China’s borders. That’s where we have to see further action by China as well.

Mike Embley
Just briefly, if you can: you know, there will be people yelling at the screen, given how fast climate change is becoming a climate emergency, that 2050 or 2060 is nowhere near soon enough, and that we have to inflict really massive changes on ourselves.

Kevin Rudd
Well, the bottom line is that if you look at the science, it requires us to be carbon neutral by 2050 as a planet. Given that China is the world’s largest emitter by a country mile, I’d say to those throwing shoes at the screen that getting China to that position, given where they were frankly 10 years ago when I first began negotiating with the Chinese leadership on climate change matters, is a big step forward. And secondly, what I find most encouraging, though, is that China’s national political interests, I think, have been engaged. They don’t want to suffocate their own national economic future, any more than they want to suffocate the world’s.

The post BBC World: Kevin Rudd on Xi Jinping’s zero-carbon pledge appeared first on Kevin Rudd.

Kevin RuddCNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites

E&OE TRANSCRIPT
TV INTERVIEW
CNN
25 SEPTEMBER 2020

Michael Holmes
Former Australian Prime Minister and president of the Asia Society Policy Institute, Kevin Rudd, joins me now. Prime Minister, I remember this well; it was back in 2008 when you did something historic: you presented an apology to Indigenous Australians for past wrongs. 12 years later, this is happening. What has changed in Australia in relation to how Aboriginal people are treated?

Kevin Rudd
Well, the Apology in 2008 helped turn a page in the recognition by white Australia of the level of mistreatment of black Australia over the previous couple of hundred years. And some progress has been made in closing the gap in some areas of health and education between Indigenous and non-Indigenous Australians. But when you see actions like we’ve just seen with Rio Tinto, in this willful destruction of a 46,000-year-old archaeological site, which even their own internal archaeological advisors said was a major site of archaeological significance in Australia, you scratch your head and can only conclude that Rio Tinto really do see themselves as above and beyond government, and above and beyond the public interest shared by all Australians.

Michael Holmes
And we heard in Angus’ report, you know, that to Indigenous people the destruction of some of these sites for a coal mine or whatever is like destroying a war cemetery. Why are Indigenous voices not being heard the way they should be? And, you know, let’s be honest: coal is not going to be a thing in 20-30 years, and meanwhile you destroy something that’s been there for 30,000 years. Why aren’t they being heard?

Kevin Rudd
Well, you’re right to pose the question, because when I was Prime Minister, I began by acknowledging the indigenous people of this country as the oldest continuing culture on Earth. That is not a small statement. It’s a very large statement, but it’s also an historical and prehistorical fact. And therefore, when we look at the architectural and the archaeological legacy of indigenous settlement in this country, we have a unique global responsibility to act responsibly. I can only put this buccaneer, cavalier attitude down to the corporate arrogance of Rio Tinto and one or two of the other major mining companies (I have a very skeptical view of BHP Billiton in this respect), and their attitude to Indigenous Australians, but also to a very comfortable, far too comfortable relationship with the conservative federal government of Australia, which replaced my government. Which, as you’ve just demonstrated through one of the decisions made by their environment minister, is quite happy to allow indigenous interests and archaeological significance to take a back seat.

Michael Holmes
Yeah, as I said, I remember 2008 well, when that apology was made. It was such a significant moment for Australia and Australians. I mean, I haven’t lived in Australia for 25 years, but you know, have attitudes changed in recent years in Australia, or changed enough, when it comes to these sorts of national treasures and appreciation of them? Has white Australia altered its view of Aboriginal heritage enough?

Kevin Rudd
I think the National Apology and the broader processes of reconciliation in this country, which my government took forward from 2008 onwards, have a continuing effect on the whole country, including white Australians. And what is my evidence of that? The evidence is that when Rio Tinto, in their monumental arrogance, thought they could just slide through by blowing these 46,000-year-old, archaeologically significant ancient caves to bits, even (let me call it) white Australia turned around and said, this is just a bridge too far. Even conservative members of parliament in a relevant parliamentary inquiry scratched their heads and said this is just beyond the pale. And so if you want evidence of the fact that underlying community attitudes have changed, and political attitudes have changed, that’s it. But we still have a buccaneer approach on the part of Rio Tinto, who I think not just in Australia but globally have demonstrated a level of arrogance towards elected democratic governments around the world, whereby Rio Tinto thinks it’s up here, and the elected governments and the norms of the societies in which they operate are down here. Rio Tinto has a major challenge itself, as do other big miners like BHP Billiton, which have to understand that they are dealing with the people’s resource. And they are working within a framework of laws which places an absolute priority on the centrality of the indigenous heritage of all these countries, including Australia.

 

The post CNN: Kevin Rudd on Rio Tinto and Indigenous Heritage Sites appeared first on Kevin Rudd.

LongNowCharting Earth’s (Many) Mass Extinctions

How many mass extinctions has the Earth had, really? Most people talk today as if it’s five, but where one draws the line determines everything, and some say over twenty.

However many it might be, new mass extinctions seem to reveal themselves with shocking frequency. Just last year researchers argued for another giant die-off just preceding the Earth’s worst, the brutal end-Permian.

Now, one more has come into focus as stratigraphy improves. With that, four of the most awful things that life has ever suffered all came down (in many cases, literally, as volcanic ash) within 60 million years. Not a great span of time with which to spin the wheel, in case your time machine is imprecise…