Planet Russell


Charles Stross — Countdown to Crazy

This is your official thread for discussing the upcoming US presidential and congressional election on November 3rd, along with its possible outcomes.

Do not chat about the US supreme court, congress, presidency, constitution, constitutional crises (possible), coup (possible), Donald Trump and his hellspawn offspring and associates, or anything about US politics in general on the Laundry Files book launch threads. If you do, your comments will be ruthlessly moderated into oblivion.

You are allowed and encouraged to discuss those topics in the comments below this topic.

(If you want to discuss "Dead Lies Dreaming" here I won't stop you, but there's plenty of other places for that!)

Charles Stross — Upcoming Attractions!

As you know by now, my next novel, Dead Lies Dreaming, comes out next week—on Tuesday the 27th in the US and Thursday the 29th in the UK (because I've got different publishers in different territories).

Signed copies can be ordered from Transreal Fiction in Edinburgh via the Hive online mail order service.

(You can also order it via Big River co and all good bookshops, but they don't stock signed copies. Link to Amazon US; link to Amazon UK. Ebooks are available too, and I gather the audiobook—again, there's a different version in the US, from Audible, and the UK, from Hachette Digital—should be released at the same time.)

COVID-19 has put a brake on any plans I might have had to promote the book in public, but I'm doing a number of webcast events over the next few weeks. Here are the highlights:

Outpost 2020 is a virtual SF convention taking place from Friday 23rd (tomorrow!) to Sunday 25th. I'm on a discussion panel on Saturday 24th at 4pm (UK time), on the subject of "Reborn from the Apocalypse": Both history and current events teach that a Biblical-proportioned apocalypse is not necessarily confined to the realms of fiction. How can we reinvent ourselves, and more importantly, will we? (Panelists: Charlie Stross, Gabriel Partida, David D. Perlmutter. Moderator: Mike Fatum.)

Orbit Live! As part of a series of Crowdcast events, at 8pm GMT on Thursday 27th RJ Barker is going to host myself and Luke Arnold in conversation about our new books: sign up for the crowdcast here.

Reddit AMA: No book launch is complete these days without an Ask Me Anything on Reddit, which in my case is booked for Tuesday 3rd, starting at 5pm, UK time (9am on the US west coast, give or take an hour—the clocks change this weekend in the UK but I'm not sure when the US catches up).

The Nürnberg Digital Festival is a community-driven festival with about 20,000 attendees in Nuremberg, to discuss the future, change and everything that comes with it. Obviously this year it's an extra-digital (i.e. online-only) festival, which has the silver lining of enabling the organizers to invite guests to connect from a long way away. Which is why I'm doing an interview/keynote on Monday November 9th at 5pm (UK time). You can find out more about the Festival here (as well as buying tickets for any or all days' events). It's titled "Are we in dystopian times?", which seems to be an ongoing theme of most of the events I'm being invited to these days, and probably gives you some idea of what my answer is likely to be ...

Anyway, that's all for now: I'll add to this post if new events show up.

Planet Debian — Ian Jackson: Gazebo out of scaffolding

Today we completed our gazebo, which we designed and built out of scaffolding:

Scaffolding is fairly expensive but building things out of it is enormous fun! You can see a complete sequence of the build process, including pictures of the "engineering maquette", at https://www.chiark.greenend.org.uk/~ijackson/2020/scaffold/

Post-lockdown maybe I will build a climbing wall or something out of it...

Planet Debian — Mike Gabriel: Welcome, Fre(i)e Software GmbH

Last week I received the official notice: There is now a German company named "Fre(i)e Software GmbH" registered with the German Trade Register.

Founding a New Company

Over the past months I have put my energy into founding a new company. As a freelancing IT consultant, I started running into the limitation that other companies have strict policies forbidding cooperation with one-person businesses (Personengesellschaften).

Thus, the requirement for setting up a GmbH business came onto my agenda. I will move some of my business activities into this new company, starting next year.

Policy Ideas

The "Fre(i)e Software GmbH" will be a platform to facilitate the growth and spreading of Free Software on this planet.

Here are some first ideas for company policies:

• The idea is to bring together teams of developers and consultants that provide the highest expertise in FLOSS.

• Everything this company will do will finally (or already during the development cycles) be published under some sort of a free software / content license (for software, ideally a copyleft license).

• Staff members will work and live across Europe; freelancers may live in any country that German businesses may do business with.

• Ideally, staff members and freelancers work on projects that they can identify with, projects that they love.

• Software development and software design is an art. In the company we will honour this. We will be artists.

• In software development, we will steer our customers towards non-CLA FLOSS copyright policies: developers can personally become copyright holders of the code projects they work on. This will strengthen the free nature of the FLOSS-licensed code brought forth in the company.

• The Fre(i)e Software GmbH will be a business focusing on sustainability and sufficiency. We will be gentle to our planet. We won't work on projects that create artificial needs.

• We all will be experts in communication. We all will continually work on improving our communication skills.

• Integrity shall be a virtue to strive for in the company.

• We will be honest with ourselves and our customers about the mistakes we make and the misassumptions we hold.

• We will honour and support diversity.

This is all pretty fresh. I'll be happy about hearing your feedback, ideas and worries. If you are interested in joining the company, please let me know. If you are interested in supporting a company with such values, please also let me know.

light+love
Mike Gabriel (aka sunweaver)

Planet Debian — Jonathan Dowland: PhD Year 3 progression

I'm now into my 4th calendar year of my part-time PhD, corresponding to half-way through Stage 2, or Year 2 for a full-time student. Here's a report I wrote that describes what I did last year and what I'm working on going forward.

year3 progression report.pdf (223K PDF)

Worse Than Failure — CodeSOD: Frist Item

In .NET, if you want to get the first item from an IList object, you could just use the index: list[0]. You also have a handy-dandy function called First, or even better FirstOrDefault. FirstOrDefault helpfully doesn’t throw an exception if the list is empty (though depending on what’s in the list, it may give you a null).

What I’m saying is that there are plenty of easy, and obvious ways to get the first element of a list.

IList<Order> orderList = db.GetOrdersByDateDescending().ToList();
int i = 1;
foreach (Order order in orderList)
{
    if (i == 1)
    {
        PrintOrder(order);
    }
    i++;
}

So, for starters, GetOrdersByDateDescending() is a LINQ-to-SQL call which invokes a stored procedure. Because LINQ does all sorts of optimizations on how that SQL gets generated, if you were to do GetOrdersByDateDescending().FirstOrDefault(), it would fetch only the first row, cutting down on how much data crosses the network.

But because they did ToList, it will fetch all the rows.

And then… then they loop over the result. Every single row. But they only want the first one, so they have an if that only triggers when i == 1, which I mean, at this point, doing 1-based indexing is just there to taunt us.
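The fix the article gestures at can be sketched in a couple of lines (keeping the article's own GetOrdersByDateDescending and PrintOrder names; whether the provider translates this into a single-row query depends on the LINQ mapping, as noted above):

```csharp
// FirstOrDefault lets the LINQ-to-SQL provider generate a TOP (1)
// query, so at most one row crosses the network. No counter, no loop.
Order first = db.GetOrdersByDateDescending().FirstOrDefault();
if (first != null)
{
    PrintOrder(first);
}
```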

Stevie adds: “This is a common ‘pattern’ throughout the project.” Well clearly, the developer responsible isn’t going to do something once when they could do it every single time.

Planet Debian — Junichi Uekawa: Sent a pull request to document sshfs slave mode.

Sent a pull request to document sshfs slave mode. Every time I try to do it I forget, so at least I have a document about how to do it. Also changed the name from slave to passive, but I don't think that will help me remember... Not feeling particularly creative about the name.


Planet Debian — Ben Hutchings: Debian LTS work, October 2020

I was assigned 6.25 hours of work by Freexian's Debian LTS initiative and carried over 17.5 hours from earlier months. I worked 11.5 hours this month and returned 7.75 hours to the pool, so I will carry over 4.5 hours to December.

I updated linux-4.19 to include the changes in DSA-4772-1, and issued DLA-2417-1 for this.

I updated linux (4.9 kernel) to include upstream stable fixes, and issued DLA-2420-1. This resulted in a regression on some Xen PV environments. Ian Jackson identified the upstream fix for this, which had not yet been applied to all the stable branches that needed it. I made a further update with just that fix, and issued DLA-2420-2.

I have also been working to backport fixes for some less urgent security issues in Linux 4.9, but have not yet applied those fixes.

Krebs on Security — Why Paying to Delete Stolen Data is Bonkers

Companies hit by ransomware often face a dual threat: Even if they avoid paying the ransom and can restore things from scratch, about half the time the attackers also threaten to release sensitive stolen data unless the victim pays for a promise to have the data deleted. Leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.

The findings come in a report today from Coveware, a company that specializes in helping firms recover from ransomware attacks. Coveware says nearly half of all ransomware cases now include the threat to release exfiltrated data.

“Previously, when a victim of ransomware had adequate backups, they would just restore and go on with life; there was zero reason to even engage with the threat actor,” the report observes. “Now, when a threat actor steals data, a company with perfectly restorable backups is often compelled to at least engage with the threat actor to determine what data was taken.”

Coveware said it has seen ample evidence of victims seeing some or all of their stolen data published after paying to have it deleted; in other cases, the data gets published online before the victim is even given a chance to negotiate a data deletion agreement.

“Unlike negotiating for a decryption key, negotiating for the suppression of stolen data has no finite end,” the report continues. “Once a victim receives a decryption key, it can’t be taken away and does not degrade with time. With stolen data, a threat actor can return for a second payment at any point in the future. The track records are too short and evidence that defaults are selectively occurring is already collecting.”

Image: Coveware Q3 2020 report.

The company said it advises clients never to pay a data deletion ransom, but rather to engage competent privacy attorneys, perform an investigation into what data was stolen, and notify any affected customers according to the advice of counsel and applicable data breach notification laws.

Fabian Wosar, chief technology officer at computer security firm Emsisoft, said ransomware victims often acquiesce to data publication extortion demands when they are trying to prevent the public from learning about the breach.

“The bottom line is, ransomware is a business of hope,” Wosar said. “The company doesn’t want the data to be dumped or sold. So they pay for it hoping the threat actor deletes the data. Technically speaking, whether they delete the data or not doesn’t matter from a legal point of view. The data was lost at the point when it was exfiltrated.”

Ransomware victims who pay for a digital key to unlock servers and desktop systems encrypted by the malware also are relying on hope, Wosar said, because it’s also not uncommon that a decryption key fails to unlock some or all of the infected machines.

“When you look at a lot of ransom notes, you can actually see groups address this very directly and have texts that say stuff along the lines of, ‘Yeah, you are fucked now. But if you pay us, everything can go back to before we fucked you.’”

Planet Debian — Martin-Éric Racine: Migrating to Predictable Network Interface Names

A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet.

This also meant changing the settings on my router box for my new ISP.

I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts.

Migrating to Predictable Network Interface Names

Ever since Linus decided to flip the network interface enumeration order in the Linux kernel, I had been relying on udev's persistent network interface rules to maintain some semblance of consistency in the NIC naming scheme of my hosts. It was never a totally satisfactory method, since it required manually editing the file to list the MAC addresses of all Ethernet cards and WiFi dongles likely to appear on that host, so that each would consistently get an easy-to-remember name that I could use in ifupdown configuration files.

Enter predictable interface names. What started as a Linux kernel module project at Dell was eventually re-implemented in systemd. However, clear documentation on the naming scheme had been difficult to find and udev's persistent network interface rules gave me what I needed, so I postponed the transition for years. Relocating to a new flat and rethinking my home network to match gave me an opportunity to revisit the topic.

The naming scheme is surprisingly simple and logical, once proper explanations have been found. The short version:

• Ethernet interfaces are called en i.e. Ether Net.
• Wireless interfaces are called wl i.e. Wire Less. (Yes, the official documentation calls this Wireless Local but, in everyday usage, remembering Wire Less is simpler.)

The rest of the name specifies on which PCI bus and which slot the interface is found. On my old Dell laptop, it looks like this:

• enp9s0: Ethernet interface at PCI bus 9 slot 0.
• wlp12s0: Wireless interface at PCI bus 12 slot 0.
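The two names above can be pulled apart mechanically. Here is a toy POSIX-shell helper (my own illustration, not part of any udev tooling; it only handles the common enpXsY/wlpXsY shape, not variants like eno1 or enx<MAC>):

```shell
# Decode a predictable interface name into type, PCI bus and slot.
decode_ifname() {
    name="$1"
    case "$name" in
        en*) type="Ethernet" ;;
        wl*) type="Wireless" ;;
        *)   type="unknown"  ;;
    esac
    rest="${name#??}"                  # drop the two-letter prefix
    bus="${rest#p}"; bus="${bus%%s*}"  # digits between 'p' and 's'
    slot="${rest##*s}"                 # digits after the last 's'
    echo "$type bus=$bus slot=$slot"
}

decode_ifname enp9s0    # Ethernet bus=9 slot=0
decode_ifname wlp12s0   # Wireless bus=12 slot=0
```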

An added bonus of the naming scheme is that it makes replacing hardware a breeze, since the naming scheme is bus and slot specific, not MAC address specific. No need to edit any configuration file. I saw this first-hand when I got around to upgrading my router's network cards to Gigabit-capable ones to take advantage of my new home's broadband LAN. All it took was to power off the host, swap the Ethernet cards and power on the host. That's it. systemd took care of everything else.

Still, migrating looked like a daunting task. Debian's wiki page gave me some answers, but didn't provide a systematic approach. I came up with the following shell script:

#!/bin/sh
lspci | grep -i -e ethernet -e network
sudo dmesg | grep -i renamed
for n in $(ls -X /sys/class/net/ | grep -v lo); do
  echo $n: && udevadm test-builtin net_id /sys/class/net/$n 2>/dev/null | grep NAME
  sudo rgrep $n /etc
  sudo find /etc -name "*$n*"
done

This combined ideas found on the Debian wiki with a few of my own. Running the script before and after the migration ensured that I hadn't missed any configuration file. Once I was satisfied with that, I commented out the old udev persistent network interface rules, ran dpkg-reconfigure on all my Linux kernel images to purge the rules from the initrd images, and called it a day.

... well, not quite. It turns out that with bridge-utils, bridge_ports all no longer works. One must manually list all interfaces to be bridged. Debian bug report filed.

PS: Luca Capello pointed out that Debian 10/Buster's Release Notes include migration instructions.

Planet Debian — Jonathan Dowland: Amiga mouse pointer

I've started struggling to see my mouse pointer on my desktop. I probably need to take an eye test and possibly buy some new glasses. In the meantime, I've changed my mouse pointer to something larger and more colourful: the Amiga Workbench 1.3 mouse pointer that I grew up with, but scaled up to 128x128 pixels (from 16x16, I think). See if you can spot it!

The X file format for mouse pointers is a bit strange, but it turned out to be fairly easy to convert it over. An enormous improvement!

Cryptogram — Determining What Video Conference Participants Are Typing from Watching Shoulder Movements

Accuracy isn't great, but that it can be done at all is impressive.

Murtuza Jadiwala, a computer science professor heading the research project, said his team was able to identify the contents of texts by examining the body movement of the participants. Specifically, they focused on the movement of their shoulders and arms to extrapolate the actions of their fingers as they typed.

Given the widespread use of high-resolution web cams during conference calls, Jadiwala was able to record and analyze slight pixel shifts around users' shoulders to determine if they were moving left or right, forward or backward.
He then created a software program that linked the movements to a list of commonly used words. He says the “text inference framework that uses the keystrokes detected from the video … predict[s] words that were most likely typed by the target user. We then comprehensively evaluate[d] both the keystroke/typing detection and text inference frameworks using data collected from a large number of participants.”

In a controlled setting, with specific chairs, keyboards and webcam, Jadiwala said he achieved an accuracy rate of 75 percent. However, in uncontrolled environments, accuracy dropped to only one out of every five words being correctly identified. Other factors contribute to lower accuracy levels, he said, including whether long sleeve or short sleeve shirts were worn, and the length of a user’s hair. With long hair obstructing a clear view of the shoulders, accuracy plummeted.

Kevin Rudd — Reimagine Podcast with Eric Schmidt: Democracy After the Pandemic

Podcast originally published 4 November 2020

Madeleine Albright (00:05): We can’t expect miracles immediately. But there has to be an assessment of how the international system works, and also, what America’s role in the world is. I happen to believe that President Clinton was the person that said that we were an indispensable power. He said it first, I just repeated it so often it became identified with me. But there’s nothing about the word indispensable that says, alone. It means that we need to be engaged and a partner, not some country that bosses everybody around and then says that we’ve been victimized. But to have, as a partnership, that deals with what are a whole set of new problems.

Eric Schmidt (00:50): The Coronavirus pandemic is a global tragedy, but it’s also an opportunity to rethink the world. To make it better, faster for more people than ever before. I’m Eric Schmidt, former CEO of Google and now co-founder of Schmidt Futures, and this is Reimagine, a podcast where trailblazing leaders imagine how we can build back better.

Eric Schmidt (01:17): In 1945 the world was reeling from successive catastrophes: the previous 30 years included two world wars, the Great Depression, and a global pandemic that left hundreds of millions dead or impoverished. That summer, leaders from democratic nations around the world convened in San Francisco to reimagine global cooperation and leadership. They created a set of institutions and norms that we refer to as the liberal world order, to ensure the tragic calamities of the prior decades would never happen again. 75 years later, we find ourselves in a similar place: amid another devastating public health crisis, authoritarianism is surging, and the leaders and institutions that have historically guided us through various crises are faltering amid rampant tribalism, conflict and fear.

Eric Schmidt (02:18): On this episode of Reimagine, former Secretary of State Madeleine Albright and former Prime Minister of Australia Kevin Rudd will help us understand the trajectory of democracy and global leadership in an increasingly unstable world order. The pandemic has deepened divisions and mistrust and set the world on a different course than it was on just a short while ago. We must find a way back to the right path for peace and prosperity to flourish.

Eric Schmidt (02:48): Joining us now is former Secretary of State Madeleine Albright. Secretary Albright was our nation’s 64th Secretary of State, and the first woman to lead the State Department. During her illustrious career, she has helped the United States navigate many international crises, and has spent much of the last 50 years advocating for freedom around the world. Born in Prague on the cusp of World War II, Secretary Albright has also seen fascism up close. Her experiences have made her one of the world’s foremost experts on democracy and authoritarianism. She presently teaches at Georgetown and chairs the National Democratic Institute, which works to safeguard elections and promote openness and accountability in governments around the world. Secretary Albright, welcome.

Madeleine Albright (03:31): Eric, it’s great to be with you. Thank you so much.

Eric Schmidt (03:33): Now, you’re doing a book tour and promoting a book that you’ve just recently published called Hell and Other Destinations. Tell us about the book. What’s special about this insight?

Madeleine Albright (03:44): Well, first of all, let me tell you, it kind of starts out by saying, people want to know how I want to be remembered. And I say, “I don’t want to be remembered, because I’m still here.” And I wanted to kind of show the things that I’ve done since I left office. And one of the things I’ve always tried to do is to make whatever I do next more interesting than what I did before. Which is a little hard if you’ve been Secretary of State. So the book is based on the fact that as we were leaving the department people were saying, “Well, what are you going to do? You can go back to teaching. You can write books. You can start a company. You can do your democracy work. You can continue with the Truman Foundation. So what do you want to do?” And I said, “I want to do it all.”

Madeleine Albright (04:32): And so what I’m doing are all those different things, and rationalizing that they all go together and that one informs another, and that I learn an awful lot, and I have. The only problem that I’m having was that I was trying to prove that I was not old, by showing how much I do. And then all of a sudden, I’m categorized as “elderly”, and so making that point, while I’m doing something virtually, is a little bit harder. But I had fun.
Eric Schmidt (05:04): But you were peripatetic and incredibly productive as Secretary of State, so I think this is just a character of who you are. I don’t think it’s true of before State and after State. You just work this way. This is who you are.

Madeleine Albright (05:17): Well, I certainly love traveling, and maybe I didn’t like airports, but once I got on the plane it was nice.

Eric Schmidt (05:24): I’ve always been interested in America’s view of fascism and our lack of understanding of the kind of bad government outcomes that we have. In the United States, we assume that democracy is, first, always the winner, and second, that it has always been true. But you have personal knowledge that this is not true. And we hear about fascism, but we don’t really know what it is. Tell us in a way that we can understand, why fascism is to be fought at all costs.

Madeleine Albright (05:53): Well, first of all, I think people throw around fascism as a term without understanding it: a fascist is somebody that you disagree with, or I often talk about a teenage boy whose father doesn’t allow him to drive and he calls him a fascist. First of all, fascism is not an ideology, it is a method for gaining control, and it is a way of controlling a population. And the way I describe it is that a fascist leader is somebody who takes the divisions in society, which happen to exist for any number of reasons, and exacerbates them. So it is based on the fact that a fascist leader identifies himself, and by the way, they’re all himselves, and identifies with that group at the expense of another, which is then the scapegoat that is to blame for everything, and makes the divisions worse. The second characteristic of fascism is that the leader thinks that he’s above the law, and then also calls the press the enemy of the people. But it is a way to control the population ultimately. But to gain power, and control the population.
Madeleine Albright (07:06): I decided that in order to understand fascism, I had to go back and see where it originated. And it did obviously originate with Mussolini. And what was interesting about him and how he gained power, was the Italians felt unappreciated because of the role that they had played at the end of World War I by supporting the Allies. So there was an anger and a disappointment. And then also, there were divisions in society, and all of a sudden this leader who was an outsider took advantage of those divisions and exacerbated them. The interesting part was that both he and Hitler gained power constitutionally. And I think that is also something that is worth thinking about.

Madeleine Albright (07:52): And so then, I began to look at some of the things that I saw going on in Europe, in Hungary, and in Poland, and then in the Philippines with Duterte, Venezuela. So it’s not something that’s gone, it’s definitely there.

Eric Schmidt (08:07): So when you think about fascism and you think about democracy, we obviously prefer democracy. We also have authoritarian systems, which don’t appear to me to be too fascist. So for example, China: clearly authoritarian, but it’s at least a system of governance without a lot of freedom. What’s happening with democracy? Is democracy weakening now? For a decade or so, democracy was getting quite a bit stronger.

Madeleine Albright (08:35): First of all, by the way, I decided that I would say communists were fascist also, because they do control the system. But what I do think is true, is that democracy is a process as much as anything, and it is complicated and it takes time. And it is based on a social contract in which people gave up some of their individual rights in order to have the government take on duties which were protective or did help the system move forward, in exchange for the fact that the citizens would participate and vote and play the role that they need to do in a free society.
Madeleine Albright (09:16): But, and this is where we have found the issues complicated is, democracy depends on information in many different ways, and democracy also has to provide a system which allows people to speak freely and figure out who they are. But at the same time, also allows them to make a living. And so I always say that, democracy has to deliver both in the political and in the economic field, because people want to vote and eat. But it is complicated. Eric Schmidt (09:50): It seems to me that, people are positing what I think is a false choice, between order and freedom. And it should be possible to achieve both. The narrative today about democracy has to do with the impact of the internet and social media, and the fact that specialized groups are getting weaponized if you will, by a combination of finding each other and then exploiting either vulnerabilities loopholes or features of the social media world, where they can get an outlandish level on impact, far greater than they would have before that. Do you believe that this is a threat to the way democracy works or do you think that this is going to get solved relatively easily as people understand it? Madeleine Albright (10:34): I think it will get solved. And by the way Eric, something that you don’t know about me is that, I wrote my dissertation on the role of the Czechoslovak press in 1968 because I was always interested in the role of information and political change. And the thing that happened in that was that the people actually knew what the truth was because of Radio for Europe and Voice of America, but their censored pressed wasn’t printing it in any way. So they weren’t able to act on it. They couldn’t figure out how it all went together. And what was interesting was, systematically, the press became uncensored. Also, information played a huge role in what was happening with solidarity in Poland. Which by the way, had a new form of passing on information which was a taped cassette. 
So when Lech Wałęsa spoke in one factory, they could send it to another one and motivate people to be supportive. Madeleine Albright (11:33): And so I’ve always been fascinated by the role of information, and so I am very much, by the technology that is taking place now. I do think that in order for people to participate in a democracy, they need information. That is a key to being able to be a participant that knows what is going on. The question is, and I think this is obviously something that you and others are dealing with is, “How do you allow the freedom to put all kinds of information into the system and yet, not have it be undercut by those who are trying to do something else with it? And how do people distinguish between what is true and what isn’t?” Madeleine Albright (12:18): And so I hate to be a relativist in this, but I think it is hard to figure out what the truth is these days. And therefore, just the way any professor, I will say, read or listen to a lot of different sources and try to figure it out. But I do think that at the moment, there is an exploitation by some of the incredible advances in technology that have been made. And the question is, how one has some kind of regulation without undercutting the aspect of the freedom of it. And I think that is very hard, as all of you in silicon valley are really experiencing. Eric Schmidt (12:58): It remains an unsolved problem. But lets consider the Chinese argument, and their argument goes something like this, the West has had a disease, and failing for a long time. The Chinese model, which is much more organized, much less free if you will, is more effective at producing the things that people care about. And indeed, if you look at the Coronavirus, even if you take a factor of ten discount on the numbers that they quote, there’s no question that China is largely working. 
The economic growth is quite strong now, there’s plenty of signals that their demand is growing, while the rest of the world is still struggling with no end in sight to the impact of the virus. One scenario is that this is the beginning of the acceleration of the Chinese model, and that the democracies can not get their act together because of the reasons that we discussed. How do you argue that one way or the other? Madeleine Albright (13:59): Well, I can take the opposite view, frankly. First of all, I do think it’s worth going back on something in Chinese history. There is an anger that has created a lot of this, from the fact that China felt disrespected by the West all the years, and imposed upon by some of the Western systems, some good some bad, like the opium war and variety of aspects of things, and felt that there needed to be one party. What is interesting is that we all had a theory, which turns out to have been wrong. Which is, having looked at South Korea, that had a dictatorship, and then that was disposed of and all of a sudden there was the development of a middle class, that the middle class brought with it, a sense of wanting to be able to make decisions about their own lives. Madeleine Albright (14:53): They were doing fairly well, but having that capability of not working under a dictatorship, they then began to adopt democratic principles. So there was the thought, that as China was experiencing economic growth and developing a middle class, that they would also go in the direction of having a more open system. It didn’t work, because there was a question about what had happened to the communist party and a new leadership with Xi Jinping, who felt that he had to reinvigorate the base of the party by calling on nationalism in a very strong way, i.e. 
going back and saying, "We had been limited by the imposition of Western ideas, and now we're going to do things our way."

Madeleine Albright (15:44): I do think that I could also argue that the Chinese system made it difficult to deal with the virus itself in the beginning, because the people who knew about it were quickly expunged from the system, and they weren't able to speak outside about what was happening in Wuhan and how that was affecting people. And then, because of the way that they hid what was going on (we don't have to speak about what was going on here), the Chinese were undermining a lot of the ways of getting information out. They clearly have a better system of controlling things. Even if we were functioning better, they can tell people what to do in a way that we never can or want to do.

Madeleine Albright (16:35): So I think it's a system that is aggressive in the way that it sees itself and the world. It is, as I said earlier, operating off the base of nationalism, that they were mistreated, and they still have people who would like to be doing something else. So I don't see it as a better system. And I can't, given my own background, see any kind of authoritarian system as one that allows for the evolution of society in a way where people feel that they want, and can, make decisions about their own lives. I can see where it is a competitive system, because at the moment, we are totally disorganized. They have somehow managed also to get some control over the virus. They have no compunctions. And it isn't just tracing, as far as the virus is concerned; it's literally having images of everybody and knowing where people are and what people are doing in society. So once the virus is dealt with, it will be hard to get rid of the control system that has been established by the Chinese Communist Party.

Eric Schmidt (17:48): I agree with that fear.
It turns out the ranking system and the rating system can clearly be used for other forms of social oppression as well as, of course, tracking the spread of the disease. Madame Secretary, you mentioned earlier a little bit about fascism and that the fascists were always men. Why is it that most of the successful governments dealing with these problems seem to be headed by women now?

Madeleine Albright (18:15): Well, I've been asked that question and I've tried to analyze it, and it is very interesting. First of all, I do think that women have a way of worrying about how other people are doing (and these are generalizations) and are caregivers. I think that one of the aspects is that women are better at multitasking, which allows there to be peripheral vision, to see where the problems are coming from and to look at them as problems to solve rather than blaming them on somebody else. I think also there is an attempt to tell the truth to people, not to try to hide how to deal with it, and to really use the various parts of their governments to spread the word without dominating it.

Madeleine Albright (19:10): And frankly, I have also made clear that women do better at fighting fascism. Because again, it is not trying to divide people. Mothers do not like to have one set of their children arguing with another. And I think that things are not based so much on ego. The countries that have been successful are Taiwan and New Zealand and Germany, and then Norway and Sweden, Iceland. And a lot has to do with having good communication between the head of state and the people, and trying not to treat them as if they can be totally manipulated, but to level with them and say, "You need to be part of the solution." And they actually believe in science too; that helps.
Eric Schmidt (20:01): You were the first female Secretary of State for this country. What advice would you have for Kamala Harris if she were to become the first female Vice President?

Madeleine Albright (20:09): Well, first of all, it's an honor to be the first, but it's not the easiest to be first.

Eric Schmidt (20:15): Yes.

Madeleine Albright (20:16): Because you are constantly being compared with your predecessors, and there are those, and I'll say this in my own case, who wonder how I ever got the job. And I have to tell you, when my name came up to be Secretary of State, there were people who said, "Well, Arab countries will not deal with a woman Secretary of State." And so the Arab ambassadors at the UN got together and said, "We've had no problems dealing with Ambassador Albright; we won't have any trouble dealing with Secretary Albright." I had more problems with the men in our own government.

Eric Schmidt (20:51): Oh my God. Really?

Madeleine Albright (20:52): And partially, it had to do with the fact that they had known me too long. I had had them over for dinner, at which I helped to pass the plates around. I had been a carpool mother. I was good friends with their families. And then, and I'm sure that this will also happen to Senator Harris, many of them thought, well, why weren't they in the job, when they should be the ones doing it. So I think there will be issues. I think also that one has to be conscious of the fact that you are also being judged by other women. And I think we have a tendency to be very critical of each other, judgmental, and then also, many times, to project our own sense of inadequacy onto other women.

Madeleine Albright (21:40): And that is partially what I was writing about in this book. Because the most famous statement I ever made was that "There's a special place in hell for women who don't help each other," which came out of my own experience.
So when I was writing that dissertation I was talking about, there were other women who said, "Why aren't you home with your children or in the carpool line?" And then, and this is very germane to your question just now, I was Geraldine Ferraro's foreign policy advisor and traveled with her in 1984, when she was the first woman to be on a national ticket. And we were somewhere and a woman came up to me and said, "How can she deal with a Russian? I can't deal with a Russian." Well, nobody was asking this woman to deal with a Russian.

Madeleine Albright (22:26): So I think that Kamala will also be judged, I think, as to whether X woman could be doing the job that she is doing. So I think we do need to be supportive of each other. That has sometimes been interpreted to mean that I say women have to vote for each other. I have never said that. I do think, however, we need to be supportive of each other.

Eric Schmidt (22:49): On the COVID response, you've actually written extensively about how we need to reorganize ourselves, in particular around international responses. You're uniquely, I think, concerned about the structure of the world going forward, after this is hopefully over. I was reading about this: you talked about additional resources for low-income countries, especially in Latin America and Africa, conflict areas where the disease is going to be terrible but, more importantly, they're in conflict anyway; and then supporting democracy and good governance in general. Is that going to happen? How will it happen? How will you make that happen from your position?

Madeleine Albright (23:31): Well, let me just say, one of the things that I have been very conscious of is that we are operating with international organizations that were created, most of them, in 1945, at the end of World War II. And they do need refurbishing. They need updating in a number of different ways. So that's number one.
But I also think that we have to recognize that the threats that are out there now know no borders. So whereas the virus might have started in China, it has definitely spread; climate change is another issue that is multinational, as is nuclear proliferation. So there are a number of aspects that have to be considered multilaterally. And by the way, Americans don't like the word multilateralism (it has too many syllables and it ends in an ism), but the bottom line is that some of the issues can only be solved by more than one country. So that is for starters.

Madeleine Albright (24:31): I think that what has to happen is to recognize the fact that the virus has hit different countries in different ways, and countries have their own ways of dealing with it. And part of the issue, as you raised it, is that the developing countries have been working very hard in terms of dealing with some of their economic issues as well as their governance issues. And this is hitting them very hard now, in terms of how they deal with the combination of environmental problems that push people to have to move into refugee camps, how to deal with the various struggles that are going on, and then not enough in terms of resources. If they are told to wash their hands every five minutes, they don't have enough water to drink. So one has to consider what the issues are.

Madeleine Albright (25:30): I also do believe that the international system has the capability of helping them economically as well as with advice. And we have done that in other cases, in terms of being able to control smallpox, or working also on the control of polio or, later, Ebola. But the system has failed on dealing with COVID. And it's partially because of what the Chinese didn't do, and then what they did do.
Which is, I do think that they have contributed a lot more than was expected to the World Health Organization, and there are politics everywhere, and it all needs to be fixed in some form or another. But there is also the fact that the United States has not seen it as a threat, and has not recognized that not only does the virus know no borders, but its effects will also affect our economic policies, trade, what can be done, how people can exist within their countries, and whether it is then contributing to a deficit in democracy. Because we are not the best example at the moment.

Madeleine Albright (26:46): So there are an awful lot of things that have to happen; we can't expect miracles immediately. But there has to be an assessment of how the international system works, and also of what America's role in the world is. I happen to believe that President Clinton was the person who said that we were an indispensable power. He said it first; I just repeated it so often it became identified with me. But there's nothing about the word indispensable that says alone. It means that we need to be engaged and a partner. Not some country that bosses everybody around and then says that we've been victimized, but a partner that deals with what are a whole set of new problems.

Eric Schmidt (27:32): A few weeks ago, you wrote an op-ed about all of this, talking about the American election. And you wrote, and I'll quote, "Mr. Biden, if elected, will inherit a country diminished by his predecessor's search for greatness in all the wrong places. The new president's task will be daunting: to reassure allies, reassert leadership on climate change and world health, forge effective coalitions to check the ambitions of China, Russia, and Iran, and establish the U.S.'s identity as a champion of democracy." Do you believe an incoming Biden presidency will be able to do this?

Madeleine Albright (28:08): I do believe so. I don't think that it can happen all at once.
And I also believe the opposite: that another four years of Trump will make our situation impossible in so many different ways. I really do think that another four years of this will be a disaster. There's no other way to state that. I have been around enough, even now, virtually, to think that it is un-American in every single way, and we are part of the major issue in the functioning of the world.

Madeleine Albright (28:40): But I do think that Vice President Biden is uniquely qualified, given his experience, to deal with a variety of these issues. He has seen how the system can work in terms of the international aspect of it. He believes in, and he has talked about, having a summit of democracies, which would really look at best practices and what can be done. He also, I think, has talked about the power of our example, which I mentioned, just generally. But I think we have to also recognize that it's going to take a certain amount of humility. We can't all of a sudden say, "Okay, we've had the election and now we're in charge again."

Madeleine Albright (29:24): I think it is going to take a deliberate effort to explain where we are and the issues that we've had. Then, in fact, also spend time as a partner trying to sort out how to generally behave in this 21st century, and think about how technology can be our partner and our friend, how we can acclimate ourselves to... I don't think anything's going to be the same after this whole pandemic. And we need to sort out what the tools are that we have in order to have a functional world, where we do not divide people more, where the United States does have a partnership role, and where we understand that our domestic situation can only be made better in partnership with others.

Madeleine Albright (30:18): So it's a very big assignment, there's no question about that. And my last foreign trip, frankly, was to go to Munich, for the Munich Security Conference.
And we were a joke, because Pompeo and Esper were there, and the way they talked about the United States was just totally out of la-la land. And the other countries, which were looking at what some of the solutions could be, were concerned. And I think that we need to get a reality check about the way we are viewed. And by the way, one of the things that we need to go back and look at is how did this all happen. And the best quote in my book on fascism is from Mussolini: "If you pluck a chicken one feather at a time, nobody notices." So there has been an awful lot of feather plucking, and we need to either get a new chicken or stop the feather plucking.

Eric Schmidt (31:15): Madame Secretary, I want to congratulate you on your new book, which is called Hell and Other Destinations: A 21st-Century Memoir. Thank you again. I look forward to your next book and the product of your great work at Georgetown.

Madeleine Albright (31:28): Thank you very much. I've enjoyed being with you, Eric, and what you're doing in your podcast.

Eric Schmidt (31:32): The primary goal of a democracy is to keep its people safe and to get them to be prosperous. Our democracies have failed on both parts of that so far. We accept that democracies are really groups of people who are lobbying and shaping information, and so forth. But ultimately, great leaders should emerge: leaders who somehow can judge where the risks are and make the right balance of trade-offs, so that the society can, at the end of the transaction, be more prosperous, safer, and so forth. Where will those leaders come from? They're not going to come from leaders who spend all their day testing their popularity, and they're not going to come from leaders who are beholden to special interests. They're going to come from the leaders of the old type: the ones who started with a principle of what they were trying to do and stuck to it, a principle around greatness, and success, and safety, and so forth.
The leaders who choose to pander to the crowds, to ignore facts, and to focus only on themselves and their own narcissism are destined for a terrible history.

Eric Schmidt (32:48): Secretary Albright's experience is invaluable as we lay the foundation for the next chapter of international coexistence. Our next guest, former Australian Prime Minister Kevin Rudd, will help us continue to look toward the future by helping us understand one of the growing forces shaping world affairs: China. Prime Minister Rudd is an expert on China and currently serves as the president of the Asia Society Policy Institute in New York City. As Prime Minister of Australia from 2007 to 2010, and then in 2013, Kevin was an active leader in global affairs. He ratified the Kyoto Protocol and committed Australia to decreasing carbon emissions. On the domestic front, he helped Australia survive the global financial crisis as the only major developed country not to slip into a recession. Among many other accomplishments, he delivered Australia's first national apology to Indigenous Australians as his first act as prime minister, and made significant investments in schools and education. Prime Minister Rudd, thank you so much for being here with me.

Kevin Rudd (33:50): It's good to be with you, Eric.

Eric Schmidt (33:51): So let's look at what happened with Australia and COVID. As best I can tell, the COVID crisis accelerated a break between Australia and China. Can you explain how this break happened, and how does the positioning of COVID in China feel now if you're in Australia?

Kevin Rudd (34:11): I think the first thing, as you and I both know, is that China has significantly changed. Xi Jinping's China is radically different from the China before 2012, 2013. It's certainly more assertive in terms of its international policy across the board. And so that's been building over the last six or seven years.
Plus, the second point is this: being a Western country located in the East, we've kind of been the first Western canary down the Chinese mineshaft, so we've experienced first and up front a lot of the direct challenges in terms of the ultimate tension between economic policy and security policy. Australia is one of America's oldest allies. China takes, would you believe, more than one third of Australian total exports. And of course, we're from radically different human rights traditions. So for those reasons, it's structural.

Kevin Rudd (35:05): And finally, what's happened most recently: we had the eruption of the virus coming out of China, we had its impact on all countries in the world, including the horror that unfolded in the United States, and a more manageable problem here in Australia. But still, big questions in the mind of the Australian public as to how this thing came about in the first place. Put all those things together with Australian advocacy for an international inquiry into the origins of the coronavirus, and it adds up to a cocktail of a deeply negative state of the Australia-China relationship.

Kevin Rudd (35:46): And one final point: what our Chinese friends have been doing with various American allies around the world, and friends around the world, is kind of making an example of them. You've seen that with the Canadians, over the Madame Meng affair on Huawei. You've seen it recently with the Swedes, who have had their own human rights challenges with China concerning various of their Chinese-Swedish citizens. You now see it, of course, with emerging problems for the British over Huawei. And then there's the Australians. So I think what tends to happen is that individual countries are singled out for particular treatment if they don't comply with China's foreign policy wishes, in order to set examples for the rest.
Eric Schmidt (36:31): Well, it's interesting that Australia was the first to call for an independent investigation of what was going on, which ultimately the WHO took on, and which Australia and the current prime minister pushed very hard. What penalty has China extracted from Australia today, in your opinion?

Kevin Rudd (36:54): Well, the complexity of this is a bit like this. Number one: for a middle power like Australia to call for such an independent investigation of the origins of the coronavirus, it's usually helpful to hunt in packs. By which I mean, bring a Coalition of the Policy Willing with you. What the Australian government did was go out there and unilaterally call for this, which makes it much easier for the Chinese to then single you out.

Kevin Rudd (37:18): The second point, just for the clarity of the record, is that an independent inquiry into the origins of the virus is somewhat different from what we ended up with, this WHO investigation into, effectively, the WHO's performance and not much beyond that. But it is something. And on the key question of how Australia has been singled out, I suppose I'd point to three measures. One is travel warnings to Chinese tourists not to come back to Australia because it's unsafe. Two, warnings to Chinese students studying in Australia that it's also unsafe, because of alleged racist reactions to Chinese in Australia. And number three, in specific commodity areas, like Australian barley, Australian beef, and potentially Australian wines, the Chinese have used various so-called quarantine and WTO-related measures to effectively switch their sources of supply. And ironically, American suppliers are moving into some of those opportunities. So there you go. That's the background.

Eric Schmidt (38:30): But building on this, you have been critical of the American response.
I'm quoting you: "America would have mobilized the world, but in this time, in America's absence, no one did." And indeed, France convened the G7, the G20 summit was hosted by Saudi Arabia, and so forth. Do you have a view now of this that's different? Do you see any change in the American role? Is it getting worse or better, from your perspective?

Kevin Rudd (39:02): If you're concerned about the stability and effectiveness of the global rules-based order, which through painstaking leadership Americans, together with their friends and allies, put together out of the ashes of the Second World War, then you've got to stand back and look at the policies and posture and actions of the Trump administration and just kind of scratch your head. So take the COVID-19 crisis. Yes, it's been a domestic challenge for all of us. But when you have a monumental global assault on public health, and a global assault on the economy and employment in virtually all countries, then the instantaneous response for those of us who are friends and allies of the United States, and others, is to look for American global leadership.

Kevin Rudd (39:51): Instead, what we found with Trump was the guy behaving domestically, as I read recently, like some 19th-century quack apothecary, recommending kind of unbelievable medical treatments for this condition. But when it came to global action, whether the global provision of PPE protective equipment or global leadership on vaccine development, et cetera, then the America we've come to know and respect, and most of us to love, over the decades, was just not there.

Kevin Rudd (40:28): So this creates a significant vacuum in the mind of global public opinion. And this fall, as we look forward to the next presidential election, if Joe Biden is elected, it's what I've described in stuff I've written recently for Foreign Affairs magazine as kind of the last chance saloon for American global leadership. We want America back.
We want you to work closely with your allies. There's so much to be done in the world, not just on pandemics but on climate and the rest, and having America back in the saddle is what we'd really like to see. But it is, frankly, a last chance saloon to get this done.

Eric Schmidt (41:05): In the Foreign Affairs piece that you recently published, you actually argue that both China and the U.S. will emerge "severely damaged," I think is the phrase. And severe damage is a pretty strong statement. And it seems to me that it's a race to the bottom, whether it's the politics or the change in the politics inside of China (Xi is, as you pointed out, much more authoritarian), and Trump is a different kind of leader than our previous presidents, as everyone has established. Describe the weakening, and then tell us how you would fix it, on both sides.

Kevin Rudd (41:45): Wow, there's a big question, or a couple of big questions. Firstly, on the diagnostics, let's just look at the United States first. There's a huge economic hit on the United States, which we will not know the full dimensions of for several years. And that's going to affect the future budget resilience of the United States as well. Ultimately, America can only print money for so long. Ultimately, there has to be a rebalancing of the system. And I say that as someone who has a deeply Keynesian approach to how you fix economies in a time of systemic international crisis. But the truth is, the objective truth is, it's a massive economic hit and it's a massive budgetary hit. Which obviously then has implications for what you can do with the government in the future, and for funding the future of the U.S. [inaudible 00:42:39] and the rest.

Kevin Rudd (42:40): But do you know something? There's also been this hit on American soft power. What we talked about before, Eric, was American global leadership.
And frankly, your friends and allies around the world are just holding their breath and waiting for November, for a decision by the American people as to what leadership they want America to exercise in the world in the future as well. But in the meantime, there's been a huge reputational hit on the U.S. standing.

Kevin Rudd (43:07): But what I find is people often then go into an automatic equation which says, "America down, therefore China up." Well, not so. The Chinese economy has taken a huge hit itself. We really have to go back to the Cultural Revolution to see such disastrous economic numbers as we've seen emerge from China in recent quarters. And that therefore flows through to their budgetary capacity to fund the Belt and Road Initiative, to fund what they're doing through their military, to fund their expanding international development program, et cetera. And so it becomes a huge economic and financial equation for the Chinese state as well. And remember, China is probably, I wouldn't say doubly dependent, but significantly dependent on the global economy, through trade and investment flows, as a key part of their formula for long-term, sustainable growth.

Kevin Rudd (44:06): So what I say emerges as a result of that, post-COVID, whenever post-COVID comes, Eric, is likely to be these two wounded elephants roaming around in the global living room. And as a consequence, we no longer have anyone effectively leading the global order and the systems and institutions of international governance, which have kept us basically outside of barbarism for the last three quarters of a century. And what I see is these institutions dying the death of a thousand cuts, and now increasingly becoming, as it were, balkanized into pro-American and pro-Chinese camps, with neither of the superpowers willing or able to exercise effective leadership. So it leads to what I've described as an emerging international anarchy.
Kevin Rudd (44:54): So what can you do about it? Two things, perhaps three. Start with those of us who are not Americans and Chinese. What I've written about extensively in the Economist and elsewhere is that it's time for a Coalition of the Policy Willing, what I call the M7 or the M10, the middle power 7 or the middle power 10: countries like France, Germany, the U.K. (once it decides what it wants to do in the future), maybe the Swedes, the Japanese, the South Koreans, the Australians, the Indians, the Canadians, and the Mexicans. These are all democracies, and they're all middle powers. And the question is how you exercise, through them, financial, diplomatic, and political measures to triage the international system until we have the reestablishment of a level of geopolitical equilibrium involving the great powers.

Kevin Rudd (45:49): And as for the United States, as I said, it really hinges on November. If Americans decide they wish to be the world's leaders in the future, albeit perhaps in a different way than in the past, and not simply a replication of past forms, then the world is looking to see what America under Biden would do. And that means fixing your house at home (Black Lives Matter, but basically the inequality which drives it) and rediscovering your confidence in the world.

Kevin Rudd (46:19): And as for China, China's not a done deal under Xi Jinping. Now, you said before, in your intro to this part of our conversation, Eric, that you and I share many friends. And let's say there are world views in China quite different from the ones we see articulated by Xi Jinping's administration. These are essentially internationalizing world views. These are more liberal internationalist world views. These are more open-economy, and increasingly open-society, world views, though with a question mark on the continued centrality of the Chinese Communist Party in a one-party state.
And so it really depends on what shakes down in Chinese politics in the lead-up to the 20th Party Congress in 2022, and whether Xi Jinping easily secures his reappointment.

Eric Schmidt (47:06): My final question: you've spent your whole life studying China. You studied the language, you did it academically, you wrote a PhD on the dissidents. Did you foresee the rise of China in this way, the new strong, powerful China? When did you know this was the path?

Kevin Rudd (47:29): Did I actually see China turning out this way? I think most of us who lived and worked in Beijing as I did in the 1980s, when I was a junior woodchuck in the Australian Embassy back then, analyzing the earliest days of political and economic reform in the Chinese system, had a degree of optimism that China would evolve in the direction of a more open economy, a more open society, and perhaps, in time, open politics. I think, though, having been myself in Tiananmen Square about a week or so before the tanks moved in, and having spent the better part of the week prior to that walking around and talking to the students back then in the square, many of whom were subsequently killed, I was always deeply skeptical about whether a Leninist party like the Chinese Communist Party would ever voluntarily surrender political power, as we saw with the combination of glasnost and perestroika in the then Soviet Union.

Kevin Rudd (48:38): So I've seen China as moving in the direction of certainly a more open economy, because they don't want to return to poverty. I see that as generating the social pressures that you and I have both experienced in China, of people wanting more freedom in their personal lives. But to be honest, I've always been skeptical as to whether the Communist Party, being deeply rooted in its Leninist traditions, would ever see it in its self-interest to hand over power to a more open, elected political entity.
The Chinese Communist Party calls this the theory of "peaceful evolution," and it's something which the Communist Party regards, internally, as political enemy number one. So yes, I saw China becoming more open, but always with a big doubt in my mind, having been in Tiananmen way back when, 30 years ago now, that it would ever voluntarily open its politics to the sort of transitions we've seen elsewhere.

Eric Schmidt (49:46): Thank you, Prime Minister Kevin Rudd. You're incredibly insightful on all such matters. Thank you again.

Kevin Rudd (49:53): Thanks for having me on your podcast, Eric.

Eric Schmidt (49:58): Where are we now? The liberal world order is not as free, global, or organized as it could be, 75 years after the democratic nations created it. COVID has deepened fissures in the international system and accelerated our slide toward anti-democracy. In a pandemic, we have not seen tremendous leadership out of the largest democracies. Instead, we've seen compromise, and in compromise comes death, because they have not figured out how to collectively manage both health and economic growth. It's a false choice to tell people to choose between health and economic growth; you have to solve both at the same time.

Eric Schmidt (50:40): 75 years ago, not just the winner of the war but the leader of the free world, the United States, set the global world order, set the rules, set the way that the institutions would work, and set a style of approach to solving problems. Today, the United States has ceded that role to others. That loss of leadership means that the world does not have a natural organizational point. It's probable that the world will devolve a bit, becoming a little bit more confusing. And during a pandemic, you need strong centralized leadership, as opposed to confusion and lack of leadership.
The most important thing now with democracies is to recognize that democracies have a certain shape and a certain set of values, to restate them, to call out behaviors that are inconsistent with democratic values, and to strengthen those democratic values. Eric Schmidt (51:35): I’m quite convinced that democracies with strong values and a lot of voter participation will do just fine. The most important thing in our democracy is to increase voter participation so that people have a share in the outcome. Study after study indicates that generations that don’t participate don’t buy into the leadership, they don’t buy into the decisions, they don’t have a shared sense of the outcome, and they ultimately become troublemakers. Over and over again, we want very high participation, and I think we’re going to get it this time. Eric Schmidt (52:08): Secretary Albright and Prime Minister Rudd have helped us understand some of the major past, present, and future forces shaping the story, but thankfully, the story’s not over. We must reimagine democracy and global leadership for a hyperconnected and technological world. We must reaffirm liberal democracy as the fairest and most effective form of governance. And we must call on the nations that uphold these human values and rights to steer the international system through this century and beyond. We’ve done this before, and I know we can do it again. Eric Schmidt (52:40): On the next episode of Reimagine, we’ll finish our season by reimagining our lives, planet, and universe with astrophysicist Neil deGrasse Tyson. The post Reimagine Podcast with Eric Schmidt: Democracy After the Pandemic appeared first on Kevin Rudd. Kevin Rudd — Nikkei Asia: China, Japan and South Korea have good news for planet Earth Published in Nikkei Asia on 4 November 2020 No matter who is declared the winner of the U.S. presidential election, Asia’s pathway to becoming a carbon-neutral continent is now increasingly clear. 
Six months ago, Asia was lagging desperately behind the rest of the world, including South America and Africa, in its commitment to achieving net zero emissions by midcentury. Only the governments of New Zealand, plus the Marshall Islands and Fiji as the usual vanguards of international climate leadership, had made such a commitment and — importantly — also enshrined it in domestic legislation. The recent groundbreaking commitments by China, Japan, and South Korea mean the three largest economies in East Asia now have clear pathways to decarbonization by mid-century. In terms of Asia’s G-20 membership, only India, Australia and Indonesia now lag behind. Importantly, Japan and Korea’s announcements will also help put pressure on China to hopefully reach carbon neutrality closer to 2050 — around the time of the 100th anniversary of the founding of the People’s Republic of China — and to achieve net zero greenhouse gas emissions a decade later. These pathways remain an open debate in Beijing’s political circles, including in the wake of last week’s Fifth Plenum, and as preparations continue toward the next Five Year Plan. We may see further signals from China on this by the time of the UN Secretary-General’s event to celebrate the 5-year anniversary of the signing of the Paris Agreement on December 12, and the world will certainly be watching closely. More so if Joe Biden is set to move into the White House the following month, meaning close to 60% of the world’s carbon emissions will then be from countries committed to net zero emissions. Beyond the symbolism of these political commitments, they are first and foremost massive market signals. This is especially the case for China, Japan and Korea’s major trading partners, including their largest import markets for coal. But, it is important to note, they are not out of step with the direction of Asia’s biggest companies in recent years. For example, the Thai conglomerate C.P. 
Group, one of the world’s largest agri-food producers, had already committed to net zero emissions. In recent days, Malaysia’s Petronas — the region’s largest oil and gas producer — joined them. Even BHP Billiton in my own country — one of the world’s largest mining companies and no fan of climate action — has adopted the same goal. These announcements reflect what is happening at a subnational level in Asia. In the last year, Asia has outpaced the rest of the world in terms of commitments by cities and regions to net zero emissions with Tokyo, Wuhan, Hong Kong and eight Australian states and territories all joining the list. Taken together, they alone represent over 223 million people, or 10% of the region’s population. This leadership has been a key part of why the approach of these three national governments has now shifted. The challenge now for the region is threefold. First, at a political level, to holistically embrace the vision of becoming a carbon-neutral continent in the same way Europe has done. This will be a much harder enterprise than it has been in Europe, even with some of their coal-dependent economies and right-wing governments, but it is not impossible. Key to this will be driving consideration of more national-level commitments to net zero emissions, especially among the 10-member Association of Southeast Asian Nations, which represent more of a mixed bag in this regard. This could be a key area for cooperative regional leadership between China’s President Xi Jinping, Japan’s Prime Minister Yoshihide Suga, and South Korea’s President Moon Jae-in, including in the lead-up to next year’s COP26 Climate Conference in Glasgow. Second, these governments must put their money where their mouths are and stop underpinning the development of carbon-intensive infrastructure — especially coal-fired power plants — across the rest of the region. Japan and Korea have taken important steps in this regard recently, but there is more still to be done. 
China’s Belt and Road Initiative is obviously a particular concern in that context, especially when Chinese investment, development finance, and support via equipment or personnel are taken together. Third, these governments should align their short-term actions with their long-term vision. In China, Japan and South Korea, the challenges to do so may be different, but the problem is the same. Unless each of these three countries can also enhance their Paris targets for 2030 by the time they get to COP26, the depth and sincerity of their long-term commitments will come increasingly under the spotlight. For China, this must mean peaking emissions by 2025, accelerating action in other areas that they committed to do in Paris, and getting on a pathway to phase out coal by 2040. For Japan and Korea, it must mean phasing out coal even sooner — by 2030 — and seriously ramping up the share of renewables in their energy mix. There is clearly a new wave of climate leadership emerging across Asia. The main question now for the region is whether it is able to ride that wave successfully, or whether its own actions in the short term, or lack of wider regional momentum, risks bringing it to a shuddering halt as the rest of the world moves forward. The post Nikkei Asia: China, Japan and South Korea have good news for planet Earth appeared first on Kevin Rudd. Worse Than Failure — Announcements: What The Fun Holiday Activity? The holidays are a time of traditions, but traditions do change. For example, classic holiday specials have gone from getting cut down for commercials, to getting snapped up by streaming services. Well, perhaps it's time for a new holiday tradition. A holiday tradition which includes a minor dose of… WTF. We're happy to announce our "Worse Than Failure Holiday Special" Contest. This is your chance to submit your own take on a very special holiday story. 
Not only is it your chance to get your place in history secured for all eternity, but also to win some valuable prizes.

What We Want

We want your best holiday story. Any holiday is valid, though given the time of year, we're expecting one of the many solstice-adjacent holidays. This story can be based on real experiences, or it can be entirely fictional, because what we really want is a new holiday tradition. The best submissions will:

• Contain a core WTF, whether it's a bad boss, bad technology decisions, or incompetent team members
• Prominently feature your chosen holiday
• End with a valuable moral lesson that leaves us feeling full of holiday cheer

Are you going to write a traditional story? Or maybe a Dr. Seussian rhyme? A long letter to Santa? That's up to you.

How We Want It

Submissions are open from now until December 11th. Use our submission form. Check the "Story" box, and set the subject to WTF Holiday Special. Make sure to fill out the email address field, so we can contact you if you win!

What You Get

The best story will be featured on our site, and its author will also receive some of our new swag: a brand new TDWTF hoodie, a TDWTF mug, and a variety of stickers and other small swag. The two runners-up will also get a mug, stickers and other small swag. Get writing, and let's create a new holiday tradition which helps us remember the true meaning of WTFs. 
Planet Debian — Christian Kastner: qemu-sbuild-utils 0.1: sbuild with QEMU qemu-sbuild-utils, which were recently accepted into unstable, are a collection of scripts that wrap standard features of sbuild, autopkgtest-build-qemu and vmdb2 to provide a trivial interface for creating and using QEMU-based environments for package building:

• qemu-sbuild-create creates VM images
• qemu-sbuild-update is to VM images what sbuild-update is to chroots
• qemu-sbuild wraps sbuild, adding all necessary (and some useful) options for autopkgtest mode

Here's a simple two-line example for creating an image, and then using it to build a package:

$ sudo qemu-sbuild-create -o simple.img unstable http://deb.debian.org/debian

$ qemu-sbuild --image simple.img -d unstable [sbuild-options] FOO.dsc

That's it. Both qemu-sbuild-create and qemu-sbuild automate certain things, but also accept a number of options. For example, qemu-sbuild-create can install additional packages, either from one of the APT sources or as .deb packages from the local filesystem. qemu-sbuild will pass on every option it does not consume itself to sbuild, so it should mostly work as a drop-in replacement for it (see the Limitations section below for where it doesn't). The created images can also be used for running autopkgtest itself, of course.

Advantages

Excellent isolation. One can go nuts in an environment, change or even break things, and the VM can always simply be reset, or rolled back to an earlier state. Snapshots are just terrific for so many reasons. With KVM acceleration and a fast local APT cache, builds are really fast. There's an overhead of a few seconds for booting the VM on my end, but that overhead is negligible in comparison to the overall build time. On the upside, with everything being memory-backed, even massive dependency installations are lightning fast. With the parameters of the target environment being configurable, it's possible to test builds in various settings (for example: single-core vs. multi-core, or with memory constraints). Technically, it should be possible to emulate, on one host, any other guest architecture (even if emulation might be slow because of missing hardware acceleration). This would present an attractive alternative to (possibly distant and/or slow) porter boxes. However, support for that in qemu-sbuild-utils is not quite there yet.

Limitations

The utilities are currently only available on the amd64 architecture, for building packages in amd64 and i386 VMs. There are plans to support arm64 in the near future. qemu-sbuild-create needs root, for the debootstrap stage. I'm looking into ways around this (by extending vmdb2). 
In any case, image updating and package building do not need privileges. autopkgtest mode does not yet support interactivity, so one cannot drop into a shell with --build-failed-commands, for example. The easy workaround is to connect to the VM with SSH. For this, the image must contain the openssh-server package.

Alternatives

I looked at qemubuilder, but had trouble getting it to work. In any case, the autopkgtest chroot mode of sbuild seemed far more powerful and useful to me. vectis looks incredibly promising, but I had already written qemu-sbuild-utils by the time I stumbled over it, and as my current setup works well for me for now and is simple enough to maintain, I decided to polish and publish it. I'm also looking into Docker- and other namespace-based isolation solutions (of which there are many), which I think are the way forward for the majority of packages (those that aren't too close to the kernel and/or hardware). Rather than relying on the kernel for isolation, KVM-based solutions like Amazon's Firecracker and QEMU's microvm machine type provide minimal VMs with almost no boot overhead. Firecracker, for example, claims less than 125ms from launch to /sbin/init. Investigating these is a medium-term project for me.

Why not schroot?

I have a strong aversion towards chroot-based build environments. The concept is archaic. Far superior software- and/or hardware-based technologies for process isolation have emerged in the past two decades, and I think it is high time to leave chroot-based solutions behind.

Acknowledgments

These utilities are just high-level abstractions. All the heavy lifting is done by sbuild, autopkgtest, and vmdb2. Worse Than Failure — CodeSOD: When All You Have Is .Sort, Every Problem Looks Like a List(of String) When it comes to backwards compatibility, Microsoft is one of those vendors that really commits to it. 
It’s not that they won’t make breaking changes once in a while, but they recognize that they need to be cautious about it, and give customers a long window to transition. This was true back when Microsoft made it clear that .NET was the future, and that COM was going away. To make the transition easier, they created a COM Interop system which let COM code call .NET code, and vice versa. The idea was that you would never need to rewrite everything from scratch, you could just transition module by module until all the COM code was gone and just .NET remained. This also meant you could freely mix Visual Basic and Visual Basic.Net, which never caused any problems. Well, Moritz sends us some .NET code that gets called by COM code, and presents us with the rare case where we probably should just rewrite everything from scratch.

''' <summary>
''' Order the customer list alphabetically
''' </summary>
''' <returns></returns>
''' <remarks></remarks>
Public Function orderCustomerAZ() As Boolean
    Try
        Dim tmpStrList As New List(Of String)
        Dim tmpCustomerList As New List(Of Customer)
        ' We create a list of ID strings and order it
        For i = 0 To CustomerList.Count - 1
            tmpStrList.Add(CustomerList(i).ID)
        Next i
        tmpStrList.Sort()
        ' We create the new tmp list of customers
        For i = 0 To tmpStrList.Count - 1
            For j = 0 To CustomerList.Count - 1
                If CustomerList(j).ID = tmpStrList(i) Then
                    tmpCustomerList.Add(CustomerList(j).Clone)
                    Exit For
                End If
            Next j
        Next i
        ' We update the list of customers
        CustomerList.Clear()
        CustomerList = tmpCustomerList
        Return True
    Catch ex As Exception
        CompanyName.Logging.ErrorLog.LogException(ex)
        Return False
    End Try
End Function

As the name implies, our goal is to sort a list of customers… by ID. That’s not really implied by the name. The developer responsible knew how to sort a list of strings, and didn’t feel any need to learn what the correct way to sort a list of objects was. So first, they build a tmpStrList which holds all their IDs. Then they Sort() that. 
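(An aside before the walkthrough continues: the entire function above reduces to one keyed, in-place sort. The idea is sketched here in Python for brevity; the .NET equivalent is List(Of Customer).Sort with a Comparison(Of Customer) delegate, or a LINQ OrderBy. The Customer class and sample data below are illustrative, not from Moritz's codebase.)

```python
from dataclasses import dataclass

@dataclass
class Customer:
    ID: str
    Name: str

customers = [Customer("C003", "Carol"),
             Customer("C001", "Alice"),
             Customer("C002", "Bob")]

# One in-place keyed sort: O(n log n), no temporary ID list,
# no quadratic re-matching, no cloning.
customers.sort(key=lambda c: c.ID)

print([c.ID for c in customers])  # ['C001', 'C002', 'C003']
```

Three lines of setup and one line of sorting, versus thirty-odd lines and an O(n²) rematch loop.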
Now that the IDs are sorted, they need to organize the original data in that order. So they compare each element of the sorted list to each element of the unsorted list, and if there’s a match, copy the element into tmpCustomerList, ensuring that list holds the elements in the sorted order. Finally, we clear out the original list and replace it with the sorted version. Return True on success, return False on failure. This last bit makes the most sense: chucking exceptions across COM Interop is fraught, so it’s easier to just return status codes. Everything else though is a clear case of someone who didn’t want to read the documentation. They knew that a list had a Sort method which would sort things like numbers or strings, so boom. Why look at all the other ways you can sort lists? What’s a “comparator” or a lambda? Seems like useless extra classes. Planet Debian — Norbert Preining: Debian KDE/Plasma Status 2020-11-04 About a month’s worth of updates on KDE/Plasma in Debian has accumulated, so here we go. The highlights are: Plasma 5.19.5 based on Qt 5.15 is in Debian/experimental and hopefully soon in Debian/unstable, and my own builds at OBS have been updated to Plasma 5.20.2, Frameworks 5.75, Apps 20.08.2. Thanks to the dedicated work of the Qt maintainers, Qt 5.15 has finally entered Debian/unstable and we can finally target Plasma 5.20. OBS packages The OBS packages as usual follow the latest release, and currently ship KDE Frameworks 5.75, KDE Apps 20.08.2, and, newly, Plasma 5.20.2. 
The package sources are as usual (note the different path for the Plasma packages and the App packages, containing the release version!), for Debian/unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma520/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2008/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and the same with Testing instead of Unstable for Debian/testing. The update to Plasma 5.20 took a bit of time, not only because of the wait for Qt 5.15, but also because I couldn’t get it running on my desktop, only in the VM. It turned out that the Plasmoid Event Calendar needed an update, and the old version crashed Plasma (“v68 and below crash in Arch after the Qt 5.15.1 update.”). After I realized that, it was only a question of updating to get Plasma 5.20 running. There are two points I have to mention (and I will fix sooner or later):

• Updates will need two attempts, because files moved from plasma-desktop to plasma-workspace. I will add the required Replaces/Conflicts later.
• Make sure that the kwayland-server packages (libkwaylandserver5, libkwaylandserver-dev) are at version 5.20.2. Some old versions had an epoch, so automatic updates will not work.

As usual, let me know your experience! Debian main packages The packages in Debian/experimental are at the most current state, 5.19.5. We have waited with the upload to unstable until the Qt 5.15 transition is over, but hope to upload to unstable rather soon. After the upload is done, we will work on getting 5.20 into unstable. 
My aim is to get the most recent version of Plasma 5.20 into Debian Bullseye, so we need to do that before the freeze early next year. Let us hope for the best. Planet Debian — Martin-Éric Racine: Adding IPv6 support to my home LAN A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet. This also meant changing the settings on my router box for my new ISP. I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts. Adding IPv6 support to my home LAN I have been following the evolution of IPv6 ever since the KAME project produced the first IPv6 implementation. I have also been keeping track of the IPv4 address depletion. Around the time World IPv6 Day was organized in 2011, I started investigating the situation of IPv6 support at local ISPs. Well, never mind all those rumors about Finland being some high-tech mecca. Back then, no ISP went beyond testing their routers for IPv6 compatibility and producing white papers on what their limited test deployments accomplished. Not that it matters much, in practice. Most IPv6 documentation out there, including Debian's own, still focuses on configuring transitional mechanisms, especially how to connect to a public IPv6 tunnel broker. Relocating to a new flat and rethinking my home network to match gave me an opportunity to revisit the topic. Much to my delight, my current ISP offers native IPv6. This prompted me to go back and read up on IPv6 one more time. One important detail: IPv6 hosts are globally reachable. The implications of this don't immediately spring to mind for someone used to IPv4 network address translation (NAT): any network service running on an IPv6 host can be reached by anyone, anywhere. Contrary to IPv4, there is no division between private and public IP addresses. 
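Global reachability also has a subtler cousin: with classic SLAAC, the interface-identifier half of a host's address is a purely mechanical EUI-64 transform of the NIC's MAC address (flip the universal/local bit of the first octet, splice ff:fe into the middle), so the same hardware is recognizable on any network it visits. A minimal Python sketch of that transform (the MAC shown is an arbitrary example):

```python
def eui64_interface_id(mac: str) -> str:
    """EUI-64 interface identifier: the low 64 bits of a SLAAC address."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui = octets[:3] + bytearray((0xFF, 0xFE)) + octets[3:]  # splice in ff:fe
    # Group the 8 bytes into four 16-bit hex fields
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))

# fe80:: plus this identifier is the host's link-local address; the same
# identifier reappears in every SLAAC-derived global address.
print(eui64_interface_id("52:54:00:12:34:56"))  # 5054:00ff:fe12:3456
```

This is precisely the behaviour that the privext setting shown below opts out of, by preferring randomly generated temporary addresses.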
Whereas a device behind an IPv4 NAT essentially is shielded from the outside world, IPv6 breaks this assumption in more than one way. Not only is the host reachable from anywhere, its default IPv6 address is a mathematical conversion (EUI-64) of the network interface's MAC address, which makes every connection forensically traceable to a unique device. Basically, if you hadn't given much thought to firewalls until now, IPv6 should give you enough goose bumps to get around to it. Tightening the configuration of every network service is also an absolute must. For instance, I configured sshd to only listen to private IPv4 addresses. What /etc/network/interfaces might look like on a dual-stack (IPv4 + IPv6) host:

allow-hotplug enp9s0
iface enp9s0 inet dhcp
iface enp9s0 inet6 auto
    privext 2
    dhcp 1

The auto method means that IPv6 will be auto-configured using SLAAC; privext 2 enables IPv6 privacy options and specifies that we prefer connecting via the randomly-generated IPv6 address, rather than the EUI-64-calculated, MAC-specific address; dhcp 1 enables passive DHCPv6 to fetch additional routing information. The above works for most desktop and laptop configurations. Where things got more complicated is on the router. I decided early on to keep NAT to provide an IPv4 route to the outside world. Now how exactly is IPv6 routing done? Every node along the line must have its own IPv6 address... including the router's LAN interface. This is accomplished using the sample script found in Debian's IPv6 prefix delegation wiki page. I modified mine as follows (the rest of the script is omitted for clarity):

# Both LAN interfaces on my private network are bridged via br0
IA_PD_IFACE="br0"
IA_PD_SERVICES="dnsmasq"
IA_PD_IPV6CALC="/usr/bin/ipv6calc"

Just put the script at the suggested location. We'll need to request a prefix on the router's outside interface to utilize it. 
This gives us the following interfaces file:

allow-hotplug enp2s4 enp2s8 enp2s9
auto br0

iface enp2s4 inet dhcp
iface enp2s4 inet6 auto
    request_prefix 1
    privext 2
    dhcp 1

iface enp2s8 inet manual
iface enp2s8 inet6 manual
iface enp2s9 inet manual
iface enp2s9 inet6 manual

iface br0 inet static
    bridge_ports enp2s8 enp2s9
    address 10.10.10.254

iface br0 inet6 manual
    bridge_ports enp2s8 enp2s9
    # IPv6 from /etc/dhcp/dhclient-exit-hooks.d/prefix_delegation

The IPv4 NAT and IPv6 Bridge script on my router looks as follows:

#!/bin/sh
PATH="/usr/sbin:/sbin:/usr/bin:/bin"
wan=enp2s4
lan=br0
########################################################################
# IPv4 NAT
iptables -F; iptables -t nat -F; iptables -t mangle -F
iptables -X; iptables -t nat -X; iptables -t mangle -X
iptables -Z; iptables -t nat -Z; iptables -t mangle -Z
iptables -t nat -A POSTROUTING -o $wan -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
########################################################################
# IPv6 bridge
ip6tables -F; ip6tables -X; ip6tables -Z
# Default policy DROP
ip6tables -P FORWARD DROP
# Allow ICMPv6 forwarding
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow established connections
ip6tables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# Accept packets FROM LAN to everywhere
ip6tables -I FORWARD -i $lan -j ACCEPT
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
echo 1 > /proc/sys/net/ipv6/conf/default/forwarding
# IPv6 propagation via /etc/dhcp/dhclient-exit-hooks.d/prefix_delegation

The above already provided enough IPv6 connectivity to pass the IPv6 test on my desktop inside the LAN. To make things more fun, I enabled DHCPv6 support for my LAN on the router's dnsmasq by adding the last 3 lines to the configuration:

dhcp-hostsfile=/etc/dnsmasq-ethersfile
bind-interfaces
interface=br0
except-interface=enp2s4
no-dhcp-interface=enp2s4
dhcp-range=tag:br0,10.10.10.0,static,infinite
dhcp-range=tag:br0,::1,constructor:br0,ra-names,ra-stateless,infinite
enable-ra
dhcp-option=option6:dns-server,[::],[2606:4700:4700::1111],[2001:4860:4860::8888]

The first five lines (included here for emphasis) are extremely important: they ensure that dnsmasq won't provide any IPv4 or IPv6 service to the outside interface (enp2s4) and that DHCP will only be provided for LAN hosts whose MAC address is known. Lines 6 and 7 show how dnsmasq's DHCP service syntax differs between IPv4 and IPv6. The rest of my configuration was omitted on purpose. Enabling native IPv6 on my LAN has been an interesting experiment. I'm sure that someone could come up with even better ip6tables rules for the router or for my desktop hosts. Feel free to mention them in the blog's comments. Krebs on Security — Two Charged in SIM Swapping, Vishing Scams Two young men from the eastern United States have been hit with identity theft and conspiracy charges for allegedly stealing bitcoin and social media accounts by tricking employees at wireless phone companies into giving away credentials needed to remotely access and modify customer account information. Prosecutors say Jordan K. Milleson, 21, of Timonium, Md., and 19-year-old Kingston, Pa. resident Kyell A. 
Bryan hijacked social media and bitcoin accounts using a mix of voice phishing or “vishing” attacks and “SIM swapping,” a form of fraud that involves bribing or tricking employees at mobile phone companies. Investigators allege the duo set up phishing websites that mimicked legitimate employee portals belonging to wireless providers, and then emailed and/or called employees at these providers in a bid to trick them into logging in at these fake portals. According to the indictment (PDF), Milleson and Bryan used their phished access to wireless company employee tools to reassign the subscriber identity module (SIM) tied to a target’s mobile device. A SIM card is a small, removable smart chip in mobile phones that links the device to the customer’s phone number, and their purloined access to employee tools meant they could reassign any customer’s phone number to a SIM card in a mobile device they controlled. That allowed them to seize control over a target’s incoming phone calls and text messages, which were used to reset the password for email, social media and cryptocurrency accounts tied to those numbers. Interestingly, the conspiracy appears to have unraveled over a business dispute between the two men. Prosecutors say on June 26, 2019, “Bryan called the Baltimore County Police Department and falsely reported that he, purporting to be a resident of the Milleson family residence, had shot his father at the residence.” “During the call, Bryan, posing as the purported shooter, threatened to shoot himself and to shoot at police officers if they attempted to confront him,” reads a statement from the U.S. Attorney’s Office for the District of Maryland. 
“The call was a ‘swatting’ attack, a criminal harassment tactic in which a person places a false call to authorities that will trigger a police or special weapons and tactics (SWAT) team response — thereby causing a life-threatening situation.” The indictment alleges Bryan swatted his alleged partner in retaliation for Milleson failing to share the proceeds of a digital currency theft. Milleson and Bryan are facing charges of wire fraud, unauthorized access to protected computers, aggravated identity theft and wire fraud conspiracy. The indictment doesn’t specify the wireless companies targeted by the phishing and vishing schemes, but sources close to the investigation tell KrebsOnSecurity the two men were active members of OGusers, an online forum that caters to people selling access to hijacked social media accounts. Bryan allegedly used the nickname “Champagne” on OGusers. On at least two occasions in the past few years, the OGusers forum was hacked and its user database — including private messages between forum members — was posted online. In a private message dated Nov. 15, 2019, Champagne can be seen asking another OGusers member to create a phishing site mimicking T-Mobile’s employee login page (t-mobileupdates[.]com). Sources tell KrebsOnSecurity the two men are part of a larger conspiracy involving individuals from the United States and United Kingdom who’ve used vishing and phishing to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. Planet Debian — Wouter Verhelst: Dear Google ... Why do you have to be so effing difficult about a YouTube API project that is used for a single event per year? FOSDEM creates 600+ videos on a yearly basis. There is no way I am going to manually upload 600+ videos through your webinterface, so we use the API you provide, using a script written by Stefano Rivera. 
This script grabs video filenames and metadata from a YAML file, and then uses your APIs to upload said videos with said metadata. It works quite well. I run it from cron, and it uploads files until the quota is exhausted, then waits until the next time the cron job runs. It runs so well that the first time we used it, we could upload 50+ videos on a daily basis, and so the uploads were done as soon as all the videos were created, which was a few months after the event. Cool! The second time we used the script, it did not work at all. We asked one of our keynote speakers, who happened to be some hotshot at your company, to help us out. He contacted the YouTube people, and whatever had been broken was quickly fixed, so yay, uploads worked again. I found out later that this is actually a normal thing if you don't use your API quota for 90 days or more. Because it's happened to us every bloody year. For the 2020 event, rather than going through back channels (which happened to be unavailable this edition), I tried to use your normal ways of unblocking the API project. This involves creating a screencast of a bloody command line script and describing various things that don't apply to FOSDEM and ghaah shoot me now so meh, I created a new API project instead, and had the uploads go through that. Doing so gives me a limited quota that only allows about 5 or 6 videos per day, but that's fine, it gives people subscribed to our channel the time to actually watch all the videos while they're being uploaded, rather than being presented with a boatload of videos that they can never watch in a day. Also it doesn't overload subscribers, so yay. About three months ago, I started uploading videos. Since then, every day, the "fosdemtalks" channel on YouTube has published five or six videos. Given that, imagine my surprise when I found this in my mailbox this morning... This is an outright lie, Google. The project has been created 90 days ago, yes, that's correct. 
It has been used every day since then to upload videos. I guess that means I'll have to deal with your broken automatic content filters to try and get stuff unblocked... ... or I could just give up and not do this anymore. After all, all the FOSDEM content is available on our public video host, too. Planet Debian — Martin-Éric Racine: GRUB fine-tuning A couple of years ago, I moved into a new flat that comes with RJ45 sockets wired for 10 Gigabit (but currently offering 1 Gigabit) Ethernet. This also meant changing the settings on my router box for my new ISP. I took this opportunity to review my router's other settings too. I'll be blogging about these over the next few posts. GRUB fine-tuning One thing that had been annoying me ever since Debian migrated to systemd as /sbin/init is that boot message verbosity hasn't been the same. Previously, the cmdline option quiet merely suppressed the kernel's output to the bootscreen, but left the daemon startup messages alone. Not anymore. Nowadays, quiet produces a blank screen. After some googling, I found the solution to that:

GRUB_CMDLINE_LINUX_DEFAULT="noquiet loglevel=5"

The former restores daemon startup messages, while the latter makes the kernel output only significant notices or more serious messages. On most of my hosts, it mostly reports inconsistencies in the ACPI configuration of the BIOS. Another setting I find useful is a reboot delay in case a kernel panic happens:

GRUB_CMDLINE_LINUX="panic=15"

This gives me enough time to snap a picture of the screen output to attach to the bug report that will follow. LongNow — Explorers Discover Pinnacle of Coral Taller Than Empire State Building in Great Barrier Reef Even now, even in shallow waters, the sea continues to surprise us with new wonders (many of them rich in “living fossils” like the chambered nautilus and various sharks). 
Reefs are themselves fabulous living examples of multitudinous pace layers, not unlike the structural layers of a house Stewart Brand details in How Buildings Learn—only these buildings literally do learn, as scaffolded colonial organisms with their own inarguable (and manifold) agencies:

Explorers of the Great Barrier Reef have discovered a giant pinnacle of coral taller than the Empire State Building. Mariners long ago charted seven pinnacle reefs off the cape that, by definition, lie apart from the main barrier system. Bathed in clear waters, the detached reefs swarm with sponges, corals and brightly colored fish — as well as sharks — and are oases for migrating sea life. Their remoteness makes the pinnacles little-studied, and Australia’s Great Barrier Reef Marine Park Authority has assigned them its highest levels of protection, which limit such activities as commercial fishing. One detached reef at Raine Island is the world’s most important nesting area for green sea turtles. The new pinnacle was found a mile and a half from a known detached reef. Dr. Beaman, who formerly served in the Royal Australian Navy as a hydrographic surveyor, said he and his team were certain it was previously unknown. Its seven relatives, he added, were all charted in the 1880s, more than 120 years ago.

Charles Stross — Editorial Entanglements

A young editor once asked me what was the biggest secret to editing a fiction magazine. My answer was "confidence." I have to be confident that the stories I choose will fit together, that people will read them and enjoy them, and most importantly, that each month I'll receive enough publishable material to fill the pages of the magazine. Asimov's Science Fiction comes out as combined monthly issues six times a year. A typical issue contains ten to twelve stories. That means I buy about 65 stories a year. Roughly speaking, I need to buy five to six stories per month--although I may actually buy two one month and ten the next.
That I will receive these stories should seem inevitable. I get to choose them from about eight hundred submissions per month. Yet, since I know that I will have to reject over 99 percent of the stories that wing their way to me, there is always a slight concern that someday 100 percent of the submissions won't be right for the magazine. Luckily, this anxiety is strongly offset by a lifetime of experience. For sixteen years as the editor-in-chief, and far longer as a staff member, I've seen that each issue of the magazine has been filled with wonderful stories. Asimov's tales are balanced: they are long and short, amusing and tragic, near- and distant-future explorations of hard SF, far-flung space opera, time travel, surreal tales, and a little fantasy. They're by well-known names and brand-new authors. I have confidence these stories will show up and that I'll know them when I see them.

I have edited or co-edited more than two dozen reprint anthologies. These books consisted of stories that previously appeared in genre magazines. Pulling them together mostly required sifting through years and years of published fiction. The tales have been united by a common theme such as Robots or Ghosts or The Solar System. Editing my first original anthology was not like editing these earlier books or like editing an issue of the magazine. Entanglements: Tomorrow's Lovers, Families, and Friends, which I edited as part of the Twelve Tomorrows series, has just come out from MIT Press. The tales are connected by a theme--the effect of emerging technologies on relationships--but the stories are brand new. Instead of waiting for eight hundred stories to come to me, I asked specific authors for their tales. I approached prominent authors like Nancy Kress (who is also profiled in the book by Lisa Yaszek), Annalee Newitz, James Patrick Kelly, and Mary Robinette Kowal, as well as up-and-coming authors like Sam J. Miller, Cadwell Turnbull, and Rich Larson.
I was working with some writers for the first time. Others, like Suzanne Palmer and Nick Wolven, were people I'd published on several occasions. I deliberately chose authors who I felt were capable of writing the sort of hard science fiction that the Twelve Tomorrows series is famous for. I was also pretty sure that I was contacting people who were good at making deadlines! I knew I enjoyed the work of Chinese author Xia Jia and I was delighted to have an opportunity to work with her translator, Ken Liu. I was also thrilled to get artwork from Tatiana Plakhova.

Once I commissioned the stories, I had to wait with fingers crossed. What if an author went off in the wrong direction? What if an author failed to get inspired? What if they all missed their deadlines? It turned out that I had no need to worry. Each author came through with a story that perfectly fit the anthology's theme. The material was diverse, with stories ranging from tales about lovers and mentors and friends to stories populated with children and grandparents. The book includes charming and amusing tales, heart-rending stories, and exciting thrillers. I learned so much from editing Entanglements. The next time I edit an original anthology, I expect to approach it with a self-assurance akin to the confidence I feel when I read through a month of submissions to Asimov's.

Planet Debian — Martin Michlmayr: ledger2beancount 2.5 released

I released version 2.5 of ledger2beancount, a ledger to beancount converter.
Here are the changes in 2.5:

• Don't create negative cost for lot without cost
• Support complex implicit conversions
• Handle typed metadata with value 0 correctly
• Set per-unit instead of total cost when cost is missing from lot
• Support commodity-less amounts
• Convert transactions with no amounts or only 0 amounts to notes
• Fix parsing of transaction notes
• Keep tags in transaction notes on same line as transaction header
• Add beancount config options for non-standard root names automatically
• Fix conversion of fixated prices to costs
• Fix removal of price when price==cost but they use different number formats
• Fix removal of price when price==cost but per-unit and total notation mixed
• Fix detection of tags and metadata after posting/aux date
• Use D directive to set default commodity for hledger
• Improve support for postings with commodity-less amounts
• Allow empty comments
• Preserve leading whitespace in comments in postings and transaction headers
• Preserve indentation for tags and metadata
• Preserve whitespace between amount and comment
• Refactor code to use more data structures
• Remove dependency on Config::Onion module

Thanks to input from Remco Rijnders, Yuri Khan, and Thierry. Thanks to Stefano Zacchiroli and Kirill Goncharov for testing my changes. You can get ledger2beancount from GitHub.

Worse Than Failure — Sweet Release

Release Notes: October 31, 2019

• Added auto-save feature every five minutes. Auto-saves can be found in C:\Users\[username]\Documents\TheApp\autosaves.
• Added ability to format text with bold, underline, and italics.
• Removed confusing About page. Terms and conditions can now be found under Help.

"And ... send." Mark sent the weekly release notes to the distribution list, copying them from where the app itself would display them on boot. "Now everyone should be on the same page, and I can get to work on my next big feature."

Two hours later, Janine, the product manager, stopped by his cube. "Hey, Mark.
I was thinking. You know that About page? I keep getting complaints. What would it take to just axe it?"

"Already done in the latest version," he replied, not even looking up from the code.

"So that's, what, three hours of work?"

Mark had to tear his eyes away from the screen to look at Janine, baffled. "Huh? No, it's done. Already. It's gone. Didn't you update this morning?"

"Oh! Already! Okay, thanks. Good work." She vanished, leaving him to reload his train of thought and focus on the refactor he was doing.

Half an hour later, just as he was in the middle of something, one of the users, Roger, dropped in. "Hey, Mark! I know this should go through Janine, but I have a great idea, and I wanted to see if it was feasible."

"Hang on ... okay ... shoot." Mark hit Ctrl-S and focused on Roger. Remember, think customer service.

"Listen," Roger said. "Every once in a while, right, I'm working on something, and someone comes by to interrupt, right?"

"Okay?" began Mark, unclear where this was going.

"And you know how it goes. One thing leads to another, and so on, and eventually, I forget what I was doing, and I close out the program."

"Sure." Mark risked a glance at his IDE, wondering if he had time to start compiling or not.

"So, what if the program saved automatically, like, when I exit or something?" Roger asked.

"Oh, actually, as of this morning it auto-saves every five minutes," Mark said.

"Okay, cool, cool, but like, it should save when I exit."

"Um, I think it asks if you want to save, but I could maybe put that—"

"Or," Roger interrupted, "better yet, it should know when I get distracted, and save then, so I don't lose anything."

"It should ... know? How would it know?"

"Eh, you're right. Maybe it should just save every ten minutes."

Mark pinched the bridge of his nose. "I can do that. What about every five?"

"Perfect! Get right on that," Roger declared, striding away. "Good man."

He'll figure it out eventually, Mark decided, going back to his IDE.
He compiled, ran the software, and was in the middle of testing when Janine came by in a panic, carrying her open laptop. "Mark! We have to roll back the release!"

He didn't wait for auto-save, but exited his debugger, immediately pulling up the release console. "What, what's wrong? What happened?"

"You know how you killed the About page?" she demanded, eyes wide with horror.

"Yeah?"

"Well the Terms and conditions were in there! Legal says we can't ship without terms and conditions! This is a huge priority-one bug, I don't know how you missed it!"

Mark's shoulders slumped as he stopped logging into the release console. "Oh. I put them under Help."

"But I told you to put them under About!"

"And then you told me to kill the About page but keep the Terms and Conditions, so I moved them under Help. Didn't you read the release notes?"

"Oh, right, right, hang on, let me just pull it up here ... oh, never mind, it's under Help. False alarm! Carry on."

So Mark carried on, one eye on the time. I barely got anything done, as usual for a Monday. I really don't want to stay late tonight ...

Still, he managed to get into the flow of things, and was just refactoring a critical class when Sue, Mark's boss, stopped by. Mark of course pulled his attention away from the code to talk to the boss, though already he was beginning to resent the constant interruptions.

"Hey, Marky Mark, how's it going?" asked Sue.

"Fine."

"Good, good. Listen, I know you're busy, so I'll get right to it: we have a request from the CEO, so it'll need to get into next week's release for sure."

Feeling his odds of getting the refactor committed evaporating, Mark nodded. "All right, I'm on it. What is it?"

"So, you know how the product can send email, right?"

My least favorite feature. "Yup. What about it?"

"Well the CEO was thinking, he can do stuff in Gmail that you can't do in our product, and he wants to know why."

He wants me to replicate all of Gmail in the product?! "What things, specifically?"
Mark managed to ask calmly.

"He's not super technical, but he's talking about things like bold, italics, and underlines. Those are the big three."

Mark smashed his forehead into the keyboard for a moment before lifting his head to mutter: "Why do I even send release notes?"

"What?"

"We released that feature this morning!"

"Oh. Good show! Thanks Mark, you're the best."

Just as he was packing up for the day, Janine stopped by again, knocking on the edge of his cubicle, a phone to her ear. "Mark! Listen, I've got the CEO on the phone, he wants to know where we find the autosaves, and I can't figure it out. Do you know?"

Mark looked at the clock: 5:10. "Nope!" he said cheerily. "Check the release notes, I'm sure it's in there somewhere."

"I looked, I didn't see it."

"Shame, but I'm already logged out of everything. Tell him to do a real save and we'll get back to him in the morning."

"Oh, never mind, he found it! Turns out it was in the release notes. Thanks Mark, you're a lifesaver!"

If you say so. Mark walked out the door, not bothering to reply, and headed directly across the street to the pub for his weekly Monday Evening Beer. Six days until we start from the top, he thought.

Planet Debian — Joerg Jaspert: Debian NEW Queue, Rust packaging

Debian NEW Queue

So for some reason I got myself motivated again to deal with some packages in Debian's NEW queue. We had 420 source packages waiting for some kind of processing when I started; now we are down to something around 10. (Silly, people keep uploading stuff…) That’s not entirely my own work, others from the team have been active too, but for those few days I went through a lot of stuff waiting. And I must say it still feels mostly like it did when I somehow stopped doing much in NEW.
Except - well, I feel that maintainers are much better at preparing their packages; especially that dreaded task of getting the copyright file written seems to be one that is handled much better. Now, that’s not supported by any real numbers, just a feeling, but a good one, I think.

Rust

Dealing with NEW meant I got in contact with one part that currently generates some friction between the FTP Team and one group of package maintainers - the Rust team.

Note: this is, of course, entirely written from my point of view. Though with the intention of presenting it as objectively as possible. Also, I know what Rust is, and have tried a “Hello world” in it, but that’s about my deep knowledge of it…

The problem

Libraries in Rust are bundled/shipped/whatever in something called crates, and you manage what your stuff needs and provides with a tool called cargo. A library (one per crate) can provide multiple features; say, a TLS lib can link against gnutls or openssl or some other random implementation. Such features may even be combinable in various different ways, so one can have a high number of possible feature combinations for one crate.

There is a tool called debcargo which helps creating a Debian package out of a crate. And that tool generates so-called feature packages, one per feature / combination thereof. Those feature packages are empty packages, only containing a symlink for their /usr/share/doc/… directory, so their size is smaller than the metadata they produce inside the archive and the files generated from it: stuff that every user everywhere has to download and their apt has to process. Additionally, any change of those feature sets means one round through NEW, which is also not ideal.

So, naturally, the FTP Team dislikes those empty feature packages. Really, a lot.

There appears to be a different way: not having the feature packages, but putting all the combinations into a Provides header.
That sometimes works, but has two problems:

• It can generate really long Provides: lines. I mean, REALLY REALLY REALLY long. Somewhere around 250 KB is the current record. That’s long enough that a tool (not dak itself) broke on it. Sure, that tool needs to be fixed, but still, that’s not nice. Currently this is the approach we prefer, though.
• Some of the features may need different dependencies (say, gnutls vs openssl); should those conflict with each other, you can not combine them into one package.

Solutions

Currently we do not have a good one. The Rust maintainers and the FTP team are talking, exploring various ideas; we will see what will come out.

Devel archive / Component

One of the possible solutions for the feature package problem would be something that another set of packages could also make good use of, I think: the introduction of a new archive or component, meant only for packages that are needed to build something, but where users are discouraged from ever using them.

What? Well, take golang as an example. While we have a load of golang-something packages in Debian, and they are used for building applications written in Go - none of those golang-something packages are meant to be installed by users. If you use the language and develop in it, the go get way is the one you are expected to use.

So having an archive (or maybe a component like main or contrib) that, by default, won’t be activated for users, but only for things like buildds or archive rebuilds, will make one problem (hated metadata bloat) be evaluated wildly differently. It may also allow a more relaxed processing of binary-NEW (easier additions of new feature packages).

But but but

Yes, it is not the most perfect solution. Without taking much energy to think about it, it requires:

• an adjustment in how main is handled. Right now we have the golden rule that main is self contained, that is, things in it may not need anything outside it for building or running. That would need to be adjusted for building.
(Go, as well as Rust currently, always builds static binaries, so there are no library dependencies there.)
• It would need handling for the release; that is, the release team would need to deal with that archive/component too. We haven’t, yet, talked to them (still, slowly, discussing inside the FTP Team). So, no idea how many rusty knives they want to sink into our nice bodies for that idea…

Final

Well, it is still very much open. Had an IRC meeting with the Rust people, will have another at the end of November; it will slowly go forward. And maybe someone comes up with an entirely new idea that we all love. Don’t know, time will tell.

Planet Debian — Joey Hess: how to publish git repos that cannot be republished to github

So here's an interesting thing. Certain commit hashes are rapidly heading toward being illegal on Github. So, if you clone a git repo from somewhere else, you had better be wary of pushing it to Github. Because if it happened to contain one of those hashes, that could get you banned from Github. Which, as we know, is your resume.

Now here's another interesting thing. It's entirely possible for me to add one of those commit hashes to any of my repos, which of course, I self host. I can do it without adding any of the content which Github/Microsoft, as a RIAA member, wishes to suppress. When you clone my repo, here's how it looks:

# git log
commit 1fff890c0980a72d669aaffe9b13a7a077c33ecf (HEAD -> master, origin/master, origin/HEAD)
Author: Joey Hess <joeyh@joeyh.name>
Date:   Mon Nov 2 18:29:17 2020 -0400

    remove submodule

commit 8864d5c1182dccdd1cfc9ee6e5d694ae3c70e7af
Author: Joey Hess <joeyh@joeyh.name>
Date:   Mon Nov 2 18:29:00 2020 -0400

    add

# git ls-tree HEAD^
160000 commit b5[redacted cuz DMCA+Nov 3 = too much]	back up your cat videos with this
100644 blob 45b983be36b73c0788dc9cbcb76cbb80fc7bb057	hello

I did this by adding a submodule in one commit, without committing the .gitmodules file, and then removing the submodule in a subsequent commit.
What would then happen if you cloned my git repo and pushed it to Github?

The next person to complain at me about my not having published one of my git repos to Github, and how annoying it is that they have to clone it from somewhere else in order to push their own fork of it to Github, and how no, I would not be perpetuating Github's monopolism in doing so, and anyway, Github's monopoly is not so bad actually ...

#!/bin/sh
printf "Enter the url of the illegal repo, Citizen: "
read wha
git submodule add "$wha" wha
git rm -f .gitmodules
git commit -m wha
git rm wha
git commit -m wha
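The trick can be reproduced end to end in a throwaway directory. Here is a hypothetical, self-contained sketch (all repo names and paths are made up, and `protocol.file.allow=always` is assumed to be needed on newer git for local-path submodules): it commits a submodule's gitlink without .gitmodules, deletes the submodule again, and shows that only the bare commit hash survives in history.

```shell
#!/bin/sh
# Demo of the trick described above; names and paths are illustrative only.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.invalid
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.invalid
base=$(mktemp -d)
cd "$base"
git init -q inner                            # stands in for the other repo
git -C inner commit -q --allow-empty -m seed # any commit hash will do
git init -q outer
cd outer
git commit -q --allow-empty -m start
# Newer git refuses file:// submodules unless explicitly allowed.
git -c protocol.file.allow=always submodule add "$base/inner" sub
git rm -q -f --cached .gitmodules            # keep the gitlink, drop .gitmodules
rm -f .gitmodules
git commit -q -m 'add'
git rm -q -f sub
git commit -q -m 'remove submodule'
git ls-tree 'HEAD^'                          # only "160000 commit <hash>  sub" remains
```

Cloning such a repo carries the bare hash along in history, which is the point of the exercise.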


Cryptogram — New Windows Zero-Day

Google’s Project Zero has discovered and published a buffer overflow vulnerability in the Windows Kernel Cryptography Driver. The exploit doesn’t affect the cryptography, but allows attackers to escalate system privileges:

Attackers were combining an exploit for it with a separate one targeting a recently fixed flaw in Chrome. The former allowed the latter to escape a security sandbox so that it could execute code on vulnerable machines.

The vulnerability is being exploited in the wild, although Microsoft says it’s not being exploited widely. Everyone expects a fix in the next Patch Tuesday cycle.

Planet Debian — Vincent Bernat: My collection of vintage PC cards

Recently, I have been gathering some old hardware at my parents’ house, notably PC extension cards, as they don’t take much room and can be converted to a nice display item. Unfortunately, I was not very concerned about keeping stuff around. Compared to all the hardware I have acquired over the years, only a few pieces remain.

Tseng Labs ET4000AX (1989)

This SVGA graphics card was installed into a PC powered by a 386SX CPU running at 16 MHz. This was a good card at the time as it was pretty fast. It didn’t feature 2D acceleration, unlike the later ET4000/W32. This version only features 512 KB of RAM. It can display 1024×768 images with 16 colors or 800×600 with 256 colors. It was also compatible with CGA, EGA, VGA, MDA, and Hercules modes. No contemporary games were using the SVGA modes but the higher resolutions were useful with Windows 3.
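Those resolution figures are consistent with the 512 KB of RAM: a framebuffer needs width × height × bits-per-pixel / 8 bytes, and both modes fit under 524288 bytes (512 KB). A quick arithmetic check:

```shell
# Framebuffer sizes (bytes) for the two SVGA modes; 512 KB = 524288 bytes
echo $((1024 * 768 * 4 / 8))    # 16 colors = 4 bits/pixel -> 393216
echo $((800 * 600 * 8 / 8))     # 256 colors = 8 bits/pixel -> 480000
```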

This card was manufactured directly by Tseng Labs.

AdLib clone (1992)

My first sound card was an AdLib. My parents bought it in Canada during the summer holidays in 1992. It uses a Yamaha OPL2 chip to produce sound via FM synthesis. The first game I have tried is Indiana Jones and the Last Crusade.

I think I gave this AdLib to a friend once I upgraded my PC with a Sound Blaster Pro 2. Recently, I needed one for a side project, but they are rare and expensive on eBay. Someone mentioned a cheap clone on Vogons, so I bought it. It was sold by Sun Moon Star in 1992 and shipped with a CD-ROM of Doom shareware.

Sound Blaster Pro 2 (1992)

Later, I switched the AdLib sound card with a Sound Blaster Pro 2. It features an OPL3 chip and was also able to output digital samples. At the time, this was a welcome addition, but not as important as the FM synthesis introduced earlier by the AdLib.

Promise EIDE 2300 Plus (1995)

I bought this card mostly for the serial port. I was using a 486DX2 running at 66 MHz with a Creatix LC 288 FC external modem. The serial port was driven by an 8250 UART with no buffer. Thanks to Terminate, I was able to connect to BBSes with DOS, but this was not possible with Windows 3 or OS/2. I needed one of these fancy new cards with a 16550 UART, featuring a 16-byte buffer. At the time, this was quite difficult to find in France. During a holiday trip, I convinced my parents to make a short detour from Los Angeles to San Diego to buy this Promise EIDE 2300 Plus controller card at a shop I located through an advertisement in a local magazine!

The card also features an EIDE controller with multi-word DMA mode 2 support. In contrast with the older PIO modes, the CPU didn’t have to copy data from disk to memory.

3dfx Voodoo2 Magic 3D II (1998)

The 3dfx Voodoo2 was one of the first add-in graphics cards implementing hardware acceleration of 3D graphics. I bought it from a friend along with his Pentium II box in 1999. It was a big evolutionary step in PC gaming, as games became more beautiful and fluid. A traditional video controller was still required for 2D. A pass-through VGA cable daisy-chained the video controller to the Voodoo, which was itself connected to the monitor.

3Com 3C905C-TX-M (1999)

In the early 2000s, in college, the Internet connection on the campus was provided by a student association through a 100 Mbps Ethernet cable. If you wanted to reach the maximum speed, the 3Com 3C905C-TX-M PCI network adapter, nicknamed “Tornado”, was the card you needed. We would buy it second-hand by the dozen and sell them to other students for around 30 €.

Planet Debian — Dirk Eddelbuettel: RcppSimdJson 0.1.3: New Upstream, New Utilities

A new RcppSimdJson release arrived on CRAN late yesterday, bringing along the recently updated simdjson release 0.6.0.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).

Other than the upstream update, Brendan added some new utilities to check for valid utf-8 or json format and to minify json, plus a small workaround for a clang-9 bug we encountered. We can confirm Daniel’s statement on ridiculously fast utf-8 validation. It is so cool to work with amazing tools.

The NEWS entry follows.

Changes in version 0.1.3 (2020-11-01)

• Added URLs to DESCRIPTION (Dirk closing #50).

• Upgraded to simdjson 0.6.0 (Dirk in #52).

• New policy option to always convert integers to int64_t (Brendan in #55 closing #54).

• Added workaround for odd clang-9 bug (Brendan in #57).

• New utility functions is_valid_utf8(), is_valid_json() and fminify() (Brendan in #58).

Courtesy of my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than Failure — CodeSOD: An Impossible Problem

One of the lines between code that's "not great, but like, it's fine, I guess" and "wow, WTF" is confidence.

For example, Francis Gauthier inherited a WinForms application. One of the form fields in this application was a text box that held a number, and the developers wanted to always display whatever the user entered without leading zeroes.

Now, WinForms is pretty simplistic as UI controls go, so there isn't really a great "oh yes, do this!" solution to solving that simple problem. A mix of using MVC-style patterns with a formatter between the model and the UI would be "right", but might be more setup than the problem truly calls for.

Which is why, at first blush, without more context, I'd be more apt to put this bad code into the "not great, but whatever" category:

int percent = Int32.Parse(ctrl.Text);
ctrl.Text = percent.ToString();


On an update, we grab the contents of the text box, parse it as an integer, and then store the result back into the text box. This will effectively strip off the leading zeroes.

It's fine. Until we zoom out a step.

// Matched - Remove leading zero
try {
    int percent = Int32.Parse(ctrl.Text);
    ctrl.Text = percent.ToString();
}
catch {
    // impossible..
}


Here, we can see that they… "wisely" have wrapped the Parse in an exception handler. The developer knew that there was a validator on the control which would prevent non-numeric characters from being entered, and thus they were able with a great degree of confidence to declare that an exception was "impossible".

There's just one problem with that. The validator in question allows numeric characters, not just integer characters. So the validator would allow you to enter 0.99. Which, of course, won't parse. So the exception gets triggered, the catch ignores it, and the user believes their input, a percentage, has been accepted as valid. The end result is that many users might enter "0.99" to mean "99%", and then see "0" actually get stored as the unexpected floating-point value gets truncated.

All because an exception was declared "impossible". To misapply a quote: "You keep using that word. I do not think it means what you think it means."


Cory Doctorow — Someone Comes to Town, Someone Leaves Town (part 21)

Here’s part twenty-one of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

Planet Debian — Sandro Knauß: Bugzilla integration for KDE Project API

The KDE Bugzilla handles a lot of projects, and they often match the repo name, but not always. For instance, we have ancient products and components at Bugzilla, as projects have a lifecycle from playground into Release Service or Frameworks, sometimes with a change of name. So you may end up searching Bugzilla quite a while for the correct product and component to be able to confirm or create bug reports against an application. Let's have a look at KPeople and see why the situation is complicated. You find two products in KDE Bugzilla: kpeople (the repository's name) and, because Frameworks products follow the scheme of a "frameworks-" prefix, frameworks-kpeople. From the data displayed, even I as a developer am unable to tell which is the correct product to add new bug reports to. Both have bug reports this year that got fixed, and the number of bug reports is too low to get a clear picture of which to choose.

This is not only a problem for KDE; it is a general problem across communities: it is hard for newcomers to find the correct place to search for and add new bug reports.

That's why Debian added the bug report information for every package. This should help users to search the upstream bug reports or create new ones (Bug-Submit and Bug-Database): https://wiki.debian.org/UpstreamMetadata#Fields
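For reference, these fields live in a package's debian/upstream/metadata file. A minimal sketch for a hypothetical KDE package might look like the following (the field names come from the wiki page above; the URLs are illustrative, not taken from a real package):

```yaml
# debian/upstream/metadata -- illustrative values only
Bug-Database: https://bugs.kde.org/buglist.cgi?product=kontact
Bug-Submit: https://bugs.kde.org/enter_bug.cgi?product=kontact
Repository: https://invent.kde.org/pim/kontact.git
Repository-Browse: https://invent.kde.org/pim/kontact
```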

While I was collecting this information for Frameworks and KDE PIM, I wondered why KDE does not have links between each project and Bugzilla. After some searching and discussions it became obvious that KDE does not have this information in any form that can be processed automatically. Okay, let's fix this. The obvious place to reach this information is the Project API available under https://projects.kde.org/api/. To reach this goal I began adding Bugzilla information to the data source of the Project API, named Git Repo Metadata. One merge request later (invent:sysadmin/projects-api!2), the Project API is able to generate the links to Bugzilla.

Where should you search for bugs in Kontact? Go to https://projects.kde.org/api/v1/identifier/kontact:

After implementing the needed bits, I found out that Nicolas Alvarez had had the same idea of storing Bugzilla information in Git Repo Metadata: invent:sysadmin/repo-metadata@085d878ea. Fortunately I can say that since June this information has been used by the Project API.

So now, back to my task of adding upstream metadata to Debian packages. After I had filled in the needed information in Repo Metadata, I created a script to update the links in Debian: salsa:qt-kde-team/pkg-kde-dev-scripts/function_collection/functions_plasma.py:addMissingBugMetadatafields. This should hopefully help the packages always point to the correct Bugzilla links in future.

A random list of places that may benefit from the Bugzilla information in Git Repo Metadata came to my mind:

• Links from the API documentation directly to Bugzilla.
• Scripts that add a new version to Bugzilla can use this information (e.g. invent:sdk/releaseme/plasma/plasma-add-bugzilla-versions currently has a static list of Bugzilla products).
• For DrKonqi this information may also be useful. The mapping binary → Bugzilla product is currently handcrafted in invent:plasma/drkonqi/src/data/mappings. This is NOT low-hanging fruit, but maybe someone feels inspired to play with the available data. And who is checking that each handcrafted entry in that file is still correct? The mapping repo → binary we get from our CI, because when we build the stuff, we know what we built.

The truth is, Nicolas is right that adding the Bugzilla links is a manual task. But on the other hand, if we add this information once, in one place, we can then use it in several places.

Please help adding this information; it is simply a YAML file. If you see the data missing, follow the example commit and create a merge request. Or you can also give me the data and I'll add it.
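As a sketch of what such an entry might look like in a repository's metadata file (the exact key names here are an assumption for illustration — follow the example commit in Repo Metadata for the authoritative format):

```yaml
# metadata.yaml fragment (key names are an assumption, not the documented schema)
bugzilla:
  product: frameworks-kpeople
  component: general
```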

,

Charles Stross — The Laundry Files: an updated chronology

I've been writing Laundry Files stories since 1999, and there's now about 1.4 million words in that universe. That's a lot of stuff: a typical novel these days is 100,000 words, but these books trend long, and this count includes 11 novels (of which, #10 comes out later this month) and some shorter work. It occurs to me that while some of you have been following them from the beginning, a lot of people come to them cold in the shape of one story or another.

So below the fold I'm going to explain the Laundry Files time line, the various sub-series that share the setting, and give a running order for the series—including short stories as well as novels.

(The series title, "The Laundry Files", was pinned on me by editorial fiat at a previous publisher whose policy was that any group of 3 or more connected novels had to have a common name. It wasn't my idea: my editor at the time also published Jim Butcher, and Bob—my sole protagonist at that point in the series—worked for an organization disparagingly nicknamed "the Laundry", so the inevitable happened. Using a singular series title gives the impression that it has a singular theme, which would be like calling Terry Pratchett's Discworld books "the Unseen University series". Anyway ...)

TLDR version: If you just want to know where to start reading, pick one of: The Atrocity Archives, The Rhesus Chart, The Nightmare Stacks, or Dead Lies Dreaming. These are all safe starting points for the series, that don't require prior familiarity. Other books might leave you confused if you dive straight in, so here's an exhaustive run-down of all the books and short stories.

Typographic conventions: story titles are rendered in italics (like this). Book titles are presented in boldface (thus).

Publication dates are presented like this: (pub: 2016). The year in which a story is set is presented like so: (set: 2005).

The list is sorted in story order rather than publication order.

The Atrocity Archive (set: 2002; pub: 2002-3)

• The short novel which started it all. Originally published in an obscure Scottish SF digest-format magazine called Spectrum SF, it ran from 2002 to 2003, and introduced our protagonist Bob Howard, his (eventual) love interest Mo O'Brien, and a bunch of eccentric minor characters and tentacled horrors. Is a kinda-sorta tribute to spy thriller author Len Deighton.

The Concrete Jungle (set: 2003; pub: see below)

• Novella, set a year after The Atrocity Archive, in which Bob is awakened in the middle of the night to go and count the concrete cows in Milton Keynes. Winner of the 2005 Hugo award for best SF/F novella.

The Atrocity Archives (set 2002-03, pub: 2003 (hbk), 2006 (trade ppbk))

• Start reading here! A smaller US publisher, Golden Gryphon, liked The Atrocity Archive and wanted to publish it, but considered it to be too short on its own. So The Concrete Jungle was written, and along with an afterword they were published together as a two-story collection/episodic novel, The Atrocity Archives (note the added 's' at the end). A couple of years later, Ace (part of Penguin group) picked up the US trade and mass market paperback rights and Orbit published it in the UK. (Having won a Hugo award in the meantime really didn't hurt; it's normally quite rare for a small press item such as TAA to get picked up and republished like this.)

The Jennifer Morgue (set: 2005, pub: 2007 (hbk), 2008 (trade ppbk))

• Golden Gryphon asked for a sequel, hence the James Bond episode in what was now clearly going to be a trilogy of comedy Lovecraftian/spy books. Note that it's riffing off the Broccoli movie franchise version of Bond, not Ian Fleming's original psychopathic British government assassin. Orbit again took UK rights, while Ace picked up the paperbacks. Because I wanted to stick with the previous book's two-story format, I wrote an extra short story:

Pimpf (set: 2006, pub: collected in The Jennifer Morgue)

• A short story set in what I think of as the Chibi-Laundry continuity; Bob ends up inside a computer running a Neverwinter Nights server (hey, this was before World of Warcraft got big). Chibi-Laundry stories are self-parodies and probably shouldn't be thought of as canonical. (Ahem: there's a big continuity blooper tucked away in this one that comes back to bite me in later books because I forgot about it.)

Down on the Farm (novelette: set 2007, pub. 2008, Tor.com)

• Novelette: Bob has to investigate strange goings-on at a care home for Laundry agents whose minds have gone. Introduces Krantzberg Syndrome, which plays a major role later in the series.

Equoid (novella: set 2007, pub: 2013, Tor.com)

• A novella set between The Jennifer Morgue and The Fuller Memorandum; Bob is married to Mo and working for Iris Carpenter. Bob learns why Unicorns are Bad News. Won the 2014 Hugo award for best SF/F novella. Also published as the hardback novella edition Equoid by Subterranean Press.

The Fuller Memorandum (set: 2008, pub: 2010 (US hbk/UK ppbk))

• Third novel, first to be published in hardback by Ace, published in paperback in the UK by Orbit. The title is an intentional call-out to Adam Hall (aka Elleston Trevor), author of the Quiller series of spy thrillers—but it's actually an Anthony Price homage. This is where we begin to get a sense that there's an overall Laundry Files story arc, and where I realized I wasn't writing a trilogy. Didn't have a short story trailer or afterword because I flamed out while trying to come up with one before the deadline. Bob encounters skullduggery within the organization and has to get to the bottom of it before something really nasty happens: also, what and where is the misplaced "Teapot" that the KGB's London resident keeps asking him about?

Overtime (novelette: set 2009, pub 2009, Tor.com)

• A heart-warming Christmas tale of Terror. Shortlisted for the Hugo award for best novelette, 2010.

Three Tales from the Laundry Files (ebook-only collection)

• Collection consisting of Down on the Farm, Overtime, and Equoid, published by Tor.com as an ebook.

The Apocalypse Codex (set: 2010, pub: 2012 (US hbk/UK ppbk))

• Fourth novel, and a tribute to the Modesty Blaise comic strip and books by Peter O'Donnell. A slick televangelist is getting much too cosy with the Prime Minister, and the Laundry—as a civil service agency—is forbidden from investigating. We learn about External Assets, and Bob gets the first inkling that he's being fast-tracked for promotion. Won the Locus Award for best fantasy novel in 2013.

A Conventional Boy (set: ~2011-12, not yet written)

• Projected interstitial novella, introducing Derek the DM (The Nightmare Stacks) and Camp Sunshine (The Delirium Brief). Not yet written.

The Rhesus Chart (set: spring 2013, pub: 2014 (US hbk/UK hbk))

• Fifth novel, a new series starting point if you want to bypass the early novels. First of a new cycle remixing contemporary fantasy sub-genres (I got bored with British spy thriller authors). Subject: Banking, Vampires, and what happens when an agile programming team inside a merchant bank develops PHANG syndrome. First to be published in hardcover in the UK by Orbit.

• Note that the books are now set much closer together. This is a key point: the world of the Laundry Files has now developed its own parallel and gradually diverging history as the supernatural incursions become harder to cover up. Note also that Bob is powering up (the Bob of The Atrocity Archive wouldn't exactly be able to walk into a nest of vampires and escape with only minor damage to his dignity). This is why we don't see much of Bob in the next two novels.

The Annihilation Score (set: summer/autumn 2013, pub: 2015 (US hbk/UK ppbk))

• Sixth novel, first with a non-Bob viewpoint protagonist—it's told by Mo, his wife, and contains spoilers for The Rhesus Chart. Deals with superheroes, mid-life crises, nervous breakdowns, and the King in Yellow. We're clearly deep into ahistorical territory here as we have a dress circle box for the very last Last Night of the Proms, and Orbit's lawyers made me very carefully describe the female Home Secretary as clearly not being one of her non-fictional predecessors, not even a little bit.

Escape from Puroland (set: March-April 2014, pub: summer 2021, forthcoming)

• Interstitial novella, explaining why Bob wasn't around in the UK during the events described in The Nightmare Stacks. He was on an overseas liaison mission, nailing down the coffin lid on one of Angleton's left-over toxic waste sites—this time, it's near Tokyo.

The Nightmare Stacks (set: March-April 2014, pub: June 2016 (US hbk/UK ppbk))

• Seventh novel, and another series starting point if you want to dive into the most recent books in the series. Viewpoint character: Alex the PHANG. Deals with, well ... the Laundry has been so obsessed by CASE NIGHTMARE GREEN that they're almost completely taken by surprise when CASE NIGHTMARE RED happens. Implicitly marks the end of the Masquerade. Features a Maniac Pixie Dream Girl and the return of Bob's Kettenkrad from The Atrocity Archive. Oh, and it also utterly destroys the major British city I grew up in, because revenge is a dish best eaten cold.

The Delirium Brief (set: May-June 2014, pub: June 2017 (US hbk/UK ppbk))

• Eighth novel, primary viewpoint character: Bob again, but with an ensemble of other viewpoints cropping up in their own chapters. And unlike the earlier Bob books it no longer pastiches other works or genres. Deals with the aftermath of The Nightmare Stacks; opens with Bob being grilled live on Newsnight by Jeremy Paxman and goes rapidly downhill from there. (I'm guessing that if the events of the previous novel had just taken place, the BBC's leading current affairs news anchor might have deferred his retirement for a couple of months ...)

The Labyrinth Index (set: winter 2014/early 2015, pub: October 2018, (US hbk/UK ppbk))

• Ninth novel, viewpoint character: Mhari, working for the New Management in the wake of the drastic governmental changes that took place at the end of "The Delirium Brief". The shit has well and truly hit the fan on a global scale, and the new Prime Minister holds unreasonable expectations ...

Dead Lies Dreaming (set: December 2016; pub: Oct 2020 (US hbk/UK hbk))

• New spin-off series, new starting point! The marketing blurb describes it as "book 10 in the Laundry Files", but by the time this book is set—after CASE NIGHTMARE GREEN and the end of the main Laundry story arc (some time in 2015-16)—the Laundry no longer exists. We meet a cast of entirely new characters, civilians (with powers) living under the aegis of the New Management, ruled by his Dread Majesty, the Black Pharaoh. The start of a new trilogy, Dead Lies Dreaming riffs heavily off "Peter and Wendy", the original grimdark version of Peter Pan (before Walt Disney made him twee).

In His House (set: December 2016, pub: probably 2022)

• Second book in the Dead Lies Dreaming trilogy: continues the story, riffs off Sweeney Todd and Mary Poppins—again: the latter was much darker than the Disney musical implies. (The book is written, but COVID19 has done weird things to publishers' schedules and it's provisionally in the queue behind Invisible Sun, the final Empire Games book, which is due out in September 2021.)

Bones and Nightmares (set: December 2016 and summer of 1820, pub: possibly 2023)

• Third book in the Dead Lies Dreaming trilogy: finishes the story, riffs off The Prisoner and Jane Austen: also Kingsley's The Water Babies (with Deep Ones). In development.

Further novels are planned but not definite: there need to be 1-2 more books to finish the main Laundry Files story arc with Bob et al, filling in the time line before Dead Lies Dreaming, but the Laundry is a civil service security agency and the current political madness gripping the UK makes it really hard to satirize HMG, so I'm off on a side-quest following the tribulations of Imp, Eve, Wendy, and the gang (from Dead Lies Dreaming) until I figure out how to get back to the Laundry proper.

That's all for now. I'll attempt to update this entry as I write/publish more material.

Planet Debian — Enrico Zini: Gender and pop culture links

Growing up framed as a somewhat nerdy male, I was exposed to a horrible mainstream narrative for relationships. I really like how these two links take some of it apart and make its serious problems visible:

The narrative for mainstream-desirable males also needs questioning:

And since I posted a link about unreasonable expectations towards male bodies:

Sam Varghese — Australia seems to be living in another world when it comes to rugby contests with New Zealand

When Australian scrum-half Nic White was walking off the field after the whistle blew for half-time in the third Bledisloe Cup game on 31 October, he was given a headset and microphone by Fox Sports and asked for his take on the game up to that point.

Australia had been outplayed by New Zealand in the first 40 minutes and were trailing 0-26, meaning that the horse had well and truly bolted and any chance of them making a fight of it had disappeared.

But White seemed to be in an alternate universe. “No disrespect, but they haven’t done a whole lot, it’s just been all our mistakes. We’re just gifting them points,” was what he had to offer.

When commentator Phil Kearns, a man who played 67 Tests for the Wallabies, came back with “Sixty-seven percent possession they got, mate,” White quickly took off the headphones, handed them to a man on the Fox Sports team and walked away into the change rooms.

The exchange reminded me of the way the American tennis player Serena Williams reacts when she loses during a Grand Slam – it’s because she played badly, not because her vanquisher played a good game.

One offers this exchange to illustrate one point: unless one acknowledges one’s mistakes, it is not possible to correct them. True, White may have been indulging in spin as many people do when confronted by the media, but had he acknowledged that Australia was behind because it had come up against a side that was doing all the basics extremely well, he probably would have been more accurate. Like many other Australians, White seems to have a big blind spot when it comes to acknowledging that one has been outplayed.

To the match itself, it was practically over after the first half. Few teams can come back from such a deficit – and bear in mind that two additional tries were not awarded to the men in black. One was due to a marvellous save by Australian winger Marika Koroibete, who got under the ball when Kiwi wing-three-quarter Caleb Clarke, no easy customer to tackle, was trying to force it down.

The other try that was disallowed was debatable; hooker Dane Coles charged onto a kick into the in-goal area by fly-half Richie Mo’unga, and tried to get his hands on the ball and effect a touchdown. From some angles, it looked like he had succeeded. From others, it appeared that he did not have full control of the ball.

But even then, the All Blacks ran in four tries, some of which should never be allowed at the international level. Mo’unga was in top form and used all his guile and skills to cross for two of the four tries which his team scored before half-time.

The great West Indies teams of the 1980s and early 1990s had a tactic of targeting bowlers who either were becoming a threat to them, or else thought they were becoming a threat, and demoralising them. The main method used was for Viv Richards to attack the bowler in question and take him to the cleaners. It worked in many cases.

Similarly, the All Blacks appear to have a strategy of making newcomers in opposing teams feel out of place and this often results in the newbie suffering a major crisis of confidence. In the case of Noah Lolesio, picked to make his debut as standoff, perhaps their task was easier for the pint-sized fly-half seemed to be intent on making himself a small target.

The No 10 is normally the playmaker, but Lolesio seemed content to operate from where a full-back normally positions himself/herself, and kick when he got the ball. His kicking was poor, and in one case, when kicking for touch, he landed the ball in the in-goal area. It looks like picking him for his international debut against the All Blacks was not the most judicious move by Australian coach Dave Rennie.

The fact that Rennie had to pick Lolesio to fill the No 10 spot is a glaring admission that Australia has very little depth when it comes to players. Bernard Foley, a reliable if unspectacular fly-half, went off to Japan after the 2019 World Cup, but it is doubtful he would have been picked this year given his disastrous performance in the one game he played during that tournament.

One wonders what Rennie will do in the remaining game against New Zealand. He cannot drop Lolesio for it would destroy the man’s confidence. He will have to bring back James O’Connor to fill this pivotal role as he is now fit to resume playing. But Lolesio will have to be in the match-day 23.

One hopes that Rennie will bring back Tom Banks, who did a decent job at full-back in the first two Tests against New Zealand before being suddenly dropped for the third. The coach should also pick Isi Naisarani in the No 8 position and jettison Harry Wilson; the latter appears to be a hot-head and woefully short of common sense and ability. Exactly why Naisarani, a Fijian who did a wonderful job last year, has been kept out of the team is not known.

Brisbane has been a somewhat happier hunting ground for Australia against New Zealand. But the scars suffered in the third game — where they went down by the biggest margin in any game against New Zealand — may not be so easy to heal. But at least this time there will be no overblown expectations that Australia will make a contest of the game.

Planet Debian — Vincent Bernat: Running Isso on NixOS in a Docker container

This short article documents how I run Isso, the commenting system used by this blog, inside a Docker container on NixOS, a Linux distribution built on top of Nix. Nix is a declarative package manager for Linux and other Unix systems.

While NixOS 20.09 includes a derivation for Isso, it is unfortunately broken and relies on Python 2. As I am also using a fork of Isso, I have built my own derivation, heavily inspired by the one in master:1

issoPackage = with pkgs.python3Packages; buildPythonPackage rec {
  pname = "isso";
  version = "custom";

  src = pkgs.fetchFromGitHub {
    # Use my fork
    owner = "vincentbernat";
    repo = pname;
    rev = "vbe/master";
    sha256 = "0vkkvjcvcjcdzdj73qig32hqgjly8n3ln2djzmhshc04i6g9z07j";
  };

  propagatedBuildInputs = [
    itsdangerous
    jinja2
    misaka
    html5lib
    werkzeug
    bleach
  ];

  buildInputs = [
    cffi
  ];

  checkInputs = [ nose ];

  checkPhase = ''
    ${python.interpreter} setup.py nosetests
  '';
};

I want to run Isso through Gunicorn. To this effect, I build a Python environment combining Isso and Gunicorn. Then I can invoke the latter with "${issoEnv}/bin/gunicorn", as with a virtual environment.

issoEnv = pkgs.python3.buildEnv.override {
  extraLibs = [
    issoPackage
    pkgs.python3Packages.gunicorn
    pkgs.python3Packages.gevent
  ];
};


Before building a Docker image, I also need to specify the configuration file for Isso:

issoConfig = pkgs.writeText "isso.conf" ''
  [general]
  host =
    https://vincent.bernat.ch
    http://localhost:8080
  notify = smtp
  […]
'';


NixOS comes with a convenient tool to build a Docker image without a Dockerfile:

issoDockerImage = pkgs.dockerTools.buildImage {
  name = "isso";
  tag = "latest";
  runAsRoot = ''
    mkdir -p /db
  '';
  config = {
    Cmd = [ "${issoEnv}/bin/gunicorn"
            "--name" "isso"
            "--bind" "0.0.0.0:${port}"
            "--worker-class" "gevent"
            "--workers" "2"
            "--worker-tmp-dir" "/dev/shm"
            "isso.run"
    ];
    Env = [
      "ISSO_SETTINGS=${issoConfig}"
      "SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
    ];
  };
};


Because we refer to the issoEnv derivation in config.Cmd, the whole derivation, including Isso and Gunicorn, is copied inside the Docker image. The same applies for issoConfig, the configuration file we created earlier, and pkgs.cacert, the derivation containing trusted root certificates. The resulting image is 171 MB once installed, which is comparable to the Debian Buster image generated by the official Dockerfile.

NixOS features an abstraction to run Docker containers. It is not currently documented in the NixOS manual, but you can look at the source code of the module for the available options. I chose Podman instead of Docker as the backend because it does not require running an additional daemon.

virtualisation.oci-containers = {
  backend = "podman";
  containers = {
    isso = {
      image = "isso";
      imageFile = issoDockerImage;
      ports = ["127.0.0.1:${port}:${port}"];
      volumes = [
        "/var/db/isso:/db"
      ];
    };
  };
};


A systemd unit file is automatically created to run and supervise the container:

$ systemctl status podman-isso.service
● podman-isso.service
     Loaded: loaded (/nix/store/a66gzqqwcdzbh99sz8zz5l5xl8r8ag7w-unit->
     Active: active (running) since Sun 2020-11-01 16:04:16 UTC; 4min 44s ago
    Process: 14564 ExecStartPre=/nix/store/95zfn4vg4867gzxz1gw7nxayqcl>
   Main PID: 14697 (podman)
         IP: 0B in, 0B out
      Tasks: 10 (limit: 2313)
     Memory: 221.3M
        CPU: 10.058s
     CGroup: /system.slice/podman-isso.service
             ├─14697 /nix/store/pn52xgn1wb2vr2kirq3xj8ij0rys35mf-podma>
             └─14802 /nix/store/7vsba54k6ag4cfsfp95rvjzqf6rab865-conmo>

nov. 01 16:04:17 web03 podman[14697]: container init (image=localhost/isso:latest)
nov. 01 16:04:17 web03 podman[14697]: container start (image=localhost/isso:latest)
nov. 01 16:04:17 web03 podman[14697]: container attach (image=localhost/isso:latest)
nov. 01 16:04:19 web03 conmon[14802]: INFO: connected to SMTP server
nov. 01 16:04:19 web03 conmon[14802]: INFO: connected to https://vincent.bernat.ch
nov. 01 16:04:19 web03 conmon[14802]: [INFO] Starting gunicorn 20.0.4
nov. 01 16:04:19 web03 conmon[14802]: [INFO] Listening at: http://0.0.0.0:8080 (1)
nov. 01 16:04:19 web03 conmon[14802]: [INFO] Using worker: gevent
nov. 01 16:04:19 web03 conmon[14802]: [INFO] Booting worker with pid: 8
nov. 01 16:04:19 web03 conmon[14802]: [INFO] Booting worker with pid: 9

As the last step, we configure Nginx to forward requests for comments.luffy.cx to the container. NixOS provides a simple integration to grab a Let's Encrypt certificate.

services.nginx.virtualHosts."comments.luffy.cx" = {
  root = "/data/webserver/comments.luffy.cx";
  enableACME = true;
  forceSSL = true;
  extraConfig = ''
    access_log /var/log/nginx/comments.luffy.cx.log anonymous;
  '';
  locations."/" = {
    proxyPass = "http://127.0.0.1:${port}";
    extraConfig = ''
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_hide_header Set-Cookie;
      proxy_hide_header X-Set-Cookie;
      proxy_ignore_headers Set-Cookie;
    '';
  };
};
security.acme.certs."comments.luffy.cx" = {
  email = lib.concatStringsSep "@" [ "letsencrypt" "vincent.bernat.ch" ];
};

While I still struggle with Nix and NixOS, I am convinced this is how declarative infrastructure should be done. I like how in one single file, I can define the derivation to build Isso, the configuration, the Docker image, the container definition, and the Nginx configuration. The Nix language is used both for building packages and for managing configurations. Moreover, the Docker image is updated automatically like a regular NixOS host. This solves an issue plaguing the Docker ecosystem: no more stale images! My next step would be to combine this approach with Nomad, a simple orchestrator to deploy and manage containers.

1. There is a subtle difference: I am using buildPythonPackage instead of buildPythonApplication. This is important for the next step. I didn't investigate if an application can be converted to a package easily. ↩︎

Planet Debian — Steve Kemp: Archiving Debian-Administration.org, for real

Back in 2017 I announced that the https://Debian-Administration.org website was being made read-only, and archived. At the time I wrote a quick update to save each requested page as a flat file, hashed beneath /tmp, with the expectation that after a few months I'd have a complete HTML-only archive of the site which I could serve as a static website, instead of keeping the database and pile of CGI scripts running. Unfortunately I never got round to archiving the pages in a git repository, or some other store, and I usually only remembered this local tree of content was available a few minutes after I'd rebooted the server and lost the stuff, as the reboot would reap the contents of /tmp!
Thinking about it today, I figured I probably didn't even need to do that; instead I just need to redirect to the Wayback Machine. Working on the assumption that the site has been around for "a while", it should have all the pages mirrored by now, so I've made a "final update" to Apache:

RewriteEngine on
RewriteRule ^/(.*) "http://web.archive.org/web/https://debian-administration.org/$1" [R,L]


Assuming nobody reports a problem in the next month I'll retire the server and make a simple docker container to handle the appropriate TLS certificate renewal, and hardwire the redirection(s) for the sites involved.
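The rewrite rule above maps every request path onto the Wayback Machine's URL scheme. As a quick illustration of what the rule does (the helper name is mine, not part of the original setup), the same mapping in Python:

```python
def wayback_url(request_path: str) -> str:
    # Mirror of the Apache rule:
    #   RewriteRule ^/(.*) "http://web.archive.org/web/https://debian-administration.org/$1"
    # The captured group is everything after the leading slash.
    return ("http://web.archive.org/web/https://debian-administration.org/"
            + request_path.lstrip("/"))

# A hypothetical article path, just to show the shape of the redirect target.
print(wayback_url("/article/743"))
```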

Planet Debian — Utkarsh Gupta: FOSS Activites in October 2020

Here’s my (thirteenth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 22nd month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Whilst busy with my undergrad, I could still take some time out for contributing to Debian (I always do!). Here are the following things I did in Debian this month:

Other $things:

• Attended the Debian Ruby team meeting. Logs here.
• Mentoring for newcomers.
• FTP Trainee reviewing.
• Moderation of the -project mailing list.
• Sponsored phpmyadmin, php-bacon-baconqrcode, twig, php-dasprid-enum, sql-parser, and mariadb-mysql-kbs for William.

Debian (E)LTS

This was my thirteenth month as a Debian LTS and fourth month as a Debian ELTS paid contributor. I was assigned 20.75 hours for LTS and 30.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

• Issued DLA 2389-1, fixing CVE-2019-18978, for ruby-rack-cors. For Debian 9 Stretch, these problems have been fixed in version 0.4.0-1+deb9u2.
• Issued DLA 2390-1, fixing CVE-2019-18848, for ruby-json-jwt. For Debian 9 Stretch, these problems have been fixed in version 1.6.2-1+deb9u2.
• Issued DLA 2391-1, fixing CVE-2020-25613, for ruby2.3. For Debian 9 Stretch, these problems have been fixed in version 2.3.3-1+deb9u9.
• Issued DLA 2392-1, fixing CVE-2020-25613, for jruby. For Debian 9 Stretch, these problems have been fixed in version 1.7.26-1+deb9u3.
• Uploaded ruby2.5 to buster, fixing CVE-2020-25613. For Debian 10 Buster, these problems have been fixed in version 2.5.5-3+deb10u3.
• Uploaded ruby2.7 to unstable, fixing CVE-2020-25613. For Debian Sid, these problems have been fixed in version 2.7.1-4.
• Uploaded rails to unstable, fixing CVE-2020-8264. For Debian Sid, these problems have been fixed in version 2:6.0.3.4+dfsg-1.

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

• Front-desk duty from 28-09 to 04-10 and from 26-10 until 01-11 for both LTS and ELTS.
• Triaged libproxy, libvirt, libonig, ant, erlang, ruby2.3, jruby, dpdk, php7.0, spice, spice-gtk, wireshark, djangorestframework, python-urllib3, python-cryptography, qtsvg-opensource-src, and open-build-service.
• Marked CVE-2020-26137/python-urllib3 as no-dsa for Stretch and Jessie.
• Marked CVE-2020-1437{4,5,6,7,8}/dpdk as no-dsa for Stretch.
• Marked CVE-2020-2586{2,3}/wireshark as postponed for Stretch.
• Marked CVE-2020-25626/djangorestframework as no-dsa for Stretch.
• Marked CVE-2020-11979/ant as not-affected for Jessie.
• Marked CVE-2020-25623/erlang as not-affected for Jessie.
• Marked CVE-2020-25659/python-cryptography as no-dsa for Stretch and Jessie.
• Auto EOL'ed jruby, libjs-handlebars, linux, pluxml, mupdf, and djangorestframework for Jessie.
• [E/LTS] Worked on putting the survey online; deployed the LTS Team Pages. \o/
• [ELTS] Fixed the suite name in the ela-needed file and fixed other tags and the ordering of triages to fix errors in the security tracker.
• [LTS] Sent out invitations for the meeting.
• Attended the sixth private LTS meeting.
• General discussion on the LTS private and public mailing lists.

Until next time.
:wq for today.

Planet Debian — Paul Wise: FLOSS Activities October 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

• Spam: reported 2 Debian bug reports and 147 Debian mailing list posts
• Patches: merged libicns patches
• Debian packages: sponsored iotop-c
• Debian wiki: RecentChanges for the month
• Debian screenshots:

Administration

• Debian: get us removed from an RBL
• Debian wiki: reset email addresses, approve accounts

Communication

Sponsors

The pytest-rerunfailures/pyemd/morfessor work was sponsored by my employer. All other work was done on a volunteer basis.

Planet Debian — Junichi Uekawa: I'm playing more computer games recently.

I'm playing more computer games recently. Splatoon2, Ring Fit Adventure, Fit Boxing, and NieR Automata.
,

Planet Debian — Chris Lamb: Free software activities in October 2020

Here is my monthly update covering what I have been doing in the free software world during October 2020 (previous month):

• As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions, etc., including continuing the 'onboarding' of a new project to SPI.
• ora2pg is a tool used to migrate an Oracle database to PostgreSQL. This month, I submitted a patch to make it build reproducibly. [...]
• Addressed an issue filed by Bob Tanner by updating the documentation of my django-slack library, which provides a convenient wrapper between projects using Django and the Slack chat platform. [...]
• Opened a pull request against libsass-python (a straightforward binding of libsass for Python, to compile the Sass CSS extensions in Python without the conventional Ruby stack) in order to make the build reproducible. [...]

For Lintian, the static analysis tool for Debian packages, I uploaded versions 2.97.0, 2.98.0, 2.99.0 & 2.100.0, as well as updated the declares-possibly-conflicting-debhelper-compat-versions tag, as we may specify the Debhelper compatibility level in debian/rules or debian/control (#972464), and dropped a reference to a missing manual page [...].

§ Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

• In Debian:
  • Kept isdebianreproducibleyet.com up to date. [...]
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • perl: Please make the build mostly reproducible. (#972559)
    • fckit: Please make the build (mostly) reproducible. (#972378)
    • libgrokj2k: Please make the documentation reproducible. (#972494)
    • netcdf-parallel: Please make the settings file reproducible. (#972930)
    • dh-fortran-mod: Please make the output reproducible version graph. (#965255)
  • Filed a bug against the emacs package to make the generated .el files reproducible, a regression that is causing many packages to become unreproducible. (#972861)
  • Helped draft a mailing list post to update dpkg-buildflags to enable reproducible=+fixfilepath by default.
• I also submitted 10 patches to fix specific reproducibility issues in gmerlin-avdecoder, libsass-python, node-proxy, ora2pg, pcbasic, pitivi, ruby-appraiser, softether-vpn, sound-juicer & yard.
• Categorised a large number of packages and issues in the Reproducible Builds 'notes' repository.
• Drafted, published and publicised our monthly report.

I also updated the main Reproducible Builds website and documentation:

• Wrote and published two announcement blog posts regarding the restarting of our IRC meetings. [...][...]
• Added a citation link to the academic article regarding dettrace [...], and added yet another supply-chain security attack publication [...].
• Reformatted Jekyll's Liquid templating language and CSS formatting to be consistent [...] as well as expanded a number of tab characters [...].
• Used relative_url to fix the missing translation icon on various pages. [...]
• Added an explicit note regarding the lack of an in-person summit in 2020 to our events page. [...]

Lastly, I made the following changes to diffoscope, including preparing and uploading version 161 to Debian:

• Reviewed and merged functionality from Jean-Romain Garnier to add support for radare, a decompiler/reverse-engineering framework [...] and updated debian/tests/control to match [...].
• Moved test_ocaml to the assert_diff helper. [...]
• Updated tests to support OCaml version 4.11.1. Thanks to Sebastian Ramacher for the report. (#972518)
• Bumped the minimum version of the Black source code formatter to 20.8b1. (#972518)

§ Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

• Investigated and triaged junit4 (CVE-2020-15250), libass (CVE-2020-26682), php5, ros-ros-comm (CVE-2020-16124), sympa & zabbix.
• Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, etc.
• Issued DLA 2406-1 for jackson-databind, a Java library for processing JSON, to address an external entity expansion vulnerability.
• Issued DLA 2407-1 for tomcat8, the Java application server. This was to fix an issue where an excessive number of concurrent streams could have resulted in users seeing responses for unexpected resources.
• Issued DLA 2410-1 and ELA 301-1 for the BlueZ suite of Bluetooth tools, utilities and daemons to prevent a double-free vulnerability. You can find out more about the project via the following video: Uploads • python-django (3.1.2-1) — New upstream bugfix release. • redis: • 6.0.8-2 — Apply a patch from Yossi Gottlieb to fix a crash when reporting RDB/AOF file errors. (#972683) • 6.0.9-1 — New upstream release. • memcached (1.6.8+dfsg-1) — New upstream release. • mtools (4.0.25-1) — New upstream release, where parsing the configuration file now works correctly with the Turkish locale. (#972387) • bfs (2.0-1) — New upstream release. • black (20.8b1-2) — Non-maintainer upload to correct version handling to avoid a ModuleNotFoundError which was affecting a number of related packages. (#970901) Bugs filed • lintian: Please detect sed -e 's@$(CURDIR)@...@' in debian/rules. (#972629)

• mdtraj: Manual pages appear to contain error messages instead of actual examples. (#972635)

• gita: Missing build-depends on python3-yaml. (#972493)

• git-buildpackage: Correct 'option' typo in manual page. (#972081)

Planet Debian — Jonathan Carter: Free Software Activities for 2020-10

Another month, another bunch of uploads. The freeze for Debian 11 (bullseye) is edging closer, so I’ve been trying to get my package list in better shape ahead of that. Thanks to those who worked on fixing lintian.debian.org and the lintian reports on the QA pages, those are immensely useful and it’s great to have that back!

2020-10-04: Upload package gnome-shell-extension-draw-on-your-screen (8-1) to Debian unstable.

2020-10-05: Sponsor package python-potr (1.0.2-3) for Debian unstable (Python Team request).

2020-10-06: Sponsor package python-pyld (2.0.3-1) for Debian unstable (Python Team request).

2020-10-06: Sponsor package qosmic (1.6.0-4) for Debian unstable (E-mail request).

2020-10-07: File removal for gnome-shell-extension-workspace-to-dock (RC Buggy, no longer maintained: #971803).

2020-10-07: Upload package gnome-shell-extension-pixelsaver (1.20-2) to Debian unstable (Closes:  #971689).

2020-10-07: Upload package calamares (3.2.31-1) to Debian unstable.

2020-10-07: Upload package gnome-shell-extension-dashtodock (69-1) to Debian unstable (Closes: #971654).

2020-10-08: Sponsor package python3-libcloud (3.020-1) for Debian unstable.

2020-10-09: Upload package gnome-shell-extension-dashtopanel (40-1) to Debian unstable (Closes: #971087).

2020-10-09: Upload package gnome-shell-extension-draw-on-your-screen (8.1-1) to Debian unstable.

2020-10-12: Upload package gnome-shell-extension-pixelsaver (1.24-1) to Debian unstable.

2020-10-14: Sponsor package python3-onewire (0.2-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package cheetah (3.2.5-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package xmodem (0.4.6+dfsg-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package ansi (0.1.5-1) for Debian unstable (Python Team request).

2020-10-15: Sponsor package cbor2 (5.2.0-1) for Debian unstable (Python Team request).

2020-10-16: Upload package calamares (3.2.32-1) to Debian unstable.

2020-10-17: Upload package calamares (3.2.32.1-1) to Debian unstable.

2020-10-18: Upload package kpmcore (4.2.0-1) to Debian unstable.

2020-10-18: Upload package gnome-shell-extension-draw-on-your-screen (9-1) to Debian unstable.

2020-10-18: Upload package bundlewrap (4.2.1-1) to Debian unstable.

2020-10-18: Upload package bcachefs-tools (0.1+git20201017.8a4408-1~exp1) to Debian experimental.

2020-10-18: Upload package calamares (3.2.32.1-2) to Debian unstable.

2020-10-18: Upload package partitionmanager (4.1.0-2) to Debian unstable.

2020-10-19: Upload package kpmcore (4.2.0-2) to Debian unstable.

2020-10-21: Upload package calamares (3.2.32.1-3) to Debian unstable.

2020-10-21: Upload package calamares-settings-debian (11.0.3-1) to Debian unstable (Closes: #969930, #941301).

2020-10-21: Upload package partitionmanager (4.2.0-1) to Debian unstable.

2020-10-21: Upload package gnome-shell-extension-hard-disk-led (22-1) to Debian unstable (Closes: #971041).

2020-10-21: Merge MR!1 for catimg (Janitor improvements).

2020-10-21: Sponsor package r4d (1.7-1) for Debian unstable (Python Team request).

2020-10-22: Upload package aalib (1.4rc5-47) to Debian unstable.

2020-10-22: Upload package fabulous (0.3.0+dfsg1-8) to Debian unstable.

2020-10-22: Merge MR!1 for gdisk (Janitor improvements).

2020-10-22: Merge MR!1 for gnome-shell-extension-arc-menu (New upstream URLs, thanks Edward Betts).

2020-10-22: Upload package gnome-shell-extension-draw-on-your-screen (10-1) to Debian unstable.

2020-10-22: Merge MR!1 for vim-airline (Janitor improvements).

2020-10-22: Merge MR!1 for vim-airline-themes (Janitor improvements).

2020-10-22: Merge MR!1 for preload (Janitor improvements).

2020-10-22: Upload package aalib (1.4rc5-48) to Debian unstable.

2020-10-26: Upload package bcachefs-tools (0.1+git20201025.742dbbdb-1) to Debian unstable.

2020-10-26: Sponsor package dunst (1.5.0-1) for Debian unstable (mentors.debian.net request).

,

Planet Debian — Kees Cook: combining “apt install” and “apt dist-upgrade”?

I frequently see a pattern in image build/refresh scripts where a set of packages is installed, and then all packages are updated:

apt update
apt install -y pkg1 pkg2 pkg3
apt dist-upgrade -y


While it’s not much, this results in redundant work: for example, reading/writing the package database, and potentially running triggers (man-page refresh, ldconfig, etc.). The internal package dependency resolution isn’t actually different: “install” will also upgrade needed packages, etc. Combining them should be entirely possible, but I haven’t found a clean way to do it yet.
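One building block in the workaround further down is dpkg --set-selections, which reads lines of the form “package status” on stdin. Generating those lines is just a loop; here is a minimal, runnable sketch (pkg1–pkg3 are placeholder names, and dpkg itself is deliberately not invoked so the sketch runs anywhere):

```shell
# Generate "package status" selection lines, the input format
# consumed by `dpkg --set-selections` (placeholder names, no dpkg call).
selections=$(for i in pkg1 pkg2 pkg3; do echo "$i install"; done)
printf '%s\n' "$selections"
```

Piping these lines into dpkg --set-selections and then running apt-get dselect-upgrade is what makes the combined install-plus-upgrade trick work.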

The best I’ve got so far is:

apt update
apt-cache dumpavail | dpkg --merge-avail -
(for i in pkg1 pkg2 pkg3; do echo "$i install"; done) | dpkg --set-selections
apt-get dselect-upgrade

This gets me the effect of running “install” and “upgrade” at the same time, but not “dist-upgrade” (which has slightly different resolution logic that I’d prefer to use). Also, it includes the overhead of what should be an unnecessary update of dpkg’s database. Anyone know a better way to do this? Update: Julian Andres Klode pointed out that dist-upgrade actually takes package arguments too, just like install. *face palm* I didn’t even try it — I believed the man-page and the -h output. It works perfectly! © 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License. Cryptogram — The Legal Risks of Security Research Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.” From a summary: Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law. Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform. Comprehensive, and well worth reading. Here’s a Twitter thread by Kendra. Worse Than Failure — Error'd: Nothing for You! "No, that's not the coupon code. They literally ran out of digital coupons," Steve wrote. "Wow! Amazon is absolutely SLASHING their prices out of existence," wrote Brian. Björn S. writes, "IKEA now employs what I'd call psychological torture kiosks. The text translates to 'Are you satisfied with your waiting time?' but the screen below displays an eternal spinner. Gaaah!"
Daniel O. writes, "I mean, I could change my password right now, but I'm kind of tempted to wait and see when it'll actually expire." "This bank seems to offer its IT employees some nice perks! For example, this ATM is reserved strictly for its administrators," Oton R. wrote. [Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more. Sam Varghese — Australia pulls in new kids on the block for crucial Bledisloe Cup game The focal point of the third Bledisloe Cup game in Sydney on Saturday will be the Australian back-line where two rookies will be playing as fly-half and centre; that, incidentally, is the place on the field which many opposition players slip through when making a line-break. Noah Lolesio and Irae Simone will be under a lot of scrutiny and it may well be the game that establishes them. Both have come in because of injuries to the regulars in these positions, James O’Connor and Matt Toomua respectively. It will be a literal baptism of fire. For the second time in as many years, Australia will be going into a Bledisloe Cup game against New Zealand with more Pacific Islanders in its ranks than Anglo-Saxons. Of the 15 picked by the new coach, Dave Rennie, to take the field in Sydney on Saturday (31 October), eight will be Islanders. And on the bench, there will be another four from the same geographical area. In the first game of 2019, former coach Michael Cheika picked nine islanders and one Aboriginal player as well for the team that thrashed New Zealand 47-26 in Perth. And on the bench, half the number were again islanders. 2019 game 1: Kurtley Beale, Reece Hodge, James O’Connor, Samu Kerevi, Marika Koroibete, Christian Lealiifano, Nic White, Isi Naisarani, Michael Hooper, Lukhan Salakaia-Loto, Rory Arnold, Izack Rodda, Allan Alaalatoa, Tolu Latu and Scott Sio. Bench: Folau Fainga’a, James Slipper, Taniela Tupou, Adam Coleman, Luke Jones, Will Genia, Matt To’omua and Tom Banks. 
2020 game 3: James Slipper, Brandon Paenga-Amosa, Allan Alaalatoa, Lukhan Salakaia-Loto, Matt Philip, Ned Hanigan, Michael Hooper, Harry Wilson, Nic White, Noah Lolesio, Marika Koroibete, Irae Simone, Jordan Petaia, Filipo Daugunu and Dane Haylett-Petty. Bench: Jordan Uelese, Scott Sio, Taniela Tupou, Rob Simmons, Fraser McReight, Tate McDermott, Reece Hodge and Hunter Paisami. Given Rennie is from New Zealand, his inclusion of this many islanders is not a surprise. New Zealand has benefitted greatly by attracting players from the many islands in the Pacific, with some notable names like the late Jonah Lomu and former All Blacks captain Jonathan Falafesa “Tana” Umaga. Rennie knows the worth of players from this region. But will this ensure a win for Australia to keep the series alive? One doubts that Rennie is basing his selection on that criterion; rather, like all rugby coaches of teams that have a chance of being number one in the world, he will be looking to identify the best 15 for the next Rugby World Cup which is in 2023. New Zealand is also trying out a few new faces for the third game of the Bledisloe Cup, which is the opening game of the Rugby Championship. [The latter contest will be a three-nation affair this year as South Africa has pulled out.] Coach Ian Foster has had to call in Hoskins Sotutu to fill in for Ardie Savea who has taken paternity leave and Karl Tu’inukuafe will come in for Joe Moody who is under observation after being knocked out during the second game in Auckland. Sam Whitelock will return as lock, taking over from newcomer Tupou Vaa’i. , Planet Debian — Ulrike Uhlig: Better handling emergencies We all know these situations when we receive an email asking Can you check the design of X, I need a reply by tonight. Or an instant message: My website went down, can you check? Another email: I canceled a plan at the hosting company, can you restore my website as fast as possible? 
A phone call: The TLS certificate didn’t get updated, and now we can’t access service Y. Yet another email: Our super important medical advice website is suddenly being censored in country Z, can you help? Everyone knows those messages that have “URGENT” in capital letters in the email subject. It might be that some of them really are urgent. Others are the written signs of someone having a hard time properly planning their own work and passing their delays on to someone who comes later in the creation or production chain. And still others come from people who are overworked and try to delegate some of their tasks to a friendly soul who is likely to help. How emergencies create more emergencies In the past, my first reflex when I received an urgent request was to start rushing into solutions. This happened partly out of empathy, partly because I like to be challenged into solving problems, and I’m fairly good at that. This has proven to be unsustainable, and here is why. Emergencies create unplanned work The first issue is that emergencies create a lot of unplanned work, which in turn means not getting other, scheduled, things done. This can create a backlog, and end up in working late or on weekends. Emergencies can create a permanent state of exception Unplanned work can also create a lot of frustration, out of the feeling of not getting done the things that one planned to do. We might even get a feeling of being nonautonomous (in German I would say fremdbestimmt, which roughly translates to “being directed by others”). In the long term, this can generate unsustainable situations: higher workloads and burnout. When working in a team of several people, A might have to take over the work of B because B doesn’t have enough capacity. Then A gets overloaded in turn, and C and D have to take over A’s work. Suddenly the team is stuck in a permanent state of exception. This state of exception will produce more backlog.
The team might start to deprioritize social issues over getting technical things done. They might not be able to recruit new people anymore because they have no capacity left to onboard newcomers. One emergency can result in a variety of emergencies for many people The second issue produced by urgent requests is that if I cannot solve the initial emergency by myself, I might try to involve colleagues, other people who are skilled in the area, or people who work in another relevant organization to help with this. Suddenly, the initial emergency has become my emergency as well as the emergency of a whole bunch of other people. A sidenote about working with friends This might be less of an issue in a classical work setup than in a situation in which a bunch of freelancers work together, or in setups in which work and friendships are intertwined. This is a problem, because the boundaries between friend and worker role, and the expectations that go along with these roles, can easily get confused. If a colleague asks me to help with task X, I might say no; if a friend asks, I might be less likely to say no. What I learnt about handling emergencies I came up with some guidelines that help me to better handle emergencies. Plan for unplanned work It doesn’t matter and it doesn’t help to distinguish whether urgent requests are legitimate or whether they come from people who have not done their homework on time. What matters is to make one’s weekly todo list sustainable. After reading Making Work Visible by Dominica DeGrandis, I understood the need to add free slots for unplanned work into one’s weekly schedule. Slots for unplanned work can take up to 25% of the total work time! Take time to make plans Now that there are some free slots to handle emergencies, one can take some time to think when an urgent request comes in. A German saying proposes to wait and have some tea (“abwarten und Tee trinken”). I think this is actually really good advice, and works for any non-obvious problem.
Sit down and let the situation sink in. Have a tea, take a shower, go for a walk. It’s never that urgent. Really, never. If possible, one can talk about the issue with another person, rubberduck style. Then one can make a plan on how to address the emergency properly; it could be that the solution is easier than first thought. Affirming boundaries: Saying no Is the emergency that I’m asked to solve really my problem? Or is someone trying to involve me because they know I’m likely to help? Take a deep breath and think about it. No? It’s not my job, not my role? I have no time for this right now? I don’t want to do it? Maybe I’m not even paid for it? A colleague is pushing my boundaries to get some task on their own todo list done? Then I might want to say no. I can’t help with this. or I can help you in two weeks. I don’t need to give a reason. No. is a sentence. And: Saying no doesn’t make me an arse. Affirming boundaries: Clearly defining one’s role Clearly defining one’s role is something that is often overlooked. In many discussions I have with friends it appears that this is a major cause of overwork and underpayment. Lots of people are skilled, intelligent, and curious, and easily get challenged into putting on their superhero dress. But they’re certainly not the only person who can help—even if an urgent request makes them think that at first. To clearly define our role, we need to make clear which part of the job is our work, and which part needs to be done by other people. We should stop trying to accommodate people and their requests to the detriment of our own sanity. You’re a language interpreter and are being asked to mediate a bilingual conflict between the people you are interpreting for? It’s not your job. You’re the graphic designer for a poster, but the text you’ve been given is not good enough? Send back a recommendation to change the text; don’t do these changes yourself: it’s not your job.
But you can and want to do this yourself and it would make your client’s life easier? Then ask them to get paid for the extra time, and make sure to renegotiate your deadline! Affirming boundaries: Defining expectations Along with our role, we need to define expectations: in which timeframe am I willing to do the job? Under which contract, which agreement, which conditions? For which payment? People who work in a salaried office job generally do have a work contract in which their role and the expectations that come with this role are clearly defined. Nevertheless, I hear from friends that their superiors regularly try to make them do tasks that are not part of their role definition. So, here too, role and expectations sometimes need to be renegotiated, and the boundaries of these roles need to be clearly affirmed. Random conclusive thoughts If you’ve read this far, you might have experienced similar things. Or, on the contrary, maybe you’re already good at communicating your boundaries and people around you have learnt to respect them? Congratulations. In any case, for improving one’s own approach to such requests, it can be useful to find out which inner dynamics are at play when we interact with other people. Additionally, it can be useful to understand the differences between Asker and Guesser culture: when an Asker meets a Guesser, unpleasantness results. An Asker won’t think it’s rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor – or just an Asker, who’s assuming you might decline. If you’re a Guesser, you’ll hear it as an expectation. Askers should also be aware that there might be Guessers in their team.
It can help to define clear guidelines about making requests (when do I expect an answer, under which budget/contract/responsibility does the request fall, what other task can be put aside to handle the urgent task?) Last, but not least, Making Work Visible has a lot of other proposals on how to make unplanned work visible and then deal with it. Cryptogram — Tracking Users on Waze A security researcher discovered a vulnerability in Waze that breaks the anonymity of users: I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road. The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different. Worse Than Failure — CodeSOD: Graceful Depredations Cloud management consoles are, in most cases, targeted towards enterprise customers. This runs into Remy’s Law of Enterprise Software: if a piece of software is in any way described as being “enterprise”, it’s a piece of garbage. Richard was recently poking around on one of those cloud provider’s sites. The software experience was about as luxurious as one expects, which is to say it was a pile of cryptically named buttons for the 57,000 various kinds of preconfigured services this particular service had on offer.
At the bottom of each page, there was a small video thumbnail, linking back to a YouTube video, presumably to provide marketing information, or support guidance for whatever the user was trying to do. This was the code which generated them (whitespace added for readability): <script>function lazyLoadThumb(e){var t='<img loading="lazy" data-lazy-src="https://i.ytimg.com/vi/ID/hqdefault.jpg" alt="" width="480" height="360"> <noscript><img src="https://i.ytimg.com/vi/ID/hqdefault.jpg" alt="" width="480" height="360"></noscript>' ,a='<div class="play"></div>';return t.replace("ID",e)+a}</script> I appreciate the use of a <noscript> tag. Providing a meaningful fallback for browsers that, for whatever reason, aren’t executing JavaScript is a good thing. In the olden days, we called that “progressive enhancement” or “graceful degradation”: the page might work better with JavaScript turned on, but you can still get something out of it even if it’s not. Which is why it’s too bad the <noscript> is being output by JavaScript. And sure, I’ve got some serious questions with the data-lazy-src attribute, the fact that we’re dumping a div with a class="play" presumably to get wired up as our play button, and all the string mangling to get the correct video ID in there for the thumbnail, and just generating DOM elements by strings at all. But outputting <noscript>s from JavaScript is a new one on me. [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today! Planet Debian — Norbert Preining: Deleting many files from an S3 bucket So we found ourselves needing to delete a considerable number of files (around 500000, amounting to 1.6T) from an S3 bucket. With the list of files in hand, my first shot was calling aws s3 rm s3://BUCKET/FILE for each file. That wasn’t the best idea I have to say, since first of all, it makes 500000 requests, and then it takes a looong time.
And this command does not allow passing in multiple files. Fortunately there is aws s3api delete-objects, which takes a JSON input and can delete multiple files:

aws s3api delete-objects --bucket BUCKET --delete '{"Objects": [ { "Key" : "FILE1" }, { "Key" : "FILE2" } ... ]}'

That did help, and with a bit of magic from bash (mapfile, which can read in lines from stdin in batches) and jq, in the end it was a business of some 20min or so:

cat files-to-be-deleted | while mapfile -t -n 500 ary && ((${#ary[@]})); do
    objdef=$(printf '%s\n' "${ary[@]}" | jq -nR '{Objects: (reduce inputs as $line ([]; . + [{"Key":$line}]))}')
    aws s3api --no-cli-pager delete-objects --bucket BUCKET --delete "$objdef"
done

This reads 500 files at a time and reformats them with jq into the proper JSON format: reduce inputs is a jq filter that iterates over the input lines and does a map/reduce step. In this case we use an empty array as the start and add new key/filename pairs as we go. Finally, the whole batch is sent to AWS with the above API call. Puuuh, 500000 files and 1.6T less, in 20min. Krebs on Security — FBI, DHS, HHS Warn of Imminent, Credible Ransomware Threat Against U.S. Hospitals On Monday, Oct. 26, KrebsOnSecurity began following up on a tip from a reliable source that an aggressive Russian cybercriminal gang known for deploying ransomware was preparing to disrupt information technology systems at hundreds of hospitals, clinics and medical care facilities across the United States. Today, officials from the FBI and the U.S. Department of Homeland Security hastily assembled a conference call with healthcare industry executives warning about an “imminent cybercrime threat to U.S. hospitals and healthcare providers.” The agencies on the conference call, which included the U.S. Department of Health and Human Services (HHS), warned participants about “credible information of an increased and imminent cybercrime threat to US hospitals and healthcare providers.” The agencies said they were sharing the information “to provide warning to healthcare providers to ensure that they take timely and reasonable precautions to protect their networks from these threats.” The warning came less than two days after this author received a tip from Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security. Holden said he saw online communications this week between cybercriminals affiliated with a Russian-speaking ransomware group known as Ryuk in which group members discussed plans to deploy ransomware at more than 400 healthcare facilities in the U.S.
One participant on the government conference call today said the agencies offered few concrete details of how healthcare organizations might better protect themselves against this threat actor or purported malware campaign. “They didn’t share any IoCs [indicators of compromise], so it’s just been ‘patch your systems and report anything suspicious’,” said a healthcare industry veteran who sat in on the discussion. However, others on the call said IoCs may be of little help for hospitals that have already been infiltrated by Ryuk. That’s because the malware infrastructure used by the Ryuk gang is often unique to each victim, including everything from the Microsoft Windows executable files that get dropped on the infected hosts to the so-called “command and control” servers used to transmit data between and among compromised systems. Nevertheless, cybersecurity incident response firm Mandiant today released a list of domains and Internet addresses used by Ryuk in previous attacks throughout 2020 and up to the present day. Mandiant refers to the group by the threat actor classification “UNC1878,” and aired a webcast today detailing some of Ryuk’s latest exploitation tactics. Charles Carmakal, senior vice president for Mandiant, told Reuters that UNC1878 is one of the most brazen, heartless, and disruptive threat actors he’s observed over the course of his career. “Multiple hospitals have already been significantly impacted by Ryuk ransomware and their networks have been taken offline,” Carmakal said. One health industry veteran who participated in the call today and who spoke with KrebsOnSecurity on condition of anonymity said if there truly are hundreds of medical facilities at imminent risk here, that would seem to go beyond the scope of any one hospital group and may implicate some kind of electronic health record provider that integrates with many care facilities. So far, however, nothing like hundreds of facilities have publicly reported ransomware incidents.
But there have been a handful of hospitals dealing with ransomware attacks in the past few days. Becker’s Hospital Review reported today that a ransomware attack hit Klamath Falls, Ore.-based Sky Lakes Medical Center’s computer systems. WWNY’s Channel 7 News in New York reported yesterday that a Ryuk ransomware attack on St. Lawrence Health System led to computer infections at Canton-Potsdam, Massena and Gouverneur hospitals. SWNewsMedia.com on Monday reported on “unidentified network activity” that caused disruption to certain operations at Ridgeview Medical Center in Waconia, Minn. SWNews says Ridgeview’s system includes Chaska’s Two Twelve Medical Center, three hospitals, clinics and other emergency and long-term care sites around the metro area. NBC5 reports The University of Vermont Health Network is dealing with a “significant and ongoing system-wide network issue” that could be a malicious cyber attack. A story at BleepingComputer.com says Wyckoff Hospital in New York suffered a Ryuk ransomware attack on Oct. 28. This is a developing story. Stay tuned for further updates. Update, 10:11 p.m. ET: The FBI, DHS and HHS just jointly issued an alert about this, available here. Update, Oct. 30, 11:14 a.m. ET: Added mention of Wyckoff hospital Ryuk compromise. , Planet Debian — Daniel Lange: Git shared hosting quirk Oops 'eh? Yep, Linux has been backdoored. Well, or not. Konstantin Ryabitsev explains it nicely in a cgit mailing list email: It is common for git hosting environments to configure all forks of the same repo to use an "object storage" repository. For example, this is what allows git.kernel.org's 600+ forks of linux.git to take up only 10GB on disk as opposed to 800GB. One of the side-effects of this setup is that any object in the shared repository can be accessed from any of the forks, which periodically confuses people into believing that something terrible has happened. The hack was discussed on Github in Dec 2018 when it was discovered.
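The shared object store Konstantin describes can be reproduced locally with git's alternates mechanism; here is a minimal sketch (temporary paths and placeholder content, not any hosting provider's actual setup) showing a repository serving a blob that only its sibling ever stored:

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/a"
echo "secret payload" > "$tmp/a/f.txt"
# Store the file in repo a's object database and remember the blob hash.
blob=$(git -C "$tmp/a" hash-object -w f.txt)
git init -q "$tmp/b"
# Point repo b at repo a's object store, as shared hosting does for forks.
echo "$tmp/a/.git/objects" > "$tmp/b/.git/objects/info/alternates"
# Repo b can now serve a's blob even though nothing was ever added to b.
git -C "$tmp/b" cat-file -p "$blob"   # prints: secret payload
```

This is the same effect as the kernel.org and GitHub fork setups: the object is reachable by hash from any repository sharing the store, regardless of which fork actually pushed it.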
I forgot about it again but Konstantin's mail brought the memory back and I think it deserves more attention. I'm sure putting some illegal content into a fork and sending a made-up "blob" URL to law enforcement would go quite far. Good luck explaining the issue. "Yes this is my repo" but "no, no that's not my data" ... "yes, it is my repo but not my data" ... "no we don't want that data either, really" ... "but, but there is nothing we can do, we host on github..."[1]. [1] Actually there is something you can do. Making a repo private takes it out of the shared "object storage". You can make it public again afterwards. Seems to work at least for now. Kevin Rudd — Statement on the International Peace Institute October 28, 2020 Mr Rudd has issued a statement regarding the International Peace Institute. Click here to read the statement. Krebs on Security — Security Blueprints of Many Companies Leaked in Hack of Swedish Firm Gunnebo In March 2020, KrebsOnSecurity alerted Swedish security giant Gunnebo Group that hackers had broken into its network and sold the access to a criminal group which specializes in deploying ransomware. In August, Gunnebo said it had successfully thwarted a ransomware attack, but this week it emerged that the intruders stole and published online tens of thousands of sensitive documents — including schematics of client bank vaults and surveillance systems. The Gunnebo Group is a Swedish multinational company that provides physical security to a variety of customers globally, including banks, government agencies, airports, casinos, jewelry stores, tax agencies and even nuclear power plants. The company has operations in 25 countries, more than 4,000 employees, and billions in revenue annually. 
Acting on a tip from Milwaukee, Wis.-based cyber intelligence firm Hold Security, KrebsOnSecurity in March told Gunnebo about a financial transaction between a malicious hacker and a cybercriminal group which specializes in deploying ransomware. That transaction included credentials to a Remote Desktop Protocol (RDP) account apparently set up by a Gunnebo Group employee who wished to access the company’s internal network remotely. Five months later, Gunnebo disclosed it had suffered a cyber attack targeting its IT systems that forced the shutdown of internal servers. Nevertheless, the company said its quick reaction prevented the intruders from spreading the ransomware throughout its systems, and that the overall lasting impact from the incident was minimal. Earlier this week, Swedish newspaper Dagens Nyheter confirmed that hackers recently published online at least 38,000 documents stolen from Gunnebo’s network. Linus Larsson, the journalist who broke the story, says the hacked material was uploaded to a public server during the second half of September, and it is not known how many people may have gained access to it. Larsson quotes Gunnebo CEO Stefan Syrén saying the company never considered paying the ransom the attackers demanded in exchange for not publishing its internal documents. What’s more, Syrén seemed to downplay the severity of the exposure. “I understand that you can see drawings as sensitive, but we do not consider them as sensitive automatically,” the CEO reportedly said. “When it comes to cameras in a public environment, for example, half the point is that they should be visible, therefore a drawing with camera placements in itself is not very sensitive.” It remains unclear whether the stolen RDP credentials were a factor in this incident. But the password to the Gunnebo RDP account — “password01” — suggests the security of its IT systems may have been lacking in other areas as well. 
After this author posted a request for contact from Gunnebo on Twitter, KrebsOnSecurity heard from Rasmus Jansson, an account manager at Gunnebo who specializes in protecting client systems from electromagnetic pulse (EMP) attacks or disruption, short bursts of energy that can damage electrical equipment. Jansson said he relayed the stolen credentials to the company’s IT specialists, but that he does not know what actions the company took in response. Reached by phone today, Jansson said he quit the company in August, right around the time Gunnebo disclosed the thwarted ransomware attack. He declined to comment on the particulars of the extortion incident. Ransomware attackers often spend weeks or months inside of a target’s network before attempting to deploy malware across the network that encrypts servers and desktop systems unless and until a ransom demand is met. That’s because gaining the initial foothold is rarely the difficult part of the attack. In fact, many ransomware groups now have such an embarrassment of riches in this regard that they’ve taken to hiring external penetration testers to carry out the grunt work of escalating that initial foothold into complete control over the victim’s network and any data backup systems — a process that can be hugely time consuming. But prior to launching their ransomware, it has become common practice for these extortionists to offload as much sensitive and proprietary data as possible. In some cases, this allows the intruders to profit even if their malware somehow fails to do its job. In other instances, victims are asked to pay two extortion demands: One for a digital key to unlock encrypted systems, and another in exchange for a promise not to publish, auction or otherwise trade any stolen data. 
While it may seem ironic when a physical security firm ends up having all of its secrets published online, the reality is that some of the biggest targets of ransomware groups continue to be companies which may not consider cybersecurity or information systems as their primary concern or business — regardless of how much may be riding on that technology. Indeed, companies that persist in viewing cyber and physical security as somehow separate seem to be among the favorite targets of ransomware actors. Last week, a Russian journalist published a video on Youtube claiming to be an interview with the cybercriminals behind the REvil/Sodinokibi ransomware strain, which is the handiwork of a particularly aggressive criminal group that’s been behind some of the biggest and most costly ransom attacks in recent years. In the video, the REvil representative stated that the most desirable targets for the group were agriculture companies, manufacturers, insurance firms, and law firms. The REvil actor claimed that on average roughly one in three of its victims agrees to pay an extortion fee. Mark Arena, CEO of cybersecurity threat intelligence firm Intel 471, said while it might be tempting to believe that firms which specialize in information security typically have better cybersecurity practices than physical security firms, few organizations have a deep understanding of their adversaries. Intel 471 has published an analysis of the video here. Arena said this is a particularly acute shortcoming with many managed service providers (MSPs), companies that provide outsourced security services to hundreds or thousands of clients who might not otherwise be able to afford to hire cybersecurity professionals. “The harsh and unfortunate reality is the security of a number of security companies is shit,” Arena said. 
“Most companies tend to have a lack of ongoing and up to date understanding of the threat actors they face.” Cryptogram — The NSA is Refusing to Disclose its Policy on Backdooring Commercial Products Senator Ron Wyden asked, and the NSA didn’t answer: The NSA has long sought agreements with technology companies under which they would build special access for the spy agency into their products, according to disclosures by former NSA contractor Edward Snowden and reporting by Reuters and others. These so-called back doors enable the NSA and other agencies to scan large amounts of traffic without a warrant. Agency advocates say the practice has eased collection of vital intelligence in other countries, including interception of terrorist communications. The agency developed new rules for such practices after the Snowden leaks in order to reduce the chances of exposure and compromise, three former intelligence officials told Reuters. But aides to Senator Ron Wyden, a leading Democrat on the Senate Intelligence Committee, say the NSA has stonewalled on providing even the gist of the new guidelines. […] The agency declined to say how it had updated its policies on obtaining special access to commercial products. NSA officials said the agency has been rebuilding trust with the private sector through such measures as offering warnings about software flaws. “At NSA, it’s common practice to constantly assess processes to identify and determine best practices,” said Anne Neuberger, who heads NSA’s year-old Cybersecurity Directorate. “We don’t share specific processes and procedures.” Three former senior intelligence agency figures told Reuters that the NSA now requires that before a back door is sought, the agency must weigh the potential fallout and arrange for some kind of warning if the back door gets discovered and manipulated by adversaries. The article goes on to talk about Juniper Networks equipment, which had the NSA-created DUAL_EC PRNG backdoor in its products. 
That backdoor was taken advantage of by an unnamed foreign adversary. Juniper Networks got into hot water over Dual EC two years later. At the end of 2015, the maker of internet switches disclosed that it had detected malicious code in some firewall products. Researchers later determined that hackers had turned the firewalls into their own spy tool by altering Juniper’s version of Dual EC. Juniper said little about the incident. But the company acknowledged to security researcher Andy Isaacson in 2016 that it had installed Dual EC as part of a “customer requirement,” according to a previously undisclosed contemporaneous message seen by Reuters. Isaacson and other researchers believe that customer was a U.S. government agency, since only the U.S. is known to have insisted on Dual EC elsewhere. Juniper has never identified the customer, and declined to comment for this story. Likewise, the company never identified the hackers. But two people familiar with the case told Reuters that investigators concluded the Chinese government was behind it. They declined to detail the evidence they used. Okay, lots of unsubstantiated claims and innuendo here. And Neuberger is right; the NSA shouldn’t share specific processes and procedures. But as long as this is a democratic country, the NSA has an obligation to disclose its general processes and procedures so we all know what they’re doing in our name. And if it’s still putting surveillance ahead of security. Planet Debian — Kentaro Hayashi: lltsv 0.6.1-2: applied background color patch for readability Original version of lltsv doesn't specify background color, so it is hard to recognize texts in dark background. I've fixed this issue by specifying background color explicitly. Here is the screenshot of fixed version. Yay! This issue was already forwarded as PR github.com Worse Than Failure — CodeSOD: A Type of Useless TypeScript offers certain advantages over JavaScript. 
Compile-time type-checking can catch a lot of errors, it can move faster than browsers, so it offers the latest standards (and the compiler handles the nasty details of shimming them into browsers), plus it has a layer of convenient syntactic sugar. If you’re using TypeScript, you can use the compiler to find all sorts of ugly problems with your code, and all you need to do is turn the right flags on. Or, you can be like Quintus’s co-worker, who checked in this… thing.

    /**
     * Container holding definition information.
     *
     * @param String version
     * @param String date
     */
    export class Definition {
        private id: string;
        private name: string;

        constructor(private version, private data) {}

        /**
         * get the definition version
         *
         * @return String version
         */
        getVersion() {
            return this.id;
        }

        /**
         * get the definition date
         *
         * @return String date
         */
        getDate() {
            return this.name;
        }
    }

Now, if you were to try this on the TypeScript playground, you’d find that while it compiles and generates JavaScript, the compiler has a lot of reasonable complaints about it. However, if you were just to compile this with the command-line tsc, it gleefully does the job without complaint, using the default settings. So the code is bad, and the tool can tell you it’s bad, but you have to actually ask the tool to tell you that. In any case, it’s easy to understand what happened with this bad code: this is clearly programming by copy/paste. They had a class that tied an id to a name. They copy/pasted to make one that mapped a version to a date, but got distracted halfway through and ended up with this incomplete dropping. And then they somehow checked it in, and nobody noticed it until Quintus was poking around. Now, a little bit about this code. You’ll note that there are private id and name properties. The constructor defines two more properties (and wires the constructor params up to map to them) with its private version, private data params. 
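For contrast, here is a sketch of what the class was presumably meant to be: typed constructor parameters wired to the accessors that actually read them. Unlike the original, this compiles cleanly even under tsc --strict.

```typescript
// What the copy/paste was presumably aiming for: a version mapped to a date,
// with the parameter-property shorthand doing the field wiring.
export class Definition {
  constructor(private version: string, private date: string) {}

  getVersion(): string {
    return this.version;
  }

  getDate(): string {
    return this.date;
  }
}
```

With "strict": true in tsconfig.json, the original snippet's implicitly-any constructor parameters (noImplicitAny) and its never-assigned id/name fields (strictPropertyInitialization) are both flagged at compile time.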
So if you call the constructor, you initialize two private members that have no accessors at all. There are accessors, but they point to id and name, which never get initialized in the constructor, and have no mutators. Of course, TypeScript compiles down into JavaScript, so those private keywords don’t really matter. JavaScript doesn’t have private. My suspicion is that this class ended up in the code base, but is never actually used. If it is used, I bet it’s used like:

    let f = new Definition();
    f.id = "1.0.1";
    f.name = "28-OCT-2020";
    …
    let ver = f.getVersion();

That would work and do what the original developer expected. If they did that, the TypeScript compiler might complain, but as we saw, they don’t really care about what the compiler says. , Cryptogram — Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names. We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new. EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls. Planet Debian — Noah Meyerhans: Debian STS: Short Term Support In another of my frequent late-night bouts with insomnia, I started thinking about the intersection of a number of different issues facing Debian today, both from a user point of view and a developer point of view. Debian has a reputation for shipping “stale” software. Versions in the stable branch are often significantly behind the latest development upstream. Debian’s policy here has been that this is fine, our goal is to ship something stable, not something bleeding edge. 
Unofficially, our response to users is: If you need bleeding edge software, Debian may not be for you. Officially, we have no response to users who want fresher software. Debian also has a problem with a lack of manpower. I believe that part of why we have a hard time attracting contributors is our reputation for stale software. It might be worth it for us to consider changes to our approach to releases. What about running testing? People who want newer software often look to Debian’s testing branch as a possible solution. It’s tempting, as it’s a dynamically generated release based on unstable, so it should be quite current. In practice, it’s not at all uncommon to find people running testing, and in fact I’m running it right now on the ThinkPad on which this is being typed. However, testing comes with a glaring issue: a lack of timely security support. Security updates must still propagate through unstable, and this can take some time. They can be held up by dependencies, library transitions, or other factors. Nearly every list of “best practices for computer security” puts keeping software up-to-date at or near the top of the most important steps to take to safely use a networked computer. Debian’s testing branch makes this very difficult, especially when faced with a zero-day with potential for real-world exploit. What about stable-backports? Stable backports is both better and worse than testing. It’s better in that it allows you to run a system comprised mainly of packages from the stable branch, which receive updates from the security team in a timely manner. However, it’s worse in that the packages from the backports repository incur an additional delay. The expectation around backports is that a package migrates naturally from unstable to testing, and then requires a maintainer to upload a new package based on the version in testing specifically targeted at stable backports. 
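For reference, opting in to stable-backports is one apt source line plus an explicit release target (shown here for the buster release; the package name is a placeholder):

```text
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian buster-backports main

# Packages only come from backports when explicitly requested:
#   apt update && apt install -t buster-backports <package>
```

The explicit -t target is the point: backports never replace stable packages unless asked, which is also why the update-latency concern above matters per-package rather than system-wide.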
The migration can potentially be bypassed, and we used to have a mechanism for announcing the availability of security updates for the stable backports archive, but it has gone unused for several years now. The documentation describes a workflow for posting security updates that involves creating a ticket in Debian’s RT system, which is going to be quite foreign to most people. News from mid-2019 suggests that this process might change, but nothing appears to have come of this in over a year, and we still haven’t seen a proper security advisory for stable backports in years. Looking to LTS for ideas The Long-Term Support project is an “alternative” branch of Debian, maintained outside the normal stable release infrastructure. It’s stable, and expected to behave that way, but it’s not supported by the stable security team or release team. LTS provides a framework for providing security updates via targeted uploads by a team of interested individuals working outside the structure of the existing stable releases. This project seems to be quite active (how much of this is because at least some members are being paid?), and as of this writing has actually published more security advisories in the past month than the stable security team has published for the current stable branch. This is also interesting in that the software in LTS is quite old, first appearing in a Debian stable release in 2017. LTS is particularly interesting here as it’s an example of an initiative within the Debian community taken specifically to address user needs. For some of our users, remaining on an old release is a perfectly valid thing for them to do, and we recognize this and support them in doing so. Debian Short-Term Support So, what would it take to create an “LTS-like” initiative in the other direction? 
Instead of providing ongoing support for ancient versions of software that previously comprised a stable release, could we build a distribution branch based on something that hasn’t yet reached stable? What would that look like? How would it fit in the existing unstable→testing migration process? What impact would it have on the size of the archive? Would we want a rolling release, or discrete releases? If the latter, how many would we want between proper stable releases? The security tracker already tracks outstanding issues in unstable and testing, and can even show issues that have been fixed in unstable but haven’t yet propagated to testing. If we want a rolling release, maybe we could just open up the testing-security repository more broadly? There was once a testing security team, which IIRC was chartered to publish updated packages directly to testing security, along with associated security advisory. Based on the mailing list history, that effort seems to have shut down around the time of the squeeze (Debian 6.0) release in early 2011. Would it be worth resurrecting it? We’ve probably got much of the infrastructure required in place already, since it previously existed. Personally I’m not really a fan of a pure rolling release. I’d rather see a light-weight release. Maybe a snapshot of testing that gets just a date, not a Toy Story codename. Probably skip building a dedicated installer for it. Upgrade from stable or use a d-i snapshot from testing if needed. This mini release is supported until the next one comes out, maybe 6 or 8 months later. By supported, I mean that the “Short Term Release” team is responsible for it. They can upload security or other critical bug fixes directly to a dedicated repository. When the next STS snapshot is released, packages in the updates repository are either archived, if they’re a lower version than the one in the new mini release, or rebuilt against the new mini release and preserved. 
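Concretely, the apt configuration for such a mini release might look something like this (illustrative only: the sts repository name and URL are invented, and as noted below, snapshot.debian.org could not currently take the load):

```text
# Base: a testing snapshot frozen at a date, standing in for the mini release
deb https://snapshot.debian.org/archive/debian/20201029T000000Z/ testing main

# Hypothetical repository where the Short Term Support team uploads fixes
deb https://sts.example.org/debian sts-2020-10 main
```

The first line pins the base; the second is where the STS team's security and critical-bug uploads would land between snapshots.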
Using some of the same mechanisms as the LTS release, we’d need:

1. Something to take the place of oldstable, that is the base release against which updates are released. This could be something that effectively maps to a date snapshot served by http://snapshot.debian.org/. (Snapshot itself could not currently handle the increased load, as I understand it, but conceptually it’s similar.)

2. Something to take the place of the dist/updates apt repository that hosts the packages that are updated.

In theory, if the infrastructure could support those things, then we could in effect generate a mini release at any time based on a snapshot. I wonder if this could start as something totally unofficial; mirror an arbitrary testing snapshot and provide a place for interested people to publish package updates. Not a proposal, nor a criticism To be clear, I don’t really intend this as a proposal; it’s really half-baked. Maybe these ideas have already been considered and dismissed. I don’t know if people would be interested in working on such a project, and I’m not nearly familiar enough with the Debian archive tooling to even make a guess as to how hard it would be to implement much of it. I’m just posting some ideas that I came up with while pondering something that, from my perspective, is an area where Debian is clearly failing to meet the needs of some of our users. We know Debian is a popular and respected Linux distribution, and we know people value our stability. However, we also know that people like running Fedora and Ubuntu’s non-LTS releases. People like Arch Linux. Not just “end-users”, but also the people developing the software shipped by the distros themselves. There are a lot of potential contributors to Debian who are kept away by our unwillingness to provide a distro offering both fresh software and security support. 
I think that we could attract more people to the Debian community if we could provide a solution for these people, and that would ultimately be good for everybody. Also, please don’t interpret this as being critical of the release team, the stable security team, or any other team or individual in Debian. I’m sharing this because I think there are opportunities for Debian to improve how we serve our users, not because I think anybody is doing anything wrong. With all that said, though, let me know if you find the ideas interesting. If you think they’re crazy, you can tell me that, too. I’ll probably agree with you. LongNow — How “Forest Floors” in Finland’s Daycares Changed Children’s Immune Systems Once again on the theme of how the technological/cultural pace layer’s accelerating decoupling from the ecological pace layer in which we evolved poses serious risks to the integrity of both the human body and biosphere: When daycare workers in Finland rolled out a lawn, planted forest undergrowth such as dwarf heather and blueberries, and allowed children to care for crops in planter boxes, the diversity of microbes in the guts and on the skin of young kids appeared healthier in a very short space of time. Compared to other city kids who play in standard urban daycares with yards of pavement, tile and gravel, 3-, 4-, and 5-year-olds at these greened-up daycare centres in Finland showed increased T-cells and other important immune markers in their blood within 28 days. “We also found that the intestinal microbiota of children who received greenery was similar to the intestinal microbiota of children visiting the forest every day,” says environmental scientist Marja Roslund from the University of Helsinki. 
Daycares in Finland Built a ‘Forest Floor’, And It Changed Children’s Immune Systems in Science Alert That said, the hopeful news from this pilot project is that it may be easier to restore a healthy balance between the modern and premodern from within the built environment than most people believe. Charles Stross — All Glory to the New Management! Today is September 27th, 2020. On October 27th, Dead Lies Dreaming will be published in the USA and Canada: the British edition drops on October 29th. (Yes, there will be audio editions too, via the usual outlets.) This book is being marketed as the tenth Laundry Files novel. That's not exactly true, though it's not entirely wrong, either: the tenth Laundry book, about the continuing tribulations of Bob Howard and his co-workers, hasn't been written yet. (Bob is a civil servant who by implication deals with political movers and shakers, and politics has turned so batshit crazy in the past three years that I just can't go there right now.) There is a novella about Bob coming next summer. It's titled Escape from Puroland and Tor.com will be publishing it as an ebook and hardcover in the USA. (No UK publication is scheduled as yet, but we're working on it.) I've got one more novella planned, about Derek the DM, and then either one or two final books: I'm not certain how many it will take to wrap the main story arc yet, but rest assured that the tale of SOE's Q-Division, the Laundry, reaches its conclusion some time in 2015. Also rest assured that at least one of our protagonists survives ... as does the New Management. All Glory to the Black Pharaoh! Long may he rule over this spectred isle! (But what's this book about?) Dead Lies Dreaming is the first book in a project I dreamed up in (our world's) 2017, with the working title Tales of the New Management. It came about due to an unhappy incident: I found out the hard way that writing productively while one of your parents is dying is rather difficult. 
The first time it happened, it took down a promising space opera project. I plan to pick it up and re-do it next year, but it was the kind of learning experience I could happily have done without. The second time it happened, I had to stop work on Invisible Sun, the third and final Empire Games novel—I just couldn't get into the right head-space. (Empire Games is now written and in the hands of the production folks at Tor. It will almost certainly be published next September, if the publishing industry survives the catastrophe novel we're all living through right now.) Anyway, I was unable to work on a project with a fixed deadline, but I couldn't not write: so I gave myself license to doodle therapeutically. The therapeutic doodles somehow colonized the abandoned first third of a magical realist novel I pitched in 2014, and turned into an unexpected attack novel titled Lost Boys. (It was retitled Dead Lies Dreaming because a cult comedy movie from 1987 got remade for TV in 2020—unless you're a major bestseller you do not want your book title to clash with an unrelated movie—but it's still Lost Boys in my headcanon.) Lost Boys—that is, Dead Lies Dreaming—riffs heavily off Peter and Wendy, the original taproot of Peter Pan, a stage play and novel by J. M. Barrie that predates the more familiar, twee, animated Disney version of Peter Pan from 1953 by some decades. (Actually Peter and Wendy recycled Barrie's character from an earlier work, The Little White Bird, from 1902, but let's not get into the J. M. Barrie arcana at this point.) Peter and Wendy can be downloaded from Project Gutenberg here. And if you only know Pan from Disney, you're in for a shock. Barrie was writing in an era when antibiotics hadn't been discovered, and far fewer vaccines were available for childhood diseases. 
Almost 20% of children died before reaching their fifth birthday, and this was a huge improvement over the earlier decades of the 19th century: parents expected some of their babies to die, and furthermore, had to explain infant deaths to toddlers and pre-tweens. Disney's Peter is a child of the carefree first flowering of the antibiotic age, and thereby de-fanged, but the original Peter Pan isn't a twee fairy-like escapist fantasy. He's a narcissistic monster, a kidnapper and serial killer of infants who is so far detached from reality that his own shadow can't keep up. Barrie's story is a metaphor designed to introduce toddlers to the horror of a sibling's death. And I was looking at it in this light when I realized, "hey, what if Peter survived the teind of infant mortality, only to grow up under the dictatorship of the New Management?" This led me down all sorts of rabbit holes, only some of which are explored in Dead Lies Dreaming. The nerdish world-building impulse took over: it turns out that civilian life under the rule of N'yar lat-Hotep, the Black Pharaoh (in his current incarnation as Fabian Everyman MP), is harrowing and gruesome in its own right—there's a Tzompantli on Marble Arch: indications that Lovecraft's Elder Gods were worshipped under other names by other cultures: oligarchs and private equity funds employ private armies: and Brexit is still happening—but nevertheless, ordinary life goes on. There are jobs for cycle couriers, administrative assistants, and ex-detective constables-turned-security guards. People still need supermarkets and high street banks and toy shops. The displays of severed heads on the traffic cameras on the M25 don't stop drivers trying to speed. Boys who never grew up are still looking for a purpose in life, at risk of their necks, while their big sisters try to save them. And so on. 
Dead Lies Dreaming is the first of the Tales of the New Management, which are being positioned as a continuation of the Laundry Files (because Marketing). There will be more. A second novel, In His House, already exists in first draft. It's a continuation of the story, remixed with Sweeney Todd and Mary Poppins—who in the original form is, like Peter Pan, much more sinister than the Disney whitewash suggests. A third novel, Bones and Nightmares, is planned. (However, I can't give you a publication date, other than to say that In His House can't be published before late 2022: COVID19 has royally screwed up publishers' timetables.) Anyway, you probably realized that instead of riffing off classic British spy thrillers or urban fantasy tropes, I'm now perverting beloved childhood icons for my own nefarious purposes—and I'm having a gas. Let's just hope that the December of 2016 in which Dead Lies Dreaming is set doesn't look impossibly utopian and optimistic by the time we get to the looming and very real December of 2020! I really hate it when reality front-runs my horror novels ... Worse Than Failure — CodeSOD: On the Creation Understanding the Gang of Four design patterns is a valuable bit of knowledge for a programmer. Of course, instead of understanding them, it sometimes seems like most design pattern fans just… use them. Sometimes- often- overuse them. The Java Spring Framework infamously has classes with names like SimpleBeanFactoryAwareAspectInstanceFactory. Whether that's a proper use of patterns and naming conventions is a choice I leave to the reader, but boy do I hate looking at it. The GoF patterns break down into four major categories: Behavioral, Structural, Concurrency, and Creational patterns. The Creational category, as the name implies, is all about code which can be used to create instances of objects, like that Factory class above. It is a useful collection of patterns for writing reusable, testable, and modular code. 
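To see what a creational pattern buys you when it actually does work, here is a minimal sketch (class and method names invented for illustration) of a static factory that centralizes validation and initialization rather than merely wrapping a constructor:

```java
// Invented example: a static factory that earns its keep by validating
// input and producing a fully initialized, immutable object.
public class Message {
    private final String type;
    private final String body;

    // Private constructor: the factory is the only way in.
    private Message(String type, String body) {
        this.type = type;
        this.body = body;
    }

    public static Message create(String type, String body) {
        if (type == null || type.isEmpty()) {
            throw new IllegalArgumentException("type is required");
        }
        // Normalize a null body so callers never see one.
        return new Message(type, body == null ? "" : body);
    }

    public String getType() { return type; }
    public String getBody() { return body; }
}
```

A factory that does none of this — no validation, no initialization, no polymorphism — is just a longer way to spell new.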
Most Dependency Injection/Inversion of Control frameworks are really just applied creational patterns. It also means that some people decide that "directly invoking constructors is considered harmful". And that's why Emiko found this Java code:

/**
 * Creates an empty {@link MessageCType}.
 * @return {@link MessageCType}
 */
public static MessageCType createMessage() {
    MessageCType retVal = new MessageCType();
    return retVal;
}

This is just a representative method; the code was littered with piles of these. It'd be potentially forgivable if they also used a fluent interface with method chaining to initialize the object, buuut… they don't. Literally, this ends up getting used like:

MessageCType msg = MessageCType.createMessage();
msg.type = someMessageType;
msg.body = …

Emiko sums up: At work, we apparently pride ourselves in using the most fancyful patterns available. Krebs on Security — Google Mending Another Crack in Widevine For the second time in as many years, Google is working to fix a weakness in its Widevine digital rights management (DRM) technology used by online streaming sites like Disney, Hulu and Netflix to prevent their content from being pirated. The latest cracks in Widevine concern the encryption technology's protection for L3 streams, which is used for low-quality video and audio streams only. Google says the weakness does not affect L1 and L2 streams, which encompass more high-definition video and audio content. “As code protection is always evolving to address new threats, we are currently working to update our Widevine software DRM with the latest advancements in code protection to address this issue,” Google said in a written statement provided to KrebsOnSecurity.
In January 2019, researcher David Buchanan tweeted about the L3 weakness he found, but didn't release any proof-of-concept code that others could use to exploit it before Google fixed the problem. This latest Widevine hack, however, has been made into an extension for Microsoft Windows users of the Google Chrome web browser and posted for download on the software development platform GitHub. Tomer Hadad, the researcher who developed the browser extension, said his proof-of-concept code “was done to further show that code obfuscation, anti-debugging tricks, whitebox cryptography algorithms and other methods of security-by-obscurity will eventually be defeated anyway, and are, in a way, pointless.” Google called the weakness a circumvention that would be fixed. But Hadad took issue with that characterization. “It's not a bug but an inevitable flaw because of the use of software, which is also why L3 does not offer the best quality,” Hadad wrote in an email. “L3 is usually used on desktops because of the lack of hardware trusted zones.” Media companies that stream video online using Widevine can select different levels of protection for delivering their content, depending on the capabilities of the device requesting access. Most modern smartphones and mobile devices support much more robust L1 and L2 Widevine protections that do not rely on L3. Further reading: Breaking Content Protection on Streaming Websites Planet Debian — Molly de Blanc: Digital Self When we talk about the digital self, we are talking about the self as it exists within digital spaces. This holds differently for different people, as some of us prefer to live within a pseudonymous or anonymous identity online, divested from our physical selves, while others consider the digital a more holistic identity that extends from the physical. Your digital self is gestalt, in that it exists across whatever mediums, web sites, and services you use.
These bits are pieced together to form a whole picture of what it means to be you, or some aspect of you. This may be carefully curated, or it may be an emergent property of who you are. Just as your physical self has rights, so too does your digital self. Or, perhaps, it would be more accurate to say that your rights extend to your digital self. I do not personally consider that there is a separation between these selves when it comes to rights, as both are aspects of you and you have rights. I am explicitly not going to list what these rights are, because I have my own ideas about them and yours may differ. Instead, I will briefly talk about consent. I think it is essential that we genuinely consent to how others interact with us to maintain the sanctity of our selves. Consent is necessary to the protection and expression of our rights, as it ensures we are able to rely on our rights and creates a space where we are able to express our rights in comfort and safety. We may classically think of consent as it relates to sex and sexual consent: only we have the right to determine what happens to our bodies; no one else has the right to that determination. We are able to give sexual consent, and we are able to revoke it. Sexual consent, in order to be in good faith, must be requested and given from a place of openness and transparency. For this, we discuss with our partners the things about ourselves that may impact their decision to consent: we are sober; we are not ill; we are using (or not) protection as we agree is appropriate; we are making this decision because it is a thing we desire, rather than a thing we feel we ought to do or are being forced to do; as well as other topics. These things also all hold true for technology and the digital spaces in which we reside. Our digital autonomy is not the only thing at stake when we look at digital consent.
The ways we interact in digital spaces impact our whole selves, and exploitation of our consent likewise impacts our whole selves. Private information appearing online can have material consequences — it can directly lead to safety issues, like stalking or threats, and it can lead to a loss of psychic safety and have a chilling effect. These are in addition to the threats posed to digital safety and well-being. Consent must be actively sought, what one is consenting to must be transparent, and the potential consequences must be known and understood. In order to protect and empower the digital self, to treat everyone justly and with respect, we must hold the digital self as sacrosanct as other aspects of the self and treat it accordingly. LongNow — How Long-term Thinking Can Help Earth Now With half-lives ranging from 30 to 24,000 years (or even 16 million), the radioactive elements in nuclear waste defy our typical operating time frames. The questions around nuclear waste storage — how to keep it safe from those who might wish to weaponize it, where to store it, by what methods, for how long, and with what markings, if any, to warn humans who might stumble upon it thousands of years in the future — require long-term thinking. These questions brought the anthropologist Vincent Ialenti to Finland's Olkiluoto nuclear waste repository in 02012, where he began more than two years of field work with a team of experts seeking to answer them. Ialenti's goal was to examine how these experts conceived of the future: What sort of scientific ethos, I wondered, do Safety Case experts adopt in their daily dealings with seemingly unimaginable spans of time? Has their work affected how they understand the world and humanity's place within it? If so, how? If not, why not? Ialenti has crystallized his learnings in his new book, Deep Time Reckoning: How Future Thinking Can Help Earth Now.
It is a book of extraordinary insight and erudition, thoughtful and lucid throughout its surprisingly light 180 pages. Long Now's Director of Development, Nicholas Paul Brysiewicz, recently posed a few questions to Ialenti about the genesis of his book; the "deflation of expertise" in North America, Western Europe and beyond; conceiving long-term thinking as a kind of exercise for the mind; and more. Vincent, thanks for sending over a copy of your new book. And thanks for making time to unpack some of those ideas with me here for The Long Now Foundation and our members around the globe. I'm curious: what drew you to study long-term thinking in the wild? Vincent Ialenti: In 02008, I was a Master's student in "Law, Anthropology, and Society" at the London School of Economics. I had a growing interest in long-term engineering projects like the Svalbard Global Seed Vault and the Clock of the Long Now. I decided to write my thesis (now published here) on the currently defunct U.S. Yucca Mountain nuclear waste repository's million-year federal licensing procedure. The licensing procedure stretched legal adjudication's foundational rubric of "fact, rule, and judge" into distant futures. The U.S. Department of Energy developed quantitative models of million-year futures as facts. The U.S. Environmental Protection Agency defined multi-millennial radiological dose-limit standards as rules. The U.S. Nuclear Regulatory Commission subsumed the DOE's facts under the EPA's rules to produce a judgment on whether the repository could accept waste. In media commentaries, the Yucca Mountain project was enchanted with aesthetics of high modernist innovation and sci-fi futurology. Yet its decision-making procedure was grounded in an ancient legal procedural bedrock — rubrics formulated as far back as the Roman Empire. I grew fascinated by this temporal mashup. To delve deeper, though, I knew I'd have to conduct long-term fieldwork. I enrolled in a PhD program at Cornell University.
With the help of a U.S. National Science Foundation grant, I spent 32 months among Finland's Olkiluoto nuclear waste repository Safety Case experts from 02012–02014. These experts developed models of geological, hydrological, and ecological events that could occur in Western Finland over the coming tens of thousands — or even hundreds of thousands — of years. They reckoned with distant future glaciations, climate changes, earthquakes, floods, human and animal population changes, and more. I ended up recording 121 interviews. Those became the basis of my recent book. Early on in the book you introduce and discuss "the deflation of expertise." Could you tell us what you mean by this phrase and how you see it shaping long-termism? We're witnessing a troubling institutional erosion of expert authority in North America, Western Europe, and beyond. Vaccine science, stem cell research, polling data, climate change models, critical social theories, cell phone radiation studies, pandemic disease advisories, and human evolution research are routinely attacked in cable news free-for-alls and social media echo-chambers. Political power is increasingly gained through charismatic, populist performances that energize crowds by mocking expert virtues of cautious speech and detail-driven analysis. Expert voices are drowned out by Twitter mobs, dulled by corporate-bureaucratic "knowledge management" red tape, exhausted by universities' adjunctification trends, warped by contingent "gig economy" research funding contracts, and rushed by publish-or-perish productivity pressures. As enthusiasm for liberal arts education and scientific inquiry declines, societies enter into a state of futurelessness: they develop a manic fixation on the present moment that incessantly shoots down proposals for envisioning better worlds.
My book argues that anthropological engagement with Finland's nuclear waste experts can help us (a) widen our thinking's time horizons during a moment of ecological crisis some call the Anthropocene and (b) resist the deflation of expertise by opening our ears to a uniquely longsighted form of expert inquiry. I was heartened each time you pointed to the importance of multidisciplinarity, discursive diversity, and strategic redundancy for doing things at long timescales. Our experience has borne this out, as well. How did you arrive at an emphasis on these themes? What are some of the pitfalls of homogeneity? What about generational homogeneity? The Safety Case project convened several teams of experts — each with different disciplinary backgrounds — to pursue what my informants called "multiple lines of reasoning." Some were systems analysts developing models of how different kinds of radionuclides could "migrate" through future underground water channels. Others were engineers reporting on the mechanical strength tests conducted on Finland's copper nuclear waste canisters. Some were geologists making analogies that compared Olkiluoto's far-future Ice Age conditions to those of a glacial ice sheet found in Greenland today. Others studied "archaeological analogues." This meant comparing future repository components to ancient Roman iron nails found in Scotland, to a bronze cannon from the 17th century Swedish warship Kronan, and to a 2,100-year-old cadaver preserved in clay in China. Still others wrote prose scenarios with titles like The Evolution of the Repository System Beyond a Million Years in the Future. The Safety Case encompassed multiple disciplinary sensibilities to ensure that one potentially inaccurate assumption wouldn't invalidate the wider range of forecasts. For me, this holistic ethos was a refreshing counterpoint to the reductionist, homogeneous scientism that led us to many of today's ecological crises.
Certainly, it is important to hedge against generational homogeneity in thought too. Finland's nuclear waste experts planned to release updated versions of the Safety Case throughout the coming century. They called these successive versions "iterations." The first major report was 01985's "Safety Analysis of Disposal of Spent Nuclear Fuel: Normal and Disturbed Evolution Scenarios." The iteration I studied was the 02012 Construction License Safety Case. The next iteration will be the Operating License Safety Case, scheduled for submission to Finland's nuclear regulator STUK in late 02021. Each iteration is, ideally, supposed to be more robust than the one before it. The Safety Case is an intergenerational work-in-progress. Throughout the work you describe long-term thinking as an imaginative exercise — a "calisthenics for the mind." This suggestion floored me when I read it. I just couldn't agree more. And earlier this year, prior to reading your book, I even published an essay arguing for that same interpretation. What do you think makes exercise such an apt metaphor for understanding this phenomenon we're discussing? We all need to integrate a more vivid awareness of deep time into our everyday habits, actions, and intuitions. We need to override the shallow time discipline into which we've been enculturated. This requires self-discipline and ongoing practice. I believe putting aside time to do long-termist intellectual workouts or deep time mental exercise routines can help get us there. Here's an example. From 02017 to 02020, I was a researcher at George Washington University in Washington DC. In 01922, fossilized ~100,000-year-old bald cypress trees were found just twenty feet below the nearby city surface. A hundred thousand years ago, the site of America's capital city was a literal swamp. Today, four bald cypresses, planted in the mid-1800s, grow in Lafayette Square right near the White House. I approached them as intellectual exercise equipment for stretching my mind across time.
The cypresses provided me with tree imageries I could draw upon when re-imagining the U.S. capital as a prehistoric swamp. Here's another example. I sometimes headed west to hike in West Virginia. Hundreds of millions of years ago, Appalachia was home to much taller mountains. Some say their elevations rivaled those of today's Rockies, Alps, or Himalayas. I tried to discipline my imagination, while hiking, into reimagining the hills in a wider temporal frame. I drew on the images I had in my head of what taller mountain ranges look like today. This helped me stretch the momentary "now" of my hike by endowing it with a deeper history and future. Anyone can do these long-termist exercises. A person in Bangladesh, New York, Rio de Janeiro, Osaka, or Shanghai could, for instance, try imagining their area submerged by, or fighting off, future sea level rise. But what inspired me to integrate these exercises into my own life? Well, it was — again — my fieldwork among Finland's nuclear waste experts. I modeled these exercises on the Safety Case's natural and archaeological analogue studies. The key was to (a) make an analogical connection between one's immediate surroundings and a dramatically long-term future or past and then (b) try to envision it as accurately as possible by drawing, analogically, from scientific information and imageries we already have in our heads of real-world locales out there today. What was the most surprising thing you discovered while working on this book? Early on, I decided to end each chapter with five or six takeaway lessons in long-termism. I call these lessons "reckonings." As I wrote, however, I was surprised to discover that, even as I engaged with very alien far future Finlands, most of the "reckonings" I collected ended up pertaining to some of the most ordinary features of everyday experience.
These include the power of analogy (Chapter 1), the power of pattern-making (Chapter 2), the power of shifting and re-shifting perspectives (Chapter 3), and the problem of human mortality (Chapter 4). I found that these familiarities can be useful. Their sheer relatability can serve as a jumping-off point for the rest of us as we pursue long-termist learning. The analogical exercises I mentioned previously are a good example of this. It's been so exciting for us to see this next generation of long-term thinkers publishing excellent new books on the topic — from your penetrating work in Deep Time Reckoning to Long Now Seminar Speaker Bina Venkataraman's encompassing work in The Optimist's Telescope to Long Now Seminar Speaker (and author of the foreword to your book) Marcia Bjornerud's geological work in Timefulness to Long Now Research Fellow Roman Krznaric's philosophical work in The Good Ancestor. What role do you think books play in helping the world think long-term? Those are important books! I'll add a few more: David Farrier's Footprints: In Search of Future Fossils, Richard Irvine's An Anthropology of Deep Time, and Hugh Raffles' Book of Unconformities: Speculations on Lost Time. These pose crucial questions about time, geology, and human (and non-human) imagination. An argument could be made that there's sort of a diffuse, decentralized, interdisciplinary "deep time literacy" movement coalescing (mostly on its own!). This is urgent work. Earlier this year, the Trump Administration advanced a proposal to reform key National Environmental Policy Act regulations to read: "effects should not be considered significant if they are remote in time, geographically remote, or the product of a lengthy causal chain." This is out of sync with our mission to become more mindful of far future worlds.
I won’t speak for the others, but my hope is that these books can inch us closer to escaping the virulent short-termism that our current ecological woes, deflations of expertise, and political crises exploit and reinforce. In closing, I’ll mention that, thanks to you, I now know there is no direct way for me to say “Someday we will have an in-person discussion about all this at The Interval” in Finnish because Finnish does not have a future tense. And yet here we are, discussing Finnish expertise at thinking about the future. What’s up with that? Hah! Yes, that’s right. There’s no future tense in Finnish. Finns tend to use the present tense instead. There is something sensible about this: all visions of the future are, indeed, tethered to the present moment. Marcia Bjornerud cleverly linked this linguistic quirk to my book’s broader arguments: “There is some irony in studying Finns as exemplars of future thinkers: as Ialenti points out, the Finnish language has no future tense. Instead, either present tense or conditional mode verbs are used, which seems a rather oblique way of speaking of times to come. But this linguistic treatment of the future may reflect a deep wisdom in Finnish culture that informs the philosophy of the Safety Case. Making declarative pronouncements about the future is imprudent; the best that can be done is to envisage a spectrum of possible futures and develop a sense for how likely each is to unfold.” From all of us at and around The Long Now Foundation: thank you for your time and expertise, Vincent. And thank you! I’ll get back to reading your essay on long-termist askēsis. Keep up the great work. Learn More • Read Long Now Editor Ahmed Kabil’s 02017 feature on the nuclear waste storage problem. • Read Vincent Ialenti’s 02016 essay for Long Now, “Craters & Mudrock: Tools for Imagining Distant Finlands.” • Watch Ralph Cavanagh and Peter Schwartz’s 02006 Long Now Seminar on Nuclear Power, Climate Change, and the Next 10,000 Years. 
Planet Debian — Reproducible Builds: Second Reproducible Builds IRC meeting After the success of our previous IRC meeting, we are having our second IRC meeting today, Monday 26th October, at 18:00 UTC: • 11:00am San Francisco. • 2:00pm New York. • 6:00pm London. • 7:00pm Paris/Berlin. • 11:30pm Delhi. • 2:00am Beijing (+1 day). Please join us on the #reproducible-builds channel on irc.oftc.net — an agenda is available. As mentioned in our previous meeting announcement, due to the unprecedented events in 2020, there will be no in-person Reproducible Builds event this year, but we plan to run these IRC meetings every fortnight. Worse Than Failure — Serial Problems If we presume there is a Hell for IT folks, we can only assume the eternal torment involves configuring or interfacing with printers. Perhaps Anabel K is already in Hell, because that describes her job. Anabel's company sells point-of-sale tools, including receipt printers. Their customers are not technical, so a third-party installer handles configuring those printers in the field. To make the process easy and repeatable, Anabel maintains an app which automates the configuration process for the third party. The basic flow is like this: The printer gets shipped to the installer. At their facility, someone from the installer opens the box, connects the printer to power, and then to the network. They then run the app Anabel maintains, which connects to the printer's on-board web server and POSTs a few form-data requests. Assuming everything works, the app reports success. The printer goes back in the box and is ready to get installed at the client site at some point in the near future. The whole flow was relatively painless until the printer manufacturer made a firmware change. Instead of the username/password being admin/admin, it was now admin/serial-number.
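Anabel's eventual workaround, described below, falls back to querying a TCP service on the printer for its serial number. A rough, self-contained sketch of that kind of query, with invented details (the story never gives the real command string or port, so `GET_SERIAL` and the in-process mock "printer" are illustration only):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SerialNumberClient {
    // Connect to the device's TCP service, send one command line, read one line back.
    // "GET_SERIAL" is a placeholder; the real firmware command is not documented here.
    static String querySerial(String host, int port, String command) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(command);
            return in.readLine(); // the service replies with the serial number
        }
    }

    public static void main(String[] args) throws Exception {
        // In-process mock "printer" so the sketch runs without hardware.
        try (ServerSocket server = new ServerSocket(0)) {
            Thread printer = new Thread(() -> {
                try (Socket c = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                     PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                    if ("GET_SERIAL".equals(in.readLine())) {
                        out.println("SN-123456789");
                    }
                } catch (IOException ignored) {}
            });
            printer.start();
            String serial = querySerial("127.0.0.1", server.getLocalPort(), "GET_SERIAL");
            System.out.println(serial); // prints "SN-123456789"
            printer.join();
        }
    }
}
```

In the app's actual flow, a query like this would only run after an admin/admin login attempt failed, with the returned serial number used as the password on the retry.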
No one was interested in having the installer techs key in the long serial number, but digging around in the documentation, Anabel found a simple fix. In addition to the on-board web-server, there was also a TCP port running. If you connected to the port and sent the correct command, it would reply with the serial number. Anabel made the appropriate changes. Now, her app would try to authenticate as admin/admin, and if it failed, it'd open a TCP connection, query the serial number, and then try again. Anabel grabbed a small pile of printers from storage, a mix of old and new firmware, loaded them up with receipt paper, and ran the full test suite to make sure everything still worked. Within minutes, they were all happily churning out test prints. Anabel released her changes to production, and off it went to the installer technicians. A few weeks later, the techs called in through support, in an absolute panic. "The configuration app has stopped working. It doesn't work on any of the printers we received in the past few weeks." There was a limited supply of the old version of printers, and dozens got shipped out every day. If this didn't get fixed ASAP, they would very quickly find themselves with a pile of printers the installers couldn't configure. Management got on conference calls, roped Anabel into the middle of long email chains, and they all agreed: there must be something wrong with Anabel's changes. It wasn't unreasonable to suspect, but Anabel had tested it thoroughly. Heck, she had a few of the new printers on her desk and couldn't replicate the failure. So she got on a call with a tech and started from square one. Is it plugged in? Is it plugged into the network? Are there any restrictions on the network, or on the machine running the app, that might prevent access to non-standard ports? Over the next few days, while the stock of old printers kept dwindling, this escalated to sending a router with a known configuration out to the technicians.
It was just to ensure that there were no hidden firewalls or network policies preventing access to the TCP port. Even still, on its own dedicated network, nothing worked. "Okay, let's double check the printer's network config," Anabel said on the call. "When it boots up, it should print out its network config: IP, subnet, gateway, DHCP config, all of that. What does that say?" The tech replied, "Oh. We don't have paper in it. One sec." While rooting around in the box, they added, "We don't normally bother. It's just one more thing to unbox before putting it right back in the box." The printer booted up, faithfully printed out its network config, which was what everyone expected. "Okay, I guess… try running the tool again?" Anabel suggested. And it worked. Anabel turned to one of the test printers she had been using, and pulled out the roll of receipt paper. She ran the configuration tool… and it failed. The TCP service only worked when there was paper in the printer. Anabel reported it as a bug to the printer vendor, but if and when that gets fixed is anyone's guess. The techs didn't want to have to fully unbox the printers, including the paper, for every install, but that was an easy fix: with each shipment of printers Anabel's company just started shipping a few packs of receipt paper for the techs. They can just crack one open and use it to configure a bunch of printers before it runs out. Cryptogram — IMSI-Catchers from Canada Gizmodo is reporting that Harris Corp. is no longer selling Stingray IMSI-catchers (and, presumably, its follow-on models Hailstorm and Crossbow) to local governments: L3Harris Technologies, formerly known as the Harris Corporation, notified police agencies last year that it planned to discontinue sales of its surveillance boxes at the local level, according to government records.
Additionally, the company would no longer offer access to software upgrades or replacement parts, effectively slapping an expiration date on boxes currently in use. Any advancements in cellular technology, such as the rollout of 5G networks in most major U.S. cities, would render them obsolete. The article goes on to talk about replacement surveillance systems from the Canadian company Octasic. Octasic's Nyxcell V800 can target most modern phones while maintaining the ability to capture older GSM devices. Florida's state police agency described the device, made for in-vehicle use, as capable of targeting eight frequency bands including GSM (2G), CDMA2000 (3G), and LTE (4G). […] A 2018 patent assigned to Octasic claims that Nyxcell forces a connection with nearby mobile devices when its signal is stronger than the nearest legitimate cellular tower. Once connected, Nyxcell prompts devices to divulge their signal strength relative to nearby cell towers. These reported signal strengths (intra-frequency measurement reports) are then used to triangulate the position of a phone. Octasic appears to lean heavily on the work of Indian engineers and scientists overseas. A self-published biography of the company notes that while the company is headquartered in Montreal, it has "R&D facilities in India," as well as a "worldwide sales support network." Nyxcell's website, which is only a single page requesting contact information, does not mention Octasic by name. Gizmodo was, however, able to recover domain records identifying Octasic as the owner. Cory Doctorow — Someone Comes to Town, Someone Leaves Town (part 20) Here's part twenty of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here). This is easily the weirdest novel I ever wrote.
Gene Wolfe (RIP) gave me an amazing quote for it: "Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you've ever read." Here's how my publisher described it when it came out: Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off. Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls. Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge. Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan's past won't leave him alone—and Davey isn't the only one gunning for him and his friends. Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow's Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read. Planet Debian — Marco d'Itri: RPKI validation with FORT Validator This article documents how to install FORT Validator (an RPKI relying party software which also implements the RPKI to Router protocol in a single daemon) on Debian 10 to provide RPKI validation to routers. If you are using testing or unstable then you can just skip the part about apt pinnings.
The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

cat <<END > /etc/apt/sources.list.d/bullseye.list
deb http://deb.debian.org/debian/ bullseye main
END

cat <<END > /etc/apt/preferences.d/pin-rpki
# by default do not install anything from bullseye
Package: *
Pin: release n=bullseye
Pin-Priority: 100

Package: fort-validator rpki-trust-anchors
Pin: release n=bullseye
Pin-Priority: 990
END

apt update

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked about it at installation time; either way is fine.

echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
| debconf-set-selections

Install the package as usual:

apt install fort-validator

You may also install rpki-client and gortr on Debian 10, or maybe cfrpki and gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the good packaging practices of Linux distributions.
The packages in bullseye (Debian testing) can be installed as is on Debian stable with no need to rebuild them, by configuring an appropriate pinning for apt:

    cat <<END > /etc/apt/sources.list.d/bullseye.list
    deb http://deb.debian.org/debian/ bullseye main
    END

    cat <<END > /etc/apt/preferences.d/pin-rpki
    # by default do not install anything from bullseye
    Package: *
    Pin: release bullseye
    Pin-Priority: 100

    Package: gortr rpki-client rpki-trust-anchors
    Pin: release bullseye
    Pin-Priority: 990
    END

    apt update

Before starting, make sure that curl (or wget) and the web PKI certificates are installed:

    apt install curl ca-certificates

If you already know about the legal issues related to the ARIN TAL then you may instruct the package to automatically install it. If you skip this step then you will be asked about it at installation time; either way is fine.

    echo 'rpki-trust-anchors rpki-trust-anchors/get_arin_tal boolean true' \
        | debconf-set-selections

Install the packages as usual:

    apt install rpki-client gortr

And then configure rpki-client to generate its output in the JSON format needed by gortr:

    echo 'OPTIONS=-j' > /etc/default/rpki-client

You may manually start the service unit to immediately generate the data instead of waiting for the next timer run:

    systemctl start rpki-client &

gortr too needs to be configured to use the JSON data generated by rpki-client:

    echo 'GORTR_ARGS=-bind :323 -verify=false -checktime=false -cache /var/lib/rpki-client/json' > /etc/default/gortr

And then it needs to be restarted to use the new configuration:

    systemctl restart gortr

You may also install FORT Validator on Debian 10, or maybe cfrpki with gortr. I have also tried packaging Routinator 3000 for Debian, but this effort is currently on hold because the Rust ecosystem is broken and hostile to the packaging practices of Linux distributions.
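With the RTR server listening on port 323, the remaining step is pointing a router at it. As a sketch (not part of the original article), this is roughly what a BIRD 2 configuration consuming the validated ROAs might look like; the table and protocol names here are arbitrary:

```
# Hypothetical BIRD 2 configuration: fetch validated ROAs over RTR
roa4 table r4;
roa6 table r6;

protocol rpki validator1 {
        roa4 { table r4; };
        roa6 { table r6; };
        remote "127.0.0.1" port 323;
        retry keep 90;
        refresh keep 900;
        expire keep 172800;
}
```

The ROA tables can then be consulted from an import filter (e.g. via roa_check()) to reject RPKI-invalid announcements.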
Charles Stross — Introducing a new guest blogger: Sheila Williams

It's been ages since I last hosted a guest blogger here, but today I'd like to introduce you to Sheila Williams, who will be talking about her work next week. Normally my guest bloggers are other SF/F authors, but Sheila is something different: she's the multiple Hugo-Award-winning editor of Asimov's Science Fiction magazine. She is also the winner of the 2017 Kate Wilhelm Solstice Award for distinguished contributions to the science fiction and fantasy community.

Sheila started at Asimov's in June 1982 as the editorial assistant. Over the years, she was promoted to a number of different editorial positions at the magazine, and she also served as the executive editor of Analog from 1998 until 2004. With Rick Wilber, she is also the co-founder of the Dell Magazines Award for Undergraduate Excellence in Science Fiction and Fantasy. This annual award has been bestowed on the best short story by an undergraduate student at the International Conference on the Fantastic since 1994. In addition, Sheila is the editor or co-editor of twenty-six anthologies. Her newest anthology, Entanglements: Tomorrow's Lovers, Families, and Friends, is the 2020 volume of the Twelve Tomorrows series. The book is just out from MIT Press.

Charles Stross — Entanglements!

Many thanks to Charlie for giving me the chance to write about editing and my latest project. I'm very excited about the publication of Entanglements. The book has received a starred review from Publishers Weekly and terrific reviews in Lightspeed, Science, and the Financial Times. MIT Press has created a very nice "Pubpub" page about Entanglements, with information about the book and its various contributors. The "On the Stories" section has an essay by Nick Wolven about his amazing story, "Sparkly Bits," and a fun Zoom conversation with James Patrick Kelly, Nancy Kress, and Sam J. Miller.
I think the site is well worth checking out, and here's the Pubpub description of the book:

Science fiction authors offer original tales of relationships in a future world of evolving technology. In a future world dominated by the technological, people will still be entangled in relationships--in romances, friendships, and families. This volume in the Twelve Tomorrows series considers the effects that scientific and technological discoveries will have on the emotional bonds that hold us together. The strange new worlds in these stories feature AI family therapy, floating fungitecture, and a futuristic love potion. A co-op of mothers attempts to raise a child together, lovers try to resolve their differences by employing a therapeutic sexbot, and a robot helps a woman dealing with Parkinson's disease. Contributions include Xia Jia's novelette set in a Buddhist monastery, translated by the Hugo Award-winning writer Ken Liu; a story by Nancy Kress, winner of six Hugos and two Nebulas; and a profile of Kress by Lisa Yaszek, Professor of Science Fiction Studies at Georgia Tech. Stunning artwork by Tatiana Plakhova--"infographic abstracts" of mixed media software--accompanies the texts.

Planet Debian — Dirk Eddelbuettel: digest 0.6.27: Build fix

Exactly one week after the previous release 0.6.26 of digest, a minor cleanup release 0.6.27 just arrived on CRAN and will go to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at one million monthly downloads, 282 direct reverse dependencies and 8068 indirect reverse dependencies, or just under half of CRAN) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
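The content-keyed caching that digest enables in R can be sketched in Python with only the standard library; the function names here are hypothetical illustrations, not part of digest:

```python
import hashlib
import pickle

_cache = {}

def content_key(obj):
    """Serialize an arbitrary object and hash the bytes,
    analogous to digest::digest() hashing arbitrary R objects."""
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

def cached(func, arg):
    """Recompute func(arg) only when arg's content hash has not been seen."""
    key = (func.__name__, content_key(arg))
    if key not in _cache:
        _cache[key] = func(arg)
    return _cache[key]
```

Two structurally equal objects hash to the same key, so a repeated call with an equal (not merely identical) argument is served from the cache instead of being recomputed.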
Release 0.6.26 brought support for the (nice, even cryptographic) blake3 hash algorithm. In the interest of broader buildability we had already (with a sad face) disabled a few very hardware-specific implementation aspects using intrinsic ops. But to our chagrin, we left one #error define that raised its head on everybody’s favourite CRAN build platform. Darn. So 0.6.27 cleans that up and also removes the check and #error as … all the actual code was already commented out. If you read this and tears start running down your cheeks, then by all means come and help us bring blake3 to its full (hardware-accelerated) potential. This (probably) only needs a little bit of patient work with the build options and configurations. You know where to find us…

My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian — Sandro Tosi: Multiple git configurations depending on the repository path

For my work on Debian, I want to use my debian.org email address, while for my personal projects I want to use my gmail.com address. One way to change the user.email git config value is to run git config --local in every repo, but that's tedious, error-prone and doesn't scale very well with many repositories (and the chances of forgetting to set the right one on a new repo are ~100%).
The solution is to use the git-config ability to include extra configuration files based on the repo path, by using includeIf.

Content of ~/.gitconfig:

    [user]
        name = Sandro Tosi
        email = <personal.address>@gmail.com

    [includeIf "gitdir:~/deb/"]
        path = ~/.gitconfig-deb

Every time the git path is in ~/deb/ (which is where I have all my Debian repos) the file ~/.gitconfig-deb will be included; its content:

    [user]
        email = morph@debian.org

That results in my personal address being used on all repos not part of Debian, where I use my Debian email address. This approach can be extended to any other git configuration value.

Sam Varghese — Australian sports writer’s predictions prove to be those of a false prophet

After the first match in the Bledisloe Cup series ended in a 16-all draw, Australian sports writers were on a giddy high, predicting that the dominance of the All Blacks had more or less ended and the big boys had been caught with their pants down.

Well before this hype began, at the end of the game, there was a gesture by the Australian team which showed that its mental state was still very fragile. When the final whistle blew, the ball was still live, so the referee let play proceed. A thrilling nine minutes ensued, with first Australia, and then New Zealand, threatening to score. Strangely, though, neither team thought of attempting a drop-goal to win the game. After one of the New Zealand forays, the Australians regained the ball and fly-half James O’Connor kicked it into touch, ending the game. Now O’Connor could have continued play, by running the ball from his own end. The All Blacks never took the option of ending the game when they got the ball during that nine-minute stretch. O’Connor’s gesture gave the game away: for Australia, a draw was as good as a win. It indicated the extent to which he ranked his team against the All Blacks, despite the heroics they had shown.
With that kind of mental attitude, it was only to be expected that Australia would lose at Eden Park the following week. As they did, by a 27-7 margin, at a venue where they have not beaten New Zealand since 1986.

During the week, there were several triumphal essays in the Australian press; one, by Jamie Pandaram, a senior sports writer at The Daily Telegraph, gives an insight into the type of shallow understanding that sports writers on this side of the Tasman have when it comes to rugby union, and the nationalistic fervour that surrounds sport (as it does everything else). The headline gave an indication of the bombast that was to follow, reading: “Why All Blacks are finally vulnerable.” It started off by saying that while facing a beaten All Blacks team was a dangerous exercise, “there’s a different feel about 2020”. And he cited concerns about the coaching, selection, tactics and loss of seniority in All Black ranks as factors that had contributed to what he called a decline in their ranks. He cited the absence of four players – Kieran Read, Brodie Retallick, Sonny Bill Williams and Ryan Crotty – as making the team unable to produce those moments of inspiration for which they have become famous. And he claimed that Ian Foster, who took over as coach from Steve Hansen after the 2019 World Cup, was not the best coach among those who could be chosen.

As evidence that stress was allegedly mounting on New Zealand, Pandaram cited the complaints made by the assistant coach John Plumtree about illegal tactics employed by Australia in taking out players. The first game was unusual in that it was not refereed by a neutral referee – the first time this has happened in a long time, mainly due to the travel issues caused by the coronavirus pandemic.
The referee, New Zealand’s Paul Williams, had to tread a difficult path; he had to ensure that his rulings could not be criticised as being partial to his own country, while at the same time policing Australia’s thuggery properly without attracting accusations of bias. Having watched the match twice, I can say with confidence that Williams only erred once, in not calling Rieko Ioane for stepping on the sideline boundary when he began what ended up as a try. This was the fault of the Australian Angus Gardner, who was the linesman on the side concerned.

“It’s a common Kiwi play; turn the referees’ and public’s attention to perceived cheating by their opposition – we’ve seen them previously call out the Wallabies’ scrumming and breakdown play – to take the spotlight away from their own,” wrote Pandaram, completely forgetting that this was exactly what former Wallabies coach Michael Cheika did after every game.

He opined that Foster would be under “intense scrutiny” during the second game, as many people in New Zealand felt that the job of chief coach should have gone instead to Scott Robertson, who has taken the Crusaders to four Super Rugby titles in his first four years as their coach. And Pandaram went on and on, outlining what he perceived to be issues with the team, about playing this player and that in this position or that.

I haven’t seen anything he wrote after the second game, when Australia was competitive for just one half, and unable to score in the second half. All those perceived “problems” he pointed out were gone. One crucial factor that he forgot was that this was the first game for both teams this year. Normally, both Australia and New Zealand play two or three Tests before the games against each other, South Africa and Argentina – which make up the Bledisloe Cup and Rugby Championship each year – begin. Both teams were quite rusty.
In 2015, New Zealand lost more talent after the World Cup than they did in 2019; that time Daniel Carter, Richie McCaw, Ma’a Nonu, Conrad Smith, Tony Woodcock and Keven Mealamu all retired from international rugby. But the side picked up and carried on.

A lot of Pandaram’s moaning comes out of nationalism; Australians are the most one-sided sports writers I have seen. When one is shown up like this, they tend to lie quiet until the public forgets. As Pandaram is doing now.

Planet Debian — Jelmer Vernooij: Debian Janitor: Hosters used by Debian packages

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The Janitor knows how to talk to different hosting platforms. For each hosting platform, it needs to support the platform-specific API for creating and managing merge proposals. For each hoster it also needs to have credentials. At the moment, it supports the GitHub API, Launchpad API and GitLab API. Both GitHub and Launchpad have only a single instance; the GitLab instances it supports are gitlab.com and salsa.debian.org. This provides coverage for the vast majority of Debian packages that can be accessed using Git.

More than 75% of all packages are available on salsa - although in some cases, the Vcs-Git header has not yet been updated. Of the other 25%, the majority either does not declare where it is hosted using a Vcs-* header (10.5%), or has not yet migrated from alioth to another hosting platform (9.7%). A further 2.3% are hosted somewhere on GitHub (2%), Launchpad (0.18%) or GitLab.com (0.15%), in many cases in the same repository as the upstream code.
The remaining 1.6% are hosted on many other hosts, primarily people’s personal servers (which usually don’t have an API for creating pull requests).

Outdated Vcs-* headers

It is possible that the 20% of packages that do not have a Vcs-* header, or have a Vcs-* header that says they are on alioth, are actually hosted elsewhere. However, it is hard to know where they are until a version with an updated Vcs-Git header is uploaded. The Janitor primarily relies on vcswatch to find the correct locations of repositories. vcswatch looks at Vcs-* headers but has its own heuristics as well. For about 2,000 packages (6%) that still have Vcs-* headers that point to alioth, vcswatch successfully finds their new home on salsa.

Merge Proposals by Hoster

These proportions are also visible in the number of pull requests created by the Janitor on various hosters. The vast majority so far has been created on Salsa.

    Hoster               Open   Merged & Applied   Closed
    github.com             92                168        5
    gitlab.com             12                  3        0
    code.launchpad.net     24                 51        1
    salsa.debian.org    1,360              5,657      126

In this table, “Open” means that the pull request has been created but likely nobody has looked at it yet. Merged means that the pull request has been marked as merged on the hoster, and applied means that the changes have ended up in the packaging branch but via a different route (e.g. cherry-picked or manually applied). Closed means that the pull request was closed without the changes being incorporated. Note that this excludes ~5,600 direct pushes, all of which were to salsa-hosted repositories.

See also: For more information about the Janitor's lintian-fixes efforts, see the landing page.

Planet Debian — Dirk Eddelbuettel: RcppSpdlog 0.0.3: New features and much more docs

A good month after the initial two releases, we are thrilled to announce release 0.0.3 of RcppSpdlog. This brings us release 1.8.1 of spdlog as well as a few local changes (more below).
RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich.

This version of RcppSpdlog brings a new top-level function setLogLevel to control what events get logged, updates the main example to show this and to also make the R-aware logger the default logger, and adds both an extended vignette showing several key features and a new (external) package documentation site.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.3 (2020-10-23)

• New function setLogLevel with R accessor in exampleRsink example
• Updated exampleRsink to use default logger instance
• Upgraded to upstream release 1.8.1 which contains finalised upstream use to switch to REprintf() if R compilation detected
• Added new vignette with extensive usage examples, added compile-time logging switch example
• A package documentation website was added

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian — Birger Schacht: An Analysis of 5 Million OpenPGP Keys

In July I finished my Bachelor’s Degree in IT Security at the University of Applied Sciences in St. Poelten. During the studies I did some elective courses, one of which was about Data Analysis using Python, Pandas and Jupyter Notebooks. I found it very interesting to do calculations on different data sets and to visualize them.
Towards the end of the Bachelor I had to find a topic for my Bachelor Thesis, and as a long time user of OpenPGP I thought it would be interesting to do an analysis of the collection of OpenPGP keys that are available on the keyservers of the SKS keyserver network. So in June 2019 I fetched a copy of one of the key dumps of one of the keyservers (some keyservers publish these copies of their key database so people who want to join the SKS keyserver network can do an initial import). At that time the copy of the key database contained 5,499,675 keys and was around 12GB. Using the hockeypuck keyserver software I imported the keys into a PostgreSQL database. Hockeypuck uses a table called keys to store the keys, and in there the column doc stores the OpenPGP keys in JSON format (always with a data field containing the original unparsed data).

For the thesis I split the analysis into three parts, first looking at the Public Key packets, then analysing the User ID packets and finally studying the Signature packets. To analyse the respective packets I used SQL to export the data to CSV files and then used the pandas read_csv method to create a dataframe of the values. In a couple of cases I did some parsing before converting to a DataFrame to make the analysis step faster. The parsing was done using the pgpdump python library.

Together with my advisor I decided to submit the thesis to a journal, so we revised and compressed the whole paper, and the outcome was published in the Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA). I think the work gives some valuable insight into the development of the use of OpenPGP in the last 30 years. Looking at the public key packets we were able to compare the different public key algorithms and for example visualize how DSA was the most used algorithm until around 2010, when it was replaced by RSA. When looking at the less used algorithms a trend towards ECC based cryptography is visible.
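The pandas side of that workflow can be sketched as follows; the CSV columns and values here are made up for illustration and are not data from the thesis:

```python
import io
import pandas as pd

# Hypothetical extract of public key packets: creation year and algorithm ID
csv_data = io.StringIO("""year,algorithm
2008,17
2012,1
2015,1
2019,19
""")

df = pd.read_csv(csv_data)

# RFC 4880 public key algorithm IDs (1 = RSA, 3 = RSA Sign-Only, 17 = DSA, 19 = ECDSA)
names = {1: "RSA", 3: "RSA Sign-Only", 17: "DSA", 19: "ECDSA"}
df["algorithm"] = df["algorithm"].map(names)

# Count keys per year and algorithm: the basis for a popularity-over-time plot
counts = df.groupby(["year", "algorithm"]).size().unstack(fill_value=0)
print(counts)
```

On the real dataset the same groupby/unstack produces the table behind the DSA-to-RSA transition plot.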
What we also noticed was an increase of RSA keys with algorithm ID 3 (RSA Sign-Only), which are deprecated. When we took a deeper look at those keys we realized that most of them used a specific User ID string in the User ID packets, which allowed us to attribute those keys to two software projects, both using the Bouncy Castle Java Cryptographic API (or the Spongy Castle version for Android). We also stumbled over a tutorial on how to create RSA keys with Bouncy Castle, with code that produces RSA Sign-Only keys. In one of those projects, this was then fixed.

By looking at the User ID packets we did some statistics about the most used email providers among OpenPGP users. One domain stood out, because it is not the domain of an email provider: tellfinder.com is a domain used in around 45,000 keys. Tellfinder is a Big Data analysis software and the UID of all but two of those keys is TellFinder Page Archiver- Signing Key <support@tellfinder.com>.

We also looked at the comments used in OpenPGP User ID fields. In 2013 Daniel Kahn Gillmor published a blog post titled OpenPGP User ID Comments considered harmful in which he pointed out that most of the comments in the User ID field of OpenPGP keys duplicate information that is already present somewhere in the User ID or the key itself. In our dataset 3,133 comments were exactly the same as the name, 3,346 were the same as the domain and 18,246 comments were similar to the local part of the email address.

Last but not least we looked at the signature subpackets and the development of some of the preferences (Preferred Symmetric Algorithm, Preferred Hash Algorithm) that are being published using signature packets. Analysing this huge dataset of cryptographic keys of the last 20 to 30 years was very interesting and I learned a lot about the history of PGP and OpenPGP and the evolution of cryptography overall.
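The comment-vs-User-ID comparison can be sketched like this; the function and its classification labels are hypothetical illustrations, not code from the thesis:

```python
import re

def classify_comment(uid):
    """Classify an OpenPGP User ID comment as duplicating the name,
    the mail domain, or the local part of the address (per dkg's critique)."""
    m = re.match(
        r"^(?P<name>[^(<]*?)\s*(?:\((?P<comment>[^)]*)\))?\s*"
        r"<(?P<local>[^@>]+)@(?P<domain>[^>]+)>$",
        uid,
    )
    if not m or not m.group("comment"):
        return None  # unparseable UID, or no comment at all
    c = m.group("comment").lower()
    if c == m.group("name").strip().lower():
        return "same as name"
    if c == m.group("domain").lower():
        return "same as domain"
    if c == m.group("local").lower():
        return "same as local part"
    return "other"
```

Run over a dump of User ID strings, the counts of the first three categories roughly correspond to the figures above (the thesis also counted near-matches, which this exact-match sketch does not).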
I think it would be interesting to look at even more properties of OpenPGP keys, and I also think it would be valuable for the OpenPGP ecosystem if this kind of analysis could be done regularly. An approach like Tor Metrics could lead to interesting findings and could also help to back decisions regarding future developments of the OpenPGP standard.

Planet Debian — Enrico Zini: Hetzner build machine

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

Building Qt5 takes a long time. The build server I was using had plenty of CPUs and RAM, but was very slow on I/O. I was very frustrated by that, and I started evaluating alternatives. I ended up setting up scripts to automatically provision a throwaway cloud server at Hetzner.

Initial setup

I got an API key from my customer's Hetzner account. I installed hcloud-cli, currently only in testing and unstable:

    apt install hcloud-cli

Then I configured hcloud with the API key:

    hcloud context create

Spin up

I wrote a quick and dirty script to spin up a new machine, which grew a bit with little tweaks:

#!/bin/sh
# Create the server
hcloud server create --name buildqt --ssh-key … --start-after-create \
    --type cpx51 --image debian-10 --datacenter …

# Query server IP
IP="$(hcloud server describe buildqt -o json | jq -r .public_net.ipv4.ip)"

# Update ansible host file
echo "buildqt ansible_user=root ansible_host=$IP" > hosts

# Remove old host key
ssh-keygen -f ~/.ssh/known_hosts -R "$IP"

echo "ssh root@$IP" >> login
chmod 0755 login

I picked a datacenter in the same location as where we have other servers, to get quicker data transfers. I like that CLI tools have JSON output that I can cleanly pick at with jq. Sadly, my ISP doesn't do IPv6 yet. Since the server just got regenerated, I remove a possibly cached host key.

Provisioning the machine

One git server I need is behind HTTP authentication. Here's a quick hack to pass the relevant .netrc credentials to ansible before provisioning:

#!/usr/bin/python3
import subprocess
import netrc
import tempfile
import json

login, account, password = netrc.netrc().authenticators("…")

with tempfile.NamedTemporaryFile(mode="wt", suffix=".json") as fd:
    json.dump({
        "repo_user": login,
        "repo_password": password,
    }, fd)
    fd.flush()
    subprocess.run([
        "ansible-playbook",
        "-i", "hosts",
        "-l", "buildqt",
        "--extra-vars", f"@{fd.name}",
        "provision.yml",
    ], check=True)

And here's the ansible playbook:

#!/usr/bin/env ansible-playbook
- name: Install and configure buildqt
  hosts: all
  tasks:
   - name: Update apt cache
     apt:
        update_cache: yes
        cache_valid_time: 86400
   - name: Create build user
     user:
        name: build
        comment: QT5 Build User
        shell: /bin/bash
   - name: Create sources directory
     become: yes
     become_user: build
     file:
        path: ~/sources
        state: directory
        mode: 0755
   - name: Download sources
     become: yes
     become_user: build
     get_url:
        url: "https://…/{{item}}"
        dest: "~/sources/{{item}}"
        mode: 0644
     with_items:
      - "qt-everywhere-src-5.15.1.tar.xz"
      - "qt-creator-enterprise-src-4.13.2.tar.gz"
   - name: Populate home directory
     become: yes
     become_user: build
     copy:
        src: build
        dest: ~/
        mode: preserve
   - name: Write .netrc
     become: yes
     become_user: build
     copy:
        dest: ~/.netrc
        mode: 0600
        content: |
           machine …
           login {{repo_user}}
           password {{repo_password}}
   - name: Write .screenrc
     become: yes
     become_user: build
     copy:
        dest: ~/.screenrc
        mode: 0644
        content: |
           hardstatus alwayslastline
           hardstatus string '%{= cw}%-Lw%{= KW}%50>%n%f* %t%{= cw}%+Lw%< %{= kK}%-=%D %Y-%m-%d %c%{-}'
           startup_message off
           defutf8 on
           defscrollback 10240
   - name: Install base packages
     apt:
        name: git,mc,ncdu,neovim,eatmydata,devscripts,equivs,screen
        state: present
   - name: Clone git repo
     become: yes
     become_user: build
     git:
        repo: https://…@…/….git
        dest: ~/…
   - name: Copy Qt license
     become: yes
     become_user: build
     copy:
        src: qt-license.txt
        dest: ~/.qt-license
        mode: 0600

Now everything is ready for a 16 core, 32GB RAM build on SSD storage.

Tear down

When done:

#!/bin/sh
hcloud server delete buildqt

The whole spin up plus provisioning takes around a minute, so I can do it when I start a work day, and take it down at the end. The build machine wasn't that expensive to begin with, and this way it will even be billed by the hour.

A first try on a CPX51 machine has just built the full Qt5 Everywhere Enterprise including QtWebEngine and all its frills, for amd64, in under 1 hour and 40 minutes.

Cryptogram — New Report on Police Decryption Capabilities

There is a new report on police decryption capabilities: specifically, mobile device forensic tools (MDFTs). Short summary: it’s not just the FBI that can do it.

This report documents the widespread adoption of MDFTs by law enforcement in the United States. Based on 110 public records requests to state and local law enforcement agencies across the country, our research documents more than 2,000 agencies that have purchased these tools, in all 50 states and the District of Columbia. We found that state and local law enforcement agencies have performed hundreds of thousands of cellphone extractions since 2015, often without a warrant. To our knowledge, this is the first time that such records have been widely disclosed.

Lots of details in the report. And in this news article:

At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash.
And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does. […] The tools mostly come from Grayshift, an Atlanta company co-founded by a former Apple engineer, and Cellebrite, an Israeli unit of Japan’s Sun Corporation. Their flagship tools cost roughly $9,000 to $18,000, plus $3,500 to $15,000 in annual licensing fees, according to invoices obtained by Upturn.

Planet Debian — Molly de Blanc: Endorsements

Transparency is essential to trusting a technology. Through transparency we can understand what we’re using and build trust. When we know what is actually going on, what processes are occurring and how it is made, we are able to decide whether interacting with it is something we actually want, and we’re able to trust it and use it with confidence.

This transparency could mean many things, though it most frequently refers to the technology itself: the code or, in the case of hardware, the designs. We could also apply it to the overall architecture of a system. We could think about the decision making, practices, and policies of whomever is designing and/or making the technology. These are all valuable in some of the same ways, including that they allow us to make a conscious choice about what we are supporting.

When we choose to use a piece of technology, we are supporting those who produce it. This could be because we are directly paying for it; however, our support is not limited to direct financial contributions. In some cases this support is hidden within a technology: tracking mechanisms or backdoors that could allow companies or governments access to what we’re doing. When creating different types of files on a computer, these files can contain metadata that says what software was used to make them. This is an implicit endorsement, and you can also explicitly endorse a technology by talking about it or how you use it.
In this, you have a right (not just a duty) to be aware of what you’re supporting. This includes, for example, organizational practices and whether a given company relies on abusive labor policies, indentured servitude, or slave labor.

Endorsements inspire others to choose a piece of technology. Most of my technology is something I investigate purely for functionality, and the pieces I investigate are based on what people I know use. The people I trust in these cases are more inclined than most to do this kind of research, to perform technical interrogations, and to be aware of what producers of technology are up to. This is how technology spreads and becomes common or the standard choice.

In one sense, we all have the responsibility (one I am shirking) to investigate our technologies before we choose them. However, we must acknowledge that not everyone has the resources for this – the time, the skills, the knowledge – and therein endorsements become even more important to recognize.

Those producing a technology have the responsibility of making all of these angles something one could investigate. Understanding cannot only be the realm of experts. It should not require an extensive background in research and investigative journalism to find out whether a company punishes employees who try to unionize or pays non-living wages. Instead, these must be easy activities to carry out. It should be standard for a company (or other technology producer) to be open and share with people using their technology what makes them function. It should be considered shameful and shady to not do so.

Not only does this empower those making choices about what technologies to use, but it empowers others down the line, who rely on those choices. It also respects the people involved in the processes of making these technologies. By acknowledging their role in bringing our tools to life, we are respecting their labor.
By holding companies accountable for their practices and policies, we are respecting their lives.

Worse Than Failure — Error'd: Errors by the Pound

"I can understand selling swiss cheese by the slice, but copier paper by the pound?" Dave P. wrote.

Amanda R. writes, "Ok, that's fine, but can the 1% correctly spell 'people'?"

"In this form, language is quite variable as is when you are able to cancel your reservation ...which are in fact, actual variables," wrote Jean-Pierre M.

Barry M. wrote, "Hey, Royal Caribbean, you know what? I'll take the win-win: total control AND save $7!"

"Oh wow! The secret on how to write good articles is out!" writes Barry L.

,

Krebs on Security — The Now-Defunct Firms Behind 8chan, QAnon

Some of the world’s largest Internet firms have taken steps to crack down on disinformation spread by QAnon conspiracy theorists and the hate-filled anonymous message board 8chan. But according to a California-based security researcher, those seeking to de-platform these communities may have overlooked a simple legal solution to that end: Both the Nevada-based web hosting company owned by 8chan’s current figurehead and the California firm that provides its sole connection to the Internet are defunct businesses in the eyes of their respective state regulators.

In practical terms, what this means is that the legal contracts which granted these companies temporary control over large swaths of Internet address space are now null and void, and American Internet regulators would be well within their rights to cancel those contracts and reclaim the space.

The IP address ranges in the upper-left portion of this map of QAnon and 8kun-related sites — some 21,000 IP addresses beginning in “206.” and “207.” — are assigned to N.T. Technology Inc. Image source: twitter.com/Redrum_of_Crows

That idea was floated by Ron Guilmette, a longtime anti-spam crusader who recently turned his attention to disrupting the online presence of QAnon and 8chan (recently renamed “8kun”).

On Sunday, 8chan and a host of other sites related to QAnon conspiracy theories were briefly knocked offline after Guilmette called 8chan’s anti-DDoS provider and convinced them to stop protecting the site from crippling online attacks (8chan is now protected by an anti-DDoS provider in St. Petersburg, Russia).

The public face of 8chan is Jim Watkins, a pig farmer in the Philippines who many experts believe is also the person behind the shadowy persona of “Q” at the center of the conspiracy theory movement.

Watkins owns and operates a Reno, Nev.-based hosting firm called N.T. Technology Inc. That company has a legal contract with the American Registry for Internet Numbers (ARIN), the non-profit which administers IP addresses for entities based in North America.

ARIN’s contract with N.T. Technology gives the latter the right to use more than 21,500 IP addresses. But as Guilmette discovered recently, N.T. Technology is listed in Nevada Secretary of State records as under an “administrative hold,” which according to Nevada statute is a “terminated” status indicator meaning the company no longer has the right to transact business in the state.

N.T. Technology’s listing in the Nevada Secretary of State records. Click to Enlarge.

The same is true for Centauri Communications, a Fremont, Calif.-based Internet Service Provider that serves as N.T. Technology’s colocation provider and sole connection to the larger Internet. Centauri was granted more than 4,000 IPv4 addresses by ARIN more than a decade ago.

According to the California Secretary of State, Centauri’s status as a business in the state is “suspended.” It appears that Centauri hasn’t filed any business records with the state since 2009, and the state subsequently suspended the company’s license to do business in Aug. 2012. Separately, the California State Franchise Tax Board (FTB) suspended this company as of April 1, 2014.

Centauri Communications’ listing with the California Secretary of State’s office.

Neither Centauri Communications nor N.T. Technology responded to repeated requests for comment.

KrebsOnSecurity shared Guilmette’s findings with ARIN, which said it would investigate the matter.

“ARIN has received a fraud report from you and is evaluating it,” a spokesperson for ARIN said. “We do not comment on such reports publicly.”

Guilmette said apart from reclaiming the Internet address space from Centauri and NT Technology, ARIN could simply remove each company’s listings from the global WHOIS routing records. Such a move, he said, would likely result in most ISPs blocking access to those IP addresses.

“If ARIN were to remove these records from the WHOIS database, it would serve to de-legitimize the use of these IP blocks by the parties involved,” he said. “And globally, it would make it more difficult for the parties to find people willing to route packets to and from those blocks of addresses.”

Planet Debian — Bastian Blank: Salsa updated to GitLab 13.5

Today, GitLab released version 13.5 with several new features. Salsa also got some changes applied to it.

GitLab 13.5

GitLab 13.5 includes several new features. See the upstream release post for a full list.

Shared runner builds on larger instances

It's been way over two years since we started to use Google Compute Engine (GCE) for Salsa. Since then, all the jobs running on the shared runners run within a n1-standard-1 instance, providing a fresh set of one vCPU and 3.75GB of RAM for each and every build.

GCE supports several new instance types, featuring better and faster CPUs, including current AMD EPYC processors. However, as it turns out, GCE does not support any single-vCPU instances for any of those types. So jobs in the future will use n2d-standard-2 for the time being, providing two vCPUs and 8GB of RAM.

Builds run with IPv6 enabled

All builds run with IPv6 enabled in the Docker environment. This means the lo network device got the IPv6 loopback address ::1 assigned. So tests that need minimal IPv6 support can succeed. It however does not include any external IPv6 connectivity.
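To see what "minimal IPv6 support" means in practice, one can bind a socket to the loopback address; a minimal sketch in Python (an illustration of the kind of test that now succeeds, not part of the Salsa configuration itself):

```python
import socket

# Bind a TCP socket to the IPv6 loopback address. This succeeds only when
# the environment has ::1 assigned to the lo device, i.e. minimal IPv6
# support, without requiring any external IPv6 connectivity.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.bind(("::1", 0))        # port 0 lets the kernel pick a free port
host, port = sock.getsockname()[:2]
print(host)                  # -> ::1
sock.close()
```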

Planet Debian — Vincent Fourmond: QSoas tips and tricks: generating smooth curves from a fit

Often, one would want to generate smooth data from a fit over a small number of data points. As an example, take the data in the following file. It contains (fake) experimental data points that obey Michaelis-Menten kinetics: $$v = \frac{v_m}{1 + K_m/s}$$ in which $$v$$ is the measured rate (the y values of the data), $$s$$ the concentration of substrate (the x values of the data), $$v_m$$ the maximal rate and $$K_m$$ the Michaelis constant. To fit this equation to the data, just use the fit-arb fit:
QSoas> l michaelis.dat
QSoas> fit-arb vm/(1+km/x)
After running the fit, the window should look like this:
Now, with the fit, we have reasonable values for $$v_m$$ (vm) and $$K_m$$ (km). But, for publication, one would want to generate a "smooth" curve going through the data points... Saving the curve from "Data.../Save all" doesn't help, since the saved data has as many points as the original data and looks very "jaggy" (like on the screenshot above)... So one needs a curve with more data points.

Maybe the most natural solution is simply to use generate-buffer together with apply-formula using the formula and the values of km and vm obtained from the fit, like:

QSoas> generate-buffer 0 20
QSoas> apply-formula y=3.51742/(1+3.69767/x)
By default, generate-buffer generates 1000 evenly spaced x values, but you can change their number using the /samples option. The two commands above can be combined into a single call to generate-buffer:
QSoas> generate-buffer 0 20 3.51742/(1+3.69767/x)
This works, but it is quite cumbersome and it is not going to work well for complex formulas or the results of differential equations or kinetic systems...
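For comparison, the same smooth curve can be computed outside QSoas in a few lines of Python, plugging in the fitted values quoted above (a sketch of the arithmetic only, not of QSoas itself):

```python
# Fitted parameters taken from the example above
vm, km = 3.51742, 3.69767

def michaelis_menten(s, vm, km):
    """Michaelis-Menten rate v = vm / (1 + km/s) for substrate concentration s."""
    return vm / (1 + km / s)

# 1000 evenly spaced substrate values on (0, 20]; s = 0 is skipped because
# the formula divides by s.
xs = [20 * (i + 1) / 1000 for i in range(1000)]
ys = [michaelis_menten(x, vm, km) for x in xs]
```

The resulting (xs, ys) pairs trace the same hyperbolic saturation curve that generate-buffer and apply-formula produce.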

This is why each fit- command has a corresponding sim- command that computes the result of the fit using a "saved parameters" file (here, michaelis.params, but you can also save it yourself) and buffers as "models" for X values:

QSoas> generate-buffer 0 20
QSoas> sim-arb vm/(1+km/x) michaelis.params 0
This strategy works with every single fit! As an added benefit, you even get the fit parameters as meta-data, which are displayed by the show command:
QSoas> show 0
Dataset generated_fit_arb.dat: 2 cols, 1000 rows, 1 segments, #0
Flags:
Meta-data:
	commands =	 sim-arb vm/(1+km/x) michaelis.params 0
	fit =	 arb (formula: vm/(1+km/x))
	km =	 3.69767
	vm =	 3.5174
They also get saved as comments if you save the data.

Important note: the sim-arb command will be available only in the 3.0 release, although you can already enjoy it if you use the github version.

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code and compile it yourself or buy precompiled versions for MacOS and Windows there.

Planet Debian — Steinar H. Gunderson: plocate in testing

plocate hit testing today, so it's officially on its way to bullseye :-) I'd love to add a backport to stable, but bpo policy says only to backport packages with a “notable userbase”, and I guess 19 installations in popcon isn't that :-) It's also hit Arch Linux, obviously Ubuntu universe, and seemingly also other distributions like Manjaro. No Fedora yet, but hopefully, some Fedora maintainer will pick it up. :-)

Also, pabs pointed out another possible use case, although this is just a proof-of-concept:

pannekake:~/dev/plocate/obj> time apt-file search bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
apt-file search bin/updatedb  1,19s user 0,58s system 163% cpu 1,083 total

pannekake:~/dev/plocate/obj> time ./plocate -d apt-file.plocate.db bin/updatedb
locate: /usr/bin/updatedb.findutils
mlocate: /usr/bin/updatedb.mlocate
roundcube-core: /usr/share/roundcube/bin/updatedb.sh
./plocate -d apt-file.plocate.db bin/updatedb  0,00s user 0,01s system 79% cpu 0,012 total


Things will probably be quieting down now; there's just not that many more logical features to add.

Worse Than Failure — CodeSOD: Query Elegance

It’s generally hard to do worse than a SQL injection vulnerability. Data access is fundamental to pretty much every application, and every programming environment has some set of rich tools that make it easy to write powerful, flexible queries without leaving yourself open to SQL injection attacks.

And yet, and yet, they’re practically a standard feature of bad code. I suppose that’s what makes it bad code.

Gidget W inherited a PHP application which, unsurprisingly, is rife with SQL injection vulnerabilities. But, unusually, it doesn’t leverage string concatenation to get there. Gidget’s predecessor threw a little twist on it.

$fields = "t1.id, t1.name, UNIX_TIMESTAMP(t1.date) as stamp, ";
$fields .= "t2.idT1, t2.otherDate, t2.otherId";
$join = "table1 as t1 join table2 as t2 on t1.id=t2.idT1";
$where = "where t1.lastModified > $val && t2.lastModified = '$val2'";
$query = "select $fields from $join $where";

This pattern appears all through the code. Because it leverages string interpolation, the same core structure shows up again and again, almost copy/pasted, with one line repeated each time.

$query = "select $fields from $join $where";

What goes into $fields and $join and $where may change each time, but "select $fields from $join $where" is eternal, unchanging, and omnipresent. Every database query is constructed this way.

It’s downright elegant, in its badness. It simultaneously shows an understanding of how to break up a pattern into reusable code, but also no understanding of why all of this is a bad idea.
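For contrast, the safe version of this pattern keeps the values out of the SQL string entirely and passes them as bound parameters. A minimal sketch using Python's sqlite3 module (table and column names are illustrative, loosely mirroring the snippet above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table table1 (id integer, name text, lastModified text)")
conn.execute("insert into table1 values (1, 'widget', '2020-10-01')")

# The ? placeholders bind values separately from the SQL text, so a
# malicious input cannot alter the structure of the query.
rows = conn.execute(
    "select id, name from table1 where lastModified > ?",
    ("2020-09-01",),
).fetchall()
print(rows)   # [(1, 'widget')]
```

PHP offers the same mechanism through PDO prepared statements, so the predecessor had the tools at hand.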

But we shouldn’t let that distract us from the little nuances of the specific query that highlight more WTFs.

t1.lastModified > $val && t2.lastModified = '$val2'

lastModified in both of these tables is a date, as one would expect. Which raises the question: why does one of these conditions get quotes and why does the other one not? It implies that $val probably has the quotes baked in? Gidget also asks: “Why is the WHERE keyword part of the $where variable instead of inline in the query, but that isn’t the case for SELECT or FROM?”

That, at least, I can answer. Not every query has a filter condition. Since you can’t have WHERE followed by nothing, just make the $where variable contain that. See? Elegant in its badness.

,

LongNow — The Data of Long-lived Institutions

The following transcript has been edited for length and clarity.

I want to lead you through some of the research that I’ve been doing on a meta-level around long-lived institutions, as well as some observations of the ways various systems have lasted for hundreds or thousands of years.

Long Now as a Long-lived Institution

This is one of the early projects I worked with Stewart Brand on at Long Now. We were trying to define our problem space and explore the ways we think on different timescales. Generally, companies are working in the “nowadays,” although that’s been shortening to some extent, with more quarterly thinking than decade-level thinking. It was Peter Schwartz who suggested this 10,000 year timeframe.

Danny Hillis’ original idea for what would ultimately become The 10,000 Year Clock was that it would be a Millennium Clock: it would tick once a year, bong once a century, and the cuckoo would come out once a millennium. He didn’t really have an end date. We use the 10,000 year time frame to orient our efforts at Long Now because global civilization arose when the last Interglacial period ended 10,000 years ago. It was only then, around 8,000 BC, that we had the emergence of agriculture and the first cities. If we can look back that far, we should be able to look forward that far. Thinking about ourselves as in the middle of a 20,000 year story is very different than thinking about ourselves as at the end of a 10,000 year story.

This pace layers diagram is the very first thing I worked on at Long Now.
The notion of pace layers came out of a discussion between Stewart and Long Now co-founder Brian Eno. They were trying to tease apart these layers of human time.

Institutions can be mapped across the pace layers diagram as well. Take Apple Computer, for example. They’re coming out with new iPhones every six months, which is the fashion layer. The commerce layer is Apple selling these devices. The infrastructure layer is the cell phone networks and chip fabs that it’s all built on. The governance layer—and note that it is governance, not government; they’re mostly working with governments, but they also have to work with general governing systems. Some of these companies are hitting walls against different types of governments who have different ideas of privacy, different ideas of commercialization, and they’re now having to shape their companies around that. And then obviously, culture is moving slower underneath all of this, but Apple is starting to affect culture. And then there’s the last pace layer, nature, moving the slowest. At some point, Apple is going to have to come to terms with the level of environmental damage and problems that are happening on the nature pace layer if it is going to be a company that lasts for hundreds or a thousand years. So we could imagine any large institution mapped across this, and I think it’s a useful tool for that.

Also very early on in Long Now’s history, in 01997, Kees van der Heijden, who wrote the book on scenario planning, came to a charrette that Long Now organized to come up with business ideas for our organization. He formulated a business plan that was strangely prophetic: The squares are areas where we have core competencies. The dotted lines indicate temporary competencies, like the founders.
The other items indicate all the things we hadn’t really gotten to yet or figured out: we didn’t have a way of funding ourselves; we didn’t have a membership program; we didn’t have a large community of donors; we didn’t have an endowment; and we didn’t have people willing to give their estates to us. We still don’t have an endowment or people willing to give us their estates, but we’ve achieved the rest. And now that we’ve been around for 22 years, we can imagine how those two items are going to start to happen next.

I also want to point out the cyclical nature of this diagram. There’s no system in the world that I’ve found that is linear that has lasted on these timescales. You need to have a cyclical business model, not a linear business model.

The Longest-Lived Institutions in the World

I’ve been collecting data on all of the longest lived institutions in the world. As you look at these, there’s a few things that stick out. Notice: brewery, brewery, winery, hotel, bar, pub, right? And also notice that a lot of them are in Japan. There’s been a rough system of government there for over 2,000 years (the Royal Family) that’s held together enough to enable something like the Royal Confectioner of Japan to be one of the oldest companies in the world. But there’s also temple builders and things like that.

In the West, most of the companies that have survived for a very long time are basically service companies. It’s a lot easier to reinvent yourself as a service-oriented company than it is as a commodity company when that particular commodity goes out of use. Colgate Palmolive (founded 01806) and DuPont (founded 01802) are commodity companies that are broad enough to change the kinds of products they sell over time. I’m interested in learning more about all these companies, as they probably all have some kind of special sauce in their stories of longevity.
Something else that came out of this research is the fact that the length of companies’ lives is shrinking at almost one year per year. In 01950, the average company on the Fortune 500 had been around for 61 years. Now it’s 18 years. Companies’ lives are getting shorter.

As I mentioned, most of the oldest companies in the world are in Japan. In a survey of 5,500 companies over 200 years old, 3,100 are based in Japan. The rest are in some of the older countries in Europe. But—and this was a fact I found curious, and one that speaks to the cyclical nature of things—90% of the companies that are over 200 years old have 300 employees or less; they’re not mega companies.

In surveying 1,000 companies over 300 years old, you find a huge amount of disparity concerning which industries they’re a part of. But there were a few big groupings that I found interesting. 23% are in the alcohol industry, and this doesn’t even include pubs and restaurants and hotels that may sell alcohol. Patrick McGovern, a biomolecular archeologist who I talked to when we were building The Interval, has done DNA analysis on vines, which are a clonal species. From that analysis, we know that civilization started cultivating wine around 8,000 BC. McGovern supposes that it’s not at all clear whether civilization stopped being nomadic in order to ferment things, or stopped being nomadic because it had started fermenting things. It’s an intriguing correlation, and notable that such an overwhelmingly large segment of the oldest companies in the world deals in alcohol.

Long-term Thinking is Not Inherently Good

A quick word about values: long-term thinking, and aspiring to be a long-term institution, is not inherently good. At Long Now, we’ve always emphasized the importance of long-term thinking without trying to ascribe a lot of values to it. But I don’t think that’s intellectually honest. We have to ask ourselves what we’re trying to perpetuate.
We have to step back far enough and ensure that the kinds of things we’re perpetuating are generally good for society.

How to Build Things That Last

One way that things have lasted for a really long time is to just take a really long time to build them. Cathedrals are a famous example of this. The most dangerous time for anything that’s lasting is really just one generation after it was built. It’s no longer a new, cool thing; it’s the thing that your parents did, and it’s not cool anymore. And it’s not until another couple generations later where everyone values it, largely because it is old and venerable and has become a kind of cultural icon. And we already see this with this cathedral: the Sagrada Familia in Barcelona. It’s still under construction, 125 years into its build process, and it’s already a UNESCO World Heritage Site.

The other way things last for a really long time, and this is the Japanese model, is that they’re just extremely well-maintained. At about 1,400 years old, these are the two oldest continuously standing wooden structures in the world. And they’ve replaced a lot of parts of them. They keep the roofs on them, and even in a totally humid and raining environment, the central timbers of these buildings have stayed true.

Interestingly, this temple was also the place where, over a thousand years ago, a Japanese princess had a vision that she needed to send a particular prayer out to the world to make sure that it survived into the future. And so she had, literally, a million wooden pagodas made with the prayer put inside them, and distributed these little pagodas as far and wide as she could. You can still buy these on eBay right now. It’s an early example of the philosophy of “Lots of Copies Keep Stuff Safe” (LOCKSS).

Another Japanese example that uses a totally different strategy is this Shinto shrine. Shinto is an animist religion whose adherents believe that spirits are in everything, unlike Buddhism, which came to Japan later.
In the Shinto belief system, temples have this renewing technology, if you will, where they’re rebuilt on a site right next to each other in different periodicities. This one, which is the most famous in Japan, is the Ise Shrine, which is rebuilt every 20 years. A few years ago, I was fortunate enough to attend the rebuilding ceremony. (One of the oldest companies in the world, I should add, is the Japanese temple building company that builds these temples.)

The emphasis here is not on maintenance, but renewal. These temples made of thatch and wood—totally ephemeral materials—have lasted for 2,000 years. They have documented evidence of exact rebuilding for 1,600 years, but this site was founded in 4 AD—also by a visionary Japanese princess. And every 20 years, with the Japanese princess in attendance, they move the treasures from one temple to the other. And the kami—the spirits—follow that. And then they deconstruct the old temple and send all those parts out to various Shinto temples in Japan as religious artifacts.

I think the most important thing about this particular example is that each generation gets to have this moment where the older master teaches the apprentice the craft. So you get this handing off of knowledge that’s extremely physical and real and has a deliverable. It’s got to ship, right? “The princess is coming to move the stuff; we have to do this.” It’s an eight-year process with tons of ritual that goes into rebuilding these temples.

I think an interesting counterexample to things lasting a very long time is when they are tied to certain ideologies. And I think it’s curious: one of our longest lived institutions is the Catholic Church, and the ideology behind something like the Buddhas of Bamiyan has lasted, but a lot of the artifacts become targets for people who don’t believe in that ideology. The Taliban spent weeks dynamiting and using artillery to destroy these Buddhas.
You would think that Buddhism, a relatively innocuous religion, is unthreatening—but not so much to the Taliban.

This is the University of Bologna, which is largely credited as the earliest university in the world. It’s almost 1000 years old at this point. Oxford was shortly behind it. And there’s another 40 or so universities over 500 years old. Universities have this ability to do a kind of continual refresh where every four years, especially in undergraduate programs, you have a whole new set of people. And so they have to sell themselves to a new generation every single year. Their customer is a whole class. And we see universities now struggle when they aren’t teaching relevant things to people and they have to adjust. And that has kept them around as some of the longest lived institutions in the world.

I think the idea of communities of practice is a really interesting one. In these communities, knowledge of practice is handed down from generation to generation. Such is the case with martial arts, which we have evidence for dating back at least 2,000 or 3,000 years.

There’s several strategies in nature that allow systems to last for thousands of years. There’s clonal strategies like the Aspen tree. We’ve measured Mesquite rings in the desert, where the trees die and then grow up in a ring from the root structure; a Mesquite ring has had the same DNA, effectively, for 50,000 years. And these clonal forests have definitely been around for thousands of years, even though each individual will only last a few years in some cases. In other cases, things are cultivated. Going back to the wine example, we effectively have the DNA of clonal species like these grapes from ancient Rome, where we have taken a clipping and cultivated it from generation to generation. So there’s been this kind of interplay between humans and the natural world, and we also see this in a lot of tree-caring practices.
The bristlecone exemplifies how an existential crisis gives you practice in terms of how you’re going to survive. The bristlecone is the oldest continuously living single organism that we know of in the world. And the funny thing about the bristlecone is that it was not discovered by coring to be the oldest living species in the world; it was postulated, because a particular tree scientist had cored other pine species, and as he did so, he found that all the ones in the worst environments were the oldest. And he said: “If you can find the pine species that is living in the absolute worst environment, you will find the oldest species of pine in the world.” And he coined this term: adversity breeds longevity. And so then people went to go find the pines in the worst environments, and up at the top of the White Mountains and in the Snake Range in Nevada, and some in Colorado as well, they found three different species of bristlecone, which have been dated to over 5,000 years at this point.

Taking the Future into Account

If any of us are to build an institution that’s going to last for the next few hundred or 1,000 years, the changes in demographics and the climate are a big part of it. This is the projected impact of climate change on agricultural yields through the 02080s. And as you can see, agricultural yields in the global south are going to be going down. In the global north and the much further north, more like Canada and Russia, they’re going to be getting a lot better. And this is going to change the world markets, world populations, and what we’re warring over for the next 100 years.

In all natural systems, you have these sigmoid curves where things eventually go down. We always assume things like our population and our economies will always go up, but that is not the way the world works; it has never been that way, and we always have these kinds of corrections. In this case, a predator follows the prey as a lower sigmoid.
Once its prey runs out, then the predators start dying off. How do we get good at failing, but not totally dying out? The lynx never dies out totally, but companies that do one thing are bad at recovering when that one thing is no longer the big commodity. It wasn’t record companies that invented iTunes; it was an outsider company. Record companies were adept at selling plastic circles, and when there were no plastic circles to sell music on, they didn’t know how to adjust for that. The crux of anything that’s going to last for a long time is: how do you get good at reinvention and retooling?

There’s no scenario that I’ve seen where the world population doesn’t start going down by at least a hundred years from now, if not less than 50 years from now. So even the median projection, that red line in the middle, tapers off. But this data is a couple of years old, and it’s now starting to increasingly look a lot more like that dotted blue line at the bottom. And the world has really never lived through a time, except for a few short plague seasons, where the world population was going down—and, by extension, where the customer base was going down.

Even more dangerous than the population going down is that the population is changing. The red line here is the number of 15 to 64 year olds. And the blue line is the zero to 14 year olds. If the world is made up largely of older people who hoard wealth, don’t work hard, and don’t make huge contributions of creativity to the world the way 20 year olds do, that world is a world that I don’t think we’re prepared to live in right now.

We’re seeing this now happening in a lot of the developed world and most notably in Japan. Those of you who remember the 01980s recall that there was no scenario where Japan was not an absolute dominant part of the economy of the world. And now they’re struggling just to be relevant in a lot of ways, and it’s largely because this population change happened and the young people were not there.
They wouldn’t allow any immigration, and that creativity, and that thrust of civilization, went out of a country that was a dominant world economic power.

Watch the video of Alexander Rose’s talk on the Data of Long-lived Institutions.

Planet Debian — Christian Kastner: RStudio is a refreshingly intuitive IDE

I currently need to dabble with R for a smallish thing. I have previously dabbled with R only once, for an afternoon, and that was about a decade ago, so I had no prior experience to speak of regarding the language and its surrounding ecosystem.

Somebody recommended that I try out RStudio, a popular IDE for R. I was happy to see that an open-source community edition exists, in the form of a .deb package no less, so I installed it and gave it a try.

It's remarkable how intuitive this IDE is. My first guess at doing something has so far been correct every. single. time. I didn't have to open the help, or search the web, for any solutions, either -- they just seem to offer themselves up.

And it's not just my inputs; it's the output, too. The RStudio window has multiple tiles, and each tile has multiple tabs. I found this quite confusing and intimidating on first impression, but once I started doing some work, I was surprised to see that whenever I did something that produced output in one or more of the tabs, it was (again) always in an intuitive manner. There's a fine line between informing with relevant context and distracting with irrelevant context, but RStudio seems to have placed itself on the right side of it.

This, and many other features that pop up here and there, like the live-rendering of LaTeX equations, contributed to what has to be one of the most positive experiences with an IDE that I've had so far.

LongNow — A Long Now Drive-in Double Feature at Fort Mason

Join the Long Now Community for a night of films that inspire long-term thinking. On October 27, 02020, we’ll screen Samsara followed by 2001: A Space Odyssey at Fort Mason.
SAMSARA Drive-in Screening on Tuesday October 27, 02020 at 6:00pm PT

SAMSARA is a Sanskrit word that means “the ever turning wheel of life” and is the point of departure for the filmmakers as they search for the elusive current of interconnection that runs through our lives. SAMSARA transports us to sacred grounds, disaster zones, industrial sites, global gatherings and natural wonders. By dispensing with dialogue and descriptive text, the film subverts our expectations of a traditional documentary, instead encouraging our own inner interpretations inspired by images and music that infuse the ancient with the modern.

Filmed over five years in twenty-five countries, SAMSARA (02011) is a non-verbal documentary from filmmakers Ron Fricke and Mark Magidson, the creators of BARAKA. It is one of only a handful of films shot on 70mm in the last forty years. Through powerful images, the film illuminates the links between humanity and the rest of nature, showing how our life cycle mirrors the rhythm of the planet.

2001: A Space Odyssey Drive-in Screening on Tuesday October 27, 02020 at 8:45pm PT

The genius is not in how much Stanley Kubrick does in “2001: A Space Odyssey,” but in how little. This is the work of an artist so sublimely confident that he doesn’t include a single shot simply to keep our attention. He reduces each scene to its essence, and leaves it on screen long enough for us to contemplate it, to inhabit it in our imaginations. Alone among science-fiction movies, “2001″ is not concerned with thrilling us, but with inspiring our awe.

What Kubrick had actually done was make a philosophical statement about man’s place in the universe, using images as those before him had used words, music or prayer. And he had made it in a way that invited us to contemplate it — not to experience it vicariously as entertainment, as we might in a good conventional science-fiction film, but to stand outside it as a philosopher might, and think about it.
Ticket & Event Information:
• Tickets are $30 per vehicle for members, General Public Tickets are $60 per vehicle.
• Separate tickets must be purchased for each of the screenings.
• Parking opens at 5:00pm for the 6:00pm showing, and 7:45pm for the 8:45pm showing.
• Please have your ticket printed out or on your phone so we can check you in.
• Parking location will be chosen by the venue to ensure that everyone can best see the screen.
• The film audio will be through your FM radio receiver.
• There will be concessions available at the event! The Interval will be open to purchase to-go drinks, and there will be a Food Truck and popcorn, candy and other snacks for sale.
COVID-19 Safety Information:
• This is a socially distant event. Please do not attend if you are experiencing any symptoms of COVID-19.
• Bathrooms will be cleaned throughout the evening.
• Masks are required when outside of your vehicle. Masks with exhalation valves are not allowed.
• Attendees must remain inside their vehicles except to use the restroom facilities or pick up concessions.
• Each vehicle may only be occupied by members of a “pod” who have already been in close contact with each other.
• Attendees who fail to follow safe distancing at the request of staff will be subject to ejection from the event. No refund will be given.
About FORT MASON FLIX With drive-in theaters experiencing a renaissance around the country, Fort Mason Center for Arts & Culture (FMCAC) announces FORT MASON FLIX, a pop-up drive-in theater launching September 18, 2020. Housed on FMCAC’s historic waterfront campus, FORT MASON FLIX will present a cornucopia of film programming, from family favorites and cult classics to blockbusters and arthouse cinema. Cryptogram — NSA Advisory on Chinese Government Hacking The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers. 
This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching. Planet Debian — Dirk Eddelbuettel: RcppZiggurat 0.1.6 A new release, now at version 0.1.6, of RcppZiggurat is now on the CRAN network for R. The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl — all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity). This release brings a corrected seed setter and getter which now correctly take care of all four state variables, and not just one. It also corrects a few typos in the vignette. Both were fixed quite a while back, but we somehow managed to not ship this to CRAN for two years. The NEWS file entry below lists all changes. Changes in version 0.1.6 (2020-10-18)
• Several typos were corrected in the vignette (Blagoje Ivanovic in #9).
• New getters and setters for internal state were added to resume simulations (Dirk in #11 fixing #10).
• Minor updates to cleanup script and Travis CI setup (Dirk).
Courtesy of my CRANberries, there is a diffstat report for this release. 
More information is on the RcppZiggurat page. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions. This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings. Planet Debian — Bits from Debian: Debian donation for Peertube development The Debian project is happy to announce a donation of 10,000 € to help Framasoft reach the fourth stretch-goal of its Peertube v3 crowdfunding campaign -- Live Streaming. This year's iteration of the Debian annual conference, DebConf20, had to be held online, and while being a resounding success, it made clear to the project our need to have a permanent live streaming infrastructure for small events held by local Debian groups. As such, Peertube, a FLOSS video hosting platform, seems to be the perfect solution for us. We hope this unconventional gesture from the Debian project will help us make this year somewhat less terrible and give us, and thus humanity, better Free Software tooling to approach the future. Debian thanks the commitment of numerous Debian donors and DebConf sponsors, particularly all those that contributed to DebConf20 online's success (volunteers, speakers and sponsors). Our project also thanks Framasoft and the PeerTube community for developing PeerTube as a free and decentralized video platform. The Framasoft association warmly thanks the Debian Project for its contribution, from its own funds, towards making PeerTube happen. This contribution has a twofold impact. Firstly, it's a strong sign of recognition from an international project - one of the pillars of the Free Software world - towards a small French association which offers tools to liberate users from the clutches of the web's giant monopolies. 
Secondly, it's a substantial amount of help in these difficult times, supporting the development of a tool which equally belongs to and is useful to everyone. The strength of Debian's gesture proves, once again, that solidarity, mutual aid and collaboration are values which allow our communities to create tools to help us strive towards Utopia. Worse Than Failure — CodeSOD: Delete This About three years ago, Consuela inherited a giant .NET project. It was… not good. To communicate how “not good” it was, Consuela had a lot of possible submissions. Sending the worst code might be the obvious choice, but it wouldn’t give a good sense of just how bad the whole thing was, so they opted instead to find something that could roughly be called the “median” quality. This is a stored procedure that is roughly about the median sample of the overall code. Half of it is better, but half of it gets much, much worse. CREATE proc [dbo].[usermgt_DeleteUser] ( @ssoid uniqueidentifier ) AS begin declare @username nvarchar(64) select @username = Username from Users where SSOID = @ssoid if (not exists(select * from ssodata where ssoid = @ssoid)) begin insert into ssodata (SSOID, UserName, email, givenName, sn) values (@ssoid, @username, 'Email@email.email', 'Firstname', 'Lastname') delete from ssodata where ssoid = @ssoid end else begin RAISERROR ('This user still exists in sso', 10, 1) end Let’s talk a little bit about names. As you can see, they’re using an “internal” schema naming convention- usermgt clearly is defining a role for a whole class of stored procedures. Already, that’s annoying, but what does this procedure promise to do? DeleteUser. But what exactly does it do? Well, first, it checks to see if the user exists. If the user does exist… it raises an error? That’s an odd choice for deleting. But what does it do if the user doesn’t exist? It creates a user with that ID, then deletes it. Not only is this method terribly misnamed, it also seems to be utterly useless. 
At best, I think they’re trying to route around some trigger nonsense, where certain things happen ON INSERT and then different things happen ON DELETE. That’d be a WTF on its own, but that’s possibly giving this more credit than it deserves, because that assumes there’s a reason why the code is this way. Consuela adds a promise, which hopefully means some follow-ups: If you had access to the complete codebase, you would not EVER run out of new material for codesod. It’s basically a huge collection of “How Not To” on all possible layers, from single lines of code up to the complete architecture itself. [Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more. Planet Debian — Reproducible Builds: Supporter spotlight: Civil Infrastructure Platform The Reproducible Builds project depends on our many projects, supporters and sponsors. We rely on their financial support, but they are also valued ambassadors who spread the word about the Reproducible Builds project and the work that we do. This is the first installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. However, we are kicking off this series by featuring Urs Gleim and Yoshi Kobayashi of the Civil Infrastructure Platform (CIP) project. Chris Lamb: Hi Urs and Yoshi, great to meet you. How might you relate the importance of the Civil Infrastructure Platform to a user who is non-technical? A: The Civil Infrastructure Platform (CIP) project is focused on establishing an open source ‘base layer’ of industrial-grade software that acts as building blocks in civil infrastructure projects. 
End-users of this critical code include systems for electric power generation and energy distribution, oil and gas, water and wastewater, healthcare, communications, transportation, and community management. These systems deliver essential services, provide shelter, and support social interactions and economic development. They are society’s lifelines, and CIP aims to contribute to and support these important pillars of modern society. Chris: We have entered an age where our civilisations have become reliant on technology to keep us alive. Does the CIP believe that the software that underlies our own safety (and the safety of our loved ones) receives enough scrutiny today? A: For companies developing systems running our infrastructure and keeping our factories working, it is part of their business to ensure the availability, uptime, and security of these very systems. However, software complexity continues to increase, and the efforts spent on those systems are now exploding. What is missing is a common way of achieving this through refining the same tools, and cooperating on the hardening and maintenance of standard components such as the Linux operating system. Chris: How does the Reproducible Builds effort help the Civil Infrastructure Platform achieve its goals? A: Reproducibility helps a great deal in software maintenance. We have a number of use-cases that should have long-term support of more than 10 years. During this period, we encounter issues that need to be fixed in the original source code. But before we make changes to the source code, we need to check whether it is actually the original source code or not. If we can reproduce exactly the same binary from the source code even after 10 years, we can start to invest time and energy into making these fixes. Chris: Can you give us a brief history of the Civil Infrastructure Platform? Are there any specific ‘success stories’ that the CIP is particularly proud of? 
A: The CIP Project formed in 2016 as a project hosted by the Linux Foundation. It was launched out of necessity: to establish an open source framework and a software foundation that delivers services for civil infrastructure and economic development on a global scale. Some key milestones we have achieved as a project include our collaboration with Debian, where we are helping with the Debian Long Term Support (LTS) initiative, which aims to extend the lifetime of all Debian stable releases to at least 5 years. This is critical because most control systems for transportation, power plants, healthcare and telecommunications run on Debian-based embedded systems. In addition, CIP is focused on IEC 62443, a standards-based approach to counter security vulnerabilities in industrial automation and control systems. Our belief is that this work will help mitigate the risk of cyber attacks, but in order to deal with evolving attacks of this kind, all of the layers that make up these complex systems (such as system services and component functions, in addition to the countless operational layers) must be kept secure. For this reason, the IEC 62443 series is attracting attention as the de facto cyber-security standard. Chris: The Civil Infrastructure Platform project comprises a number of project members from different industries, with stakeholders across multiple countries and continents. How does working together with a broad group of interests help in your effectiveness and efficiency? A: Although the members have different products, they share the requirements and issues when developing sustainable products. In the end, we are driven by common goals. For the project members, working internationally is simply daily business. We see this as an advantage over regional efforts or efforts that focus on narrower domains or markets. Chris: The Civil Infrastructure Platform supports a number of other existing projects and initiatives in the open source world too. 
How much do you feel being a part of the broader free software community helps you achieve your aims? A: Collaboration with other projects is an essential part of how CIP operates — we want to enable commonly-used software components. It would not make sense to re-invent solutions that are already established and widely used in product development. To this end, we have an ‘upstream first’ policy which means that, if existing projects need to be modified to our needs or are already working on issues that we also need, we work directly with them. Chris: Open source software in desktop or user-facing contexts receives a significant amount of publicity in the media. However, how do you see the future of free software from an industrial-oriented context? A: Open source software has already become an essential part of the industry and civil infrastructure, and the importance of open source software there is still increasing. Without open source software, we cannot achieve, run and maintain future complex systems, such as smart cities and other key pieces of civil infrastructure. Chris: If someone wanted to know more about the Civil Infrastructure Platform (or even to get involved) where should they go to look? A: We have many avenues to participate and learn more! We have a website, a wiki and you can even follow us on Twitter. For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org. , Worse Than Failure — CodeSOD: Extended Time The C# "extension method" feature lets you implement static methods which "magically" act like they're instance methods. It's a neat feature which the .NET Framework uses extensively. It's also a great way to implement some convenience functions. 
Brandt found some "convenience" functions which were exploiting this feature.

public static bool IsLessThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) < 0;
public static bool IsGreaterThen<T>(this T a, T b) where T : IComparable<T> => a.CompareTo(b) > 0;
public static bool And(this bool a, bool b) => a && b;
public static bool Or(this bool a, bool b) => a || b;
public static bool IsBetweensOrEqual<T>(this T a, T b, T c) where T : IComparable<T> => a.IsGreaterThen(b).Or(a.Equals(b)).And( a.IsLessThen(c).Or(a.Equals(c)) );

Here, we observe someone who maybe heard the term "functional programming" and decided to wedge it into their programming style however it would fit. We replace common operators and expressions with extension method versions. Instead of the cryptic a || b, we can now write a.Or(b), which is… better? I almost don't hate the IsLessThan/IsGreaterThan methods, as that is (arguably) more readable. But wait, I have to correct myself: IsLessThen and IsGreaterThen. So they almost got something that was (arguably) more readable, but with a minor typo just made it all more confusing. All this, though, to solve their actual problem: the TimeSpan data type doesn't have a "between" comparator, and at one point in their code- one point- they need to perform that check. It's also worth noting that C# supports operator overloading, and TimeSpan does have an overload for all your basic comparison operators, so they could have just done that. Brandt adds: While you can argue that there is no out of the box 'Between' functionality, the colleague who programmed this ignored an already existing extension method that was in the same file, that offered exactly this functionality in a better way. [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today! , Worse Than Failure — CodeSOD: Don't Not Be Negative One of my favorite illusions is the progress bar. 
Even the worst, most inaccurate progress bar will make an application feel faster. The simple feedback which promises "something is happening" alters the users' sense of time. So, let's say you're implementing a JavaScript progress bar. You need to decide if you are "in progress" or not. So you need to check: if there is a progress value, and the progress value is less than 100, you're still in progress. Mehdi's co-worker decided to implement that check as… the opposite.

const isInProgress = progress => !(!progress || (progress && progress > 100))

This is one of those lines of code where you can just see the developer's process, encoded in each choice made. "Okay, we're not in progress if progress doesn't have a value: !(!progress). Or we're not in progress if progress has a value and that value is over 100." There's nothing explicitly wrong with this code. It's just the most awkward, backwards possible way to express that check. I suspect that part of its tortured logic arises from the fact that the developer wanted to return false if the value was null or undefined, and this was the way they figured out to do that. Of course, a more straightforward way to write that might be (progress) => (progress && progress <= 100) || false. This will have "unexpected" behavior if the progress value is negative, but then again, so will the original code. In the end, this is just a story of a double negative. I definitely won't say you should never not use a double negative. Don't not avoid them. [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today! 
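For what it's worth, the check reads much more naturally in positive form. Here's a quick sketch (my own code, not from the submitted codebase); the extra progress > 0 guard is an assumption about what "in progress" ought to mean, and it deliberately rejects the negative values that both the original and the suggested alternative would quietly accept:

```javascript
// Positive-form rewrite: "we ARE in progress when there is a value
// and it sits in the (0, 100] range".
// Note: like the original, a progress of exactly 0 counts as
// "not in progress", because 0 is falsy in JavaScript.
const isInProgress = (progress) =>
  Boolean(progress) && progress > 0 && progress <= 100;

// The double-negative original, kept here for comparison.
const isInProgressOriginal = (progress) =>
  !(!progress || (progress && progress > 100));
```

On ordinary inputs (undefined, 50, 150) the two functions agree; they only differ on negative values, where the double-negative version reports "in progress".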
Krebs on Security — QAnon/8Chan Sites Briefly Knocked Offline A phone call to an Internet provider in Oregon on Sunday evening was all it took to briefly sideline multiple websites related to 8chan/8kun — a controversial online image board linked to several mass shootings — and QAnon, the far-right conspiracy theory which holds that a cabal of Satanic pedophiles is running a global child sex-trafficking ring and plotting against President Donald Trump. Following a brief disruption, the sites have come back online with the help of an Internet company based in St. Petersburg, Russia. The IP address range in the upper-right portion of this map of QAnon and 8kun-related sites — 203.28.246.0/24 — is assigned to VanwaTech and briefly went offline this evening. Source: twitter.com/Redrum_of_Crows. A large number of 8kun and QAnon-related sites (see map above) are connected to the Web via a single Internet provider in Vancouver, Wash. called VanwaTech (a.k.a. “OrcaTech“). Previous appeals to VanwaTech to disconnect these sites have fallen on deaf ears, as the company’s owner Nick Lim reportedly has been working with 8kun’s administrators to keep the sites online in the name of protecting free speech. But VanwaTech also had a single point of failure on its end: The swath of Internet addresses serving the various 8kun/QAnon sites were being protected from otherwise crippling and incessant distributed-denial-of-service (DDoS) attacks by Hillsboro, Ore. based CNServers LLC. On Sunday evening, security researcher Ron Guilmette placed a phone call to CNServers’ owner, who professed to be shocked by revelations that his company was helping QAnon and 8kun keep the lights on. Within minutes of that call, CNServers told its customer — Spartan Host Ltd., which is registered in Belfast, Northern Ireland — that it would no longer be providing DDoS protection for the set of 254 Internet addresses that Spartan Host was routing on behalf of VanwaTech. 
Contacted by KrebsOnSecurity, the person who answered the phone at CNServers asked not to be named in this story for fear of possible reprisals from the 8kun/QAnon crowd. But they confirmed that CNServers had indeed terminated its service with Spartan Host. That person added they weren’t a fan of either 8kun or QAnon, and said they would not self-describe as a Trump supporter. CNServers said that shortly after it withdrew its DDoS protection services, Spartan Host changed its settings so that VanwaTech’s Internet addresses were protected from attacks by ddos-guard[.]net, a company based in St. Petersburg, Russia. Spartan Host’s founder, 25-year-old Ryan McCully, confirmed CNServers’ report. McCully declined to say for how long VanwaTech had been a customer, or whether Spartan Host had experienced any attacks as a result of CNServers’ action. McCully said while he personally doesn’t subscribe to the beliefs espoused by QAnon or 8kun, he intends to keep VanwaTech as a customer going forward. “We follow the ‘law of the land’ when deciding what we allow to be hosted with us, with some exceptions to things that may cause resource issues etc.,” McCully said in a conversation over instant message. “Just because we host something, it doesn’t say anything about we do and don’t support, our opinions don’t come into hosted content decisions.” But according to Guilmette, Spartan Host’s relationship with VanwaTech wasn’t widely known previously because Spartan Host had set up what’s known as a “private peering” agreement with VanwaTech. That is to say, the two companies had a confidential business arrangement by which their mutual connections were not explicitly stated or obvious to other Internet providers on the global Internet. Guilmette said private peering relationships often play a significant role in a good deal of behind-the-scenes-mischief when the parties involved do not want anyone else to know about their relationship. 
“These arrangements are business agreements that are confidential between two parties, and no one knows about them, unless you start asking questions,” Guilmette said. “It certainly appears that a private peering arrangement was used in this instance in order to hide the direct involvement of Spartan Host in providing connectivity to VanwaTech and thus to 8kun. Perhaps Mr. McCully was not eager to have his involvement known.” 8chan, which rebranded last year as 8kun, has been linked to white supremacism, neo-Nazism, antisemitism, multiple mass shootings, and is known for hosting child pornography. After three mass shootings in 2019 revealed the perpetrators had spread their manifestos on 8chan and even streamed their killings live there, 8chan was ostracized by one Internet provider after another. The FBI last year identified QAnon as a potential domestic terror threat, noting that some of its followers have been linked to violent incidents motivated by fringe beliefs. Further reading: What Is QAnon? QAnon: A Timeline of Violence Linked to the Conspiracy Theory Cory Doctorow — Someone Comes to Town, Someone Leaves Town (part 19) Here’s part nineteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here). This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.” Some show notes: Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc. Here’s how my publisher described it when it came out: Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off. Alan understands. 
He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls. Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge. Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends. Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read. , Cory Doctorow — Stop Techno Dystopia with SRSLY WRONG SRSLY WRONG is a leftist/futuristic podcast incorporating sketches in long-form episodes; I became aware of them last year when Michael Pulsford recommended their series on “library socialism”, an idea I was so stricken by that it made its way into The Lost Cause, a novel I’m writing now. The Wrong Boys invited me on for an episode (Stop Techno Dystopia!) (MP3) as part of the Attack Surface tour and it came out so, so good! Thanks, Wrong Boys! Cory Doctorow — My appearance on the Judge John Hodgman podcast! I’ve been a fan of the Judge John Hodgman podcast for so many years, and often threaten my wife with bringing a case before the judge whenever we have a petty disagreement. I was so pleased to appear on the JJHO podcast (MP3) this week as part of the podcast tour for Attack Surface! Cory Doctorow — Talking writing with the Writing Excuses crew A million years ago, I set sail on the Writing Excuses Cruise, a writing workshop at sea. 
As part of that workshop, I sat down with the Writing Excuses podcast team (Mary Robinette Kowal, Piper J Drake, and Howard Tayler) and recorded a series of short episodes explaining my approach to writing. I had clean forgotten that they saved one to coincide with the release of Attack Surface, until this week’s episode went live (MP3). Listening to it today, I discovered that it was incredibly entertaining! , Cryptogram — Friday Squid Blogging: Interview with a Squid Researcher Interview with Mike Vecchione, Curator of Cephalopoda — now that’s a job title — at the Smithsonian National Museum of Natural History. One reason they’re so interesting is they are intelligent invertebrates. Almost everything that we think of as being intelligent — parrots, dolphins, etc. — are vertebrates, so their brains are built on the same basic structure. Whereas cephalopod brains have evolved from a ring of nerves around the esophagus. It’s a form of intelligence that’s completely independent from ours. As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered. Read my blog posting guidelines here. Cryptogram — Cybersecurity Visuals The Hewlett Foundation just announced its top five ideas in its Cybersecurity Visuals Challenge. The problem Hewlett is trying to solve is the dearth of good visuals for cybersecurity. A Google Images Search demonstrates the problem: locks, fingerprints, hands on laptops, scary looking hackers in black hoodies. Hewlett wanted to go beyond those tropes. I really liked the idea, but find the results underwhelming. It’s a hard problem. Hewlett press release. Cryptogram — Split-Second Phantom Images Fool Autopilots Researchers are tricking autopilots by inserting split-second images into roadside billboards. 
Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind. […] In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video. The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions. The paper: Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. 
We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks. Worse Than Failure — Error'd: Try a Different Address "Aldi doesn't deliver to a ...computer memory address!? Ok, that sounds fair," wrote Oisin. "When you order pictures from this photography studio, you find that what sets them apart from other studios is group photography and not web site design," Stephen D. wrote. John H. writes, "To be honest, I'm a little bit surprised. I figured for sure that it would have been a 50/50 split between meat and booze." 
"I was just trying to perform a trivial merge using GVim, but then ṕ̵̡̛̃̍̊͊̌̉h̸̢̦̭̟̿̉̔̔͛̋̓̑̉́͌͊̒̕͜'̸̛͎̹̦͔̳͎̹͎̥̻̺̦̳͈̞̃̿̐́́͑̒͛̀̋̑̕͠n̸̨̛͉̥̻̘̣̞̉́͂̌͋́̾͗͠g̵̣̫̯̈̈́̕l̷͍̳͚̻̺̀̊͆̂͒͐̀̔́͘͜͝͝͝ừ̷̯̬̜̱̭̳̞͇̠̄̎͗͝į̴̗̺͓̣̘͚̯̳͕̠̗̮̄͛͌̍̈́͌͊͌̓̈́͌̌͝ ̵̼̼̥͍̙̋̇͂͋͐͐͂m̶̥̭͍͕͍̑̏g̸̡͖̜̜̭͈͖͇̦͍͉͚̎͜ͅl̵̡͉̜̀̂̎́͐̑̑͑̍̚ŵ̴̡͚̝̣̬̭̥̙͎̻̽͝'̸̼̩̑́̄͆̾͆̿́̕ṅ̶̛̯͍̰͒́̂̎ǎ̶͇̬̲͕͍̻̞̫̻͕͈͔̍͌͆͌̈̓͛̀̌̿̚͝͠f̸͎͖̰̖̫̪͎̄̈̓̏́̐̄̈̒́̈̾̔̕ḩ̶̠̺̯̦̪͓̜͙̬͎̳̭͊̀̌͂͆́̾̑̾͝ ̶̣̘̟͊̈́͊͗͒͋̊͊̀͛̉͝Ģ̶̛̙̜̗̖̼̺͓͐̀͐͒V̸̢̙̰̟̙͚̗͖̆̈̇̓͘͠͝͝i̶̛̹̳̍͂́͝͝͝m̶̛͈͈̹̯͕̗͐̂͊̇̃̃͌̌̓̄̔̆͘ ̸̢̛̣͓̪͚̘͚̰͖͐́͜R̸͍͈͚̻͕̗̻͉͙͆͌͐͒̓͒̐̓̑̊͒͝'̴̧͚͔̫̼̺͔͎̖͈̞͙͆͆l̷͍̠̄͂̆̒͊̔ẏ̶̢̮͓̄͆́̉̈́̑͗̉͠ͅè̴̡̧̝͙̖̹͓̤̼̻͓̬͚̰̅̋́̑̾͘͜h̷̳͇̖͔̤͇̦̹̮̐̏͛̊͐͒͐ ̷̧͚̔̉̍͆͌̓͗̓̐̐͘w̸̤̝̪͎͇̩̤̳͒̏̒̎̈́̉͗̈́̚̚͝m̴̼̦̩͉̜̳͓͔̟̭͇̜̰̬̋̎͗͆̒̀̍̍̇̔͋̕͝͝ͅê̸̢̡̢̝͓̟̭̞͍̞͈̖̠̲̬̒̐̎̿̇̍́̂͒̏̉͘͠r̸̬̉̈́͆̌͆̈̉ǧ̴̢̙͚͓͇͖͔̩̣͕̞̚e̸̡̢̩̠͙͖̺̥͉̦̟̩͐͘'̵̡̢̧̟͇̭̲̳͕̻̜͇̘̬̙̈̀̓́̄̒̔̒̕͘͠n̶̢̛͎̹̮̻̼̳̜͖̲̂̇̅̾̄̐̊̇̓̒̒̍͘a̸̟̞̗̻͕̘̳͔̿̌͑̌̎̈̊̓͊̊͋͜g̷̢̮̟̠͖̞̤͖̘̻̞̀̄̓͌̾́̉̏͑͐͜l̵̪̫̝̐̃ ̸̮̤̱͇͂̔̂̀̾̀̚f̵̥͌̂̒̐̚h̷̡̤̮͇̥̖̼̙́̌̕ͅţ̴̛̞̦̩͚̝̦̮͎͕̹̖̰̀̋̋͐̍̅̀͛̕̕̚͘͝͠ȁ̸̖̝͈͎̤͇̽͛́̄͆͒̃̓̏͐͊͒̔̌g̸̜̠͉̝̱̳͔̭̦͇̱̘̺͋ͅͅn̶̦̥͈̻͍̠̂̔̊́̑͆̉̈́͝," wrote Ivan.

Robert M. writes, "Look, considering how 2020 is going, 'Invalid Date' could be really any day coming up. Listen - I just want to know, is this before or after Christmas, because like it or not, I still have to do holiday shopping."

,

Krebs on Security — Breach at Dickey’s BBQ Smokes 3M Cards

One of the digital underground’s most popular stores for peddling stolen credit card information began selling a batch of more than three million new card records this week. KrebsOnSecurity has learned the data was stolen in a lengthy data breach at more than 100 Dickey’s Barbeque Restaurant locations around the country.

An ad on the popular carding site Joker’s Stash for “BlazingSun,” which fraud experts have traced back to a card breach at Dickey’s BBQ.
On Monday, the carding bazaar Joker’s Stash debuted “BlazingSun,” a new batch of more than three million stolen card records, advertising “valid rates” of between 90-100 percent. This is typically an indicator that the breached merchant is either unaware of the compromise or has only just begun responding to it.

Multiple companies that track the sale of stolen payment card data say they have confirmed with card-issuing financial institutions that the accounts for sale in the BlazingSun batch have one common theme: All were used at various Dickey’s BBQ locations over the past 13-15 months.

KrebsOnSecurity first contacted Dallas-based Dickey’s on Oct. 13. Today, the company shared a statement saying it was aware of a possible payment card security incident at some of its eateries:

“We received a report indicating that a payment card security incident may have occurred. We are taking this incident very seriously and immediately initiated our response protocol and an investigation is underway. We are currently focused on determining the locations affected and time frames involved. We are utilizing the experience of third parties who have helped other restaurants address similar issues and also working with the FBI and payment card networks. We understand that payment card network rules generally provide that individuals who timely report unauthorized charges to the bank that issued their card are not responsible for those charges.”

The confirmations came from Miami-based Q6 Cyber and Gemini Advisory in New York City. Q6Cyber CEO Eli Dominitz said the breach appears to extend from May 2019 through September 2020.

“The financial institutions we’ve been working with have already seen a significant amount of fraud related to these cards,” Dominitz said.

Gemini says its data indicated some 156 Dickey’s locations across 30 states likely had payment systems compromised by card-stealing malware, with the highest exposure in California and Arizona.
Gemini puts the exposure window between July 2019 and August 2020.

“Low-and-slow” aptly describes the card breach at Dickey’s, which persisted for at least 13 months.

With the threat from ransomware attacks grabbing all the headlines, it may be tempting to assume plain old credit card thieves have moved on to more lucrative endeavors. Alas, cybercrime bazaars like Joker’s Stash have continued plying their trade, undeterred by a push from the credit card associations to encourage more merchants to install credit card readers that require more secure chip-based payment cards.

That’s because there are countless restaurants — usually franchise locations of an established eatery chain — that are left to decide for themselves whether and how quickly they should make the upgrades necessary to dip the chip versus swipe the stripe.

“Dickey’s operates on a franchise model, which often allows each location to dictate the type of point-of-sale (POS) device and processors that they utilize,” Gemini wrote in a blog post about the incident. “However, given the widespread nature of the breach, the exposure may be linked to a breach of the single central processor, which was leveraged by over a quarter of all Dickey’s locations.”

While there have been sporadic reports about criminals compromising chip-based payment systems used by merchants in the U.S., the vast majority of the payment card data for sale in the cybercrime underground is stolen from merchants who are still swiping chip-based cards. This isn’t conjecture; relatively recent data from the stolen card shops themselves bears this out.

In July, KrebsOnSecurity wrote about an analysis by researchers at New York University, which looked at patterns surrounding more than 19 million stolen payment cards that were exposed after the hacking of BriansClub, a top competitor to the Joker’s Stash carding shop.
The NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

Visa and MasterCard instituted new rules in October 2015 that put retailers on the hook for all of the losses associated with counterfeit card fraud tied to breaches if they haven’t implemented chip-based card readers and enforced the dipping of the chip when a customer presents a chip-based card.

Dominitz said he never imagined back in 2015 when he founded Q6Cyber that we would still be seeing so many merchants dealing with magstripe-based data breaches.

“Five years ago I did not expect we would be in this position today with card fraud,” he said. “You’d think the industry in general would have made a bigger dent in this underground economy a while ago.”

Tired of having your credit card re-issued and updating your payment records at countless e-commerce sites every time some restaurant you frequent has a breach? Here’s a radical idea: Next time you visit an eatery (okay, if that ever happens again post-COVID, etc), ask them if they use chip-based card readers. If not, consider taking your business elsewhere.

Worse Than Failure — CodeSOD: A New Generation

Mapping between types can create some interesting challenges. Michal has one of those scenarios. The code comes to us heavily anonymized, but let’s see what we can do to understand the problem and the solution.

There is a type called ItemA. ItemA is a model object on one side of a boundary, and the consuming code doesn’t get to touch ItemA objects directly, it instead consumes one of two different types: ItemB, or SimpleItemB.

The key difference between ItemB and SimpleItemB is that they have different validation rules. It’s entirely possible that an instance of ItemA may be a valid SimpleItemB and an invalid ItemB. If an ItemA contains values for exactly the five required importantDataPiece fields, and everything else is null, it should turn into a SimpleItemB. Otherwise, it should turn into an ItemB.

Michal adds: “Also noteworthy is the fact that ItemA is a class generated from XML schemas.”

The Java class was generated, but not the conversion method.

public ItemB doConvert(ItemA itemA) {
    final ItemA emptyItemA = new ItemA();
    emptyItemA.setId(itemA.getId());
    emptyItemA.setIndex(itemA.getIndex());
    emptyItemA.setOp(itemA.getOp());

    final String importantDataPiece1 = itemA.importantDataPiece1();
    final String importantDataPiece2 = itemA.importantDataPiece2();
    final String importantDataPiece3 = itemA.importantDataPiece3();
    final String importantDataPiece4 = itemA.importantDataPiece4();
    final String importantDataPiece5 = itemA.importantDataPiece5();

    itemA.withImportantDataPiece1(null)
        .withImportantDataPiece2(null)
        .withImportantDataPiece3(null)
        .withImportantDataPiece4(null)
        .withImportantDataPiece5(null);

    final boolean isSimpleItem = itemA.equals(emptyItemA)
        && importantDataPiece1 != null && importantDataPiece2 != null
        && importantDataPiece3 != null && importantDataPiece4 != null;

    itemA.withImportantDataPiece1(importantDataPiece1)
        .withImportantDataPiece2(importantDataPiece2)
        .withImportantDataPiece3(importantDataPiece3)
        .withImportantDataPiece4(importantDataPiece4)
        .withImportantDataPiece5(importantDataPiece5);

    if (isSimpleItem) {
        return simpleItemConverter.convert(itemA);
    } else {
        return itemConverter.convert(itemA);
    }
}

We start by making a new instance of ItemA, emptyItemA, and copy a few values over to it. Then we clear out the five required fields (after caching the values in local variables). We rely on .equals, generated off that XML schema, to see if this newly created item is the same as our recently cleared out input item. If they are, and none of the required fields are null, we know this will be a SimpleItemB. We’ll put the required fields back into the input object, and then call the appropriate conversion methods.

Let’s restate the goal of this method, to understand how ugly it is: if an object has five required values and nothing else, it’s SimpleItemB, otherwise it’s a regular ItemB. The way this developer decided to perform this check wasn’t by examining the fields (which, in their defense, are being generated, so you might need reflection to do the inspection), but by this unusual choice of equality test. Create an empty object, copy a few ID related elements into it, and your default constructor should handle nulling out all the things which should be null, right?

Or, as Michal sums it up:

The intention of the above snippet appears to be checking whether itemA contains all fields mandatory for SimpleItemB and none of the other ones. Why the original author started by copying some fields to his ‘template item’ but switched to the ‘rip the innards of the original object, check if the ravaged carcass is equal to the barebones template, and then stuff the guts back in and pretend nothing ever happened’ approach halfway through? I hope I never find out what it’s like to be in a mental state when any part of this approach seems like a good idea.

Ugly, yes, but still, this code worked… until it didn’t. Specifically, a new nullable Boolean field was added to ItemA which was used by ItemB, but had no impact on SimpleItemB. This should have continued to work, except that the original developer defaulted the new field to false in the constructor, but didn’t update the doConvert method, so equals started deciding that our input item and our “empty” copy no longer matched. Downstream code started getting invalid ItemB objects when it should have been getting valid SimpleItemB objects, which triggered many hours of debugging to try and understand why this small change had such cascading effects.
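A check that inspects the fields directly sidesteps that whole failure mode. Below is a sketch (Python for brevity, with a hypothetical stand-in for the anonymized ItemA type; this is not Michal's actual refactor): an item is "simple" exactly when every required field is present and every other non-identity field is empty. There is no mutation and no reliance on a generated equals(), so a new defaulted field can't silently flip the result.

```python
# Hypothetical sketch: classify "simple vs. full" by inspecting fields
# directly instead of blanking the object and comparing it to a template.
# ItemA and its field names stand in for the article's anonymized types.
from dataclasses import dataclass, fields

REQUIRED = {"piece1", "piece2", "piece3", "piece4", "piece5"}
IDENTITY = {"id", "index", "op"}  # carried over, irrelevant to the decision


@dataclass
class ItemA:
    id: int = 0
    index: int = 0
    op: str = ""
    piece1: str = None
    piece2: str = None
    piece3: str = None
    piece4: str = None
    piece5: str = None
    extra: str = None  # any populated non-identity extra disqualifies "simple"


def is_simple(item: ItemA) -> bool:
    for f in fields(item):
        value = getattr(item, f.name)
        if f.name in REQUIRED:
            if value is None:
                return False  # a required field is missing
        elif f.name not in IDENTITY and value is not None:
            return False      # an optional field is populated
    return True
```

Because the decision reads each field once and ignores identity fields explicitly, adding a new defaulted field only affects the outcome if someone consciously decides which set it belongs to.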

Michal refactored this code, but was not the first person to touch it recently:

A cherry on top is the fact that importantDataPiece5 came about a few years after the original implementation. Someone saw this code, contributed to the monstrosity, and happily kept on trucking.

,

Cryptogram — Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

Worse Than Failure — CodeSOD: Nothing But Garbage

Janell found herself on a project where most of her team were offshore developers she only interacted with via email, plus a tech lead who had never programmed on the .NET Framework but had done some VB6 “back in the day”, and thus “knew VB”.

The team dynamic rapidly became a scenario where the tech lead issued an edict, the offshore team blindly applied it, and then Janell was left staring at the code wondering how to move forward with this.

These decrees weren’t all bad. For example: “Avoid repeated code by refactoring into methods” isn’t the worst advice. It’s not complete advice (there are plenty of gotchas in it), but it fits on a bumper sticker and generally leads to better code.

There were other rules that… well:

To improve the performance of the garbage collector, all variables must be set to nothing (null) at the end of each method.

Any time someone says something like “to improve the performance of the garbage collector,” you know you’re probably in for a bad time. This is no exception. Now, in old versions of VB, there’s a lot of “stuff” about whether or not you need to do this. It was considered a standard practice in a lot of places, though the real WTF was clearly VB in this case.

In .NET, this is absolutely unnecessary, and there are much better approaches if you need to do some sort of cleanup, like implementing the IDisposable interface. But, since this was the rule, the offshore team followed the rule. And, since we have repeated code, the offshore team followed that rule too.

Thus:

Public Sub SetToNothing(ByVal object As Object)
object = Nothing
End Sub

If there is a platonic ideal of bad code, this is very close to that. This method attempts to solve a non-problem, regarding garbage collection. It also attempts to be “DRY”, by replacing one repeated line with… a different repeated line. And, most important: it doesn’t do what it claims.

The key here is that the parameter is ByVal. The copy of the reference inside this method is set to nothing, but the original reference in the calling code is left unchanged.
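The same no-op is easy to demonstrate in any language that passes references by value; here is a minimal Python sketch of what the ByVal version actually does:

```python
# Rebinding a parameter only changes the function-local copy of the
# reference -- just like assigning Nothing to a ByVal parameter in VB.NET.
def set_to_nothing(obj):
    obj = None  # clears the local name only; the caller's variable is untouched
    return obj


payload = {"big": "object"}
result = set_to_nothing(payload)

# The caller's reference survives; only the function's local copy was cleared.
assert payload == {"big": "object"}
assert result is None
```

The garbage collector never sees any difference: the caller still holds a live reference, so the object stays exactly as reachable as it was before the call.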

Oh, but remember when I said, “they were attempting to be DRY”? I lied. I’ll let Janell explain:

I found this gem in one of the developers’ classes. It turned out that he had copied and pasted it into every class he had worked on, for good measure.

,

Cryptogram — Swiss-Swedish Diplomatic Row Over Crypto AG

Previously I have written about the Swedish-owned Swiss-based cryptographic hardware company: Crypto AG. It was a CIA-owned Cold War operation for decades. Today it is called Crypto International, still based in Switzerland but owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a meeting with her Swiss counterpart Ignazio Cassis slated for this month after Switzerland placed an export ban on Crypto International, a Swiss-based and Swedish-owned cybersecurity company.

The ban was imposed while Swiss authorities examine long-running and explosive claims that a previous incarnation of Crypto International, Crypto AG, was little more than a front for U.S. intelligence-gathering during the Cold War.

Linde said the Swiss ban was stopping “goods” — which experts suggest could include cybersecurity upgrades or other IT support needed by Swedish state agencies — from reaching Sweden.

She told public broadcaster SVT that the meeting with Cassis was “not appropriate right now until we have fully understood the Swiss actions.”

EDITED TO ADD (10/13): Lots of information on Crypto AG.

Krebs on Security — Microsoft Patch Tuesday, October 2020 Edition

It’s Cybersecurity Awareness Month! In keeping with that theme, if you (ab)use Microsoft Windows computers you should be aware the company shipped a bevy of software updates today to fix at least 87 security problems in Windows and programs that run on top of the operating system. That means it’s once again time to backup and patch up.

Eleven of the vulnerabilities earned Microsoft’s most-dire “critical” rating, which means bad guys or malware could use them to gain complete control over an unpatched system with little or no help from users.

Worst in terms of outright scariness is probably CVE-2020-16898, which is a nasty bug in Windows 10 and Windows Server 2019 that could be abused to install malware just by sending a malformed packet of data at a vulnerable system. CVE-2020-16898 earned a CVSS Score of 9.8 (10 is the most awful).

Security vendor McAfee has dubbed the flaw “Bad Neighbor,” and in a blog post about it said a proof-of-concept exploit shared by Microsoft with its partners appears to be “both extremely simple and perfectly reliable,” noting that this sucker is eminently “wormable” — i.e. capable of being weaponized into a threat that spreads very quickly within networks.

“It results in an immediate BSOD (Blue Screen of Death), but more so, indicates the likelihood of exploitation for those who can manage to bypass Windows 10 and Windows Server 2019 mitigations,” McAfee’s Steve Povolny wrote. “The effects of an exploit that would grant remote code execution would be widespread and highly impactful, as this type of bug could be made wormable.”

Trend Micro’s Zero Day Initiative (ZDI) calls special attention to another critical bug quashed in this month’s patch batch: CVE-2020-16947, which is a problem with Microsoft Outlook that could result in malware being loaded onto a system just by previewing a malicious email in Outlook.

“The Preview Pane is an attack vector here, so you don’t even need to open the mail to be impacted,” said ZDI’s Dustin Childs.

While there don’t appear to be any zero-day flaws in October’s release from Microsoft, Todd Schell from Ivanti points out that a half-dozen of these flaws were publicly disclosed prior to today, meaning bad guys have had a jump start on being able to research and engineer working exploits.

Other patches released today tackle problems in Exchange Server, Visual Studio, .NET Framework, and a whole mess of other core Windows components.

For any of you who’ve been pining for a Flash Player patch from Adobe, your days of waiting are over. After several months of depriving us of Flash fixes, Adobe’s shipped an update that fixes a single — albeit critical — flaw in the program that crooks could use to install bad stuff on your computer just by getting you to visit a hacked or malicious website.

Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Mercifully, Adobe is slated to retire Flash Player later this year, and Microsoft has said it plans to ship updates at the end of the year that will remove Flash from Windows machines.

It’s a good idea for Windows users to get in the habit of updating at least once a month, but for regular users (read: not enterprises) it’s usually safe to wait a few days until after the patches are released, so that Microsoft has time to iron out any chinks in the new armor.

But before you update, please make sure you have backed up your system and/or important files. It’s not uncommon for a Windows update package to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram — US Cyber Command and Microsoft Are Both Disrupting TrickBot

Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.
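The reason that one config change neuters a bot is that 127.0.0.1 can never leave the infected machine. A quick check with Python's standard ipaddress module (illustrative only; not from the Intel 471 analysis) shows why:

```python
import ipaddress

# 127.0.0.1 is a loopback address: packets sent to it never leave the host,
# so a bot told to phone home there can no longer reach any real controller.
c2 = ipaddress.ip_address("127.0.0.1")
assert c2.is_loopback
assert not c2.is_global  # not routable on the public Internet

# Contrast with an ordinary publicly routable address.
public = ipaddress.ip_address("93.184.216.34")
assert public.is_global and not public.is_loopback
```

Every infected machine that accepted the pushed config was, in effect, told that its command-and-control server was itself.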

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[…]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of copyright law.

Cryptogram — 2020 Workshop on Economics of Information Security

The Workshop on Economics of Information Security will be online this year. Register here.

Cory Doctorow — Attack Surface is out!

Today is the US/Canada release-date for Attack Surface, the third Little Brother book. It’s been a long time coming (Homeland, the second book, came out in 2013)!

It’s the fourth book I’ve published in 2020, and it’s my last book of the year.

https://us.macmillan.com/books/9781250757531

When the lockdown hit in March, I started thinking about what I’d do if my US/Canada/UK/India events were all canceled, but I still treated it as a distant contingency. But well, here we are!

My US publisher, Tor Books, has put together a series of 8 ticketed events, each with a pair of brilliant, fascinating guests, to break down all the themes in the book. Each is sponsored by a different bookstore and each comes with a copy of the book.

We kick off the series TONIGHT at 5PM Pacific/8PM Eastern with “Politics and Protest,” sponsored by The Strand NYC, with guests Eva Galperin (EFF Threat Lab) and Ron Deibert (U of T Citizen Lab).

https://www.strandbooks.com/events/event93?title=cory_doctorow_attack_surface

There will be video releases of these events eventually, but if you want to attend more than one and don’t need more than one copy of the book, you can donate your copy to a school, prison, library, etc. Here’s a list of institutions seeking copies:

https://craphound.com/littlebrother/2020/10/05/as-freebies/

And if you are affiliated with an organization or institution that would like to put your name down for a freebie, here’s the form. I’m checking it several times/day and adding new entries to the list:

I got a fantastic surprise this morning: a review by Paul Di Filippo in the Washington Post:

https://www.washingtonpost.com/entertainment/books/cory-doctorows-attack-surface-is-a-riveting-techno-thriller/2020/10/13/a3a178d0-0cb9-11eb-8074-0e943a91bf08_story.html

He starts by calling me “among the best of the current practitioners of near-future sf,” and, incredibly, the review only gets better after that!

Di Filippo says the book is a “political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance,” whose hero, Masha, is “a protagonist worth rooting for, whose inner conflicts and cognitive dissonances propel her to surprising, heroic actions.”

He closes by saying that my work “charts the universal currents of the human heart and soul with precision.”

I mean, wow.

If you’d prefer an audiobook of Attack Surface, you’re in luck! I produced my own audio edition of the book, with Amber Benson narrating, and it’s amazing!

Those of you who backed the audio on Kickstarter will be getting your emails from BackerKit shortly (I’ve got an email in to them and will post an update to the KS as soon as they get back to me).

If you missed the presale, you can still get the audio, everywhere EXCEPT Audible, who refuse to carry my work as it’s DRM-free (that’s why I had to fund the audiobook; publishers aren’t interested in the rights to audio that can’t be sold on the dominant platform).

Here’s some of the stores carrying the book today:

Supporting Cast (Audiobooks from Slate):
https://attacksurface.supportingcast.fm/

I expect that both Downpour and Google Play should have copies for sale any minute now (both have the book in their systems but haven’t made it live yet).

And of course, you can get it direct from me, along with all my other ebooks and audiobooks:

https://craphound.com/shop

Worse Than Failure — Slow Load

After years spent supporting an enterprisey desktop application with a huge codebase full of WTFs, Sammy thought he had seen all there was to be seen. He was about to find out how endlessly deep the bottom of the WTF barrel truly was.

During development, Sammy frequently had to restart said application: a surprisingly onerous process, as it took about 30 seconds each and every time to return to a usable state. Eventually, a mix of curiosity and annoyance spurred him into examining just why it took so long to start.

He began by profiling the performance. When the application first initialized, it performed 10 seconds of heavy processing. Then the CPU load dropped to 0% for a full 16 seconds. After that, it increased, pegging out one of the eight cores on Sammy's machine for 4 seconds. Finally, the application was ready to accept user input. Sammy knew that, for at least some of the time, the application was calling out to and waiting for a response from a server. That angle would have to be investigated as well.

Further digging and hair-pulling led Sammy to a buried bit of code, a Very Old Mechanism for configuring the visibility of items on the main menu. While some menu items were hard-coded, others were dynamically added by extension modules. The application administrator had the ability to hide or show any of them by switching checkboxes in a separate window.

When the application first started up, it retrieved the user's latest configuration from the server, applied it to their main menu, then sent the resulting configuration back to the server. The server, in turn, ran DELETE FROM MENU_CFG; INSERT INTO MENU_CFG (…); INSERT INTO MENU_CFG (…); …

Sammy didn't know what was the worst thing about all this. Was it the fact that the call to the server was performed synchronously for no reason? Or, that after multiple DELETE / INSERT cycles, the table of a mere 400 rows weighed more than 16 MB? When the users all came in to work at 9:00 AM and started up the application at roughly the same time, their concurrent transactions caused quite the bottleneck—and none of it was necessary. The Very Old Mechanism had been replaced with role-based configuration years earlier.
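Even if the mechanism had still been needed, the delete-everything-and-reinsert round trip could have been a set of idempotent upserts keyed by menu item. A hedged sketch using Python's sqlite3 and a hypothetical MENU_CFG(user_id, item, visible) schema (the real table's columns aren't given in the article; ON CONFLICT upserts need SQLite 3.24+):

```python
import sqlite3

# Hypothetical schema standing in for the article's MENU_CFG table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MENU_CFG (
    user_id INTEGER NOT NULL,
    item    TEXT    NOT NULL,
    visible INTEGER NOT NULL,
    PRIMARY KEY (user_id, item)
)""")


def save_menu_config(user_id, config):
    # One upsert per menu item: existing rows are updated in place,
    # instead of the DELETE-then-INSERT churn that bloated the real table.
    conn.executemany(
        """INSERT INTO MENU_CFG (user_id, item, visible) VALUES (?, ?, ?)
           ON CONFLICT(user_id, item) DO UPDATE SET visible = excluded.visible""",
        [(user_id, item, int(vis)) for item, vis in config.items()],
    )
    conn.commit()


save_menu_config(1, {"reports": True, "admin": False})
save_menu_config(1, {"reports": False, "admin": False})  # updates, not re-creates
```

With stable primary keys the row count stays fixed at the number of menu items, and concurrent 9:00 AM logins contend over single-row updates rather than whole-table rewrites.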

All Sammy could do was write up a change request to put in JIRA. Speaking of bottlenecks ...

,

Cory Doctorow — Someone Comes to Town, Someone Leaves Town (part 18)

Here’s part eighteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

Content warning for domestic abuse and sexual violence.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Some show notes:

Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc.

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

One of the things we learned from the Snowden documents is that the NSA conducts “about” searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An about search would be something like “show me anyone who has used this particular name in a communication,” or “show me anyone who was at this particular location within this time frame.” These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man ­ his stepfather ­ sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

ME — First Attempt at Gnocchi-Statsd

I’ve been investigating the options for tracking system statistics to diagnose performance problems. The idea is to track all sorts of data about the system (network use, disk IO, CPU, etc) and look for correlations at times of performance problems. DataDog is pretty good for this but expensive; it’s apparently based on or inspired by the Etsy Statsd. It’s claimed that gnocchi-statsd is the best implementation of the protocol used by the Etsy Statsd, so I decided to install that.

I use Debian/Buster for this as that’s what I’m using for the hardware that runs KVM VMs. Here is what I did:

# it depends on a local MySQL database
# install the basic packages for gnocchi
apt -y install gnocchi-common python3-gnocchiclient gnocchi-statsd uuid


In the Debconf prompts I told it to “setup a database” and not to manage keystone_authtoken with debconf (because I’m not doing a full OpenStack installation).

This gave a non-working configuration as it didn’t configure the MySQL database for the [indexer] section, and the sqlite database that was configured didn’t work for unknown reasons. I filed Debian bug #971996 about this [1]. To get this working you need to edit /etc/gnocchi/gnocchi.conf and change the url line in the [indexer] section to something like the following (where the password is taken from the [database] section).

url = mysql+pymysql://gnocchi-common:PASS@localhost:3306/gnocchidb


To get the statsd interface going you have to install the gnocchi-statsd package and edit /etc/gnocchi/gnocchi.conf to put a UUID in the resource_id field (the Debian package uuid is good for this). I filed Debian bug #972092 requesting that the UUID be set by default on install [2].
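The manual step above can also be scripted. Here's a small sketch, assuming resource_id lives in the [statsd] section of gnocchi.conf (it writes to a temporary file here for illustration rather than touching the real /etc/gnocchi/gnocchi.conf):

```python
import configparser
import os
import tempfile
import uuid

# Build a fragment of gnocchi.conf with a freshly generated UUID.
# Assumption: resource_id belongs in the [statsd] section, as per the post.
conf = configparser.ConfigParser()
conf["statsd"] = {}
conf["statsd"]["resource_id"] = str(uuid.uuid4())

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as fh:
    conf.write(fh)
    path = fh.name

# Read it back to confirm the value round-trips through the INI format.
check = configparser.ConfigParser()
check.read(path)
resource_id = check["statsd"]["resource_id"]
os.unlink(path)
```

This is essentially what the requested Debconf change would do automatically at install time.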

Here’s an official page about how to operate Gnocchi [3]. The main thing I got from this was that the following commands need to be run from the command-line (I ran them as root in a VM for test purposes but would do so with minimum privs for a real deployment).

gnocchi-api
gnocchi-metricd


To communicate with Gnocchi you need the gnocchi-api program running, which uses the uwsgi program to provide the web interface by default. It seems that this was written for a version of uwsgi different than the one in Buster. I filed Debian bug #972087 with a patch to make it work with uwsgi [4]. Note that I didn’t get to the stage of an end to end test, I just got it to basically run without error.

After getting “gnocchi-api” running (in a terminal not as a daemon as Debian doesn’t seem to have a service file for it), I ran the client program “gnocchi” and then gave it the “status” command which failed (presumably due to the metrics daemon not running), but at least indicated that the client and the API could communicate.

Then I ran the “gnocchi-metricd” and got the following error:

2020-10-12 14:59:30,491 [9037] ERROR    gnocchi.cli.metricd: Unexpected error during processing job
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 87, in run
    self._run_job()
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 248, in _run_job
    self.coord.update_capabilities(self.GROUP_ID, self.store.statistics)
  File "/usr/lib/python3/dist-packages/tooz/coordination.py", line 592, in update_capabilities
    raise tooz.NotImplemented
tooz.NotImplemented


At this stage I’ve had enough of gnocchi. I’ll give the Etsy Statsd a go next.

Update

Thomas has responded to this post [5]. At this stage I’m not really interested in giving Gnocchi another go. There’s still the issue of the indexer database which should be different from the main database somehow and sqlite (the config file default) doesn’t work.

I expect that if I was to persist with Gnocchi I would encounter more poorly described error messages from the code which either don’t have Google hits when I search for them or have Google hits to unanswered questions from 5+ years ago.

The Gnocchi systemd config files are in different packages from the programs; this confused me and I thought that there weren’t any systemd service files. I had expected that installing a package with a daemon binary would also get the systemd unit file to match.

The cluster features of Gnocchi are probably really good if you need that sort of thing. But if you have a small instance (e.g. a single VM server) then it’s not needed. Also one of the original design ideas of the Etsy Statsd was that UDP was used because data could just be dropped if there was a problem. I think for many situations the same concept could apply to the entire stats service.
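The fire-and-forget property of UDP statsd is easy to see in code. Here is a minimal sketch of a statsd-style emitter in Python; the host, port, and metric names are placeholders, and the "name:value|type" wire format is the Etsy statsd convention:

```python
import socket

# Placeholder target; 8125 is the conventional statsd port.
STATSD_HOST, STATSD_PORT = "127.0.0.1", 8125

def format_metric(name, value, kind):
    # kind: "c" = counter, "g" = gauge, "ms" = timer (Etsy statsd types)
    return f"{name}:{value}|{kind}".encode()

def send_metric(name, value, kind="c"):
    # One UDP datagram per sample; if nothing is listening, the datagram
    # is simply dropped and the caller never blocks or sees an error.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(format_metric(name, value, kind), (STATSD_HOST, STATSD_PORT))
    finally:
        sock.close()

send_metric("vmhost.disk_io", 512, "g")  # silently dropped if no collector runs
```

Because nothing waits for an acknowledgement, a dead or overloaded collector costs you samples rather than application latency, which is exactly the trade-off described above.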

If the other statsd programs don’t do what I need then I may give Gnocchi another go.

Sociological Images — Sociology IRL

One of the goals of this blog is to help get sociology to the public by offering short, interesting comments on what our discipline looks like out in the world.

We live sociology every day, because it is the science of relationships among people and groups. But because the name of our discipline is kind of a buzzword itself, I often find excellent examples of books in the nonfiction world that are deeply sociological, even if that isn’t how their authors or publishers would describe them.

Last year, I had the good fortune to help a friend as he was working on one of these books. Now that the release date is coming up, I want to tell our readers about the project because I think it is an excellent example of what happens when ideas from our discipline make it out into the “real” world beyond academia. In fact, the book is about breaking down that idea of the “real world” itself. It is called IRL: Finding realness, meaning, and belonging in our digital lives, by Chris Stedman.

In IRL, Chris tackles big questions about what it means to be authentic in a world where so much of our social interaction is now taking place online. The book goes to deep places, but it doesn’t burden the reader with an overly-serious tone. Instead, Chris brings a lightness by blending memoir, interviews, and social science, all arranged in vignettes so that reading feels like scrolling through a carefully curated Instagram feed.

What makes this book stand out to me is that Chris really brings the sociology here. In the pages of IRL I spotted Zeynep Tufekci’s Twitter and Tear Gas, Mario Small’s Someone to Talk To, Nathan Jurgenson’s work on digital dualism, Jacqui Frost’s work on navigating uncertainty, Paul McClure on technology and religion, and a nod to some work with yours truly about nonreligious folks. To see Chris citing so many sociologists, among the other essayists and philosophers that inform his work, really gives you a sense of the intellectual grounding here and what it looks like to put our field’s ideas into practice.

Above all, I think the book is worth your time because it is a glowing example of what it means to think relationally about our own lives and the lives of others. That makes Chris’ writing a model for the kind of reflections many of us have in mind when we assign personal essays to our students in Sociology 101, not because it is basic, but because it is willing to deeply consider how we navigate our relationships today and how those relationships shape us, in turn.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

Krebs on Security — Microsoft Uses Trademark Law to Disrupt Trickbot Botnet

Microsoft Corp. has executed a coordinated legal sneak attack in a bid to disrupt the malware-as-a-service botnet Trickbot, a global menace that has infected millions of computers and is used to spread ransomware. A court in Virginia granted Microsoft control over many Internet servers Trickbot uses to plunder infected systems, based on novel claims that the crime machine abused the software giant’s trademarks. However, it appears the operation has not completely disabled the botnet.

A spam email containing a Trickbot-infected attachment that was sent earlier this year. Image: Microsoft.

“We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world,” wrote Tom Burt, corporate vice president of customer security and trust at Microsoft, in a blog post this morning about the legal maneuver. “We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.”

Microsoft’s action comes just days after the U.S. military’s Cyber Command carried out its own attack that sent all infected Trickbot systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control them. The roughly 10-day operation by Cyber Command also stuffed millions of bogus records about new victims into the Trickbot database in a bid to confuse the botnet’s operators.

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

From the civil complaint Microsoft filed on October 6 with the U.S. District Court for the Eastern District of Virginia:

“However, they still bear the Microsoft and Windows trademarks. This is obviously meant to and does mislead Microsoft’s customers, and it causes extreme damage to Microsoft’s brands and trademarks.”

“Users subject to the negative effects of these malicious applications incorrectly believe that Microsoft and Windows are the source of their computing device problems. There is great risk that users may attribute this problem to Microsoft and associate these problems with Microsoft’s Windows products, thereby diluting and tarnishing the value of the Microsoft and Windows trademarks and brands.”

Microsoft said it will leverage the seized Trickbot servers to identify and assist Windows users impacted by the Trickbot malware in cleaning the malware off of their systems.

Trickbot has been used to steal passwords from millions of infected computers, and reportedly to hijack access to well more than 250 million email accounts from which new copies of the malware are sent to the victim’s contacts.

Trickbot’s malware-as-a-service feature has made it a reliable vehicle for deploying various strains of ransomware, locking up infected systems on a corporate network unless and until the company agrees to make an extortion payment.

A particularly destructive ransomware strain that is closely associated with Trickbot — known as “Ryuk” or “Conti” — has been responsible for costly attacks on countless organizations over the past year, including healthcare providers, medical research centers and hospitals.

One recent Ryuk victim is Universal Health Services (UHS), a Fortune 500 hospital and healthcare services provider that operates more than 400 facilities in the U.S. and U.K.

On Sunday, Sept. 27, UHS shut down its computer systems at healthcare facilities across the United States in a bid to stop the spread of the malware. The disruption caused some of the affected hospitals to redirect ambulances and relocate patients in need of surgery to other nearby hospitals.

Microsoft said it did not expect its action to permanently disrupt Trickbot, noting that the crooks behind the botnet will likely make efforts to revive their operations. But so far it’s not clear whether Microsoft succeeded in commandeering all of Trickbot’s control servers, or when exactly the coordinated seizure of those servers occurred.

As the company noted in its legal filings, the set of Internet addresses used as Trickbot controllers is dynamic, making attempts to disable the botnet more challenging.

Indeed, according to real-time information posted by Feodo Tracker, a Swiss security site that tracks Internet servers used as controllers for Trickbot and other botnets, nearly two dozen Trickbot control servers — some of which first went active at the beginning of this month — are still live and responding to requests at the time of this publication.

Trickbot control servers that are currently online. Source: Feodotracker.abuse.ch

Cyber intelligence firm Intel 471 says fully taking down Trickbot would require an unprecedented level of collaboration among parties and countries that most likely would not cooperate anyway. That’s partly because Trickbot’s primary command and control mechanism supports communication over The Onion Router (TOR) — a distributed anonymity service that is wholly separate from the regular Internet.

“As a result, it is highly likely a takedown of the Trickbot infrastructure would have little medium- to long-term impact on the operation of Trickbot,” Intel 471 wrote in an analysis of Microsoft’s action.

What’s more, Trickbot has a fallback communications method that uses a decentralized domain name system called EmerDNS, which allows people to create and use domains that cannot be altered, revoked or suspended by any authority. The highly popular cybercrime store Joker’s Stash — which sells millions of stolen credit cards — also uses this setup.

From the Intel 471 report [malicious links and IP address defanged with brackets]:

“In the event all Trickbot infrastructure is taken down, the cybercriminals behind Trickbot will need to rebuild their servers and change their EmerDNS domain to point at their new servers. Compromised systems then should be able to connect to the new Trickbot infrastructure. Trickbot’s EmerDNS fall-back domain safetrust[.]bazar recently resolved to the IP address 195.123.237[.]156. Not coincidentally, this network neighborhood also hosts Bazar malware control servers.”

“Researchers previously attributed the development of the Bazar malware family to the same group behind Trickbot, due to code similarities with the Anchor malware family and its methods of operation, such as shared infrastructure between Anchor and Bazar. On Oct. 12, 2020 the fall-back domain resolved to the IP address 23.92.93[.]233, which was confirmed by Intel 471 Malware Intelligence systems to be a Trickbot controller URL in May 2019. This suggests the fall-back domain is still controlled by the Trickbot operators at the time of this report.”

Intel 471 concluded that the Microsoft action has so far done little to disrupt the botnet’s activity.

“At the time of this report, Intel 471 has not seen any significant impact on Trickbot’s infrastructure and ability to communicate with Trickbot-infected systems,” the company wrote.

The legal filings from Microsoft are available here.

Update, 9:51 a.m. ET: Feodo Tracker now lists just six Trickbot controllers as responding. All six were first seen online in the past 48 hours. Also added perspective from Intel 471.

Kevin Rudd — ABC TV: Murdoch Royal Commission

E&OE TRANSCRIPT
TV INTERVIEW
ABC NEWS BREAKFAST
12 OCTOBER 2020

The post ABC TV: Murdoch Royal Commission appeared first on Kevin Rudd.

Kevin Rudd — ABC RN: Murdoch Royal Commission

E&OE TRANSCRIPT

ABC RN BREAKFAST
12 OCTOBER 2020

Topics: #MurdochRoyalCommission petition

Fran Kelly
Former prime minister Kevin Rudd has escalated his war with the Murdoch media empire, which includes the Australian newspaper and a number of major selling daily tabloids. Kevin Rudd has set up a parliamentary petition, which calls for a royal commission into News Corp’s domination of the Australian media landscape, which he has branded, quote, a cancer on democracy. More than 73,000 signatures have been gathered in the 48 hours since the petition went live on the parliamentary website on Saturday morning. In the meantime, Rupert Murdoch’s son James Murdoch, has told The New York Times that he left his family’s company over concerns it was disguising facts and spreading disinformation. Kevin Rudd joins us from Brisbane. Kevin Rudd, welcome back to Breakfast.

Kevin Rudd
Thanks for having me on the programme, Fran.

Fran Kelly
You’re... you want a royal commission into what you generalise as, quote, the threats to media diversity, and that includes Nine’s takeover of the Melbourne Age and the Sydney Morning Herald. But when you drill down, you are going after News Corp, aren’t you? You are out to get Rupert Murdoch. Everyone else would just be collateral damage.

Kevin Rudd
The bottom line here is, as a former prime minister of the country, Fran, I’m passionate about the future of our democracy and a free independent and balanced media is the lifeblood of democracy. And I think most people would agree, I think most of your listeners, in the last decade or so this has just eroded. It’s the further concentration of Murdoch’s control over print media. 70% of the print readership in this country is owned by Murdoch. In my home state of Queensland virtually every single newspaper up and down the coast from Cairns to the Gold Coast is owned by Murdoch. And the ABC’s funding is under assault. So for those reasons, I think it’s high time that we had an independent, dispassionate examination of the impact of media monopoly on Australian democratic life and alternative models aimed at diversifying media ownership into the future.

Fran Kelly
And in the petition, you refer to, quote, how Australia’s print media is overwhelmingly controlled by News Corp, as you just said, with two thirds of daily newspaper readership, this power is routinely used to attack opponents in business and politics by blending editorial opinion with news reporting. What’s the example that you want to give us to show how the blending of editorial opinion with news reporting has actually become an arrogant cancer and harmed Australia?

Kevin Rudd
Well I could give you dozens of examples of this across the nation’s tabloid newspapers and with the national daily, which is the Australian. If you were simply to go back to, for example, the 2013 election, the front page of I think the Sunday Telegraph was ‘we need Tony’, that was it. ‘We need Tony’ and a huge photograph of Tony Abbott. Now, I would suggest to you, Fran, that by any person’s definition, that’s a blending of an editorial view with news coverage. If on the front page of the same paper, it has me and Anthony Albanese, or The Daily Telegraph, dressed in Nazi uniforms, I would say that’s a blending of editorial opinion with news.

Fran Kelly
It’s also transparent isn’t it, Kevin Rudd? I mean, it’s transparent what they’re doing. The readers presumably know what they’re doing, and buy those papers knowing what they’re doing. It’s not as if newspaper circulation is, you know, indicating that they’re necessarily the leader of, you know, opinion in the media anymore. I mean, things are changing. We’ve got news websites, people access news from all over the world.

Kevin Rudd
So why does Murdoch choose to continue to invest in essentially loss-making newspapers, Fran?

Fran Kelly
For influence.

Kevin Rudd
Yeah, the reason is political influence and power. So if you’re in my home state of Queensland, and as I said before, you can’t go anywhere else in terms of print. Your daily paper in southeast Queensland is the Courier-Mail. And then it’s the Gold Coast Bulletin or the Sunshine Coast Daily, the Bundaberg News-Mail, the Mackay Mercury, the Rockhampton Morning Bulletin, the Townsville Bulletin, the Cairns Post and others. And the reason is –

Fran Kelly
But it’s not like people only get their news from reading the Townsville Bulletin.

Kevin Rudd
No, but the reason they do it, and I’ve been in many many regional radio bureaux right around the country where, what flops onto the desk of the regional newsroom? It’s that day’s Murdoch print media because they know what happens is it helps set the agenda and the tone and the framing and the parameters of the national political debate. That’s why they do it. And so therefore, my question simply to a royal commission would be: is this poisoning our democracy? I believe over time, it is. And, for example, today we have this stunning statement by James Murdoch that the reason he left the board of News Corporation was because it was participating in the legitimisation of disinformation. Yet in today’s Australian Murdoch media, not a single reference to what James Murdoch has said; James, having helped run the organisation for decades. So this is monumental news for those of us concerned about whether in fact, we’ve got balanced news reporting or the slanting of editorial opinion.

Fran Kelly
Indeed, the comments from James Murdoch are startling and do dovetail pretty closely with some of the things you say in your petition. Have you had any discussions with him as you’ve been formulating your moves against News Corp?

Kevin Rudd
No. Not at all. I think I’ve met James Murdoch twice in my life. I just observe what he has said as someone who has been at the absolute heart of the operation. You see, people have often asked me: why now? Why are you calling for this now? The trend has got worse over the last decade. In Queensland, the ACCC’s, in my view, wrong decision to allow Murdoch to purchase Australian Provincial Newspapers delivered Queensland, the state which swings so many federal elections, and its print media, almost completely into Murdoch’s hands. So that’s a big change, the assault on Australian Associated Press, an institution in this country since the 1930s, where Murdoch is de-funding and seeking to set up a rival institution. And on top of that, look, 18 out of the last 18 federal and state elections, the Murdoch print media, in its daily newspapers campaigning viciously for one side of politics and viciously against the other in the news coverage. Look, you get to a point where enough is enough. And what I find stunning, Fran, is that in less than 48 hours, despite an inhospitable Parliament House website, 75,000 people have gone online to register already their names on this petition.

Fran Kelly
Sure, a lot of people agree with you, but you didn’t agree with this necessarily. You didn’t have a problem when News Corp backed you in the Kevin07 election, did you?

Kevin Rudd
If you were leader of the Labor Party in 2007 and in elections prior to that, and to some extent subsequent to that, you would do everything you could to reduce the negative coverage from the Murdoch media against the Australian Labor Party. That’s the responsibility of the parliamentary leader. What I’m saying is, prior to 2010, News Corp ran a somewhat more balanced approach to their overall news coverage.

Fran Kelly
Did they or is balance in the eye of the beholder?

Kevin Rudd
No, no. Balance means everyone getting a fair go in the news coverage. That’s what it means. And since then, I think any objective observer would say, 18 out of 18 federal and state elections on the run, Tasmania, South Australia, Victoria, New South Wales, Queensland, federally, what you have seen is News Corporation running a vicious editorial line, reflected in their news coverage, for one side of politics and against the other, when they control 70% of the show. All I’m saying to you, Fran, is that enough is enough. And then the same party is campaigning to kill your budget at the ABC, the other arm of a balanced media environment in Australia. I think our democracy is under threat. And that’s why I’ve launched this petition.

Fran Kelly
I know but you’re calling for a royal commission, so it’s a serious call. And you would also need to take note of the fact that in those 10 years Labor’s had a leadership merry-go-round, it’s had clear policy failings, a split over climate policy that can’t seem to be resolved. I mean, Labor’s woes are not all about News Corp, are they?

Kevin Rudd
Of course not. And if you’ve seen everything I’ve written on this subject, we own full responsibility for problems of our own making. Since I changed the rules of the Labor Party, we’ve had two leaders in the last seven or eight years. Contrast that with the other side of politics, where we’ve had a revolving door of serving prime ministers. So let’s not be distracted by the question of whether one side of politics commits political errors or not. We have and the Coalition have. What I’m talking about, is simply whether or not we have a balanced media in this country, which is capable of giving what I would describe as objective coverage to the news, facts-based news. And if you want to look at the way in which this is trending in Australia, look no further than Fox News in the United States, again a Murdoch operation, which has been the lifeblood of the Trump Administration and Trump campaign for the last four years. That’s where we’re headed unless we choose to stop the rot here in Australia.

Fran Kelly
So then, are you disappointed that Anthony Albanese is not joining you in this call for a royal commission? Or do you accept that a current political leader would be crazy to endorse a royal commission into a major media organisation?

Kevin Rudd
Well, I didn’t speak to Albo before I launched this petition, nor have I spoken to Mr Morrison before launching this petition for a royal commission. This is a time for the Australian people including all your listeners to decide whether it’s worth putting their name to it or not. And it’s not just targeted at News Corporation. But as you rightly say, they’re the dominant players so they come in for the primary attention. Fairfax Media has been taken over by Nine, whose chairman is Peter Costello. Your budget, the ABC, is under challenge and under attack. And then we have the question of the emerging monopolies in terms of Google and Facebook. These are all legitimate questions to be examined. My interest as a former prime minister, is having the lifeblood of the democracy kept alive and flowing with diverse media platforms. We are no longer getting that. Okay.

Fran Kelly
Kevin Rudd, thank you very much for joining us.

Kevin Rudd
Thanks very much, Fran.

Image: World Economic Forum

The post ABC RN: Murdoch Royal Commission appeared first on Kevin Rudd.

Charles Stross — Books I Will Not Write #8: The Year of the Conspiracy

Global viral pandemics, insane right-wing dictator-wannabes trying to set fire to the planet, and climate change aside, I'm officially declaring 2020 to be the Year of the Conspiracy Theory.

This was the year when QAnon, a frankly puerile rehashing of antisemitic conspiracy theories going back to the infamous Tsarist secret police fabrication The Protocols of the Elders of Zion, went viral: its true number of followers is unclear but in the tens of thousands, and they've begun showing up in US politics as Republican candidates capable of displacing the merely crazy, such as Tea Partiers, who at least were identifiably a political movement (backed by Koch brothers lobbying money).

Nothing about the toxic farrago of memes stewing in the Qanon midden should come as a surprise to anyone who read the Illuminatus! trilogy back in the 1970s, except possibly the fact that this craziness has leached into mainstream politics. But I think it's worryingly indicative of the way our post-1995, internet-enabled media environment is messing with the collective subconscious: conspiratorial thinking is now mainstream.

Anyway. When life hands you lemons it's time to make lemonade. How could I (if I had more energy and fewer plans) monetize this trend, without sacrificing my dignity, sanity, and sense of integrity along the way?

I'm calling it time for the revival of the big fat 1960s-1980s cold war spy/conspiracy thriller. A doozy of a plot downloaded itself into my head yesterday, and I have neither the time nor the marketing stance to write it, so here it is. (Marketing: I'm positioned as an SF author, not a thriller/men's adventure author, so I'd be selling to a different editorial and marketing department and the book advances for starting out again wouldn't be great.)

So, some background for a Richard Condon style comedy spy/conspiracy thriller:

The USA is an Imperial hegemonic power, and is structured as such internally (even though its foundational myth--plucky colonials rebelling against an empire--is problematically at odds with the reality of what it has become nearly 250 years later). In particular, it has an imperial-scale bureaucracy with an annual budget measured in the trillions of dollars, and baroque excrescences everywhere. Nowhere is this more evident than in the intelligence sector.

The USA has spies and analysts and cryptographers and spooks coming out of its metaphorical ears. For example, the CIA, the best-known US espionage agency, is a sprawling bureaucracy with an estimated 21,000 employees and a budget of $15Bn/year. But it's by no means the largest or most expensive agency: the NRO (the folks who run spy satellites) used to have a bigger budget than NASA. And the mere existence and name of the National Reconnaissance Office were classified secrets until 1992. It has come to light that about 80% of the people who work in the intelligence sector in the US are not actual government officials or civil servants, but private sector contractors, mostly employed by service corporations who are cleared to handle state secrets. By some estimates there are two million security-cleared civilians working in the United States--more than the number of uniformed service personnel. Keeping track of this baroque empire of espionage is such a headache that there's an entire government agency devoted to it: the United States Intelligence Community, established in 1981 by an executive order issued by Ronald Reagan. Per wikipedia: The Washington Post reported in 2010 that there were 1,271 government organizations and 1,931 private companies in 10,000 locations in the United States that were working on counterterrorism, homeland security, and intelligence, and that the intelligence community as a whole includes 854,000 people holding top-secret clearances. According to a 2008 study by the ODNI, private contractors make up 29% of the workforce in the U.S. intelligence community and account for 49% of their personnel budgets. USIC has 17 declared member agencies and a budget that hit $53.9Bn in 2012, up from $40.9Bn in 2006. Obviously this is a growth industry.
Furthermore, I find it hard to believe--even bearing in mind that this decade's normalization of conspiratorial thinking predisposes one towards such an attitude--that there aren't even more agencies out there which, like the NRO prior to 1992, remain under cover of a top secret classification. I'd expect some such agencies to focus on obvious tasks such as deniable electronic espionage (like the Russian government's APT29/Cozy Bear hacking group), weaponization of memes in pursuit of national strategic objectives (the British Army's 77th Brigade media influencers), forensic analysis of offshore money laundering paper trails--the importance of which should be glaringly obvious to anyone with even a passing interest in world politics over the past two decades--or trying to identify foreign assets by analyzing the cornucopia of social data available from the likes of Facebook and Twitter's social graphs. (For example: it used to be the case that applicants for security clearance jobs with federal agencies were required not to have Facebook, Twitter, or similar social media accounts. They were also required not to have travelled overseas, with very limited exceptions, not to have criminal records, and so on. Bear in mind that Facebook maintains "ghost" accounts for everyone who doesn't already have a Facebook account, populated with data derived from their contacts who do. If you have access to FB's social graph you can in principle filter out all ghost accounts of the correct age and demographic (educational background, etc), cross-reference against Twitter and other social media, and with a bit more effort find out if they've ever travelled abroad or had a criminal conviction. The result doesn't confirm that they're a security-cleared government employee, but it's highly suggestive.) But I digress. The 1,271 government organizations and 1,931 private companies of 2010 have almost certainly mushroomed since then, during the global war on terror.
Per Snowden, the proportion who are private contractors rather than civil servants has also exploded. And, due to regulatory capture, it has become the norm for outsourcing contracts to be administered by former employees of the industries to which the contracts are awarded. There's a revolving door between civil service management and senior management in the contractor companies, simply because you need to understand the workload in order to allocate it to contractors, and because if you're a contractor, knowing how the bidding process works from the inside gives you a huge advantage.

Let us posit a small group of golfing buddies in DC in the 2000s who are deeply disillusioned and cynical about the way things are developing, and conspire to milk the system. (They don't think of themselves as a conspiracy: they're golfing buddies, what could be more natural than helping your mates out?) They've all got a background in intelligence, either working in middle-to-senior management as government agency officers, or in senior management in a corporate contractor. They know the ropes: they know how the game is played. Two of them take early retirement and invest their pensions in a pair of new startups; a third--call him "A"--remains in place in, say, whatever department of the Defense Clandestine Service is the successor to the Counterintelligence Field Activity. He raises a slightly sketchy proposal for a domestic operation targeting proxies for Advanced Persistent Threats operating on US soil. For example: imagine QAnon is a fabrication of APT29. We know QAnon followers have carried out domestic terrorism attacks; if Q is actually a foreign intelligence service, is it plausible that they might also be using it to radicalize and recruit agents within other US intelligence services?
One of our retirees, "B", has established a small corporation that just happens to specialize in searching for signs of radicalization at home and by some magical coincidence fits the exact bill of requirements that our insider is looking for in a contractor. Our other retiree, "C", has established a small corporation that produces Artificial Reality Games. As has been noted elsewhere Qanon bears a striking resemblance to a huge Artificial Reality Game. One of their products is not unlike "Spooks", from my 2007 novel Halting State; it's a game that encourages the players to carry out real world tasks on behalf of a shadowy national counter-espionage agency. In the novel, the players are unaware that they're working for a real national counter-espionage agency. In this scenario, the game is just a game ... but it's designed to make the players look plausibly similar to actual HUMINT assets working in a climate of surveillance capitalism and so reverting to classic tradecraft techniques in order to avoid being located by their dead letter drop's bluetooth pairing ID. But because they're actually gamers, on close examination they prove to not be actual spies. In other words, C generates lots of interesting false leads for B to explore and A to report on, but they never quite pan out. So far, so plausible. But where's the story? The clockwork powering the novel is simple: A runs his own pet counter-espionage project within the bigger agency and arranges to outsource the leg work to B's contractors on a cost-plus basis. Meanwhile, C's ARG designers create a perfectly balanced honeypot for B's agents. (The boots on the ground are all ignorant of the true relationship.) B is a major investor in C via a couple of offshore trusts and cut-outs: B also funnels money into A's offshore retirement fund. 
It's a nice little earner for everybody, bilking the federal government out of a few million bucks a year on an activity nobody expects to succeed but which might bear fruit one day, and which meanwhile burnishes the status of the parent organization because it's clearly conducting innovative and proactive counter-espionage activity.

Then the wheels fall off. C's team are running a handful of ARGs (because to run only one would be kind of suspicious). They are approached by the FBI to set up a honeypot for whichever radical group is the target of the day, be it Boogaloo Boys, Antifa, Al Qaida, Y'all Qaida, or whoever. And it turns out the FBI expect them to do something with the half-ton of ANFO they've conveniently provided (as an arrest pretext). B's team, meanwhile, discover a scarily real-looking conspiracy who are planning to start some sort of war over the purity of their precious bodily fluids. A countdown is running and A's expected to actually make progress and arrest a ring of radicals before they blow anything up.

And A gradually comes to the realization that he and his golfing buddies are not the first people to have had this idea: they're not even the biggest. In fact, it begins to come to light that an entire top-level division of the Department of Homeland Security (founded 2003! 240,000 employees! budget $51.67Bn!) is running plan B and funneling money to prop up various adversarial sock-puppets, one of which appears to have accidentally stolen half a dozen W76 physics packages (which, just like the good ole' days with the Minuteman III stockpile, have a permissive action lock code of "00000000"). The nukes are now in the possession of a bunch of nutters led by a preacher man who insists Jesus is coming, and his return will be heralded by a nuclear attack on the USA. But the folks behind the DHS grift can't do anything about it for obvious reasons (involving orange jumpsuits and long jail terms, or maybe actual risk of execution).

A can't expose this grift without attracting unwelcome attention, and a likely lifetime vacation in Club Fed along with his buddies B and C. But he doesn't want to be anywhere close to DC when the nukes go off. So what's a grifter to do?

C gets to give his ARG designers a new task: to set up a game targeting a very specific set of customers--conspiratorially-minded millenarian believers who are already up to their eyes in one plot and who need to be gently weaned onto a more potent brew of lies, right under the nose of the rival APT agents who have radicalized them ...

Climax: the nukes are defused, the idiot conspiracy believers are arrested, then the FBI turn up and arrest A, B, and C. On their way out the door it becomes apparent that they've been set up: they, themselves, were suckered into setting up their scam via another conspiracy ring to generate arrests for the FBI ...

Worse Than Failure — CodeSOD: Intergral to Dating

Date representations are one of those long-term problems for programmers, which all ties into the problem that "dates are hard". Nowadays, we have all these fancy date-time types that worry about things like time zones and leap years, and all of that stuff for us. Pretty much everything, at some level, relies on the Unix Epoch. But there was a time before that was the standard.

In the mainframe era, not all the numeric representations worked the same way that we're used to, and it was common to simply define the number of digits you wanted to store. So, if you wanted to store a date, you could define an 8-digit field, and store the date as 20201012: October 12th, 2020.

This is a pretty great date format, for that era. Relatively compact (and yes, the whole Y2K thing means that you might have defined that as a six digit field), inherently sortable, and it's not too bad to slice it back up into date parts, when you need it. And like anything else which was a good idea a long time ago, you still see it lurking around today. Which does become a problem when you inherit code written by people who didn't understand why things worked that way.
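To make the convention concrete, here's a minimal sketch of packing and unpacking YYYYMMDD integers. This is illustrative Python, not anything from the submitted code; pack_date and unpack_date are made-up names:

```python
# Illustrative sketch of the YYYYMMDD integer date convention:
# pack with positional arithmetic, unpack by slicing the digits apart.
def pack_date(year: int, month: int, day: int) -> int:
    return year * 10000 + month * 100 + day

def unpack_date(packed: int) -> tuple:
    s = str(packed)
    return (int(s[0:4]), int(s[4:6]), int(s[6:8]))

print(pack_date(2020, 10, 12))   # 20201012
print(unpack_date(20201012))     # (2020, 10, 12)

# The year occupies the most significant digits, so numeric order is
# chronological order -- the "inherently sortable" property:
print(sorted([20201012, 19991231, 20200101]))  # [19991231, 20200101, 20201012]
```

That sortability is the whole appeal: a plain numeric comparison doubles as a date comparison, no date type required.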

Virginia N inherited some C# code which meets those criteria. And the awkward date handling isn't even the WTF. There's a lot to unpack in this particular sample, so let's start with… the unpack function.

public static DateTime UnpackDateC(DateTime dateI, long sDate)
{
    if (sDate.ToString().Length != 8) return dateI;

    try
    {
        return new DateTime(Convert.ToInt16(sDate.ToString().Substring(0, 4)),
                            Convert.ToInt16(sDate.ToString().Substring(4, 2)),
                            Convert.ToInt16(sDate.ToString().Substring(6, 2)));
    }
    catch
    {
        return dateI;
    }
}


sDate is our integer date: 20201012. Instead of converting it to a string once, and then validating and slicing it, we call ToString four times. We also reconvert each date part back into an integer so we can pass them to DateTime, and of course, DateTime.ParseExact is just sitting there in the documentation, shaking its head at all of this.

The really weird choice to me, though, is that we pass in dateI, which appears to be the "fallback" date value. That… worries me.
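For contrast, the parse-once-with-one-fallback shape that something like ParseExact gives you can be sketched as follows. This is hedged, illustrative Python, with strptime standing in for C#'s DateTime.ParseExact and parse_packed_date as a hypothetical name:

```python
from datetime import date, datetime

# Hedged sketch: convert the packed value to a string once, parse it
# once, and fall back to a caller-supplied default on any failure.
def parse_packed_date(packed, fallback):
    try:
        return datetime.strptime(str(packed), "%Y%m%d").date()
    except ValueError:
        return fallback

print(parse_packed_date(20201012, date(2000, 1, 1)))  # 2020-10-12
print(parse_packed_date(999, date(2000, 1, 1)))       # 2000-01-01 (fallback)
```

A side benefit: an 8-digit value with an impossible month or day (say, 20201399) also falls back, where the original only checks the string length before trusting the digits.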

Well, let's take a peek deep in the body of a method called GetMC, because that's where this unpack function is called.

while (oDataReader.Read())
{
    //...
    DataRow oRow = oDataTable.NewRow();

    if (oDataReader["DEB"] != DBNull.Value)
    {
        DateTime dt = DateTime.Today;
        dt = UnpackDateC(dt, Convert.ToInt64(oDataReader["DEB"]));
    }
    else
    {
        oRow["DEB"] = DBNull.Value;
    }
    //...
}


It's hard to know for absolute certain, based on the code provided, but I don't think UnpackDateC is actually doing anything. We can see that the default dateI value is DateTime.Today. So perhaps the desired behavior is that every invalid/unknown date is today? Seems problematic, but maybe that jibes with the requirements.

But note the logic. If the database value is null, we store a null in oRow["DEB"], our output data. If it isn't null, we unpack the date and store it in… dt. Also, if you trace the type conversions, we convert an integer in the database into an integer in our program (which it already would have been) so that we can convert that integer into a string so that we can split the string and convert each portion into integers so we can convert it into a date.

How do I know that the field is an integer in the database? Well, I don't know for sure, but let's look at the query which drives that loop.

public static void GetMC(string sConnectionString, ref DataTable dtToReturn, string sOrgafi,
    string valid, string exe, int iOperateur,
    string sColTri, bool Asc, bool DBLink, string alias) // iOperateur 0, 1, 2
{
    sSql =
        " select * from (select ENTLCOD as ENTAFI, MARKYEAR as EXE, nvl(to_char(MARKNUM),'')||'-'||nvl(MARKNUMLOT,'') as COD, to_char(MARKNUM) as NUM, MARKOBJ1 as COM, MARKSTARTDATE as DEB, MARKENDDATE as FIN, MARKNUMLOT as NUMLOT, MARKVALIDDATE, TIERNUM as FOUR, MARTBASTIT as TYP " +
        " from SOMEEXTERNALVIEW" + (DBLink ? alias + " " : " ") + " WHERE 1=1";
    if (valid != null && valid.Length > 0)
        sSql += " and (MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)";
    if (exe != null && exe.Length > 0) sSql += " and TRIM( MARKYEAR ) ='" + exe.Trim() + "' ";
    sSql += " ) where 1=1";
    //...
//...


We can see that MARKSTARTDATE is the database field we call DEB. We can also see some conditional string concatenation to build our query, so hello possible SQL injection attacks. Now, I don't know that MARKSTARTDATE is an integer, but I can see that a similar field, MARKVALIDDATE is. Note the lack of quotes in the query string: "…(MARKVALIDDATE >= " + valid + " or MARKVALIDDATE=0 or MARKVALIDDATE is null)"

So MARKVALIDDATE is numeric in the database, which is great because the variable valid is passed in as a string, so we're just all over the place with types.
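The concatenation is what opens the injection hole; bound parameters close it by keeping values out of the SQL text entirely. Here's a hedged sketch of the idea, in Python's DB-API with an in-memory SQLite database standing in for the real backend (the table and column names mirror the query above; only two columns are modeled):

```python
import sqlite3

# Bound parameters ("?" placeholders) instead of string concatenation.
# SQLite stands in for the real database; the schema is a two-column mock.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SOMEEXTERNALVIEW (MARKYEAR TEXT, MARKVALIDDATE INTEGER)")
conn.execute("INSERT INTO SOMEEXTERNALVIEW VALUES ('2020', 20201012)")

valid, exe = 20200101, "2020"
rows = conn.execute(
    "SELECT * FROM SOMEEXTERNALVIEW"
    " WHERE (MARKVALIDDATE >= ? OR MARKVALIDDATE = 0 OR MARKVALIDDATE IS NULL)"
    " AND TRIM(MARKYEAR) = ?",
    (valid, exe.strip()),
).fetchall()
print(rows)  # [('2020', 20201012)]
```

The driver handles quoting and typing for you, so the is-valid-a-string-or-an-integer confusion disappears along with the injection risk.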

The structure of this query also adds an extra layer of unnecessary complexity: for some reason, we wrap the actual query up as a subquery, but the outer query is just SELECT * FROM (subquery) WHERE 1=1, so there is literally no reason to do that.

To finish this off, let's look at where GetMC is actually invoked, a method called CallWSM.

private void CallWSM(ref DataTable oDataTable, string sCode, string sNom, string sFourn, int iOperateur)   // iOperateur 0, 1, 2
{
    try
    {
        m_bError = false;
        string sColTri = m_Grid_SortRequest.FieldName;
        SortOperator oDirection = m_Grid_SortRequest.SortDirection;
        m_sAnnee = ctlRecherche.GetValueFilterItem(1);
        string svalid = "";
        string sdt = ctlRecherche.GetValueDateItem(0);
        if (sdt.Length > 0)
        {
            DateTime dtvalid = DateTime.Parse(sdt);
            long ldt = dtvalid.Year * 10000 + dtvalid.Month * 100 + dtvalid.Day;
            svalid = ldt.ToString();
        }
        m_AppLogic.GetMC(sColTri, oDirection, m_sAdresseWS, ref oDataTable, svalid, m_sAnnee, iOperateur, PageCurrent, m_PageSize);
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.Timeout)
        {
            frmMessageBox.Show(ML.GetLibelle(4137), CONST.AppTITLE, MessageBoxButtons.OK, MessageBoxIcon.Error);
            m_bError = true;
        }
        else
        {
            frmMessageBox.Show(ex.Message);
            m_bError = true;
        }
    }
    catch (Exception ex)
    {
        frmMessageBox.Show(ex.Message);
        m_bError = true;
    }
}


Now, I'm reading between the lines a bit, and maybe making some assumptions that I shouldn't be, but this method is called CallWSM, and one of the parameters we pass to GetMC is stored in a variable called m_sAdresseWS, and GetMC can apparently throw a WebException.

Are… are we building a query and then passing it off to a web service to execute? And then wrapping the response in a data reader? Because that would be terrible. But if we're not, does that mean that we're also calling a web service in some of the code Virginia didn't supply? Query the DB and call the web service in the same method? Or are we catching an exception that just could never happen, and all the WS stuff has nothing to do with web services?

Any one of those options would be a WTF.

Virginia adds, "I had the job to make a small change in the call. ...I'm used to a good amount of Daily WTF-erry in our code." After reading through the code though, Virginia had some second thoughts about changing the code. "At this point I decided not to change anything, because it hurts my head."

You and me both.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cory Doctorow — Danish Little Brother, now a free, CC-licensed download

Science Fiction Cirklen is a member-funded co-op of Danish science fiction fans; they raise money to produce print translations of sf novels that Danes would otherwise have to read in English. They work together to translate the work, commission art, and pay to have the book printed and distributed to bookstores in order to get it into Danish hands. The SFC folks just released their Danish edition of Little Brother — translated by Lea Thume — as a Creative Commons licensed epub file, including the cover art they produced for their edition. I’m so delighted by this! My sincere thanks to the SFC people for bringing my work to their country, and I hope someday we can toast each other in Copenhagen.

Krebs on Security — Report: U.S. Cyber Command Behind Trickbot Tricks

A week ago, KrebsOnSecurity broke the news that someone was attempting to disrupt the Trickbot botnet, a malware crime machine that has infected millions of computers and is often used to spread ransomware. A new report Friday says the coordinated attack was part of an operation carried out by the U.S. military’s Cyber Command.

Image: Shutterstock.

On October 2, KrebsOnSecurity reported that twice in the preceding ten days, an unknown entity that had inside access to the Trickbot botnet sent all infected systems a command telling them to disconnect themselves from the Internet servers the Trickbot overlords used to control compromised Microsoft Windows computers. On top of that, someone had stuffed millions of bogus records about new victims into the Trickbot database — apparently to confuse or stymie the botnet’s operators. In a story published Oct. 9, The Washington Post reported that four U.S.
officials who spoke on condition of anonymity said the Trickbot disruption was the work of U.S. Cyber Command, a branch of the Department of Defense headed by the director of the National Security Agency (NSA). The Post report suggested the action was a bid to prevent Trickbot from being used to somehow interfere with the upcoming presidential election, noting that Cyber Command was instrumental in disrupting the Internet access of Russian online troll farms during the 2018 midterm elections. The Post said U.S. officials recognized their operation would not permanently dismantle Trickbot, describing it rather as “one way to distract them for at least a while as they seek to restore their operations.” Alex Holden, chief information security officer and president of Milwaukee-based Hold Security, has been monitoring Trickbot activity before and after the 10-day operation. Holden said while the attack on Trickbot appears to have cut its operators off from a large number of victim computers, the bad guys still have passwords, financial data and reams of other sensitive information stolen from more than 2.7 million systems around the world. Holden said the Trickbot operators have begun rebuilding their botnet, and continue to engage in deploying ransomware at new targets. “They are running normally and their ransomware operations are pretty much back in full swing,” Holden said. “They are not slowing down because they still have a great deal of stolen data.” Holden added that since news of the disruption first broke a week ago, the Russian-speaking cybercriminals behind Trickbot have been discussing how to recoup their losses, and have been toying with the idea of massively increasing the amount of money demanded from future ransomware victims. “There is a conversation happening in the back channels,” Holden said. “Normally, they will ask for [a ransom amount] that is something like 10 percent of the victim company’s annual revenues. 
Now, some of the guys involved are talking about increasing that to 100 percent or 150 percent.”

Cryptogram — Hacking Apple for Profit

Five researchers hacked Apple Computer’s networks — not their products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

September featured two stories on a phony tech investor named John Bernard, a pseudonym used by a convicted thief named John Clifton Davies who’s fleeced dozens of technology companies out of an estimated $30 million with the promise of lucrative investments. Those stories prompted a flood of tips from Davies’ victims that paints a much clearer picture of this serial con man and his cohorts, including allegations of hacking, smuggling, bank fraud and murder. KrebsOnSecurity interviewed more than a dozen of Davies’ victims over the past five years, none of whom wished to be quoted here out of fear of reprisals from a man they say runs with mercenaries and has connections to organized crime. As described in Part II of this series, John Bernard is in fact John Clifton Davies, a 59-year-old U.K. citizen who absconded from justice before being convicted on multiple counts of fraud in 2015. Prior to his conviction, Davies served 16 months in jail before being cleared of murdering his third wife on their honeymoon in India.

The scam artist John Bernard (left) in a recent Zoom call, and a photo of John Clifton Davies from 2015.

After eluding justice in the U.K., Davies reinvented himself as The Private Office of John Bernard, pretending to be a billionaire Swiss investor who made his fortunes in the dot-com boom 20 years ago and who was seeking investment opportunities. In case after case, Bernard would promise to invest millions in tech startups, and then insist that companies pay tens of thousands of dollars worth of due diligence fees up front. However, the due diligence company he insisted on using — another Swiss firm called Inside Knowledge — also was secretly owned by Bernard, who would invariably pull out of the deal after receiving the due diligence money. Bernard found a constant stream of new marks by offering extraordinarily generous finders fees to investment brokers who could introduce him to companies seeking an infusion of cash.
When it came time for companies to sign legal documents, Bernard’s victims interacted with a 40-something Inside Knowledge employee named “Katherine Miller,” who claimed to be his lawyer. It turns out that Katherine Miller is a onetime Moldovan attorney who was previously known as Ecaterina “Katya” Dudorenko. She is listed as a Romanian lawyer in the U.K. Companies House records for several companies tied to John Bernard, including Inside Knowledge Solutions Ltd., Docklands Enterprise Ltd., and Secure Swiss Data Ltd (more on Secure Swiss data in a moment). Another of Bernard’s associates listed as a director at Docklands Enterprise Ltd. is Sergey Valentinov Pankov. This is notable because in 2018, Pankov and Dudorenko were convicted of cigarette smuggling in the United Kingdom. Sergey Pankov and Ecaterina Dudorenco, in undated photos. Source: Mynewsdesk.com According to the Organized Crime and Corruption Reporting Project, “illicit trafficking of tobacco is a multibillion-dollar business today, fueling organized crime and corruption [and] robbing governments of needed tax money. So profitable is the trade that tobacco is the world’s most widely smuggled legal substance. This booming business now stretches from counterfeiters in China and renegade factories in Russia to Indian reservations in New York and warlords in Pakistan and North Africa.” Like their erstwhile boss Mr. Davies, both Pankov and Dudorenko disappeared before their convictions in the U.K. They were sentenced in absentia to two and a half years in prison. Incidentally, Davies was detained by Ukrainian authorities in 2018, although he is not mentioned by name in this story from the Ukrainian daily Pravda. The story notes that the suspect moved to Kiev in 2014 and lived in a rented apartment with his Ukrainian wife. John’s fourth wife, Iryna Davies, is listed as a director of one of the insolvency consulting businesses in the U.K. that was part of John Davies’ 2015 fraud conviction. 
Pravda reported that in order to confuse the Ukrainian police and hide from them, Mr. Davies constantly changed their place of residence. John Clifton Davies, a.k.a. John Bernard. Image: Ukrainian National Police. The Pravda story says Ukrainian authorities were working with the U.K. government to secure Davies’ extradition, but he appears to have slipped away once again. That’s according to one investment broker who’s been tracking Davies’ trail of fraud since 2015. According to that source — who we’ll call “Ben” — Inside Knowledge and The Private Office of John Bernard have fleeced dozens of companies out of nearly USD$30 million in due diligence fees over the years, with one company reportedly paying over $1 million. Ben said he figured out that Bernard was Davies through a random occurrence. Ben said he’d been told by a reliable source that Bernard traveled everywhere in Kiev with several armed guards, and that his entourage rode in a convoy that escorted Davies’ high-end Bentley. Ben said Davies’ crew was even able to stop traffic in the downtown area in what was described as a quasi military maneuver so that Davies’ vehicle could proceed unobstructed (and presumably without someone following his car). Ben said he’s spoken to several victims of Bernard who saw phony invoices for payments to be made to banks in Eastern Europe appear to come from people within their own organization shortly after cutting off contact with Bernard and his team. While Ben allowed that these invoices could have come from another source, it’s worth noting that by virtue of participating in the due diligence process, the companies targeted by these schemes would have already given Bernard’s office detailed information about their finances, bank accounts and security processes. In some cases, the victims had agreed to use Bernard’s Secure Swiss Data software and services to store documents for the due diligence process. 
Secure Swiss Data is one of several firms founded by Davies/Inside Knowledge and run by Dudorenko, and it advertised itself as a Swiss company that provides encrypted email and data storage services. In February 2020, Secure Swiss Data was purchased in an “undisclosed multimillion buyout” by SafeSwiss Secure Communication AG. Shortly after the first story on John Bernard was published here, virtually all of the employee profiles tied to Bernard’s office removed him from their work experience as listed on their LinkedIn resumes — or else deleted their profiles altogether. Also, John Bernard’s main website — the-private-office.ch — replaced the content on its homepage with a note saying it was closing up shop. Incredibly, even after the first two stories ran, Bernard/Davies and his crew continued to ply their scam with companies that had already agreed to make due diligence payments, or that had made one or all of several installment payments. One of those firms actually issued a press release in August saying it had been promised an infusion of millions in cash from John Bernard’s Private Office. They declined to be quoted here, and continue to hold onto hope that Mr. Bernard is not the crook that he plainly is.

Worse Than Failure — CodeSOD: A Long Time to Master Lambdas

At an old job, I did a significant amount of VB.Net work. I didn’t hate VB.Net. Sure, the syntax was clunky, but autocomplete mostly solved that, and it was more or less feature-matched to C# (and, as someone who needed to handle XML, the fact that VB.Net had XML literals was handy). Every major feature in C# had a VB.Net equivalent, including lambdas. And hey, lambdas are great! What a wonderful way to express a filter condition. Well, Eric O sends us this filter lambda. Originally sent to us as a single line, I’m adding line breaks for readability, because I care about this more than the original developer did.
Function(row, index) index <> 0 AND (row(0).ToString().Equals("DIV10106") OR row(0).ToString().Equals("326570") OR row(0).ToString().Equals("301100") OR row(0).ToString().Equals("305622") OR row(0).ToString().Equals("305623") OR row(0).ToString().Equals("317017") OR row(0).ToString().Equals("323487") OR row(0).ToString().Equals("323488") OR row(0).ToString().Equals("324044") OR row(0).ToString().Equals("317016") OR row(0).ToString().Equals("316875") OR row(0).ToString().Equals("323976") OR row(0).ToString().Equals("324813") OR row(0).ToString().Equals("147000") OR row(0).ToString().Equals("326984") OR row(0).ToString().Equals("326634") OR row(0).ToString().Equals("306039") OR row(0).ToString().Equals("307021") OR row(0).ToString().Equals("307050") OR row(0).ToString().Equals("307603") OR row(0).ToString().Equals("307604") OR row(0).ToString().Equals("307632") OR row(0).ToString().Equals("307704") OR row(0).ToString().Equals("308184") OR row(0).ToString().Equals("308531") OR row(0).ToString().Equals("309930") OR row(0).ToString().Equals("104253") OR row(0).ToString().Equals("104532") OR row(0).ToString().Equals("104794") OR row(0).ToString().Equals("104943") OR row(0).ToString().Equals("105123") OR row(0).ToString().Equals("105755") OR row(0).ToString().Equals("106075") OR row(0).ToString().Equals("108062") OR row(0).ToString().Equals("108417") OR row(0).ToString().Equals("108616") OR row(0).ToString().Equals("108625") OR row(0).ToString().Equals("108689") OR row(0).ToString().Equals("108851") OR row(0).ToString().Equals("108997") OR row(0).ToString().Equals("109358") OR row(0).ToString().Equals("109551") OR row(0).ToString().Equals("110081") OR row(0).ToString().Equals("111501") OR row(0).ToString().Equals("111987") OR row(0).ToString().Equals("112136") OR row(0).ToString().Equals("11229") OR row(0).ToString().Equals("112261") OR row(0).ToString().Equals("113127") OR row(0).ToString().Equals("113266") OR row(0).ToString().Equals("114981") OR 
row(0).ToString().Equals("116527") OR row(0).ToString().Equals("121139") OR row(0).ToString().Equals("121469") OR row(0).ToString().Equals("142449") OR row(0).ToString().Equals("144034") OR row(0).ToString().Equals("144693") OR row(0).ToString().Equals("144900") OR row(0).ToString().Equals("150089") OR row(0).ToString().Equals("194340") OR row(0).ToString().Equals("214950") OR row(0).ToString().Equals("215321") OR row(0).ToString().Equals("215908") OR row(0).ToString().Equals("216531") OR row(0).ToString().Equals("217151") OR row(0).ToString().Equals("220710") OR row(0).ToString().Equals("221265") OR row(0).ToString().Equals("221387") OR row(0).ToString().Equals("300011") OR row(0).ToString().Equals("300013") OR row(0).ToString().Equals("300020") OR row(0).ToString().Equals("300022") OR row(0).ToString().Equals("300024") OR row(0).ToString().Equals("300026") OR row(0).ToString().Equals("300027") OR row(0).ToString().Equals("300050") OR row(0).ToString().Equals("300059") OR row(0).ToString().Equals("300060") OR row(0).ToString().Equals("300059") OR row(0).ToString().Equals("300125") OR row(0).ToString().Equals("300139") OR row(0).ToString().Equals("300275") OR row(0).ToString().Equals("300330") OR row(0).ToString().Equals("300342") OR row(0).ToString().Equals("300349") OR row(0).ToString().Equals("300355") OR row(0).ToString().Equals("300363") OR row(0).ToString().Equals("300413") OR row(0).ToString().Equals("301359") OR row(0).ToString().Equals("302131") OR row(0).ToString().Equals("302595") OR row(0).ToString().Equals("302621") OR row(0).ToString().Equals("302649") OR row(0).ToString().Equals("302909") OR row(0).ToString().Equals("302955") OR row(0).ToString().Equals("302986") OR row(0).ToString().Equals("303096") OR row(0).ToString().Equals("303249") OR row(0).ToString().Equals("303753") OR row(0).ToString().Equals("304010") OR row(0).ToString().Equals("304016") OR row(0).ToString().Equals("304047") OR row(0).ToString().Equals("304566") OR 
row(0).ToString().Equals("305347") OR row(0).ToString().Equals("305486") OR row(0).ToString().Equals("305487") OR row(0).ToString().Equals("305489") OR row(0).ToString().Equals("305526") OR row(0).ToString().Equals("305568") OR row(0).ToString().Equals("305769") OR row(0).ToString().Equals("305773") OR row(0).ToString().Equals("305824") OR row(0).ToString().Equals("305998") OR row(0).ToString().Equals("306039") OR row(0).ToString().Equals("307021") OR row(0).ToString().Equals("307050") OR row(0).ToString().Equals("307603") OR row(0).ToString().Equals("307604") OR row(0).ToString().Equals("307632") OR row(0).ToString().Equals("307704") OR row(0).ToString().Equals("308184") OR row(0).ToString().Equals("308531") OR row(0).ToString().Equals("309930") OR row(0).ToString().Equals("322228") OR row(0).ToString().Equals("121081") OR row(0).ToString().Equals("321879") OR row(0).ToString().Equals("327391") OR row(0).ToString().Equals("328933") OR row(0).ToString().Equals("325038")) AND DateTime.ParseExact(row(2).ToString(), "dd.MM.yyyy", System.Globalization.CultureInfo.InvariantCulture).CompareTo(DateTime.Today.AddDays(-14)) <= 0

That is 5,090 characters of lambda right there, clearly copy/pasted with modifications on each line. Did the original developer, at any point, pause to think about whether this was the right way to achieve their goal? If you’re wondering about those numeric values, I’ll let Eric explain: The magic numbers are all customer object references, except the first one, which I have no idea what is.

Kevin Rudd — The Guardian: Under the cover of Covid, Morrison wants to scrap my government’s protections against predatory lending

Published in The Guardian on 5 October 2020.
Pardon me for being just a little suspicious, but when I see an avalanche of enthusiasm from such reputable institutions as the Morrison government, the Murdoch media and the Australian Banking Association (anyone remember the Hayne royal commission?) about the proposed “reform” of the National Consumer Credit Protection Act, I smell a very large rodent. “Reform” here is effectively code language for repeal. And it means the repeal of major legislation introduced by my government to bring about uniform national laws to protect Australian consumers from unregulated and often predatory lending practices. The banks of course have been ecstatic at Morrison’s announcement, chiming in with the government’s political rhetoric that allowing the nation’s lenders once again to just let it rip was now essential for national economic recovery. Westpac, whose reputation was shredded during the royal commission, was out of the blocks first in welcoming the changes: CEO Peter King said they would “reduce red tape”, “speed up the process for customers to obtain approval”, and help small businesses access credit to invest and grow. And right on cue, Westpac shares were catapulted 7.2% to $17.54 just before midday on the day of announcement. National Australia Bank was up more than 6% at $18.26, ANZ up more than 5% at $17.76, and Commonwealth Bank was trading almost 3.5% higher at $66.49. The popping of champagne corks could be heard right around the country as the banks, once again, saw the balance of market power swing in their direction and away from consumers. And that means more profit and less consumer protection. A little bit of history first. Back in the middle of the global financial crisis, when the banks came on bended knee to our government seeking sovereign guarantees to preserve their financial lifelines to international lines of credit, we acted decisively to protect them, and their depositors, and to underpin the stability of the Australian financial system.
And despite a hail of abuse from both the Liberal party and the Murdoch media at the time, we succeeded. Not only did we keep Australia out of the global recession then wreaking havoc across all developed economies, we also prevented any single financial institution from falling over and protected every single Australian’s savings deposits. Not bad, given the circumstances. But we were also crystal-clear with the banks and other lenders at the time that we would be acting to protect working families from future predatory lending practices. And we did so. The national consumer credit protection bill 2009 (credit bill) and the national consumer credit protection (fees) bill 2009 (fees bill) collectively made up the consumer credit protection reform package. It included:
• a uniform and comprehensive national licensing regime, for the first time across the country, for those engaging in credit activities via a new Australian credit licence administered by the Australian Securities and Investments Commission as the sole regulator;
• industry-wide responsible lending conduct requirements for licensees;
• improved sanctions and enhanced enforcement powers for the regulator; and
• enhanced consumer protection through dispute resolution mechanisms, court arrangements and remedies.
This reform was not dreamed up overnight. It gave effect to the Council of Australian Governments’ agreements of 26 March and 3 July 2008 to transfer responsibility for regulation of consumer credit, and a related cluster of additional financial services, to the commonwealth. It also implemented the first phase of a two-phase implementation plan to transfer credit regulation to the commonwealth endorsed by Coag on 2 October 2008. It was the product of much detailed work over more than 12 months.
Scott Morrison’s formal argument to turn all this on its head is that “as Australia continues to recover from the Covid-19 pandemic, it is more important than ever that there are no unnecessary barriers to the flow of credit to households and small businesses”. But hang on. Where is Morrison’s evidence that there is any problem with the flow of credit at the moment? And if there were a problem, where is Morrison’s evidence that the proposed emasculation of our consumer credit protection law is the only means to improve credit flow? Neither he nor the treasurer, Josh Frydenberg, has provided us with so much as a skerrick. Indeed, the Commonwealth Bank said recently that the flow of new lending is now a little above pre-Covid levels and that, in annual terms, lending is growing at a strong pace. More importantly, we should turn to volume VI of royal commissioner Kenneth Hayne’s final report into the Australian banking industry. Hayne, citing the commonwealth Treasury as his authority, explicitly concluded that the National Consumer Credit Protection Act had not in fact hindered the flow of credit but had instead provided system stability. As Hayne states: “I think it important to refer to a number of aspects of Treasury’s submissions in response to the commission’s interim report. Treasury indicated that ‘there is little evidence to suggest that the recent tightening in credit standards, including through Apra’s prudential measures or the actions taken by Asic in respect of [responsible lending obligations], has materially affected the overall availability of credit’.” So once again, we find the emperor has no clothes. The truth is this attack on yet another of my government’s reforms has nothing to do with the macro-economy. It has everything to do with a Morrison government bereft of intellectual talent and policy ideas in dealing with the real challenge of national economic recovery.
Just as it has everything to do with Frydenberg’s spineless yielding to the narrow interests of the banking lobby, using the Covid-19 crisis as political cover in order to lift the profit levels of the banks while throwing borrowers’ interests to the wind. This latest flailing by Morrison et al is part of a broader pattern of failed policy responses by the government to the economic impact of the crisis. Morrison had to be dragged kicking and screaming into accepting the reality of stimulus, having rejected RBA advice last year to do precisely the same – and that was before Covid hit. And despite a decade of baseless statements about my government’s “unsustainable” levels of public debt and budget deficit, Morrison is now on track to deliver five times the level of debt and six times the level of our budget deficit. But it doesn’t stop there. Having destroyed a giant swathe of the Australian manufacturing industry by destroying our motor vehicle industry out of pure ideology, Morrison now has the temerity to make yet another announcement about the urgent need now for a new national industry policy for Australia. Hypocrisy, thy name is Liberal. Notwithstanding these stellar examples of policy negligence, incompetence and hypocrisy, there is a further pattern to Morrison’s response to the Covid crisis: to use it as political cover to justify the introduction of a number of regressive measures that will hurt working families. They’ve used Covid cover to begin dismantling successive Labor government reforms for our national superannuation scheme, the net effect of which will be to destroy the retirement income of millions of wage and salary earners. They are also on the cusp of introducing tax changes, the bulk of which will be delivered to those who do not need them, while further eroding the national revenue base at a time when all fiscal discipline appears to have been thrown out the window altogether.
And that leaves to one side what they are also threatening to do – again under the cover of Covid – to undermine wages and conditions for working families under proposed changes to our industrial relations laws. The bottom line is that Morrison’s “reform” of the National Consumer Credit Law forms part of a wider pattern of behaviour: this is a government that is slow to act, flailing around in desperate search for a substantive economic agenda to lead the nation out of recession, while using Covid to further load the dice against the interests of working families for the future.

Worse Than Failure — News Roundup: Excellent Data Gathering

In a global health crisis, like say, a pandemic, accurate and complete data about its spread is a "must have". Which is why, in the UK, there's a great deal of head-scratching over how the government "lost" thousands of cases. Oops. Normally, we don't dig too deeply into current events on this site, but the circumstances here are too "WTF" to allow them to pass without comment. From the BBC, we know that this system was used to notify individuals if they have tested positive for COVID-19, and to notify their close contacts that they have been exposed. That last bit is important. Disease spread can quickly turn exponential, and even though COVID-19 has a low probability of causing fatalities, the law of large numbers means that a lot of people will die anyway on that exponential curve. If you can track exposure, and get exposed individuals tested and isolated before they spread the disease, you can significantly cut down its spread. People are rightfully pretty upset about this mistake. Fortunately, the BBC has a followup article discussing the investigation, where an analyst explores what actually happened, and as it turns out, we're looking at an abuse of everyone's favorite data analytics tool: Microsoft Excel. The companies administering the tests compile their data into plain-text files, which appear to be CSVs. No real surprise there.
Each test created multiple rows within the CSV file. Then, the people working for Public Health England imported that data into Excel… as .xls files. .xls is the old Excel format, dating back into far-off years, and retained for backwards compatibility. While modern .xlsx files can support a little over a million rows, the much older format caps out at 65,536. So: these clerks imported the CSV file, hit "save as…" and made a .xls, and ended up truncating the results. Given that these input datasets had multiple rows per test, "in practice it meant that each template was limited to about 1,400 cases." Again, "oops". I've discussed how much users really want to do everything in Excel, and this is clearly what happened here. The users had one tool, Excel, so every problem looked like a nail. Arcane technical details like how many rows different versions of Excel may or may not support aren't things it's fair to expect your average data entry clerk to know. On another level, this is a clear failing of the IT services. Excel was not the right tool for this job, but in the middle of a pandemic, no one was entirely sure what they needed. Excel becomes a tempting tool, because pretty much any end user can look at complicated data and slice/shape/chart/analyze it however they like. There's a good reason why they want to use Excel for everything: it's empowering to the users. When they have an idea for a new report, they don't need to go through six levels of IT management, file requests in triplicate, and have a testing and approval cycle to ensure the report meets the specifications. They just… make it. There are packaged tools that offer similar, purpose-built functionality but still give users all the flexibility they could want for slicing data. But they're expensive, and many organizations (especially government offices) will be stingy about who gets a license. They may or may not be easy to use.
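For what it's worth, the truncation guard is trivial in any language: count rows before you commit them to a 65,536-row format. Here's a minimal sketch in Python, standard library only; the function and file names are mine for illustration, not anything PHE actually used:

```python
import csv
import itertools

XLS_MAX_ROWS = 65536  # hard row cap of the legacy .xls (BIFF8) format

def batch_rows(rows, batch_size=XLS_MAX_ROWS - 1):
    """Yield lists of at most batch_size rows (one row reserved for a header)."""
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

def split_csv(path, out_template="batch_{:03d}.csv"):
    """Split one large CSV into chunks that fit under the .xls cap."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for i, batch in enumerate(batch_rows(reader)):
            with open(out_template.format(i), "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)  # repeat the header in every chunk
                writer.writerows(batch)
```

The point isn't that clerks should have written this; it's that the cap is a known constant, so any import pipeline can check it mechanically instead of silently dropping rows.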
And of course, the time to procure such a thing was in the years before a massive virus outbreak. Excel is there, on everyone's computer already, and does what they need. Still, they made the mistake, they saw the consequences, so now we know they will definitely start taking steps to correct the problem, right? They know that Excel isn't fit for purpose, so they're switching tools, right? From the BBC: To handle the problem, PHE is now breaking down the data into smaller batches to create a larger number of Excel templates in order to make sure none hit their cap. Oops.

Cryptogram — New Privacy Features in iOS 14

Cory Doctorow — Free copies of Attack Surface for institutions (schools, libraries, classrooms, etc)

Figuring out how to tour a book in the lockdown age is hard. Many authors have opted to do a handful of essentially identical events with a couple of stores as a way of spreading out the times so that readers with different work schedules, etc, can make it. But not me. My next novel, Attack Surface (the third Little Brother book) comes out in the US/Canada on Oct 13, and it touches on so many burning contemporary issues that I rounded up 16 guests for 8 different themed “Attack Surface Lectures”. This has many advantages: it allows me to really explore a wide variety of subjects without trying to cram them all into a single event, and it allows me to spread out the love to eight fantastic booksellers. Half of those bookstores (the ones with asterisks in the listing) have opted to have ME fulfil their orders, meaning that Tor is shipping me all their copies, and I’m going to sign, personalize and mail them from home the day after each event!
(The other half will be sending out books with adhesive-backed bookplates I’ve signed for them) All of that is obviously really cool, but there is a huge fly in the ointment: given that all these events are different, what if you want to attend more than one? This is where things get broken. Each of these booksellers is under severe strain from the pandemic (and the whole sector was under severe strain even before the pandemic), and they’re allocating resources – payrolled staff – to after-hours events that cost them real money. So each of them has a “you have to buy a book to attend” policy. These were pretty common in pre-pandemic times, too, because so many attendees showed up at indie stores that were being destroyed by Amazon, having bought the books on Amazon. What’s more, booksellers with in-person events at least got the possibility that attendees would buy another book while in the store, and/or that people would discover their store through the event and come back – stuff that’s not gonna happen with virtual events. There is, frankly, no good answer to this: no one in the chain has the resources to create and deploy a season’s pass system (let alone agree on how the money from it should be divided among booksellers) – and no reader has any use for 8 (or even 2!) copies of the book. It was my stupid mistake, as I explain here. After I posted, several readers suggested one small way I could make this better: let readers who want to attend more than one event donate their extra copies to schools, libraries and other institutions. Which I am now doing. If you WANT to attend more than one event and you are seeking to gift a copy to an institution, I have a list for you! It’s beneath this explanatory text. And if you are affiliated with an institution and you want to put yourself on this list, please complete this form. 
If you want to attend more than one event and you want to donate your copy of the book to one of these organizations, choose one from the list and fill its name in on the ticket-purchase page, then email me so I can cross it off the list: doctorow@craphound.com. I know this isn’t great, and I apologize. We’re all figuring out this book-launch-in-a-pandemic business here, and it was really my mistake. I can’t promise I won’t make different mistakes if I have to do another virtual tour, but I can promise I won’t make this one again.

Covenant House – Oakland, 200 Harrison St, Oakland CA 94603
Madison Public Library Central Branch, 201 W Mifflin St, Madison WI 53703
Monona Public Library, 1000 Nichols Rd, Madison WI 53716
Madison Public Library Pinney Branch, 516 Cottage Grove Rd, Madison WI 53716
La Cueva High School, Michael Sanchez, 7801 Wilshire NE, Albuquerque New Mexico 87122
YouthCare, 800-495-7802, 2500 NE 54th St, Seattle WA 98105
Nuestro Mundo Community School, Hollis Rudiger, 902 Nichols Rd, Monona WI 53716
New Horizons Young Adult Shelter, info@nhmin.org, 2709 3rd Avenue, Seattle WA 98121
Salem Community College Library, Jennifer L. Pierce, 460 Hollywood Avenue, Carneys Point NJ 08094
Sumas Branch of the Whatcom County Library System, Laurie Dawson – Youth Focus Librarian, 461 2nd Street, Sumas Washington 98295
Riverview Learning Center, Kris Rodger, 32302 NE 50th Street, Carnation WA 98014
Cranston Public Library, Ed Garcia, 140 Sockanosset Cross Rd, Cranston RI 02920
Paideia School Library, Anna Watkins, 1509 Ponce de Leon Ave, Atlanta GA 30307
Westfield Community School, Stephen Fodor, 2100 Sleepy Hollow, Algonquin Illinois 60102
Worldbuilders Nonprofit, Gray Miller, Executive Director, 1200 3rd St, Stevens Point WI 54481
Northampton Community College, Marshal Miller, 3835 Green Pond Road, Bethlehem PA 18020
Metropolitan Business Academy Magnet High School, Steve Staysniak, 115 Water Street, New Haven CT 06511
New Haven Free Public Library, Meghan Currey, 133 Elm Street, New Haven CT 06510
New Haven Free Public Library – Mitchell Branch, Marian Huggins, 37 Harrison Street, New Haven CT 06515
New Haven Free Public Library – Wilson Branch, Luis Chavez-Brumell, 303 Washington Ave., New Haven CT 06519
New Haven Free Public Library – Stetson Branch, Diane Brown, 200 Dixwell Ave., New Haven CT 06511
New Haven Free Public Library – Fair Haven Branch, Kirk Morrison, 182 Grand Ave., New Haven CT 06513
University of Chicago, Acquisitions Room 170, 1100 E. 57th St, Chicago IL 60637
Greenburgh Public Library, John Sexton, Director, 300 Tarrytown Rd, Elmsford NY 10523
Red Rocks Community College Library, Karen Neville, 13300 W Sixth Avenue, Lakewood CO 80228
Biola University Library, Chuck Koontz, 13800 Biola Ave., La Mirada CA 90639
Otto Bruyns Public Library, Aubrey Hiers, 241 W Mill Rd, Northfield NJ 08225
California Rehabilitation Center Library, William Swafford, 5th Street and Western Ave., Norco CA 92860
Hastings High School, Rachel Haider, 200 General Sieben Drive, Hastings MN 55033
Ballard High School Library, TuesD Chambers, 1418 NW 65th St., Seattle WA 98117
Southwest Georgia Regional Library System, Catherine Vanstone, 301 South Monroe Street, Bainbridge GA 39819
Los Angeles Center For Enriched Studies (LACES) Library, Rustum Jacob, 5931 W. 18th St, Los Angeles CA 90018
SOUTH SIDE HACKERSPACE: CHICAGO, Dmitriy Vysotskiy or Shawn Coyle, 1048 W. 37th St. Suite 105, Chicago Illinois 60609
Rising Tide Charter Public School, Katie Klein, 59 Armstrong, Plymouth MA 02360
Doolen Middle School, Carmen Coulter, 2400 N Country Club Rd, Tucson AZ 85716
Fruchthendler Elementary School, Jessica Carter, 7470 E Cloud Rd, Tucson AZ 85750
Brandon Branch Library, Mary Erickson, 305 S. Splitrock Blvd., Brandon South Dakota 57005
Alternative Family Education of the Santa Cruz City Schools, Dorothee Ledbetter, 185 Benito Av, Santa Cruz CA 95062
Helios School, Elizabeth Wallace, 597 Central Ave., Sunnyvale CA 94086
Eden Prairie High School, Jenn Nelson, 17185 Valley View Rd, Eden Prairie MN 55346
Flatbush Commons mutual aid library, Joshua Wilkerson, 101 Kenilworth Pl, Brooklyn NY 11210

Cryptogram — On Risk-Based Authentication

Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication”: Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well. We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication.
Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variants, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance. Paper’s website. I’ve blogged about risk-based authentication before.

Cory Doctorow — Someone Comes to Town, Someone Leaves Town (part 17)

Here’s part seventeen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here). This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.” Some show notes: Here’s the Kickstarter for the Attack Surface audiobook, where every backer gets Force Multiplier. Here’s the schedule for the Attack Surface lectures. Here’s the form to request a copy of Attack Surface for schools, libraries, classrooms, etc. Here’s how my publisher described it when it came out: Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off. Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls. Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.
Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends. Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read. MP3

Sam Varghese — Something fishy about Trump’s taxes? Did nobody suspect it all along?

Recently, the New York Times ran an article about Donald Trump having paid no federal income taxes for 10 of the last 15 years; many claimed it was a blockbuster story and that it would have far-reaching effects on the forthcoming presidential election. If this was the first time Trump’s tax evasion was being mentioned, then, sure, it would have been a big deal. But right from the time he first refused to make his tax returns public — before he was elected — this question has been hanging over Trump’s head. And the only logical conclusion has been that if he was making excuses, then there must be something fishy about it. Five years later, we find that he was highly adept at claiming exemptions from paying income tax for various reasons. Surprised? Some people assume that one would have to be. But this was only to be expected, given all the excuses that Trump has trotted out over all these years to avoid releasing his tax returns. When I posted a tweet pointing out that Trump had been doing exactly what the big tech companies — Apple, Google, Microsoft, Amazon and Facebook, among others — do, others were prone to try and portray Trump’s actions as somehow different. But nobody specified the difference. The question that struck me was: why is the New York Times publishing this story at this time? The answer is obvious: it would like to influence the election in favour of the Democrat candidate, Joe Biden.
The newspaper is not the only institution or individual trying to carry water for Biden: in recent days, we have seen the release of a two-part TV series, The Comey Rule, based on a book written by James Comey, the former FBI director, about the run-up to the 2016 election. It is extremely one-sided. Also out is a two-part documentary made by Alex Gibney, titled Agents of Chaos, which depends mostly on government sources to spread the myth that the Russians were responsible for Trump’s election. Another documentary, Totally Under Control, about the Trump administration’s response to the coronavirus pandemic, will be coming out on October 13. Again, Gibney is the director and producer. On the Republican side, there has been nothing of this kind. Whether that says something about the level of confidence in the Trump camp is open to question. With less than a month to go for the election, it should only be expected that the propaganda war will intensify as both sides try to push their man into the White House.

Worse Than Failure — CodeSOD: Switched Requirements

Code changes over time. Sometimes, it feels like gremlins sweep through the codebase and change it for us. Usually, though, we have changes to requirements, which drive changes to the code. Thibaut was looking at some third party code to implement tooling to integrate with it, and found this C# switch statement:

if (effectID != 10)
{
    switch (effectID)
    {
        case 21:
        case 24:
        case 27:
        case 28:
        case 29:
            return true;
        case 22:
        case 23:
        case 25:
        case 26:
            break;
        default:
            switch (effectID)
            {
                case 49:
                case 50:
                    return true;
            }
            break;
    }
    return false;
}
return true;

I'm sure this statement didn't start this way. And I'm sure that, to each of the many developers who swung through to add their own little case to the switch, their action made some sort of sense; maybe they knew they should refactor, but they needed to get this functionality in, they needed it now, and code cleanliness could wait. And wait. And wait.
Until Thibaut comes through, and replaces it with this:

switch (effectID)
{
    case 10:
    case 21:
    case 24:
    case 27:
    case 28:
    case 29:
    case 49:
    case 50:
        return true;
}
return false;

Kevin Rudd — CNBC: On US-China tensions, growing export restrictions

E&OE TRANSCRIPT, TV INTERVIEW, CNBC WORLDWIDE EXCHANGE, 28 SEPTEMBER 2020

Cory Doctorow — Attack Surface on the MMT Podcast

I was incredibly happy to appear on the MMT Podcast again this week, talking about economics, science fiction, interoperability, tech workers and tech ethics, and my new novel ATTACK SURFACE, which comes out in the UK tomorrow (Oct 13 US/Canada): https://pileusmmt.libsyn.com/68-cory-doctorow-digital-rights-surveillance-capitalism-interoperable-socks We also delved into my new nonfiction book, HOW TO DESTROY SURVEILLANCE CAPITALISM, and how it looks at the problem of misinformation, and how that same point of view is weaponized by Masha, the protagonist and antihero of Attack Surface. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 It’s no coincidence that both of these books came out at the same time, as they both pursue the same questions from different angles:
• What does technology do?
• Who does it do it TO and who does it do it FOR?
• How can we change how technology works?
And:
• Why do people make tools of oppression, and what will it take to make them stop?
Here’s the episode’s MP3: http://traffic.libsyn.com/preview/pileusmmt/The_MMT_Podcast_ep_68_Cory_Doctorow_v2.mp3 and here’s their feed: http://pileusmmt.libsyn.com/rss

Sam Varghese — One feels sorry for Emma Alberici, but that does not mask the fact that she was incompetent

Last month, the Australian Broadcasting Corporation, a taxpayer-funded entity, made several people redundant due to a cut in funding by the Federal Government. Among them was Emma Alberici, a presenter who has been lionised a great deal as someone with great talents, but who is actually a mediocre hack. What marked Alberici out is the fact that she had the glorious title of chief economics correspondent at the ABC, but was never seen on any TV show giving her opinion about anything to do with economics. Over the last two years, China and the US have been engaged in an almighty stoush; Australia, as a country that considers the US as its main ally and has China as its major trading partner, has naturally been of interest too. But the ABC always put forward people like Peter Ryan, a senior business correspondent, or Ian Verrender, the business editor, when there was a need for someone to appear during a news bulletin and provide a little insight into these matters. Alberici, it seemed, was persona non grata. The reason she was kept behind the curtain, it turns out, was that she did not really know much about economics. This was made plain in April 2018 when she wrote what columnist Joe Aston of the Australian Financial Review described as “an undergraduate yarn touting ‘exclusive analysis’ of one publicly available document from which she derived that ‘one in five of Australia’s top companies has paid zero tax for the past three years’ and that ‘Australia’s largest companies haven’t paid corporate tax in 10 years’.” This article drew much criticism from many people, including politicians, who complained to the ABC.
Her magnum opus has now disappeared from the web – an archived version is here. The fourth paragraph read: “Exclusive analysis released by ABC today reveals one in five of Australia’s top companies has paid zero tax for the past three years.” A Twitter teaser said: “Australia’s largest companies haven’t paid corporate tax in 10 years.”

As Aston pointed out in withering tones: “Both premises fatally expose their author’s innumeracy. The first is demonstrably false. Freely available data produced by the Australian Taxation Office show that 32 of Australia’s 50 largest companies paid $19.33 billion in company tax in FY16 (FY17 figures are not yet available). The other 18 paid nothing. Why? They lost money, or were carrying over previous losses.

“Company tax is paid on profits, so when companies make losses instead of profits, they don’t pay it. Amazing, huh? And since 1989, the tax system has allowed losses in previous years to be carried forward – thus companies pay tax on the rolling average of their profits and losses. This is stuff you learn in high school. Except, obviously, if your dream by then was to join the socialist collective at Ultimo, to be a superstar in the cafes of Haberfield.”
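The carry-forward mechanism Aston is describing is simple enough to sketch in a few lines. Here is a minimal illustration — not the ATO's actual methodology; the flat 30% rate and the unlimited carry-forward are simplifying assumptions for the example:

```python
def company_tax(yearly_profits, rate=0.30):
    """Tax payable per year when prior-year losses are carried forward.

    A year's loss reduces taxable profit in later years, which is why a
    large company can legitimately pay zero tax for several years running.
    """
    carried_loss = 0.0
    tax_paid = []
    for profit in yearly_profits:
        taxable = profit - carried_loss
        if taxable > 0:
            tax_paid.append(taxable * rate)
            carried_loss = 0.0
        else:
            tax_paid.append(0.0)
            carried_loss = -taxable  # the remaining loss rolls forward

    return tax_paid

# A company that loses 100, then earns 60 and 70, pays tax only once the
# earlier loss has been fully absorbed:
print([round(t, 2) for t in company_tax([-100, 60, 70])])  # [0.0, 0.0, 9.0]
```

In the example, the year-two profit of 60 is wiped out entirely by the carried loss, and year three is taxed only on the 30 left after absorbing the remainder — zero tax in two of three years despite real revenue, which is exactly Aston's point.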

As expected, Alberici protested, and even called in her lawyer when the ABC took down the article. It was put up again with several corrections, but the damage had been done and the great economics wizard had been unmasked. She did not take it well, protesting that, 17 years earlier, she had been nominated for a Walkley Award for reporting on tax minimisation.

Underlining her lack of knowledge of economics, Alberici, who was paid a handsome $189,000 per annum by the ABC, embarrassed herself again in May the same year. In an interview with Labor's Jim Chalmers, she repeatedly pressed him on what he would do with the higher surpluses Labor proposed to run were it elected to government.

Aston has his own inimitable style, so let me use his words: “This time, Alberici’s utter non-comprehension of public sector accounting is laid bare in three unwitting confessions in her studio interview with Labor’s finance spokesman Jim Chalmers after Tuesday’s Budget.

“[Shadow Treasurer] Chris Bowen on the weekend told my colleague Barrie Cassidy that you want to run higher surpluses than the Government. How much higher than the Government and what would you do with that money?”

“Wearing that unmistakable WTF expression on his dial, Chalmers was careful to evade her illogical query.

“Undeterred, Alberici pressed again. ‘And what will you do with those surpluses?’ A second time, Chalmers dissembled.

“A third time, the cock crowed (this really was as inevitable as the betrayal of Jesus by St Peter). ‘Sorry, no, you said you would run higher surpluses, so what happens to that money?’

“Hanged [sic] by her own persistence. Chalmers, at this point, put her out of her blissful ignorance – or at least tried! ‘Surpluses go towards paying down debt’.

“Bingo. C’est magnifique! Hey, we majored in French. After 25 years in business and finance reporting, Alberici apparently thinks a budget surplus is an account with actual money in it, not a figure reached by deducting the Commonwealth’s expenditure from its revenue.”
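Aston's closing point — that a surplus is an accounting difference, not an account holding money — reduces to one line of arithmetic. A trivial sketch, with all figures invented for illustration:

```python
# A budget surplus is revenue minus expenditure, not a pot of cash.
# When revenue exceeds outlays, the excess goes towards retiring debt.
revenue = 500.0      # hypothetical receipts, $bn
expenditure = 490.0  # hypothetical outlays, $bn
debt = 560.0         # hypothetical gross debt, $bn

surplus = revenue - expenditure       # the "surplus" is just this difference
debt_after = debt - max(surplus, 0.0)  # a surplus pays down existing debt

print(surplus, debt_after)  # 10.0 550.0
```

There is no separate account in which the 10 accumulates; it exists only as the gap between the two flows, and its effect is a smaller debt figure.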

Alberici has continued to insist that her knowledge of economics is adequate, and has been extremely annoyed whenever such criticism comes from a Murdoch publication. But the fact is that she is largely ignorant of the basics of economics.

Her incompetence is not limited to this field alone. As I have pointed out, there have been occasions in the past when she has shown that her knowledge of foreign affairs is as good as her knowledge of economics.

After she was made redundant, Alberici published a series of tweets which she later removed. An archive of that is here.

Alberici was the host of a program called Business Breakfast some years ago; it failed and was taken off the air. She was then made the main host of the ABC’s flagship late news program, Lateline. That program was taken off the air last year due to budget cuts, though Alberici’s performance there did not set the Yarra on fire, to put it mildly.

Now she has joined an insurance comparison website, Compare The Market. That company has a bit of a dodgy reputation, as the Australian news website The New Daily pointed out in 2014. As reporter George Lekakis wrote: “The website is owned by a leading global insurer that markets many of its own products on the site. While comparethemarket.com.au offers a broad range of products for life insurance and travel cover, most of its general insurance offerings are underwritten by an insurer known as Auto & General.

“Auto & General is a subsidiary of global financial services group Budget Holdings Limited and is the ultimate owner of comparethemarket.com.au. An investigation by The New Daily of the brands marketed by the website for auto and home insurance reveals a disproportionate weighting to A&G products.”

Once again the AFR, this time through columnist Myriam Robin, was not exactly complimentary about Alberici’s new billet. But perhaps it is a better fit for her than the ABC, where she was really a square peg in a round hole. She is writing a book in which, no doubt, she will again try to put her case and prove that she was a brilliant journalist who lost her job because of other people politicking against her. But the facts say otherwise.