Planet Russell


Sociological Images: Stop Trying to Make Fetch Happen: Social Theory for Shaken Routines

It is hard to keep up habits these days. As the academic year starts up with remote teaching, hybrid teaching, and rapidly-changing plans amid the pandemic, many of us are thinking about how to design new ways to connect now that our old habits are disrupted. How do you make a new routine or make up for old rituals lost? How do we make them stick and feel meaningful?

Social science shows us how these things take time, and in a world where we would all very much like a quick solution to our current social problems, it can be tough to sort out exactly what new rules and routines can do for us.

For example, The New York Times recently profiled “spiritual consultants” in the workplace – teams that are tasked with creating a more meaningful and communal experience on the job. This is part of a larger social trend of companies and other organizations implementing things like mindfulness practices and meditation because they…keep workers happy? Foster a sense of community? Maybe just keep the workers a little more productive in unsettled times?

It is hard to talk about the motives behind these programs without getting cynical, but that snark points us to an important sociological point. Some of our most meaningful and important institutions emerge from social behavior, and it is easy to forget how hard it is to design them into place.

This example reminded me of the classic Social Construction of Reality by Berger and Luckmann, who argue that some of our strongest and most established assumptions come from habit over time. Repeated interactions become habits, habits become routines, and suddenly those routines take on a life of their own that becomes meaningful to the participants in a way that “just is.” Trust, authority, and collective solidarity fall into place when people lean on these established habits. In other words: on Wednesdays we wear pink.

The challenge with emergent social institutions is that they take time and repetition to form. You have to let them happen on their own; otherwise they don’t take on the same sense of meaning. Designing a new ritual often invites cringe, because it skips over the part where people buy into it through their collective routines. This is the difference between saying “on Wednesdays we wear pink” and saying

“Hey team, we have a great idea that’s going to build office solidarity and really reinforce the family dynamic we’ve got going on. We’re implementing: Pink. Wednesdays.”

All of our usual routines are disrupted right now, inviting fear, sadness, anger, frustration, and disappointment. People are trying to persist with the rituals closest to them, sometimes to the extreme detriment of public health (see: weddings, rallies, and ugh). I think there’s some good sociological advice for moving through these challenges for ourselves and our communities: recognize those emotions, trust in the routines and habits that you can safely establish for yourself and others, and know that they will take a long time to feel really meaningful again, but that doesn’t mean they aren’t working for you. In other words, stop trying to make fetch happen.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Kevin Rudd: A Rational Fear Podcast: Climate Action

INTERVIEW AUDIO
A RATIONAL FEAR PRESENTS:
‘GREATEST MORAL PODCAST OF OUR GENERATION’
WITH DAN ILIC
RECORDED 2 SEPTEMBER 2020
PUBLISHED 14 SEPTEMBER 2020

The post A Rational Fear Podcast: Climate Action appeared first on Kevin Rudd.

Worse Than Failure: CodeSOD: A Nice Save

Since HTTP is fundamentally stateless, developers have found a million ways to hack state into web applications. One of my "favorites" was the ASP.NET ViewState approach.

The ViewState is essentially a dictionary, where you can store any arbitrary state values you might want to track between requests. When the server outputs HTML to send to the browser, the contents of ViewState are serialized, hashed, and base-64 encoded and dumped into an <input type="hidden"> element. When the next request comes in, the server unpacks the hidden field and deserializes the dictionary. You can store most objects in it, if you'd like. The goal of this, and all the other WebForm state stuff, was to make handling web forms more like handling forms in traditional Windows applications.
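
As a rough illustration (my sketch, not code from the submitted application), using it from a WebForms code-behind looks something like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // stored values get serialized into the hidden __VIEWSTATE field
        ViewState["Counter"] = 0;
    }
    else
    {
        // and come back out, already deserialized, on the next postback
        int counter = (int)ViewState["Counter"];
        ViewState["Counter"] = counter + 1;
    }
}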

It's "great". It's extra great when its default behavior is to include the full state of every UI widget on the page. The incident which inspired Remy's Law of Requirements Gathering was a case where our users wanted like 500 text boxes on a page, and we blew out our allowed request sizes due to a gigundous ViewState because, at the time, we didn't know about that "helpful" feature.

Ryan N inherited some code which uses this, and shockingly, ViewState isn't the biggest WTF here:

protected void ValidateForm(object sender, EventArgs e)
{
    bool Save = true;
    if (sInstructions == string.Empty)
    {
        sInstructions = string.Empty;
    }
    else
    {
        Save = false;
    }
    if (Save) {...}
}

Let me just repeat part of that:

if (sInstructions == string.Empty) { sInstructions = string.Empty; }

If sInstructions is empty, set it to be empty. If sInstructions is not empty, then we set… Save to false? Reading this code, it implies if we have instructions we shouldn't save? What the heck is this sInstructions field anyway?

Well, Ryan explains: "sInstructions is a ViewState string variable, it holds error messages."

I had to spend a moment thinking "wait, why is it called 'instructions'?" But the method is named ValidateForm, which explains it. It's because it's user instructions, as in, "please supply a valid email address". Honestly, I'm more worried about the fact that this approach is starting to make sense to me than anything else.
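
For what it's worth, the whole dance above collapses into a single expression (my sketch, not Ryan's fix):

// Save is true exactly when no error messages have accumulated
bool Save = (sInstructions == string.Empty);
if (Save) { /* ... */ }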


Planet Debian: Russ Allbery: Review: Who Do You Serve, Who Do You Protect?

Review: Who Do You Serve, Who Do You Protect?, edited by Maya Schenwar, et al.

Editor: Maya Schenwar
Editor: Joe Macaré
Editor: Alana Yu-lan Price
Publisher: Haymarket Books
Copyright: June 2016
ISBN: 1-60846-684-1
Format: Kindle
Pages: 250

Who Do You Serve, Who Do You Protect? is an anthology of essays about policing in the United States. It's divided into two sections: one that enumerates ways that police are failing to serve or protect communities, and one that describes how communities are building resistance and alternatives. Haymarket Books (a progressive press in Chicago) has made it available for free in the aftermath of the George Floyd killing and resulting protests in the United States.

I'm going to be a bit unfair to this book, so let me start by admitting that the mismatch between it and the book I was looking for is not entirely its fault.

My primary goal was to orient myself in the discussion on the left about alternatives to policing. I also wanted to sample something from Haymarket Books; a free book was a good way to do that. I was hoping for a collection of short introductions to current lines of thinking that I could selectively follow in longer writing, and an essay collection seemed ideal for that.

What I had not realized (which was my fault for not doing simple research) is that this is a compilation of articles previously published by Truthout, a non-profit progressive journalism site, in 2014 and 2015. The essays are a mix of reporting and opinion but lean towards reporting. The earliest pieces in this book date from shortly after the police killing of Michael Brown, when racist police violence was (again) reaching national white attention.

The first half of the book is therefore devoted to providing evidence of police abuse and violence. This is important to do, but it's sadly no longer as revelatory in 2020, when most of us have seen similar things on video, as it was to white America in 2014. If you live in the United States today, while you may not be aware of the specific events described here, you're unlikely to be surprised that Detroit police paid off jailhouse informants to provide false testimony ("Ring of Snitches" by Aaron Miguel Cantú), or that Chicago police routinely use excessive deadly force with no consequences ("Amid Shootings, Chicago Police Department Upholds Culture of Impunity" by Sarah Macaraeg and Alison Flowers), or that there is a long history of police abuse and degradation of pregnant women ("Your Pregnancy May Subject You to Even More Law Enforcement Violence" by Victoria Law). There are about eight essays along those lines.

Unfortunately, the people who excuse or disbelieve these stories are rarely willing to seek out new evidence, let alone read a book like this. That raises the question of intended audience for the catalog of horrors part of this book. The answer to that question may also be the publication date; in 2014, the base of evidence and example for discussion had not been fully constructed. This sort of reporting is also obviously relevant in the original publication context of web-based journalism, where people may encounter these accounts individually through social media or other news coverage. In 2020, they offer reinforcement and rhetorical evidence, but I'm dubious that the people who would benefit from this knowledge will ever see it in this form. Those of us who will are already sickened, angry, and depressed.

My primary interest was therefore in the second half of the book: the section on how communities are building resistance and alternatives. This is where I'm going to be somewhat unfair because the state of that conversation may have been different in 2015 than it is now in 2020. But these essays were lacking the depth of analysis that I was looking for.

There is a human tendency, when one becomes aware of an obvious wrong, to simply publicize the horrible thing that is happening and expect someone to do something about it. It's obviously and egregiously wrong, so if more people knew about it, certainly it would be stopped! That has happened repeatedly with racial violence in the United States. It's also part of the common (and school-taught) understanding of the Civil Rights movement in the 1960s: activists succeeded in getting the violence on the cover of newspapers and on television, people were shocked and appalled, and the backlash against the violence created political change.

Putting aside the fact that this is too simplistic of a picture of the Civil Rights era, it's abundantly clear at this point in 2020 that publicizing racist and violent policing isn't going to stop it. We're going to have to do something more than draw attention to the problem. Deciding what to do requires political and social analysis, not just of the better world that we want to see but of how our current world can become that world.

There is very little in that direction in this book. Who Do You Serve, Who Do You Protect? does not answer the question of its title beyond "not us" and "white supremacy." While those answers are not exactly wrong, they're also not pushing the analysis in the direction that I wanted to read.

For example (and this is a long-standing pet peeve of mine in US political writing), it would be hard to tell from most of the essays in this book that any country besides the United States exists. One essay ("Killing Africa" by William C. Anderson) talks about colonialism and draws comparisons between police violence in the United States and international treatment of African and other majority-Black countries. One essay talks about US military behavior overseas ("Beyond Homan Square" by Adam Hudson). That's about it for international perspective. Notably, there is no analysis here of what other countries might be doing better.

Police violence against out-groups is not unique to the United States. No one has entirely solved this problem, but versions of this problem have been handled with far more success than here. The US has a comparatively appalling record; many countries in the world, particularly among comparable liberal democracies in Europe, are doing far better on metrics of racial oppression by agents of the government and of law enforcement violence. And yet it's common to approach these problems as if we have to develop a solution de novo, rather than ask what other countries are doing differently and if we could do some of those things.

The US has some unique challenges, both historical and with the nature of endemic violence in the country, so perhaps such an analysis would turn up too many US-specific factors to copy other people's solutions. But we need to do the analysis, not give up before we start. Novel solutions can lead to novel problems; other countries have tested, working improvements that could provide a starting framework and some map of potential pitfalls.

More fundamentally, only the last two essays of this book propose solutions more complex than "stop." The authors are very clear about what the police are doing, seem less interested in why, and are nearly silent on how to change it. I suspect I am largely in political agreement with most of the authors, but obviously a substantial portion of the country (let alone its power structures) is not, and therefore nothing is changing. Part of the project of ending police violence is understanding why the violence exists, picking apart the motives and potential fracture lines in the political forces supporting the status quo, and building a strategy to change the politics. That isn't even attempted here.

For example, the "who do you serve?" question of the book's title is more interesting than the essays give it credit for. Police are not a monolith. Why do Black people become police officers? What are their experiences? Are there police forces in the United States that are doing better than others? What makes them different? Why do police act with violence in the moment? What set of cultural expectations, training experiences, anxieties, and fears lead to that outcome? How do we change those factors?

Or, to take another tack, why are police not held accountable even when there is substantial public outrage? What political coalition supports that immunity from consequences, what are its fault lines and internal frictions, and what portions of that coalition could be broken off, peeled away, or removed from power? To whom, institutionally, are police forces accountable? What public offices can aspiring candidates run for that would give them oversight capability? This varies wildly throughout the United States; political approaches that work in large cities may not work in small towns, or with county sheriffs, or with the FBI, or with prison guards.

To treat these organizations as a monolith and their motives as uniform is bad political tactics. It gives up points of leverage.

I thought the best essays of this collection were the last two. "Community Groups Work to Provide Emergency Medical Alternatives, Separate from Police," by Candice Bernd, is a profile of several local emergency response systems that divert emergency calls from the police to paramedics, mental health experts, or social workers. This is an idea that's now relatively mainstream, and it seems to be finding modest success where it has been tried. It's more of a harm mitigation strategy than an attempt to deal with the root problem, but we're going to need both.

The last essay, "Building Community Safety" by Ejeris Dixon, is the only essay in this book that is pushing in the direction that I was hoping to read. Dixon describes building an alternative system that can intervene in violent situations without using the police. This is fascinating and I'm glad that I read it.

It's also frustrating in context because Dixon's essay should be part of a discussion. Dixon describes spending years learning de-escalation techniques, doing the hard work of community discussion and collective decision-making, and making deep investment in the skills required to handle violence without calling in a dangerous outside force. I greatly admire this approach (also common in parts of the anarchist community) and the people who are willing to commit to it. But it's an immense amount of work, and as Dixon points out, that work often falls on the people who are least able to afford it. Marginalized communities, for whom the police are often dangerous, are also likely to lack both time and energy to invest in this type of skill training. And many people simply will not do this work even if they do have the resources to do it.

More fundamentally, this approach conflicts somewhat with division of labor. De-escalation and social work are both professional skills that require significant time and practice to hone, and as much as I too would love to live in a world where everyone knows how to do some amount of this work, I find it hard to imagine scaling this approach without trained professionals. The point of paying someone to do this work as their job is that the money frees up their time to focus on learning those skills at a level that is difficult to do in one's free time. But once you have an organized group of professionals who do this work, you have to find a way to keep them from falling prey to the problems that plague the police, which requires understanding the origins of those problems. And that's putting aside the question of how large the residual of dangerous crime that cannot be addressed through any form of de-escalation might be, and what organization we should use to address it.

Dixon's essay is great; I wouldn't change anything about it. But I wanted to see the next essay engaging with Dixon's perspective and looking for weaknesses and scaling concerns, and then the next essay that attempts to shore up those weaknesses, and yet another essay that grapples with the challenging philosophical question of a government monopoly on force and how that can and should come into play in violent crime. And then essays on grass-roots organizing in the context of police reform or abolition, and on restorative justice, and on the experience of attempting police reform from the inside, and on how to support public defenders, and on the merits and weaknesses of focusing on electing reform-minded district attorneys. Unfortunately, none of those are here.

Overall, Who Do You Serve, Who Do You Protect? was a disappointment. It was free, so I suppose I got what I paid for, and I may have had a different reaction if I read it in 2015. But if you're looking for a deep discussion on the trade-offs and challenges of stopping police violence in 2020, I don't think this is the place to start.

Rating: 3 out of 10


Planet Debian: Enrico Zini: Travel links

A few interesting places to visit. Traveling could be complicated right now, but the internet searches alone can be interesting enough.

For example, churches:

Or fascinating urban-planning projects, which are worth looking up photos of:

Or nature, like Get Lost in Mega-Tunnels Dug by South American Megafauna

Planet Debian: Jonathan Carter: Wootbook / Tongfang laptop

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, support up to 64GB of RAM, and clearly run Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that these are made by Tongfang and that the model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU and that’s nice and compact and that breaks the 16GB memory barrier, finding this one that jumped all the way to 64GB sealed the deal for me.

These are the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz Octa Core CPU (4MB L2 cache, 8MB L3 cache, 7nm process).
  • 64GB RAM (2x DDR4 2666MHz 32GB modules)
  • 1TB nvme disk
  • 14″ 1920×1280 (16:9 aspect ratio) matte display.
  • Real ethernet port (gigabit)
  • Intel Wifi 6 AX200 wireless ethernet
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps, this one connects to my home network at between 800 and 1000Mbps. At this speed I don’t bother connecting via cable at home.

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of my own questions about this device that I had before, that other people in the Debian community might also have if they’re interested in this device. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (that just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1KG instead of the 1.38KG of the X250. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook, you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.

Keyboard/Touchpad

On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this keyboard very comfortably in the above average range. I’ve been typing on it a lot over the last 3 days (including this blog post) and it started feeling natural very quickly and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. These are things normal users don’t care about, but since I use mc (where insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of my keyboard between alt and ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated pgup/pgdn buttons. I use those a lot in apps to switch between tabs. At least the Fn button and the ctrl buttons are next to each other, so pressing those together with up and down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor button on the top left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left control + keyscan code 93. The airplane mode, volume and brightness buttons work fine.

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces: your hands don’t have to move far from the keyboard for quick pointer operations, and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle click with tapping 3 fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, this keyboard and touchpad are completely adequate. Only time will tell whether the keyboard still works fine in a few years from now, but I really have no serious complaints about it.

Display

The X250 had a brightness of 172 nits. That’s not very bright; I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently: my eyes are very photo-sensitive so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.

Ports

The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one hdmi port. Initially I was a bit concerned at the lack of displayport (which my X250 has), but with an adapter it’s possible to convert the USB-C port to displayport, and it seems like it’s possible to connect up to 3 external displays without using something weird like display over plain USB3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, the biggest downfall it has is really its pointing device. It doesn’t have a trackpoint and the touchpad is ok and completely usable, but not great. I use my laptop on a desk most of the time so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double what I paid for this machine. It doesn’t seem like you can really buy the perfect laptop no matter how much money you want to spend; there’s some compromise no matter what you end up choosing. But this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this and with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

Planet Debian: Steinar H. Gunderson: mlocate slowness

/usr/bin/mlocate asdf > /dev/null  23.28s user 0.24s system 99% cpu 23.536 total
/usr/local/sbin/mlocate-fast asdf > /dev/null  1.97s user 0.27s system 99% cpu 2.251 total

That is just changing mbsstr to strstr. Which I guess causes trouble for EUC_JP locales or something? But guys, scanning linearly through millions of entries is sort of outdated :-(

Kevin Rudd: Doorstop: Queensland’s Covid Response

E&OE TRANSCRIPT
DOORSTOP INTERVIEW
SUNSHINE COAST
13 SEPTEMBER 2020

Topic: The Palaszczuk Government’s Covid-19 response

Journalist: Kevin, what’s your stance on the border closures at the moment? Do you think it’s a good idea to open them up, or to keep them closed?

Kevin Rudd: Anyone who’s honest will tell you these are really hard decisions and it’s often not politically popular to do the right thing. But I think Annastacia Palaszczuk, faced with really hard decisions, I think has made the right on-balance judgment, because she’s putting first priority on the health security advice for all Queenslanders. That I think is fundamental. You see, in my case, I’m a Queenslander, proudly so, but the last five years I’ve lived in New York. Let me tell you what it’s like living in a city like New York, where they threw health security to the wind. It was a complete debacle. And so many thousands of people died as a result. So I think we’ve got to be very cautious about disregarding the health security advice. So when I see the Murdoch media running a campaign to force Premier Palaszczuk to open the borders of Queensland, I get suspicious about what’s underneath all that. Here’s the bottom line: Do you want your Premier to make a decision based on political advice, or based on media bullying? Or do you want your Premier to make a decision based on what the health professionals are saying? And from my point of view — as someone who loves this state, loves the Sunshine Coast, grew up here and married to a woman who spent her life in business employing thousands of people — on balance, what Queenslanders want is for them to have their health needs put first. And you know something? It may not be all that much longer that we need to wait. But as soon as we say, ‘well, it’s time for politics to take over, it’s time for the Murdoch media to take over and dictate when elected governments should open the borders or not’, I think that’s a bad day for Queensland.

Journalist: Do you think the recession and the economic impact is something that Australia will be able to mitigate post pandemic?

Kevin Rudd: Well, the responsibility for mitigating the national recession lies in the hands of Prime Minister Morrison. I know something about what these challenges mean. I was Prime Minister during the Global Financial Crisis when the rest of the world went into recession. So he has it within his hands to devise the economic plans necessary for the nation; state governments are responsible primarily for the health of their citizens. That’s the division of labour in our commonwealth. So, it’s either you have a situation where Premiers are told that it’s in your political interests, because the Murdoch media has launched a campaign to throw open your borders, or you’re attentive to the health policy advice from the health professionals to keep all Queenslanders safe. It’s hard, it’s tough, but on balance I think she’s probably made the right call. In fact, I’m pretty sure she has.

The post Doorstop: Queensland’s Covid Response appeared first on Kevin Rudd.

Planet Debian: Russell Coker: Setting Up a Dell T710 Server with DRAC6

I’ve just got a Dell T710 server for LUV and I’ve been playing with the configuration options. Here’s a list of what I’ve done and recommendations on how to do things. I decided not to try to write a step by step guide to doing stuff as the situation doesn’t work for that. I think that a list of how things are broken and what to avoid is more useful.

BIOS

Firstly with a Dell server you can upgrade the BIOS from a Linux root shell. Generally when a server is deployed you won’t want to upgrade the BIOS (why risk breaking something when it’s working), so before deployment you probably should install the latest version. Dell provides a shell script with encoded binaries in it that you can just download and run, it’s ugly but it works. The process of loading the firmware takes a long time (a few minutes) with no progress reports, so best to plug both PSUs in and leave it alone. At the end of loading the firmware a hard reboot will be performed, so upgrading your distribution while doing the install is a bad idea (Debian is designed to not lose data in this situation so it’s not too bad).
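
For example, the update usually boils down to something like this (the file name here is hypothetical; use the package Dell publishes for your exact system):

# as root, from a Linux shell; this takes several minutes and ends in a hard reboot
chmod +x ./T710_BIOS_LX_6.6.0.BIN
./T710_BIOS_LX_6.6.0.BIN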

IDRAC

IDRAC is the Integrated Dell Remote Access Controller. By default it will listen on all Ethernet ports, get an IP address via DHCP (using a different Ethernet hardware address to the interface the OS sees), and allow connections. Configuring it to be restrictive as to which ports it listens on may be a good idea (the T710 had 4 ports built in so having one reserved for management is usually OK). You need to configure a username and password for IDRAC that has administrative access in the BIOS configuration.

Web Interface

By default IDRAC will run a web server on the IP address it gets from DHCP, you can connect to that from any web browser that allows ignoring invalid SSL keys. Then you can use the username and password configured in the BIOS to login. IDRAC 6 on the PowerEdge T710 recommends IE 6.

To get an ssl cert that matches the name you want to use (and doesn’t give browser errors) you have to generate a CSR (Certificate Signing Request) on the DRAC; the only field that matters is the CN (Common Name), the rest just have to be something that Letsencrypt will accept. Certbot has the option “--config-dir /etc/letsencrypt-drac” to specify an alternate config directory; the SSL key for DRAC should be entirely separate from the SSL key for other stuff. Then use the “--csr” option to specify the path of the CSR file. When you run letsencrypt, the name of the output file you want will be in the format “*_chain.pem“. You then have to upload that to IDRAC to have it used. This is a major pain given the short lifetime of letsencrypt certificates. Hopefully a more recent version of IDRAC has Certbot built in.

When you access RAC via ssh (see below) you can run the command “racadm sslcsrgen” to generate a CSR that can be used by certbot. So it’s probably possible to write expect scripts to get that CSR, run certbot, and then put the ssl certificate back. I don’t expect to use IDRAC often enough to make it worth the effort (I can tell my browser to ignore an outdated certificate), but if I had dozens of Dells I’d script it.
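
If you did want to script it, the certbot half of the loop would look roughly like this (a sketch only: the CSR file name, webroot path, and DRAC hostname are placeholders, and fetching the CSR from the DRAC and uploading the certificate back still have to be automated separately):

# sign a CSR generated by "racadm sslcsrgen", saved locally as drac.csr
certbot certonly --config-dir /etc/letsencrypt-drac \
  --csr drac.csr \
  --webroot -w /var/www/html -d drac.example.org
# certbot writes its output into the current directory; the file to
# upload to IDRAC is the one ending in _chain.pem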

SSH

The web interface allows configuring ssh access which I strongly recommend doing. You can configure ssh access via password or via ssh public key. For ssh access set TERM=vt100 on the host to enable backspace as ^H. Something like “TERM=vt100 ssh root@drac“. Note that changing certain other settings in IDRAC such as enabling Smartcard support will disable ssh access.

There is a limit to the number of open “sessions” for managing IDRAC, when you ssh to the IDRAC you can run “racadm getssninfo” to get a list of sessions and “racadm closessn -i NUM” to close a session. The closessn command takes a “-a” option to close all sessions but that just gives an error saying that you can’t close your own session because of programmer stupidity. The IDRAC web interface also has an option to close sessions. If you get to the limits of both ssh and web sessions due to network issues then you presumably have a problem.

I couldn’t find any documentation on how the ssh host key is generated. I Googled for the key fingerprint and didn’t get a match so there’s a reasonable chance that it’s unique to the server (please comment if you know more about this).

Don’t Use Graphical Console

The T710 is an older server and runs IDRAC6 (IDRAC9 is the current version). The Java based graphical console access doesn’t work with recent versions of Java. The Debian package icedtea-netx has the javaws command for running the .jnlp file for the console; by default the web browser won’t run this, so you download the .jnlp file and pass that as the first parameter to the javaws program, which then downloads a bunch of Java classes from the IDRAC to run. One error I got with Java 11 was ‘Exception in thread “Timer-0” java.util.ServiceConfigurationError: java.nio.charset.spi.CharsetProvider: Provider sun.nio.cs.ext.ExtendedCharsets could not be instantiated‘, Google didn’t turn up any solutions to this. Java 8 didn’t have that problem but had a “connection failed” error that some people reported as being related to the SSL key, but replacing the SSL key for the web server didn’t help. The suggestion of running a VM with an old web browser to access IDRAC didn’t appeal. So I gave up on this. Presumably a Windows VM running IE6 would work OK for this.

Serial Console

Fortunately IDRAC supports a serial console. Here’s a page summarising Serial console setup for DRAC [1]. Once you have done that put “console=tty0 console=ttyS1,115200n8” on the kernel command line and Linux will send the console output to the virtual serial port. To access the serial console from remote you can ssh in and run the RAC command “console com2” (there is no option for using a different com port). The serial port seems to be unavailable through the web interface.
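
On Debian, a quick sketch for making that kernel command line persistent (assuming the stock /etc/default/grub layout) is:

# append the serial console settings to GRUB_CMDLINE_LINUX and regenerate grub.cfg
sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8 /' /etc/default/grub
update-grub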

If I was supporting many Dell systems I’d probably setup a ssh to JavaScript gateway to provide a web based console access. It’s disappointing that Dell didn’t include this.

If you disconnect from an active ssh serial console then the RAC might keep the port locked, then any future attempts to connect to it will give the following error:

/admin1-> console com2
console: Serial Device 2 is currently in use

So far the only way I’ve discovered to get console access again after that is the command “racadm racreset“. If anyone knows of a better way please let me know. As an aside having “racreset” being so close to “racresetcfg” (which resets the configuration to default and requires a hard reboot to configure it again) seems like a really bad idea.

Host Based Management

deb http://linux.dell.com/repo/community/ubuntu xenial openmanage

The above apt sources.list line allows installing Dell management utilities (Xenial is old but they work on Debian/Buster). Probably the packages srvadmin-storageservices-cli and srvadmin-omacore will drag in enough dependencies to get it going.
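
For example, something like the following should get the tools installed (a sketch; it assumes the sources.list line above has been added and that you have imported Dell’s repository signing key, or decided to trust the repository without it):

apt update
apt install srvadmin-storageservices-cli srvadmin-omacore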

Here are some useful commands:

# show hardware event log
omreport system esmlog
# show hardware alert log
omreport system alertlog
# give summary of system information
omreport system summary
# show versions of firmware that can be updated
omreport system version
# get chassis temp
omreport chassis temps
# show physical disk details on controller 0
omreport storage pdisk controller=0

RAID Control

The RAID controller is known as PERC (PowerEdge Raid Controller); the Dell web site has an rpm package of the perccli tool to manage the RAID from Linux. This is statically linked and appears to have different versions of libraries used to make it. The command “perccli show” gives an overview of the configuration, but the command “perccli /c0 show“, which gives detailed information on controller 0, SEGVs and the kernel logs a “vsyscall attempted with vsyscall=none” message. Here’s an overview of the vsyscall emulation issue [2]. Basically I could add “vsyscall=emulate” to the kernel command line and slightly reduce security for the system to allow system calls from perccli that are made from old libc code to work, or I could run code from a dubious source as root.

Some versions of IDRAC have a “racadm raid” command that can be run from a ssh session to perform RAID administration remotely, mine doesn’t have that. As an aside the help for the RAC system doesn’t list all commands and the Dell documentation is difficult to find so blog posts from other sysadmins is the best source of information.

I have configured IDRAC to have all of the BIOS output go to the virtual serial console over ssh, so I can see the BIOS prompt for PERC administration, but it didn’t accept my key presses when I tried to use it. In any case this requires downtime and I’d like to be able to change hard drives without rebooting.

I found vsyscall_trace on Github [3], it uses the ptrace interface to emulate vsyscall on a per process basis. You just run “vsyscall_trace perccli” and it works! Thanks Geoffrey Thomas for writing this!

Here are some perccli commands:

# show overview
perccli show
# help on adding a vd (RAID)
perccli /c0 add help
# show controller 0 details
perccli /c0 show
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

When a disk is added to a PERC controller about 525MB of space is used at the end for RAID metadata. So when you create a RAID-0 with a single device as in the above example all disk data is preserved by default except for the last 525MB. I have tested this by running a BTRFS scrub on a disk from another system after making it into a RAID-0 on the PERC.

Planet Linux Australia: Simon Lyall: Talks from KubeCon + CloudNativeCon Europe 2020 – Part 1

Various talks I watched from their YouTube playlist.

Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling (KEDA) – Tom Kerkhove

I’ve been using Keda a little bit at work. Good way to scale on random stuff. At work I’m scaling pods against length of AWS SQS Queues and as a cron. Lots of other options. This talk is a 9 minute intro. A bit hard to read the small font on the screen of this talk.

Autoscaling at Scale: How We Manage Capacity @ Zalando – Mikkel Larsen, Zalando SE

  • These guys have their own HPA replacement for scaling: kube-metrics-adapter.
  • Outlines some new stuff in scaling in 1.18 and 1.19.
  • They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
  • Have up to 1000 nodes in some of their clusters. Have to play with address space per node, and also scale their control plane nodes vertically (control plane autoscaler).
  • Use the Vertical Pod Autoscaler, especially for things like Prometheus that vary with the size of the cluster. Have had problems with it scaling down too fast. They have some of their own custom changes in a fork

Keynote: Observing Kubernetes Without Losing Your Mind – Vicki Cheung

  • Lots of metrics don’t cover what you want, and they get complex and hard to maintain
  • Monitor core user workflows (ie just test a pod launch and stop)
  • Tiny tools
    • 1 watches for events on cluster and logs them -> elastic
    • 2 watches container events -> elastic
    • End up with one timeline for a deploy/job covering everything
    • Empowers users to do their own debugging

Autoscaling and Cost Optimization on Kubernetes: From 0 to 100 – Guy Templeton & Jiaxin Shan

  • Intro to HPA and metric types. Plus some of the newer stuff like multiple metrics
  • Vertical Pod Autoscaler. Good for single pod deployments. Doesn’t work well with JVM-based workloads.
  • Cluster Autoscaler.
    • A few things like using prestop hooks to give pods time to shutdown
    • pod priorities for scaling.
    • --expendable-pods-priority-cutoff to not expand for low-priority jobs
    • Using the priority expander to try and expand spot instances first and then fall back to more expensive node types
    • Using mixed instance policy with AWS. Lots of instance types (same CPU/RAM though) to choose from.
    • Look at PodDisruptionBudget
    • Some other CA flags like --scale-down-utilization-threshold to look at.
  • Mention of Keda
  • Best return is probably tuning HPA
  • There is also another similar talk. Note the male speaker talks very slowly, so crank up the speed.

Keynote: Building a Service Mesh From Scratch – The Pinterest Story – Derek Argueta

  • Changed to Envoy as a http proxy for incoming
  • Wrote own extension to make feature complete
  • Also another project migrating to mTLS
    • Huge amount of work for Java.
    • Lots of work to repeat for other languages
    • Looked at getting Envoy to do the work
    • Ingress LB -> Inbound Proxy -> App
  • Used j2 to build the Static config (with checking, tests, validation)
  • Rolled out to put envoy in front of other services with good TLS termination default settings
  • Extra Mesh Use Cases
    • Infrastructure-specific routing
    • SLI Monitoring
    • http cookie monitoring
  • Became a platform that people wanted to use.
  • Solving one problem first and incrementally using other things. Many groups had similar problems. “Just a node in a mesh”.

Improving the Performance of Your Kubernetes Cluster – Priya Wadhwa, Google

  • Tools – Mostly tested locally with Minikube (she is a Minikube maintainer)
  • Minikube pause – Pause the Kubernetes system processes and leave the app running, good if the cluster isn’t changing.
  • Looked at some articles from Brendan Gregg
  • Ran USE Method against Minikube
  • eBPF BCC tools against Minikube
  • biosnoop – noticed lots of writes from etcd
  • KVM Flamegraph – Lots of calls from ioctl
  • Theory that etcd writes might be a big contributor
  • How to tune etcd writes (updated the --snapshot-count flag to various numbers, but it didn’t seem to help)
  • Noticed CPU spikes every few seconds
  • “pidstat 1 60”. Noticed a kubectl command running often, running “kubectl apply addons” regularly
  • Suspected addon manager running often
  • Could increase addon manager polltime but then addons would take a while to show up.
  • But in Minikube this isn’t a problem, because minikube knows when new addons are added, so it can run the addon manager directly rather than having it poll.
  • 32% reduction in overhead from turning off addon polling
  • Also reduced coredns number to one.
  • pprof – go tool
  • kube-apiserver pprof data
  • Spending lots of times dealing with incoming requests
  • Lots of requests from kube-controller-manager and kube-scheduler around leader-election
  • But Minikube is only running one of each. No need to elect a leader!
  • Flag to turn both off: --leader-elect=false
  • 18% reduction from reducing coredns to 1 and turning leader election off.
  • Back to looking at etcd overhead with pprof
  • writeFrameAsync in http calls
  • Theory: could increase --proxy-refresh-interval from 30s up to 120s. 70s seemed a good value, but unsure what the behaviour was. Asked and it didn’t appear to be a big problem.
  • 4% reduction in overhead

Share


Planet Debian: Russ Allbery: PGP::Sign 1.03

Part of the continuing saga to clean up CPAN testing failures with this module. Test reports uncovered a tighter version dependency for the flags I was using for GnuPG v2 (2.1.23) and a version dependency for GnuPG v1 (1.4.20). As with the previous release, I'm now skipping tests if the version is too old, since this makes the CPAN test results more useful to me.

I also took advantage of this release to push the Debian packaging to Salsa (and the upstream branch as well since it's easier) and update the package metadata, as well as add an upstream metadata file since interest in that in Debian appears to have picked up again.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Planet Debian: Ryan Kavanagh: Configuring OpenIKED VPNs for StrongSwan Clients

A few weeks ago I configured a road warrior VPN setup. The remote end is on a VPS running OpenBSD and OpenIKED, the VPN is an IKEv2 VPN using x509 authentication, and the local end is StrongSwan. I also configured an IKEv2 VPN between my VPSs. Here are the notes for how to do so.

In all cases, to use x509 authentication, you will need to generate a bunch of certificates and keys:

  • a CA certificate
  • a key/certificate pair for each client

Fortunately, OpenIKED provides the ikectl utility to help you do so. Before going any further, you might find it useful to edit /etc/ssl/ikeca.cnf to set some reasonable defaults for your certificates.

Begin by creating and installing a CA certificate:

# ikectl ca vpn create
# ikectl ca vpn install

For simplicity, I am going to assume that you are managing your CA on the same host as one of the hosts that you want to configure for the VPN. If not, see the bit about exporting certificates at the beginning of the section on persistent host-host VPNs.

Create and install a key/certificate pair for your server. Suppose for example your first server is called server1.example.org:

# ikectl ca vpn certificate server1.example.org create
# ikectl ca vpn certificate server1.example.org install

Persistent host-host VPNs

For each other server that you want to use, you need to also create a key/certificate pair on the same host as the CA certificate, and then copy them over to the other server. Assuming the other server is called server2.example.org:

# ikectl ca vpn certificate server2.example.org create
# ikectl ca vpn certificate server2.example.org export

This last command will produce a tarball server2.example.org.tgz. Copy it over to server2.example.org and install it:

# tar -C /etc/iked -xzpvf server2.example.org.tgz

Next, it is time to configure iked. To do so, you will need to find some information about the certificates you just generated. On the host with the CA, run

$ cat /etc/ssl/vpn/index.txt
V       210825142056Z           01      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org
V       210825142208Z           02      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org

Pick one of the two hosts to play the “active” role (in this case, server1.example.org). Using the information you gleaned from index.txt, add the following to /etc/iked.conf, filling in the srcid and dstid fields appropriately.

ikev2 'server1_server2_active' active esp from server1.example.org to server2.example.org \
	local server1.example.org peer server2.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org'

On the other host, add the following to /etc/iked.conf

ikev2 'server2_server1_passive' passive esp from server2.example.org to server1.example.org \
	local server2.example.org peer server1.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org'

Note that the names 'server1_server2_active' and 'server2_server1_passive' in the two stanzas do not matter and can be omitted. Reload iked on both hosts:

# ikectl reload

If everything worked out, you should see the negotiated security associations (SAs) in the output of

# ikectl show sa

On OpenBSD, you should also see some output on success or errors in the file /var/log/daemon.

For a road warrior

Add the following to /etc/iked.conf on the remote end:

ikev2 'responder_x509' passive esp \
	from 0.0.0.0/0 to 10.0.1.0/24 \
	local server1.example.org peer any \
	srcid server1.example.org \
	config address 10.0.1.0/24 \
	config name-server 10.0.1.1 \
	tag "ROADW"

Configure or omit the address range and the name-server configurations to suit your needs. See iked.conf(5) for details. Reload iked:

# ikectl reload

If you are on OpenBSD and want the remote end to have an IP address, add the following to /etc/hostname.vether0, again configuring the address to suit your needs:

inet 10.0.1.1 255.255.255.0

Put the interface up:

# ifconfig vether0 up

Now create a client certificate for authentication. In my case, my road-warrior client was client.example.org:

# ikectl ca vpn certificate client.example.org create
# ikectl ca vpn certificate client.example.org export

Copy client.example.org.tgz to client and run

# tar -C /etc/ipsec.d/ -xzf client.example.org.tgz -- \
	./private/client.example.org.key \
	./certs/client.example.org.crt ./ca/ca.crt

Install StrongSwan and add the following to /etc/ipsec.conf, configuring appropriately:

ca example.org
  cacert=ca.crt
  auto=add

conn server1
  keyexchange=ikev2
  right=server1.example.org
  rightid=%server1.example.org
  rightsubnet=0.0.0.0/0
  rightauth=pubkey
  leftsourceip=%config
  leftauth=pubkey
  leftcert=client.example.org.crt
  auto=route

Add the following to /etc/ipsec.secrets:

# space is important
server1.example.org : RSA client.example.org.key

Restart StrongSwan, put the connection up, and check its status:

# ipsec restart
# ipsec up server1
# ipsec status

That should be it.

Sources:

Planet DebianRyan Kavanagh: Configuring OpenIKED VPNs for Road Warriors

A few weeks ago I configured a road warrior VPN setup. The remote end is on a VPS running OpenBSD and OpenIKED, the VPN is an IKEv2 VPN using x509 authentication, and the local end is StrongSwan. I also configured an IKEv2 VPN between my VPSs. Here are the notes for how to do so.

In all cases, to use x509 authentication, you will need to generate a bunch of certificates and keys:

  • a CA certificate
  • a key/certificate pair for each client

Fortunately, OpenIKED provides the ikectl utility to help you do so. Before going any further, you might find it useful to edit /etc/ssl/ikeca.cnf to set some reasonable defaults for your certificates.

Begin by creating and installing a CA certificate:

# ikectl ca vpn create
# ikectl ca vpn install

For simplicity, I am going to assume that the you are managing your CA on the same host as one of the hosts that you want to configure for the VPN. If not, see the bit about exporting certificates at the beginning of the section on persistent host-host VPNs.

Create and install a key/certificate pair for your server. Suppose for example your first server is called server1.example.org:

# ikectl ca vpn certificate server1.example.org create
# ikectl ca vpn certificate server1.example.org install

Persistent host-host VPNs

For each other server that you want to use, you need to also create a key/certificate pair on the same host as the CA certificate, and then copy them over to the other server. Assuming the other server is called server2.example.org:

# ikectl ca vpn certificate server2.example.org create
# ikectl ca vpn certificate server2.example.org export

This last command will produce a tarball server2.example.org.tgz. Copy it over to server2.example.org and install it:

# tar -C /etc/iked -xzpvf server2.example.org.tgz

Next, it is time to configure iked. To do so, you will need to find some information about the certificates you just generated. On the host with the CA, run

$ cat /etc/ssl/vpn/index.txt
V       210825142056Z           01      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org
V       210825142208Z           02      unknown /C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org

Pick one of the two hosts to play the “active” role (in this case, server1.example.org). Using the information you gleaned from index.txt, add the following to /etc/iked.conf, filling in the srcid and dstid fields appropriately.

ikev2 'server1_server2_active' active esp from server1.example.org to server2.example.org \
	local server1.example.org peer server2.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org'

On the other host, add the following to /etc/iked.conf

ikev2 'server2_server1_passive' passive esp from server2.example.org to server1.example.org \
	local server2.example.org peer server1.example.org \
	srcid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server2.example.org/emailAddress=rak@example.org' \
	dstid '/C=US/ST=Pennsylvania/L=Pittsburgh/CN=server1.example.org/emailAddress=rak@example.org'

Note that the names 'server1_server2_active' and 'server2_server1_passive' in the two stanzas do not matter and can be omitted. Reload iked on both hosts:

# ikectl reload
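If the reload fails, or the connection later refuses to come up, it helps to validate the syntax first; iked's configtest mode only parses /etc/iked.conf and reports any errors without touching the running daemon:

# iked -n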

If everything worked out, you should see the negotiated security associations (SAs) in the output of

# ikectl show sa

On OpenBSD, you should also see messages reporting success or errors in /var/log/daemon.
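You can also confirm that the resulting flows and SAs actually made it into the kernel:

# ipsecctl -s all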

For a road warrior

Add the following to /etc/iked.conf on the remote end:

ikev2 'responder_x509' passive esp \
	from 0.0.0.0/0 to 10.0.1.0/24 \
	local server1.example.org peer any \
	srcid server1.example.org \
	config address 10.0.1.0/24 \
	config name-server 10.0.1.1 \
	tag "ROADW"

Configure or omit the address range and the name-server configurations to suit your needs. See iked.conf(5) for details. Reload iked:

# ikectl reload

If you are on OpenBSD and want the remote end to have an IP address, add the following to /etc/hostname.vether0, again configuring the address to suit your needs:

inet 10.0.1.1 255.255.255.0

Put the interface up:

# ifconfig vether0 up
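Since /etc/hostname.vether0 is normally only read at boot, you can instead apply its settings immediately (and be sure they persist across reboots) with netstart:

# sh /etc/netstart vether0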

Now create a client certificate for authentication. In my case, my road-warrior client was client.example.org:

# ikectl ca vpn certificate client.example.org create
# ikectl ca vpn certificate client.example.org export

Copy client.example.org.tgz to the client and run:

# tar -C /etc/ipsec.d/ -xzf client.example.org.tgz -- \
	./private/client.example.org.key \
	./certs/client.example.org.crt ./ca/ca.crt
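Depending on your umask, the extracted private key may end up group- or world-readable; StrongSwan only needs it readable by root, so it is worth tightening the permissions:

# chmod 600 /etc/ipsec.d/private/client.example.org.key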

Install StrongSwan and add the following to /etc/ipsec.conf, configuring appropriately:

ca example.org
  cacert=ca.crt
  auto=add

conn server1
  keyexchange=ikev2
  right=server1.example.org
  rightid=%server1.example.org
  rightsubnet=0.0.0.0/0
  rightauth=pubkey
  leftsourceip=%config
  leftauth=pubkey
  leftcert=client.example.org.crt
  auto=route

Add the following to /etc/ipsec.secrets:

# space is important
server1.example.org : RSA client.example.org.key

Restart StrongSwan, put the connection up, and check its status:

# ipsec restart
# ipsec up server1
# ipsec status
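If the connection fails to establish, ipsec statusall gives a more detailed view than ipsec status, including the loaded certificates and credentials:

# ipsec statusall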

That should be it.


Kevin Rudd: Australian Jewish News: Rudd Responds To Critics

With the late Shimon Peres

Published in The Australian Jewish News on 12 September 2020

Since writing in these pages last month, I’ve been grateful to receive many kind messages from readers reflecting on my record in office. Other letters, including those published in the AJN, deserve a response.

First to Michael Gawenda, whose letter repeated the false assertion that I believe in some nefarious connection between Mark Leibler’s lobbying of Julia Gillard and the factional machinations that brought her to power. Challenged to provide evidence for this fantasy, Gawenda pointed to this passage in my book, The PM Years: “The meticulous work of moving Gillard from the left to the right on foreign policy has already begun in earnest more than a year before the coup”.

Hold the phone! Of course Gillard was on the move – on the US, on Israel, and even on marriage equality. On all these, unlike me, she had a socialist-left background and wanted to appeal to right-wing factional bosses. Even Gawenda must comprehend that a strategy on Gillard’s part does not equate to Leibler plotting my demise.

My complaint against Leibler is entirely different: his swaggering arrogance in insisting Labor back virtually every move by Benjamin Netanyahu, good or bad; that we remain silent over the Mossad’s theft of Australian passports to carry out an assassination, when they’d already been formally warned under Howard never to do it again; and his bad behaviour as a dinner guest at the Lodge when, having angrily disagreed with me over the passports affair, he leaned over the table and said menacingly to my face: “Julia is looking very good in the public eye these days, prime minister.”

There is a difference between bad manners and active conspiracy. Gawenda knows this. If he’d bothered to read my book, rather than flipping straight to the index, he would have found on page 293 the list of those actually responsible for plotting the coup. They were: Wayne Swan, Mark Arbib, David Feeney, Don Farrell and Stephen Conroy (with Bill Ludwig, Paul Howes, Bill Shorten, Tony Sheldon and Karl Bitar in supporting roles). All Labor Party members.

So what was Leibler up to? Unsurprisingly, he was cultivating Gillard as an influential contact in the government. Isn’t that, after all, what lobbyists do? They lobby. And Leibler, by his own account and Gawenda’s obsequious reporting, was very good at it.

Finally, I am disappointed by Gawenda’s lack of contrition for not contacting me before publication. This was his basic professional duty as a journalist. If he’d bothered, I could have dispelled his conspiracy theory inside 60 seconds. But, then again, it would have ruined his yarn.

A second letter, by Leon Poddebsky, asserted I engaged in “bombastic opposition to Israel’s blockade against the importation by Hamas of military and dual-purpose materials for its genocidal war against the Jewish state”.

I draw Mr Poddebsky’s attention to my actual remarks in 2010: “When it comes to a blockade against Gaza, preventing the supply of humanitarian aid, such a blockade should be removed. We believe that the people of Gaza … should be provided with humanitarian assistance.” It is clear my remarks were limited to curbs on humanitarian aid. The AJN has apologised for publishing this misrepresentation, and I accept that apology with no hard feelings.

Regarding David Singer, whose letter denies that Netanyahu’s stalled West Bank plan constitutes “annexation”, I decline to debate him on ancient history. I only note that the term “annexation” is used by the governments of Australia and Britain, the European Union, the United Nations, the Israeli press and even the AJN.

Mr Singer disputes the illegality of annexation, citing documents from 1922. I direct him to the superseding Oslo II Accord of 1995 where Israel agreed “neither side shall initiate or take any step that will change the status of the West Bank and Gaza Strip pending the outcome of permanent status negotiations”.

Further, Mr Singer challenges my assertion that Britain’s Prime Minister agrees that annexation would violate international law. I direct Mr Singer to Boris Johnson’s article for Yedioth Ahronoth in July: “Annexation would represent a violation of International law. It would also be a gift to those who want to perpetuate old stories about Israel… If it does (go ahead), the UK will not recognise any changes to the 1967 lines, except those agreed by both parties.”

Finally, on Leibler’s continuing protestations about his behaviour at dinner, I will let others who know him better pass judgment on his character. More importantly, the net impact of Leibler’s lobbying has been to undermine Australia’s once-strong bipartisan support for Israel. His cardinal error, together with others, has been to equate support for Israel with support for Netanyahu’s policies. This may be tactically smart for their Likud friends, but it’s strategically dumb given the challenges Israel will face in the decades ahead.


Sam Varghese: When will Michael Hayden explain why the NSA did not predict 9/11?

As America marks the 19th anniversary of the destruction of the World Trade Centre towers by terrorists, it is a good time to ask when General Michael Hayden, head of the NSA at the time of 9/11, will come forward and explain why the agency was unable to detect the chatter among those who had banded together to wreak havoc in the US.

Before I continue, let me point out that nothing of what appears below is new; it was all reported some four years ago, but mainstream media have conspicuously avoided pursuing the topic because it would probably trouble some people in power.

The tale of how Hayden came to throw out a system known as ThinThread, devised by probably the most brilliant metadata analyst, William Binney, at that time the technical director of the NSA, has been told in a searing documentary titled A Good American.

Binney devised the system which would look at links between all people around the globe and added in measures to prevent violations of Americans’ privacy. Cryptologist Ed Loomis, analyst Kirk Wiebe, software engineer Tom Drake and Diane Roark, senior staffer at the House Intelligence Committee, worked along with him.

A skunkworks project, Binney’s ThinThread system handled data as it was ingested, unlike the existing method at the NSA which was to just collect all the data and analyse it later.

But when Hayden took the top job in 1999, he wanted to spread the NSA’s work around and also boost its budget – a bit of empire building, no less. Binney was asked what he could do with a billion dollars and more for the ThinThread system but said he could not use more than US$300 million.

What followed was surprising. Though ThinThread had been demonstrated to be able to sort data fast and accurately and track patterns within it, Hayden banned its use and put in place plans to develop a system known as Trailblazer – which incidentally had to be trashed many years later as an unmitigated disaster after many billions had gone down the drain.

Trailblazer was being used when September 11 came around.

Drake points out in the film that after 9/11, they set up ThinThread again and analysed all the data flowing into the NSA in the lead-up to the attack and found clear indications of the terrorists’ plans.

At the beginning of the film, Binney’s voice is heard saying, “It was just revolting… and disgusting that we allowed it to happen,” as footage of people leaping from the burning Trade Centre is shown.

After the tragedy, ThinThread, sans its privacy protections, was used to conduct blanket surveillance on Americans. Binney and his colleagues left the NSA shortly thereafter.

Hayden is often referred to as a great American patriot by American talk-show host Bill Maher. He was offered an interview by the makers of A Good American but did not take it up. Perhaps he would like to break his silence now.

Planet Debian: Markus Koschany: My Free Software Activities in August 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in September) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

teeworlds
  • I packaged a new upstream release of teeworlds, the well-known 2D multiplayer shooter with cute characters called tees to resolve a Python 2 bug (although teeworlds is actually a C++ game). The update also fixed a severe remote denial-of-service security vulnerability, CVE-2020-12066. I prepared a patch for Buster and will send it to the security team later today.
  • I sponsored updates of mgba, a Game Boy Advance emulator, for Ryan Tandy, and osmose-emulator for Carlos Donizete Froes.
  • I worked around an RC GCC 10 bug in megaglest by compiling with -fcommon (a sketch of the usual debian/rules approach follows this list).
  • Thanks to Gerardo Ballabio who packaged a new upstream version of galois which I uploaded for him.
  • Also thanks to Reiner Herrmann and Judit Foglszinger who fixed a regression (crash) in monsterz due to the earlier port to Python 3. Reiner also made fans of supertuxkart happy by packaging the latest upstream release version 1.2.
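For anyone hitting similar GCC 10 link failures in their own packages, the usual debian/rules workaround looks like the sketch below, assuming the package uses dpkg's default buildflags machinery (the actual megaglest change may differ):

export DEB_CFLAGS_MAINT_APPEND   = -fcommon
export DEB_CXXFLAGS_MAINT_APPEND = -fcommon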

Debian Java

Misc

  • I was contacted by the upstream maintainer of privacybadger, a privacy addon for Firefox and Chromium, who dislikes the idea of having a stable and unchanging version in Debian stable releases. Obviously I can’t really do much about it, although I believe the release team would be open-minded about regular point updates of browser addons. However I don’t intend to do regular updates for all of my packages in stable unless there is a really good reason to do so. At the moment I’m willing to make an exception for ublock-origin and https-everywhere because I feel these addons should be core browser functionality anyway. I talked about this on our Debian Mozilla Extension Maintainers mailinglist and it seems someone is interested in taking over privacybadger and preparing regular stable point updates. Let’s see how it turns out.
  • Finally this month saw the release of ublock-origin 1.29.0 and the creation of two different browser-specific binary packages for Firefox and Chromium. I have talked about it before and I believe two separate packages for ublock-origin are more aligned to upstream development and make the whole addon easier to maintain which benefits users, upstream and maintainers.
  • imlib2, an image library, and binaryen also got updated this month.

Debian LTS

This was my 54th month as a paid contributor, and I have been paid to work 20 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2303-1. Issued a security update for libssh fixing 1 CVE.
  • DLA-2327-1. Issued a security update for lucene-solr fixing 1 CVE.
  • DLA-2369-1. Issued a security update for libxml2 fixing 8 CVEs.
  • Triaged CVE-2020-14340, jboss-xnio as not-affected for Stretch.
  • Triaged CVE-2020-13941, lucene-solr as no-dsa because the security impact was minor.
  • Triaged CVE-2019-17638, jetty9 as not-affected for Stretch and Buster.
  • squid3: I backported the patches for CVE-2020-15049, CVE-2020-15810, CVE-2020-15811 and CVE-2020-24606 from squid 4 to squid 3.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 8 “Jessie”. This was my 27th month, and I have been paid to work 14.25 hours on ELTS.

  • ELA-271-1. Issued a security update for squid3 fixing 19 CVEs. Most of the work was already done before ELTS started; only the patch for CVE-2019-12529 had to be adjusted for the nettle version in Jessie.
  • ELA-273-1. Issued a security update for nss fixing 1 CVE.
  • ELA-276-1. Issued a security update for libjpeg-turbo fixing 2 CVEs.
  • ELA-277-1. Issued a security update for graphicsmagick fixing 1 CVE.
  • ELA-279-1. Issued a security update for imagemagick fixing 3 CVEs.
  • ELA-280-1. Issued a security update for libxml2 fixing 4 CVEs.

Thanks for reading and see you next time.

Planet Debian: Jelmer Vernooij: Debian Janitor: All Packages Processed with Lintian-Brush

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

On 12 July 2019, the Janitor started fixing lintian issues in packages in the Debian archive. Now, a year and a half later, it has processed every one of the almost 28,000 packages at least once.

Graph with Lintian Fixes Burndown

As discussed two weeks ago, this has resulted in roughly 65,000 total changes. These 65,000 changes were made to a total of almost 17,000 packages. Of the remaining packages, lintian-brush could not find any improvements to make for about 4,500 of them. The rest (about 6,500) failed to be processed for one of many reasons – for example, they have not yet migrated off alioth, use uncommon formatting that can’t be preserved, or failed to build for one reason or another.

Graph with runs by status (success, failed, nothing-to-do)

Now that the entire archive has been processed, packages are prioritized based on the likelihood of a change being made to them successfully.

Over the course of its existence, the Janitor has slowly gained support for a wider variety of packaging methods. For example, it can now edit the templates for some of the generated control files. Many of the packages that the janitor was unable to propose changes for the first time around are expected to be correctly handled when they are reprocessed.

If you’re a Debian developer, you can find the list of improvements made by the janitor in your packages by going to https://janitor.debian.net/m/.
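If you would rather apply the same kind of fixes locally instead of waiting for a merge proposal, lintian-brush is packaged in Debian and can be run from the top of a git checkout of your packaging, where it will commit its fixes to the branch (the path below is just an example):

# apt install lintian-brush
$ cd ~/src/yourpackage
$ lintian-brush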

For more information about the Janitor's lintian-fixes efforts, see the landing page.


Sam Varghese: Serena Williams, please go before people start complaining

The US Open 2020 represented the best chance for an aging Serena Williams to win that elusive 24th Grand Slam title and equal the record of Australian Margaret Court. Seeds Bianca Andreescu (6), Ashleigh Barty (1), Simona Halep (2), Kiki Bertens (7) and Elina Svitolina (5) are all not taking part.

But Williams, now 39, could not get past Victoria Azarenka in the semi-finals, losing 1-6, 6-3, 6-3.

Prior to this, Williams had lost four Grand Slam finals in pursuit of Court’s record: Andreescu defeated her at the US Open in 2019, Angelique Kerber beat her at Wimbledon in 2018, Naomi Osaka took care of her in the 2018 US Open and Halep accounted for Williams at Wimbledon in 2019. In all those finals, Williams was unable to win more than four games in any set.

Williams took a break to have a child some years ago; her most recent Grand Slam final win came at the Australian Open in 2017, just before that break, when she beat her sister, Venus, 6-4, 6-4. She has not won a final since returning.

It looks like it is time to bow out, if not gracefully, then clumsily. One, unfortunately, cannot associate grace with Williams.

But, no, Williams has already signed up to play in the French Open which has been pushed back to September 27 from its normal time of May due to the coronavirus pandemic. Only Barty has said she would not be participating.

Sportspeople are remembered for more than mere victories. One remembers the late Arthur Ashe not merely because he was the first black man to win Wimbledon, the US Open and the Australian Open, but for the way he carried himself.

He was a great ambassador for his country and the soul of professionalism. Sadly, he died at the age of 49 after contracting AIDS from a blood transfusion.

Another lesser known person who was in the Ashe mould is Larry Gomes, the West Indies cricketer who was part of the world-beating sides that never lost a Test series for 15 years from 1980 to 1995.

Gomes quit after playing 60 Tests, and the letter he sent to the Board when he did so was a wonderful piece of writing, reflecting both his humility and his professionalism.

In an era of dashing stroke-players, he was the steadying influence on the team many a time, and was thoroughly deserving of his place among the likes of the much better-known superstars like Viv Richards, Gordon Greenidge, Desmond Haynes and Clive Lloyd.

In Williams' case, she has shown herself to be a poor sportsperson, far too focused on herself and unable to see the bigger picture.

She is at the other end of the spectrum compared to players like Chris Evert and Steffi Graf, both great champions, and also models of good fair-minded competitors.

It would be good for tennis if Williams leaves the scene after the French Open, no matter if she wins there or loses. There is more to a good sportsperson than mere statistics.

Cryptogram: Friday Squid Blogging: Calamari vs. Squid

St. Louis Magazine answers the important question: “Is there a difference between calamari and squid?” Short answer: no.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Kevin Rudd: WPR Trend Lines: China Under Xi Jinping

E&OE TRANSCRIPT
PODCAST INTERVIEW
WORLD POLITICS REVIEW
11 SEPTEMBER 2020

Topics: China under Xi Jinping

World Politics Review: Mr. Rudd, thank you so much for joining us on Trend Lines.

Kevin Rudd: Happy to be on the program.

WPR: I want to start with a look at this bipartisan consensus that’s emerged in the United States, this reappraisal of China in the U.S., but also in Europe and increasingly in Australia. This idea that the problems of China’s trade policies and the level playing fields within the domestic market, aren’t going away. If anything, they’re getting worse. Same thing with regard to human rights and political liberalization under Xi Jinping. And that the West really needs to start preparing for a period of strategic rivalry with China, including increased friction and maybe some bumping of elbows. Are you surprised by how rapidly this new consensus has emerged and how widespread it is?

Mr. Rudd: I’m not surprised by the emergence of a form of consensus across the democracies, whether they happen to be Western or Asian democracies, or democracies elsewhere in the world. And there’s a reason for that, and that is that the principal dynamic here has been China’s changing course itself.

And secondly, quite apart from a change in management under Xi Jinping and the change in direction we’ve seen under him, China is now increasingly powerful. It doesn’t matter what matrix of power we’re looking at—economic power, trade, investment; whether it’s capital markets, whether it’s technology or whether it’s the classic determinants of international power and various forms of military leverage.

You put all those things together, we have a new guy in charge who has decided to be more assertive about China’s interests and values in the world beyond China’s borders. And secondly, a more powerful China capable of giving that effect.

So as a result of that, many countries for the first time have had this experience rub up against them. In the past, it’s only been China’s near neighbors who have had this experience. Now it’s a much broader and shared experience around the region and around the world.

So there are the structural factors at work in terms of shaping an emerging consensus on the part of the democracies, and those who believe in an open, free trading system in the world, and those who are committed to the retention of the liberal international order—that these countries are beginning to find a common cause in dealing with their collective challenges with the Middle Kingdom.

WPR: Now you mentioned the new guy in charge, obviously that’s Xi Jinping who’s been central to all of the recent shifts in China, whether it’s domestic policy or foreign policy. At the same time there’s some suggestion that as transformational as Xi is, that there’s quite a bit of continuity in terms of some of the more assertive policies of his predecessor Hu Jintao. Is it a mistake to focus so much on Xi? Does he reflect a break in China’s approach to the world, or is it continuity or more of a transition that responds to the context of perhaps waning American power as perceived by China? So is it a mistake to focus so much on Xi, as opposed to trying to understand the Chinese leadership writ large, their perception of their own national interest?

Mr. Rudd: I think it’s a danger to see policy continuity and policy change in China as some binary alternative, because with Xi Jinping, yes, there are elements of continuity, but there are also profound elements of change. So on the continuity front, yes, in the period of Hu Jintao, we began to see a greater Chinese experimental behavior in the South China Sea, a more robust approach to the assertion of China’s territorial claims there, for example.

But with Xi Jinping, this was taken to several extra degrees in intensity when we saw a full-blown island reclamation exercise, where you saw rocky atolls suddenly being transformed into sand-filled islands, and which were then militarized. So is that continuity or change? That becomes, I think, more of a definitional question.

If I was trying to sum up what is new, however, about Xi Jinping in contrast to his predecessors, it would probably be in these terms.

In politics, Chinese domestic politics, he has changed the discourse by taking it further to the left—by which I mean a greater role for the party and ideology, and the personal control of the leader, compared with what existed before.

On the economy we see a partial shift to the left with a resuscitation of state-owned enterprises and some disincentives emerging for the further and continued growth of China’s own hitherto successful private-sector entrepreneurial champions.

On nationalism we have seen a further push under Xi Jinping further to the right than his predecessors.

And in terms of degrees of international assertiveness, whether it’s over Hong Kong, the South China Sea, or over Taiwan, in relation to Japan and the territorial claims in the East China Sea, or with India, or in the big bilateral relationship with the United States as well as with other American allies—the Canadians, the Australians and the Europeans, et cetera—as well as big, new large-canvas foreign policy initiatives like the Belt and Road Initiative, what we’ve seen is an infinitely more assertive China.

So, yes, there are elements of continuity, but it would be foolish for us to underestimate the degree of change as a consequence of the agency of Xi Jinping’s leadership.

WPR: You mentioned the concentration of power in the leader’s hands, Xi Jinping’s hands. He’s clearly the most powerful Chinese leader since Mao Zedong. At the same time and especially in the immediate aftermath of the coronavirus pandemic, but over the course of the last year or so, there’s been some reporting about rumbling within the Chinese Communist Party about his leadership. What do you make of those reports? Is he a leader who’s in firm control in China? Is that something that’s always somewhat at question, given the kind of internal politics of the Chinese Communist Party? Or are there potential challenges to his leadership?

Mr. Rudd: Well, in our analysis of Chinese politics, it hasn’t really changed a lot since I was a young diplomat working in Beijing in the mid-1980s, and that is: The name of the game is opacity. With Chinese domestic politics we are staring through a glass dimly. And that’s because the nature of a Marxist-Leninist party, in particular a Leninist party, is deeply secretive about its own internal operations. So we are left to speculate.

But what we know from the external record, we know that Xi Jinping, as you indicated in your question, has become China’s most powerful leader at least since Deng and probably since Mao, against most matrices of power. He’s become what is described as the “chairman of everything.” Every single leading policy group of the politburo is now chaired by him.

And on top of that, the further instruments of power consolidation have been reflected in his utilization of the anti-corruption campaign, and now the unleashing of a Party Rectification campaign, which is designed to reinforce compliance on the part of party members with central leadership diktat. So that’s the sort of individual that we have, and that’s the journey that he has traveled in the last six to seven years, and where most of his principal opponents have either been arrested or incarcerated or have committed suicide.

So where does that leave us in terms of the prospects for any organized political opposition? Under the party constitution, Xi Jinping is up for a reelect at the 20th Party Congress, and that is his reappointment as general secretary of the party and as chairman of the Central Military Commission. Separately, he’s up for a reelect by the National People’s Congress for a further term as China’s president.

For him to go a further term would be to break all the post-Deng Xiaoping conventions built around the principles of shared or collective leadership. But Xi Jinping, in my judgment, is determined to remain China’s paramount leader through the 2020s and into the 2030s. And how would he do that? He would perhaps see himself at the next Party Congress appointed as party chairman, a position last occupied by Chairman Mao and Mao’s immediate successor, Chairman Hua Guofeng.

He would probably retain the position of president of the country, given the constitutional changes he brought about three years or so ago to remove the two-term limit for the presidency. And he would, I think, most certainly retain the chairmanship of the Central Military Commission.

These predispositions, however, of themselves combined with the emerging cult of personality around Xi, combined with fundamental disagreements on elements of economic policy and international policy, have created considerable walls of opposition to Xi Jinping within the Chinese Communist Party.

And the existence of that opposition is best proven by the fact that Xi Jinping, in August of 2020, decided to launch this new Party Rectification campaign in order to reestablish, from his perspective, proper party discipline—that is, obedience to Xi Jinping. So the $6,000 question is, Can these different sources of dissent within the Chinese Communist Party coalesce? There’s no obvious candidate to do the coalescing, just as Deng Xiaoping in the past was a candidate for coalescing different political and policy views to those of Mao Zedong, which is why Deng Xiaoping was purged at least two times in his latter stages of his career before his final return and rehabilitation.

So we can’t see a ready candidate, but the dynamics of Chinese politics tend to have been that if there is, however, a catastrophic event—an economic implosion, a major foreign policy or international policy mishap, misstep, or crisis or conflict which goes wrong—then these tend to generate their own dynamics. So Xi Jinping’s watchword between now and the 20th Party Congress to be held in October/November of 2022 will be to prevent any such crises from emerging.

WPR: It’s a natural transition to my next question, because another point of disagreement among China watchers is whether a lot of these moves that we’ve seen over the past five years, accelerating over the past five years, but this reassertion of the party centrality within or across all spheres of Chinese society, but also some of the international moves, whether they’re signs of strength in the face of what they see as a waning United States, or whether they’re signs of weakness and a sense of having to move faster than perhaps was previously anticipated? A lot of the challenges that China faces, whether it can become a rich country before it becomes an old country, some of the environmental degradation that its development has caused, none of them are going away and none of them have gone away. So what’s your view on that in terms of China’s future prospects? Do you see a continued rise? A leveling off? The danger of a failing China and everything that that might imply?

Mr. Rudd: Well, there are a bunch of questions embedded in what you’re saying, but I’d begin by saying that Americans shouldn’t talk themselves out of global leadership in the future. America remains a powerful country in economic terms, in technological terms and in military terms, and against all three measures still today more powerful than China. In the case of the military, significantly more powerful than China.

Of course, the gap begins to narrow, but this narrowing process is not overnight, it takes a long period of time, and there are a number of potential mishaps for China. An anecdote from the Chinese commentariat recently—in a debate about America’s preparedness to go into armed conflict or war with China in the South China Sea or over Taiwan, and the Chinese nationalist constituency in China basically chanting, “Bring it on.” If people chant, “USA,” in the United States, similar gatherings would probably chant, “PRC,” in Beijing.

But it was interesting what a Chinese scholar had to say as this debate unfolded, when one of the commentators said, “Well, at the end of the day, America is a paper tiger.” Of course, that was a phrase used by Mao back in the 1950s, ‘60s. The response from the Chinese scholar in China’s online discourse was, “No, America is not a paper tiger. America is a tiger with real teeth.”

So it’s important for Americans to understand that they are still in an extraordinarily powerful position in relation to both China and the rest of the world. So the question becomes one ultimately of the future leadership direction in the United States.

The second point I’d make in response to your question is that, when we seek to understand China’s international behavior, the beginning of wisdom is to understand its domestic politics, to the extent that we can. So when we are asked questions about strength or weakness, or why is China in the COVID world doubling down on its posture towards Hong Kong, the South China Sea, towards Taiwan, the East China Sea, Japan, as well as other countries like India, Canada, Australia, and some of the Europeans and the United States.

Well, I think the determination on the part of Xi Jinping’s leadership was to make it plain to all that China had not been weakened by COVID-19. Is that a sign of weakness or of strength? I think that again becomes a definitional question.

In terms of Chinese domestic politics, however, if Xi Jinping is under attack in terms of his political posture at home and the marginalization of others within the Chinese Communist Party, if he is under attack in terms of its direction on economic policy, with the slowing of growth even in the pre-COVID period, then of course from Xi Jinping’s perspective, the best way to deal with any such dissension on the home front is to become more nationalist on the foreign front. And we’ve seen evidence of that. Is that strength, or is it weakness? Again, it’s a definitional question, but it’s the reality of what we’re dealing with.

For the future, I think ultimately what Xi Jinping’s administration will be waiting for is what sort of president emerges from November of 2020. If Trump is victorious, I think Xi Jinping will privately be pretty happy about that, because he sees the Trump administration on the one hand being hard-lined towards China, but utterly chaotic in its management of America’s national China strategy—strong on some things, weak on the other, vacillating between A and B above—and ultimately a divisive figure in terms of the solidarity of American alliances around the world.

If Biden wins, I think the judgment in China will be that America could seriously get its act back together again, run a coherent hard-line national comprehensive China strategy. Secondly, do so—unlike the Trump administration—with the friends and allies around the world in full harness and incorporative harness. So it really does depend on what the United States chooses to do in terms of its own national leadership, and therefore foreign policy direction for the future.

WPR: You mentioned the question of whether the U.S. is a paper tiger or not. Clearly in terms of military assets and capabilities, it’s not, but there’s been some strategic debate in Australia over the past five years or so—and I’m thinking particularly of Hugh White and his line of analysis—the idea that regardless of what America can do, that there really won’t be, when push comes to shove, the will to engage in the kind of military conflict that would be required to respond, for instance, to Chinese aggression or attempt to reunify militarily with Taiwan, let alone something like the Senkaku islands, the territorial dispute with Japan. And the recently released Australian Defense White Paper suggests that there’s a sense that the U.S. commitment to mutual defense in the Asia-Pacific might be waning. So what are your thoughts about the future of American primacy in Asia? And you mentioned it’s a question of leadership, but are you optimistic about the public support in America for that historical project? And if not, what are the implications for Australia, for other countries in Asia and the world?

Mr. Rudd: Well, it’s been a while since I’ve been in rural Pennsylvania. So it’s hard for me to answer your question. [Laughter] I am obviously both a diplomat by training and a politician, so I kind of understand the policy world, but I also understand rural politics.

It does depend on which way the American people go. When I look at the most recent series of findings from the Pew Research Project on American attitudes to the world, what I find interesting is that contrary to many of the assumptions by the American political class, that nearly three-quarters of Americans are strongly supportive of trade and strongly supportive of variations of free trade. If you listen to the populist commentary in the United States, you would think that notions of free trade had become a bit like the anti-Christ.

So therefore I think it will be a challenge for the American political class to understand that the American public, at least reflected in these polling numbers on trade, are still fundamentally internationally engaged, and therefore are not in support of America incrementally withdrawing from global economic leadership. On the question of political leadership and America’s global national security role and its regional national security role in the Asia Pacific, again it’s a question of the proper harnessing of American resources.

I think one of the great catastrophes of the last 20 years was America’s decision to invade Iraq. You flushed up a whole lot of political capital and foreign policy capital around the world against the wall. You expended so much in blood and treasure that I think the political wash through also in America itself was a deep set of reservations in American voters about, let’s call it extreme foreign policy folly.

Remember, that enterprise was about eliminating weapons of mass destruction, which the Bush administration alleged to exist in Saddam Hussein’s bathroom locker, and they never did. So we’re partly experiencing the wash up of all of that in the American body politic, where there is a degree of, shall we say, exhaustion about unnecessary wars and about unnecessary foreign policy engagement. So the wisdom for the future will be what is necessary, as opposed to that which is marginal.

Now, I would think, on that basis, given the galvanizing force of American public sentiment—on COVID questions, on trade and investment questions, on national security questions, and on human rights questions—that the No. 1 foreign policy priority for the United States, for the period ahead, will be China. And so what I therefore see is that the centrality of future U.S. administrations, Republican and Democrat, is likely to be galvanized by a finding from the American people that, contrary to any of the urban myths, that the American people still want to see their country become more prosperous through trade. They want their country to become bigger through continued immigration—another finding in terms of Pew research. And they want their country to deal with the greatest challenge to America’s future, which is China’s rise.

So that is different from becoming deeply engaged and involved in every nuance of national security policy in the eastern reaches of either Libya or in the southern slopes of Lebanon. There has been something of a long-term Middle Eastern quagmire, and I think the wake-up call of the last several years has been for America to say that if there’s to be a second century of American global leadership, another Pax Americana for the 21st century, then it will ultimately be resolved on whether America rises to the global economic challenge, the global technology leadership challenge and the global China challenge.

And I think the American people in their own wonderfully, shall we say unorchestrated way, but subject to the bully pulpit of American presidential leadership and politics, can harness themselves for that purpose.

WPR: So much of the West’s engagement with China over the past 30 years has been this calculated gamble on what kind of power China would be when it did achieve great power status, and this idea that trade and engagement would pull China more toward being the kind of power that’s compatible with the international order, the postwar American-led international order. Do you think we have an answer to that question yet, and has the gamble of engaging with China paid off?

Mr. Rudd: The bottom line is that if you look at the history of America’s engagement strategy with China and the engagement strategy of many of America’s allies over the decades—really since the emergence of Deng in the late ‘70s and recommenced in the aftermath of Tiananmen, probably from about 1992, 1993—it has always been engagement, but hedge. In other words, there was engagement with China across all the instrumentalities of the global economy and global political and economic governance, and across the wide panoply of the multilateral system.

But there was always a conditionality and there was always a condition, which is, countries like the United States were not about to deplete their military, even after it won the Cold War against the Soviet Union. And America did not deplete its military. America’s military remained formidable. There are ebbs and flows in the debate, but the capabilities remained world class and dominant. And so in fairness to those who prosecuted that strategy of engagement, it always was engagement plus hedge.

And so what’s happened as China’s leadership direction has itself changed over the last six or seven years under Xi Jinping, is the hedge component of engagement has reared its head and said, “Well, I’m glad we did so.” Because we still, as the United States, have the world’s most powerful military. We still, as the United States, are the world’s technology leaders—including in artificial intelligence, despite all the hype coming out of Beijing. We are the world’s leaders when it comes to the core driver of the future of artificial intelligence, which is the production of computer chips, of semiconductors and other fundamental advances in computing. And you still are the world’s leaders in the military. And so therefore America is not in a hugely disadvantageous position. And on top of all the above, you still retain the global reserve currency called the U.S. dollar.

Each of these is under challenge by China. China is pursuing a highly systematic strategy to close the gap in each of these domains. But I think there is a degree of excessive pessimism when you look at America through the Washington lens, that nothing is happening. Well, a lot is. You go to Silicon Valley, you go to the Pacific Fleet, you go to the New York Stock Exchange, and you look at the seminal debates in Hong Kong at the moment, about whether Hong Kong will remain linked between the Hong Kong dollar and the U.S. dollar.

All this speaks still to the continued strength of America’s position, but it really does depend on whether you choose to husband these strengths for the future, and whether you choose to direct them in the future towards continued American global leadership of what we call the rules-based liberal international order.

WPR: Mr. Rudd, thank you so much for being so generous with your time and your insights.

Kevin Rudd: Good to be with you.


Cryptogram: The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Cryptogram: Ranking National Cyber Power

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.” The rankings: 1. US, 2. China, 3. UK, 4. Russia, 5. Netherlands, 6. France, 7. Germany, 8. Canada, 9. Japan, 10. Australia, 11. Israel. More countries are in the document.

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

Planet Debian: Petter Reinholdtsen: Buster update of Norwegian Bokmål edition of Debian Administrator's Handbook almost done

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost complete. After many months of proofreading, I consider it complete enough for us to move to the next step, and have asked for the print version to be prepared and sent off to the print-on-demand service lulu.com. It is still not too late to fix any incorrect translations you find on the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than Failure: Error'd: This Could Break the Bank!

"Sure, free for the first six months is great, but what exactly does happen when I hit month seven?" Stuart L. wrote.

 

"In order to add an app on the App Store Connect dashboard, you need to 'Register a new bundle ID in Certificates, Identifiers & Profiles'," writes Quentin, "Open the link, you have a nice 'register undefined' and cannot type anything in the identifier input field!"

 

"I was taught to keep money amounts as pennies rather than fractional dollars, but I guess I'm an old-fashioned guy!" writes Paul F.

 

Anthony C. wrote, "I was looking for headphones on Walmart.com and well, I guess they figured I'd like to look at something else for a change?"

 

"Build an office chair using only a spork, a napkin, and a coffee stirrer? Sounds like a job for McGuyver!"

 

"Translation from Swedish, 'We assume that most people who watch Just Chatting probably also like Just Chatting.' Yes, I bet it's true!," Bill W. writes.

 


Planet Debian: Louis-Philippe Véronneau: Hire me!

I'm happy to announce that I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet [1], as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas!

It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it...

If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)


  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here. 

Planet Debian: Reproducible Builds (diffoscope): diffoscope 160 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 160. This version includes the following changes:

* Check that pgpdump is actually installed before attempting to run it.
  Thanks to Gianfranco Costamagna (locutusofborg). (Closes: #969753)
* Add some documentation for the EXTERNAL_TOOLS dictionary.
* Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump
  against all files that are recognised by file(1) as "data".

You can find out more by visiting the project homepage.


Planet Linux Australia: David Rowe: Playing with PAPR

The average power of a FreeDV signal is surprisingly hard to measure, as the parallel carriers produce a waveform that has many peaks and troughs as the various carriers come in and out of phase with each other. Peter, VK3RV, has been working on some interesting experiments to measure FreeDV power using calorimeters. His work got me thinking about FreeDV power and in particular ways to improve the Peak to Average Power Ratio (PAPR).

I’ve messed with a simple clipper for FreeDV 700C in the past, but decided to take a more scientific approach and use some simulations to measure the effect of clipping on FreeDV PAPR and BER. As usual, asking a few questions blew up into a several-week-long project. There were the usual bugs and strange, too-good-to-be-true initial results until I started to get numbers that felt sensible. I’ve tested some of the ideas over the air (blowing up an attenuator along the way), and learnt a lot about PAPR and related subjects like Peak Envelope Power (PEP).

The goal of this work is to explore the effect of a clipper on the average power and ultimately the BER of a received FreeDV signal, given a transmitter with a fixed peak output power.

Clipping to reduce PAPR

In normal operation we adjust our Tx drive so the peaks just trigger the ALC. This sets the average power at Ppeak minus the PAPR (working in dB), for example Pav = 100W PEP – 10dB = 10W average.

The idea of the clipper is to chop the tops off the FreeDV waveform so the PAPR is decreased. We can then increase the Tx drive, and get a higher average power. For example if PAPR is reduced from 10 to 4dB, we get Pav = 100W PEP – 4dB = 40W. That’s 4x the average power output of the 10dB PAPR case – Woohoo!

In the example below the 16 carrier waveform was clipped and the PAPR reduced from 10.5 to 4.5dB. The filtering applied after the clipper smooths out the transitions (and limits the bandwidth to something reasonable).

However it gets complicated. Clipping actually reduces the average power, as we’ve removed the high energy parts of the waveform. It also distorts the signal. Here is a scatter diagram of the signal before and after clipping:


The effect looks like additive noise. Hmmm, and what happens on multipath channels, does the modem perform the same as for AWGN with clipped signals? Another question – how much clipping should we apply?

So I set about writing a simulation (papr_test.m) and doing some experiments to increase my understanding of clippers, PAPR, and OFDM modem performance using typical FreeDV waveforms. I started out trying a few different compression methods such as different compander curves, but found that clipping plus a bandpass filter gives about the same result. So for simplicity I settled on clipping. Throughout this post many graphs are presented in terms of Eb/No – for the purpose of comparison just consider this the same thing as SNR. If the Eb/No goes up by 1dB, so does the SNR.

Here’s a plot of PAPR versus the number of carriers, showing PAPR getting worse with the number of carriers used:

Random data was used for each symbol. As the number of carriers increases, you start to get phases in carriers cancelling due to random alignment, reducing the big peaks. Behaviour with real world data may be different; if there are instances where the phases of all carriers are aligned there may be larger peaks.
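To get a feel for where these numbers come from, here is a minimal Octave sketch of the measurement (it is not lifted from papr_test.m; the carrier count, QPSK mapping and oversampling factor are just assumptions):

% generate Nsym OFDM symbols on Nc carriers with random QPSK data and measure PAPR
Nc = 16; Nsym = 1000; M = 4;                         % carriers, symbols, oversampling factor
tx = [];
for s = 1:Nsym
  qpsk = (2*(rand(Nc,1) > 0.5) - 1 + j*(2*(rand(Nc,1) > 0.5) - 1))/sqrt(2);
  x = ifft([qpsk; zeros((M-1)*Nc,1)])*M*Nc;          % zero padding in frequency oversamples in time
  tx = [tx; x];
end
papr_dB = 10*log10(max(abs(tx).^2)/mean(abs(tx).^2));
printf("Nc: %d  PAPR: %4.1f dB\n", Nc, papr_dB);

Sweeping Nc in a loop like this reproduces the upward trend in the plot above.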

To define the amount of clipping I used an estimate of the PDF and CDF:

The PDF (or histogram) shows how likely a certain level is, and the CDF shows the cumulative PDF. High level samples are quite unlikely. The CDF shows us what proportion of samples are above and below a certain level. This CDF shows us that 80% of the samples have a level of less than 4, so only 20% of the samples are above 4. So a clip level of 0.8 means the clipper hard limits at a level of 4, which would affect the top 20% of the samples. A clip value of 0.6 would mean samples with a level of 2.7 and above are clipped.
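In code the clipper itself is only a few lines. A simplified sketch (my own illustration rather than the papr_test.m implementation), assuming tx is a complex baseband waveform like the one generated above and that clip=0.8 limits the top 20% of magnitude samples:

clip = 0.8;
mags = sort(abs(tx));
thresh = mags(round(clip*length(mags)));             % level that clip*100% of samples sit below
ind = find(abs(tx) > thresh);
tx_clip = tx;
tx_clip(ind) = thresh*exp(j*angle(tx(ind)));         % hard limit the magnitude, keep the phase

In the full simulation the clipped signal is then bandpass filtered to contain the spectral spread, which brings the PAPR back up a little; that step is left out here for brevity.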

Effect of clipping on BER

Here are a bunch of curves that show the effect of clipping on an AWGN and a multipath channel (roughly CCIR poor). A 16 carrier signal was used – typical of FreeDV waveforms. The clipping level and resulting PAPR is shown in the legend. I also threw in a Tx diversity curve – sending each symbol twice on double the carriers. This is the approach used on FreeDV 700C and tends to help a lot on multipath channels.

As we clip the signal more and more, the BER performance gets worse (Eb/No x-axis) – but the PAPR is reduced, so we can increase the average power, which improves the BER. I’ve tried to show the combined effect on the (peak Eb/No x-axis) curves, which scale each curve according to its PAPR requirements. This shows the peak power required for a given BER. Lower is better.




Take aways:

  1. The 0.8 and 0.6 clip levels work best on the peak Eb/No scale, ie when we combine effect of the hit on BER performance (bad) and PAPR improvement (good).
  2. There is about 4dB improvement across a range of operating points. This is pretty signficant – similar to gains we get from Tx diversity or a good FEC code.
  3. AWGN and Multipath improvements are similar – good. Sometimes you get an algorithm that works well on AWGN but falls in a heap on multipath channels, which are typically much tougher to push bits through.
  4. I also tried 8 carrier waveforms, which produced results about 1dB better, as I guess fewer carriers have a lower PAPR to start with.
  5. Non-linear techniques like clipping spread the energy in frequency.
  6. Filtering to constrain the frequency spread brings the PAPR up again. We can trade off PAPR with bandwidth: lower PAPR, more bandwidth.
  7. Non-linear technqiques will mess with QAM more. So we may hit a wall at high data rates.

Testing on a Real PA

All these simulations are great, but how do they compare with operation on a real HF radio? I designed an experiment to find out.

First, some definitions.

The same FreeDV OFDM signal is represented in different ways as it winds its way through the FreeDV system:

  1. Complex valued samples are used for much of the internal signal processing.
  2. Real valued samples at the interfaces, e.g. for getting samples in and out of a sound card and standard HF radio.
  3. Analog baseband signals, e.g. voltage inside your radio.
  4. Analog RF signals, e.g. at the output of your PA, and at the input to your receiver terminals.
  5. An electromagnetic wave.

It’s the same signal, as we can convert freely between the representations with no loss of fidelity, but its representation can change the way measures like PAPR work. This caused me some confusion – for example the PAPR of the real signal is about 3dB higher than the complex valued version! I’m still a bit fuzzy on this one, but have satisfied myself that the PAPR of the complex signal is the same as the PAPR of the RF signal – which is what we really care about.
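
One way to convince yourself of the roughly 3dB difference is to upconvert a complex baseband signal to a real carrier and compare the two PAPRs. A small Python/numpy sketch (the sample rate, carrier frequency and toy multicarrier signal are illustrative assumptions):

import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

fs, fc = 8000, 1500                          # assumed sample rate and carrier, Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(3)
x = sum(np.exp(1j * (2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)))
        for f in range(1, 17))               # toy 16 carrier complex baseband signal
s = np.real(x * np.exp(2j * np.pi * fc * t)) # real valued passband version
print("complex PAPR: %.1f dB" % papr_db(x))
print("real PAPR:    %.1f dB" % papr_db(s))  # roughly 3dB higher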

Another definition that I had to (re)study was Peak Envelope Power (PEP) – the peak power averaged over one or more carrier cycles. This is the RF equivalent of our “peak” in PAPR. When driven by any baseband input signal, it’s the maximum RF power of the radio, averaged over one or more carrier cycles. Signals such as speech and FreeDV waveforms will have occasional peaks that hit the PEP. A baseband sine wave driving the radio would generate an RF signal that sits at the PEP power continuously.
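
As a sketch of that definition: take a real RF-ish signal, average its instantaneous power over one carrier cycle, and the maximum of that running average is the PEP. The carrier, modulation and sample rate below are made up numbers, scaled well down from HF so the example runs quickly:

import numpy as np

fs, fc = 96000, 6000                         # assumed sample rate and "carrier", Hz
t = np.arange(fs) / fs
env = 1 + 0.8 * np.cos(2 * np.pi * 50 * t)   # slowly varying modulation envelope
s = env * np.cos(2 * np.pi * fc * t)         # real "RF" signal

cycle = fs // fc                             # samples in one carrier cycle
p = s ** 2
pep = np.convolve(p, np.ones(cycle) / cycle, mode="valid").max()
print("PEP %.2f, average %.2f, PAPR %.1f dB" % (pep, p.mean(), 10 * np.log10(pep / p.mean())))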

Here is the experimental setup:

The idea is to play canned files through the radio, and measure the average Tx power. It took me several attempts before my experiment gave sensible results. A key improvement was to make the peak power of each sampled signal the same. This means I don’t have to keep messing with the audio drive levels to ensure I have the same peak power. The samples are 16 bits, so I normalised each file such that the peak was at +/- 10000.
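
The normalisation step is tiny but worth showing; something along these lines, assuming the canned file has already been read into an int16 numpy array:

import numpy as np

def normalise_peak(samples, peak=10000):
    # scale 16 bit samples so the largest magnitude sample sits at +/- peak
    x = samples.astype(np.float64)
    return np.round(x * peak / np.max(np.abs(x))).astype(np.int16)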

Here is the RF power sampler:

It works pretty well on signals from my FT-817 and IC-7200, and will help prevent any more damage to RF test equipment. I used my RF sampler after my first attempt using an SMA barrel attenuator resulted in its destruction when I accidentally put 5W into it! Suddenly it went from 30dB to 42dB attenuation. Oops.

For all the experiments I am tuned to 7.175 MHz and have the FT-817 on its lowest power level of 0.5W.

For my first experiment I played a 1000 Hz sine wave into the system, and measured the average power. I like to start with simple signals, something known that lets me check all the fiddly RF kit is actually working. After a few hours of messing about – I did indeed see 27dBm (0.5W) on my spec-an. So, for a signal with 0dB PAPR, we measure average power = PEP. Check.

In my next experiment, I measured the effect of ALC on Tx power. With the FT-817 on its lowest power setting (0.5W), I increased the drive until just before the ALC bars came on. Here is the relationship I found between the number of ALC bars and output power:

Bars   Tx Power (dBm)
0      26.7
1      26.4
2      26.7
3      27.0

So the ALC really does clamp the power at the peak value.

On to more complex FreeDV signals.

Measuring the average power of OFDM/parallel tone signals proved much harder on the spec-an. The power bounces around over a period of several seconds as the OFDM waveform evolves, which can derail many power measurement techniques. The time constant, or measurement window, is important – we want to capture the total power over a few seconds and average the value.

After several attempts and lots of head scratching I settled on the following spec-an settings:

  1. 10s sweep time so the RBW filter is averaging a lot of time varying power at each point in the sweep.
  2. 100kHz span.
  3. RBW/VBW of 10 kHz so we capture all of the 1kHz wide OFDM signal in the RBW filter peak when averaging.
  4. Power averaging over 5 samples.

The two-tone signal was included to help me debug my spec-an settings, as it has a known (3dB) PAPR.
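
That 3dB figure is easy to check numerically on the complex envelope of the two tone signal (the representation that matches the PEP style PAPR above); the sample rate is arbitrary:

import numpy as np

fs = 8000
t = np.arange(fs) / fs
x = np.exp(2j * np.pi * 800 * t) + np.exp(2j * np.pi * 1200 * t)   # complex envelope
p = np.abs(x) ** 2
print("two tone PAPR: %.1f dB" % (10 * np.log10(p.max() / p.mean())))   # ~3.0 dB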

Here is a table showing the results for several test signals, all of which have the same peak power:

Sample          Description                            PAPR Theory/Sim (dB)   PAPR Measured (dB)
sine1000        sine wave at 1000 Hz                   0                      0
sine_800_1200   two tones at 800 and 1200 Hz           3                      4
vanilla         700D test frames unclipped             7.1                    7
clip0.8         700D test frames clipped at 0.8        3.4                    4
ve9qrp          700D with real speech payload data     11                     10.5

Click on the file name to listen to a 5 second clip of each sample. The lower PAPR (higher average power) signals sound louder – I guess our ears work on average power too! I kept the drive constant and the PEP/peak just happened to hit 26dBm. It’s not critical, as long as the drive (and hence peak level) is the same across all waveforms tested.

Note the two tone “control” is 1dB off (4dB measured on a known 3dB PAPR signal); I’m not happy about that. This suggests a set-up issue or a limitation of my spec-an (e.g. the way it averages power).

However the other signals line up OK to the simulated values, within about +/- 0.5dB, which suggests I’m on the right track with my simulations.

The modulated 700D test frame signals were generated by the Octave ofdm_tx.m script, which reports the PAPR of the complex signal. The same test frame repeats continuously, which makes BER measurements convenient, but is slightly unrealistic. The PAPR was lower than the ve9qrp signal, which has real speech payload data. Perhaps the more random, real world payload data leads to occasional frames where the phases of the carriers align, producing larger peaks.

Another source of discrepancy is the non-flat frequency response of the baseband audio/crystal filter path the signal has to flow through before it emerges as RF.

The zero-span spec-an setting plots power over time, and is very useful for visualising PAPR. The first plot shows the power of our 1000 Hz sine signal (yellow), and the two tone test signal (purple):

You can see how mixing just two signals modulates the power over time, the effect on PAPR, and how the average power is reduced. Next we have the ve9qrp signal (yellow), and our clip 0.8 signal (purple):

It’s clear the clipped signal has a much higher average power. Note the random way the waveform power peaks and dips as the various carriers come into phase. Also note there are very few high power peaks in the ve9qrp signal – in this sample none of them hit +26dBm, as they are fairly rare.

I found eye-balling the zero-span plots gave me similar values to non-zero span results in the table above, a good cross check.

Take aways:

  1. Clipping is indeed improving our measured average power, but there are some discrepancies between the measured PAPR values and those estimated from theory/simulation.
  2. Using an SDR to receive the signal and measure PAPR using my own maths might be easier than fiddling with the spec-an and guessing at its internal algorithms.
  3. PAPR is worse for real world signals (e.g. ve9qrp) than my canned test frames due to relatively rare alignments of the carrier phases. This might only happen once every few seconds, but significantly raises the PAPR, and hurts our average power. These occasional peaks might be triggering the ALC, pushing the average power down every time they occur. As they are rare, these peaks can be clipped with no impact on perceived speech quality. This is why I like the CDF/PDF method of setting thresholds, it lets us discard rare (low probability) outliers that might be hurting our average power.

Conclusions and Further work

The simulations suggest we can improve FreeDV by 4dB using the right clipper/filter combination. Initial tests over a real PA show we can indeed reduce PAPR in line with our simulations.

This project has led me down an interesting rabbit hole that has kept me busy for a few weeks! Just in case I haven’t had enough, some ideas for further work:

  1. Align these clipping levels and filtering to FreeDV 700D (and possibly 2020). There is existing clipper and filter code, but the thresholds were set by educated guess several years ago for 700C.
  2. Currently each FreeDV waveform is scaled to have the same average power. This is the signal fed via the sound card to your Tx. Should the levels of each FreeDV waveform be adjusted to be the same peak value instead?
  3. Design an experiment to prove BER performance at a given SNR is improved by 4dB as suggested by these simulations. Currently all we have measured is the average power and PAPR – we haven’t actually verified the expected 4dB increase in performance (suggested by the BER simulations above) which is the real goal.
  4. Try the experiments on a SDR Tx – they tend to get results closer to theory due to no crystal filters/baseband audio filtering.
  5. Try the experiments on a 100W PEP Tx – I have ordered a dummy load to do that relatively safely.
  6. Explore the effect of ALC on FreeDV signals and why we set the signals to “just tickle” the ALC. This is something I don’t really understand, but have just assumed is good practice based on other people's experiences with parallel tone/OFDM modems and on-air FreeDV use. I can see how ALC would compress the amplitude of the OFDM waveform – which this blog post suggests might be a good thing! Perhaps it does so in an uncontrolled manner – as the curves above show, the amount of compression is pretty important. “Just tickling the ALC” guarantees us a linear PA – so we can handle any needed compression/clipping carefully in the DSP.
  7. Explore other ways of reducing PAPR.

To peel away the layers of a complex problem is very satisfying. It always takes me several goes; improvements come as the bugs fall out one by one. Writing these blog posts often makes me sit back and say “huh?”, as I discover things that don’t make sense when I write them up. I guess that’s the review process in action.

Links

Design for an RF Sampler I built; mine has a 46dB loss.

Peak to Average Power Ratio for OFDM – Nice discussion of PAPR for OFDM signals from DSPlog.

Planet DebianDaniel Silverstone: Broccoli Sync Conversation

Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), Lars, Mark, Vince and I discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected, but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use the available bandwidth (i.e. not be CPU throttled) but also gain the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that the gain justifies putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically, if the data is already compressed then there's little hope Brotli will gain much.
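
As a toy illustration of that last point, a short Python sketch (assuming the brotli bindings from PyPI are installed; the input data and exact sizes are made up):

import zlib
import brotli                                # pip install brotli (assumed available)

text = b"the quick brown fox jumps over the lazy dog " * 2000
deflated = zlib.compress(text, 9)            # stand-in for an already DEFLATE'd document

print(len(text), "->", len(brotli.compress(text, quality=5)))          # compresses well
print(len(deflated), "->", len(brotli.compress(deflated, quality=5)))  # barely shrinks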

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code, which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage, and that while they make it clear in the article that encryption, client identification etc. are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

Worse Than FailureCodeSOD: Put a Dent in Your Logfiles

Valencia made a few contributions to a large C++ project run by Harvey. Specifically, there were some pass-by-value uses of a large data structure, and changing those to pass-by-reference fixed a number of performance problems, especially on certain compilers.

“It’s a simple typo,” Valencia thought. “Anyone could have done that.” But they kept digging…

The original code-base was indented with spaces, but Harvey just used tabs. That was a mild annoyance, but Harvey used a lot of tabs, as his code style was “nest as many blocks as deeply as possible”. In addition to loads of magic numbers that should be enums, Harvey also had a stance that “never use an int type when you can store your number as a double”.

Then, for example, what if you have a char and you want to turn the char into a string? Do you just use the std::string() constructor that accepts a char parameter? Not if you’re Harvey!

std::string ToString(char c)
{
    std::stringstream ss;
    std::string out = "";
    ss << c;
    ss >> out;
    return out;
}

What if you wanted to cache some data in memory? A map would be a good place to start. How many times do you want to access a single key while updating a cache entry? How does “four times” work for you? It works for Harvey!

void WriteCache(std::string key, std::string value)
{
    Setting setting = mvCache["cache_"+key];
    if (!setting.initialized)
    {
        setting.initialized=true;
        setting.data = "";
        mvCache.insert(std::map<std::string,Cache>::value_type("cache_"+key,setting));
        mvCache["cache_"+key]=setting;
    }
    setting.data = value;
    mvCache["cache_"+key]=setting;
}

And I don’t know exactly what they are trying to communicate with the mv prefix, but people have invented all sorts of horrible ways to abuse Hungarian notation. Fortunately, Valencia clarifies: “Harvey used an incorrect Hungarian notation prefix while they were at it.”

That’s the easy stuff. Ugly, bad code, sure, but nothing that leaves you staring, stunned into speechlessness.

Let’s say you added a lot of logging messages, and you wanted to control how many logging messages appeared. You’ve heard of “logging levels”, and that gives you an inspiration for how to solve this problem:

bool LogLess(int iMaxLevel)
{
     int verboseLevel = rand() % 1000;
     if (verboseLevel < iMaxLevel) return true;
     return false;
}

//how it's used:
if(LogLess(500))
   log.debug("I appear half of the time");

Normally, I’d point out something about how they don’t need to return true or return false when they could just return the boolean expression, but what’d be the point? They’ve created probabilistic log levels. It’s certainly one way to solve the “too many log messages” problem: just randomly throw some of them away.

Valencia gives us a happy ending:

Needless to say, this has since been rewritten… the end result builds faster, uses less memory and is several orders of magnitude faster.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Kevin RuddNew Israel Fund: Australia and the Middle East

INTERVIEW VIDEO
NIF AUSTRALIA
WITH LIAM GETREU
9 SEPTEMBER 2020

The post New Israel Fund: Australia and the Middle East appeared first on Kevin Rudd.

Rondam RamblingsThis is what the apocalypse looks like

This is a photo of our house taken at noon today:                 This is not the raw image.  I took this with an iPhone, whose auto-exposure made the image look much brighter than it actually is.  I've adjusted the brightness and color balance to match the actual appearance as much as I can.  Even so, this image doesn't do justice to the reality.  For one thing, the sky is much too blue.  The

Cryptogram Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated, but it’s basically a man-in-the-middle attack that involves two smartphones. The first phone reads the actual smartcard, and then forwards the required information to a second phone. That second phone actually conducts the transaction on the POS terminal. That second phone is able to convince the POS terminal to conduct the transaction without requiring the normally required PIN.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the vulnerability in practice, although it is a fairly complex process. They first developed an Android app and installed it on two NFC-enabled mobile phones. This allowed the two devices to read data from the credit card chip and exchange information with payment terminals. Incidentally, the researchers did not have to bypass any special security features in the Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first mobile phone is used to scan the necessary data from the credit card and transfer it to the second phone. The second phone is then used to simultaneously debit the amount at the checkout, as many cardholders do nowadays. As the app declares that the customer is the authorized user of the credit card, the vendor does not realize that the transaction is fraudulent. The crucial factor is that the app outsmarts the card’s security system. Although the amount is over the limit and requires PIN verification, no code is requested.

The paper: “The EMV Standard: Break, Fix, Verify.”

Abstract: EMV is the international protocol standard for smartcard payment and is used in over 9 billion cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a state-of-the-art protocol verifier. Our model is the first that supports a fine-grained analysis of all relevant security guarantees that EMV is intended to offer. We use our model to automatically identify flaws that lead to two critical attacks: one that defrauds the cardholder and another that defrauds the merchant. First, criminals can use a victim’s Visa contact-less card for high-value purchases, without knowledge of the card’s PIN. We built a proof-of-concept Android application and successfully demonstrated this attack on real-world payment terminals. Second, criminals can trick the terminal into accepting an unauthentic offline transaction, which the issuing bank should later decline, after the criminal has walked away with the goods. This attack is possible for implementations following the standard, although we did not test it on actual terminals for ethical reasons. Finally, we propose and verify improvements to the standard that prevent these attacks, as well as any other attacks that violate the considered security properties.The proposed improvements can be easily implemented in the terminals and do not affect the cards in circulation.

Planet DebianReproducible Builds: Reproducible Builds in August 2020

Welcome to the August 2020 report from the Reproducible Builds project.

In our monthly reports, we summarise the things that we have been up to over the past month. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. If you’re interested in contributing to the project, please visit our main website.




This month, Jennifer Helsby launched a new reproduciblewheels.com website to address the lack of reproducibility of Python wheels.

To quote Jennifer’s accompanying explanatory blog post:

One hiccup we’ve encountered in SecureDrop development is that not all Python wheels can be built reproducibly. We ship multiple (Python) projects in Debian packages, with Python dependencies included in those packages as wheels. In order for our Debian packages to be reproducible, we need that wheel build process to also be reproducible

Parallel to this, transparencylog.com was also launched, a service that verifies the contents of URLs against a publicly recorded cryptographic log. It keeps an append-only log of the cryptographic digests of all URLs it has seen. (GitHub repo)

On 18th September, Bernhard M. Wiedemann will give a presentation in German, titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference.


Reproducible builds at DebConf20

There were a number of talks at the recent online-only DebConf20 conference on the topic of reproducible builds.

Holger gave a talk titled “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org are made from their claimed sources. It also served as a general update on the status of reproducible builds within Debian. The video (145 MB) and slides are available.

There were also a number of other talks that involved Reproducible Builds too. For example, the Malayalam language mini-conference had a talk titled എനിയ്ക്കും ഡെബിയനില്‍ വരണം, ഞാന്‍ എന്തു് ചെയ്യണം? (“I want to join Debian, what should I do?”) presented by Praveen Arimbrathodiyil, the Clojure Packaging Team BoF session led by Elana Hashman, as well as Where is Salsa CI right now? that was on the topic of Salsa, the collaborative development server that Debian uses to provide the necessary tools for package maintainers, packaging teams and so on.

Jonathan Bustillos (Jathan) also gave a talk in Spanish titled Un camino verificable desde el origen hasta el binario (“A verifiable path from source to binary”). (Video, 88MB)


Development work

After many years of development work, the compiler for the Rust programming language now generates reproducible binary code. This generated some general discussion on Reddit on the topic of reproducibility in general.

Paul Spooren posted a ‘request for comments’ to OpenWrt’s openwrt-devel mailing list asking for clarification on when to raise the PKG_RELEASE identifier of a package. This is needed in order to successfully perform rebuilds in a reproducible builds context.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Chris Lamb provided some comments and pointers on an upstream issue regarding the reproducibility of a Snap / SquashFS archive file. []

Debian

Holger Levsen identified that a large number of Debian .buildinfo build certificates have been “tainted” on the official Debian build servers, as these environments have files underneath the /usr/local/sbin directory []. He also filed a bug against debrebuild after spotting that it can fail to download packages from snapshot.debian.org [].

This month, several issues were uncovered (or assisted) due to the efforts of reproducible builds.

For instance, Debian bug #968710 was filed by Simon McVittie, which describes a problem with detached debug symbol files (required to generate a traceback) that is unlikely to have been discovered without reproducible builds. In addition, Jelmer Vernooij called attention to the fact that the new Debian Janitor tool is using the property of reproducibility (as well as diffoscope) when applying archive-wide changes to Debian:

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds. []

56 reviews of Debian packages were added, 38 were updated and 24 were removed this month adding to our knowledge about identified issues. Specifically, Chris Lamb added and categorised the nondeterministic_version_generated_by_python_param and the lessc_nondeterministic_keys toolchain issues. [][]

Holger Levsen sponsored Lukas Puehringer’s upload of the python-securesystemslib package, which is a dependency of in-toto, a framework to secure the integrity of software supply chains. []

Lastly, Chris Lamb further refined his merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]) in order that we can test the reproducibility of the installer images on the Reproducible Builds own testing infrastructure and sent a ping to the team that maintains that code.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds. In August, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data of PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. []
  • Bug fixes:

    • Don’t raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. []
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) returns “data”. (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. []
    • Don’t use Python’s repr(object) output in “Calling external command” messages. []
    • Include the filename in the “… not identified by any comparator” message. []
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. []
    • Drop some unused imports [], an unnecessary dictionary comprehension [] and some unnecessary control flow [].
    • Correct typo of “output” in a comment. []
  • Release process:

    • Move generation of debian/tests/control to an external script. []
    • Add some URLs for the site that will appear on PyPI.org. []
    • Update “author” and “author email” in setup.py for PyPI.org and similar. []
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124)
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. []
    • Add an assert_diff helper that loads and compares a fixture output. [][][][]
  • Misc:

In addition, Mattia Rizzolo documented in setup.py that diffoscope works with Python version 3.8 [] and Frazer Clews applied some Pylint suggestions [] and removed some deprecated methods [].

Website

This month, Chris Lamb updated the main Reproducible Builds website and documentation to:

  • Clarify & fix a few entries on the “who” page [][] and ensure that images do not get to large on some viewports [].
  • Clarify use of a pronoun re. Conservancy. []
  • Use “View all our monthly reports” over “View all monthly reports”. []
  • Move a “is a” suffix out of the link target on the SOURCE_DATE_EPOCH page. []

In addition, Javier Jardón added the freedesktop-sdk project [] and Kushal Das added SecureDrop project [] to our projects page. Lastly, Michael Pöhn added internationalisation and translation support with help from Hans-Christoph Steiner [].

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework to power tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • System health checks:

    • Improve explanation how the status and scores are calculated. [][]
    • Update and condense view of detected issues. [][]
    • Query the canonical configuration file to determine whether a job is disabled instead of duplicating/hardcoding this. []
    • Detect several problems when updating the status of reporting-oriented ‘metapackage’ sets. []
    • Detect when diffoscope is not installable  [] and failures in DNS resolution [].
  • Debian:

    • Update the URL to the Debian security team bug tracker’s Git repository. []
    • Reschedule the unstable and bullseye distributions often for the arm64 architecture. []
    • Schedule buster less often for armhf. [][][]
    • Force the build of certain packages in the work-in-progress package rebuilder. [][]
    • Only update the stretch and buster base build images when necessary. []
  • Other distributions:

    • For F-Droid, trigger jobs by commits, not by a timer. []
    • Disable the Archlinux HTML page generation job as it has never worked. []
    • Disable the alternative OpenWrt rebuilder jobs. []
  • Misc:

Many other changes were made too, including:

Finally, build node maintenance was performed by Holger Levsen [], Mattia Rizzolo [][] and Vagrant Cascadian [][][][]


Mailing list

On our mailing list this month, Leo Wandersleb sent a message to the list after he was wondering how to expand his WalletScrutiny.com project (which aims to improve the security of Bitcoin wallets) from Android wallets to also monitor Linux wallets as well:

If you think you know how to spread the word about reproducibility in the context of Bitcoin wallets through WalletScrutiny, your contributions are highly welcome on this PR []

Julien Lepiller posted to the list linking to a blog post by Tavis Ormandy titled You don’t need reproducible builds. Morten Linderud (foxboron) responded with a clear rebuttal that Tavis was only considering the narrow use-case of proprietary vendors and closed-source software. He additionally noted that the criticism that reproducible builds cannot prevent against backdoors being deliberately introduced into the upstream source (“bugdoors”) are decidedly (and deliberately) outside the scope of reproducible builds to begin with.

Chris Lamb included the Reproducible Builds mailing list in a wider discussion regarding a tentative proposal to include .buildinfo files in .deb packages, adding his remarks regarding requiring a custom tool in order to determine whether generated build artifacts are ‘identical’ in a reproducible context. []

Jonathan Bustillos (Jathan) posted a quick email to the list requesting whether there was a list of To do tasks in Reproducible Builds.

Lastly, Chris Lamb responded at length to a query regarding the status of reproducible builds for Debian ISO or installation images. He noted that most of the technical work has been performed but “there are at least four issues until they can be generally advertised as such”. He pointed out that the privacy-oriented Tails operating system, which is based directly on Debian, has had reproducible builds for a number of years now. []



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Kevin RuddABC: Journalists in China and Rio Tinto

INTERVIEW VIDEO
ABC NEWS CHANNEL
AFTERNOON BRIEFING
PATRICIA KARVELAS
9 SEPTEMBER 2020

Topics: Mike Smith, Bill Birtles, Cheng Lei, Rio Tinto and Queensland’s health border

The post ABC: Journalists in China and Rio Tinto appeared first on Kevin Rudd.

Planet DebianGunnar Wolf: RPi 4 + 8GB, Finally, USB-functional!

So… Finally, kernel 5.8 entered the Debian Unstable repositories. This means that I got my Raspberry image from their usual location and was able to type the following, using only my old trusty USB keyboard:

So finally, the greatest and meanest Raspberry is fully supported with a pure Debian image! (Only tarnished by the nonfree raspi-firmware package.)

Oh, in case someone was still wondering — The images generated follow the stable release. Only the kernel and firmware are installed from unstable. If / when kernel 5.8 enters Backports, I will reduce the noise of adding a different suite to the sources.list.

Worse Than FailureWeb Server Installation

Connect the dots puzzle

Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support is this referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram US Space Cybersecurity Directive

The Trump Administration just published “Space Policy Directive – 5“: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Krebs on SecurityMicrosoft Patch Tuesday, Sept. 2020 Edition

Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users.

The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server.

“That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” said Dustin Childs, of Trend Micro’s Zero Day Initiative. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon. This should be your top priority.”

Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019.

Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.

Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Cory DoctorowMy first-ever Kickstarter: the audiobook for Attack Surface, the third Little Brother book

I have a favor to ask of you. I don’t often ask readers for stuff, but this is maybe the most important ask of my career. It’s a Kickstarter – I know, ‘another crowdfunder?’ – but it’s:

a) Really cool;

b) Potentially transformative for publishing.

c) Anti-monopolistic

Here’s the tldr: Attack Surface – AKA Little Brother 3 – is coming out in 5 weeks. I retained audio rights and produced an amazing edition that Audible refuses to carry. You can pre-order the audiobook, ebook (and previous volumes), DRM- and EULA-free.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

That’s the summary, but the details matter. First: the book itself. ATTACK SURFACE is a standalone Little Brother book about Masha, the young woman from the start and end of the other two books; unlike Marcus, who fights surveillance tech, Masha builds it.

Attack Surface is the story of how Masha has a long-overdue moral reckoning with the way that her work has hurt people, something she finally grapples with when she comes home to San Francisco.

Masha learns her childhood best friend is leading a BLM-style uprising – and she’s being targeted by the same cyberweapons that Masha built to hunt Iraqi insurgents and post-Soviet democracy movements.

I wrote Little Brother in 2006, it came out in 2008, and people tell me it’s “prescient” because the digital human rights issues it grapples with – high-tech authoritarianism and high-tech resistance – are so present in our current world.

But it’s not so much prescient as observant. I wrote Little Brother during the Bush administration’s vicious, relentless, tech-driven war on human rights. Little Brother was a bet that these would not get better on their own.

And it was a bet that tales of seizing the means of computation would inspire people to take up digital arms of their own. It worked. Hundreds of cryptographers, security experts, cyberlawyers, etc have told me that Little Brother started them on their paths.

ATTACK SURFACE – a technothriller about racial injustice, police brutality, high-tech turnkey totalitarianism, mass protests and mass surveillance – was written between May 2016 and Nov 2018, before the current uprisings and the tech worker walkouts.

https://twitter.com/search?q=%20(%23dailywords)%20(from%3Adoctorow)%20%22crypto%20wars%22&src=typed_query&f=live

But just as with Little Brother, the seeds of the current situation were all around us in 2016, and if Little Brother inspired a cohort of digital activists, I hope Attack Surface will give a much-needed push to a group of techies (currently) on the wrong side of history.

As I learned from Little Brother, there is something powerful about technologically rigorous thrillers about struggles for justice – stories that marry excitement, praxis and ethics. Of all my career achievements, the people I’ve reached this way matter the most.

Speaking of careers and ethics. As you probably know, I hate DRM with the heat of 10000 suns: it is a security/privacy nightmare, a monopolist’s best friend, and a gross insult to human rights. As you may also know, Audible will not carry any audiobooks unless they have DRM.

Audible is Amazon’s audiobook division, a monopolist with a total stranglehold on the audiobook market. Audiobooks currently account for almost as much revenue as hardcovers, and if you don’t sell on Audible, you sacrifice about 95% of that income.

That’s a decision I’ve made, and it means that publishers are no longer willing to pay for my audiobook rights (who can blame them?). According to my agent, living my principles this way has cost me enough to have paid off my mortgage and maybe funded my retirement.

I’ve tried a lot of tactics to get around Audible; selling through the indies (libro.fm, downpour.com, etc), through Google Play, and through my own shop (craphound.com/shop).

I appreciate the support there but it’s a tiny fraction of what I’m giving up – both in terms of dollars and reach – by refusing to lock my books (and my readers) (that’s you) to Amazon’s platform for all eternity with Audible DRM.

Which brings me to this audiobook.

Look, this is a great audiobook. I hired Amber Benson (a brilliant writer and actor who played Tara on Buffy), Skyboat Media and director Cassandra de Cuir, and Wryneck Studios, and we produced a 15h long, unabridged masterpiece.

It’s done. It’s wild. I can’t stop listening to it. It drops on Oct 13, with the print/ebook edition.

It’ll be on sale in all audiobook stores (except Audible) on the 13th, for $24.95.

But! You can get it for a mere $20 via my first Kickstarter.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

What’s more, you can pre-order the ebook – and also buy the previous ebooks and audiobooks (read by Wil Wheaton and Kirby Heyborne) – all DRM free, all free of license “agreements.”

The deal is: “You bought it, you own it, don’t violate copyright law and we’re good.”

And here’s the groundbreaking part. For this Kickstarter, I’m the retailer. If you pre-order the ebook from my KS, I get the 30% that would otherwise go to Jeff Bezos – and I get the 25% that is the standard ebook royalty.

This is a first-of-its-kind experiment in letting authors, agents, readers and a major publisher deal directly with one another in a transaction that completely sidesteps the monopolists who have profited so handsomely during this crisis.

Which is where you come in: if you help me pre-sell a ton of ebooks and audiobooks through this crowdfunder, it will show publishing that readers are willing to buy their ebooks and audiobooks without enriching a monopolist, even if it means an extra click or two.

So, to recap:

Attack Surface is the third Little Brother book

It aims to radicalize a generation of tech workers while entertaining its audience as a cracking, technologically rigorous thriller

The audiobook is amazing, read by the fantastic Amber Benson

If you pre-order through the Kickstarter:

You get a cheaper price than you’ll get anywhere else

You get a DRM- and EULA-free purchase

You’ll fight monopolies and support authorship

If you’ve ever enjoyed my work and wondered how you could pay me back: this is it. This is the thing. Do this, and you will help me artistically, professionally, politically, and (ofc) financially.

Thank you!

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

PS: Tell your friends!

Cory DoctorowAttack Surface Kickstarter Promo Excerpt!

This week’s podcast is a generous excerpt – 3 hours! – of the audiobook for Attack Surface, the third Little Brother book, which is available for pre-order today on my very first Kickstarter.

This Kickstarter is one of the most important moments in my professional career, an experiment to see if I can viably publish audiobooks without caving in to Amazon’s monopolistic requirement that all Audible books be sold with DRM that locks them to Amazon’s corporate platform…forever. If you’ve ever wanted to thank me for this podcast or my other work, there has never been a better way than to order the audiobook (or ebook) (or both!).

Attack Surface is a standalone novel, meaning you can enjoy it without reading Little Brother or its sequel, Homeland. Please give this extended preview a listen and, if you enjoy it, back the Kickstarter and (this is very important): TELL YOUR FRIENDS.

Thank you, sincerely.

MP3

Planet DebianGunnar Wolf: Welcome to the family

Need I say more? OK, I will…

Still wanting some more details? Well…

I have had many cats through my life. When I was about eight years old, my parents tried to have a dog… but the experiment didn’t work, and besides those few months, I never had one.

But as my aging cats spent the final months of their last, very long lives, it was clear to us that, after them, we would be adopting a dog.

Last Saturday was the big day. We had seen some photos of the mother and the nine (!) pups. My children decided on her name almost right away; the pups were all brownish, so the name would be corteza (tree bark; they didn’t know, of course, that dogs also have a bark! 😉)

Anyway, welcome little one!

Kevin RuddSMH: Scott Morrison is Yearning for a Donald Trump victory

Published in The Sydney Morning Herald on 08 September 2020

The PM will be praying for a Republican win in the US to back up his inaction on climate and the Paris Agreement.

A year out from Barack Obama’s election in 2008, John Howard made a stunning admission that he thought Americans should be praying for a Republican victory. Ideologically this was unremarkable. But Howard said so publicly because he knew just how uncomfortable an Obama victory would be for him, given his refusal to withdraw our troops from Iraq.

Fast forward more than a decade, and Scott Morrison – even in the era of Donald Trump – will also be yearning desperately for a Republican victory come November. But this time it is the conservative recalcitrance on a very different issue that risks Australia being isolated on the world stage: climate change.

And as the next summer approaches, Australians will be reminded afresh of how climate change, and its impact on our country and economy, has not gone away.

Former vice-president Joe Biden has put at the centre of his campaign a historic plan to fight climate change both at home and abroad. On his first day in office, he has promised to return the US to the Paris Agreement. And he recently unveiled an unprecedented $2 trillion green investment plan, including the complete decarbonisation of the domestic electricity system by 2035.

By contrast, Morrison remains hell-bent on Australia doing its best to disrupt global momentum to tackle the climate crisis and burying our head in the sand when it comes to embracing the new economic opportunities that come with effective climate change action.

As a result, if Biden is elected this November, we will be on a collision course with our American ally in a number of areas.

First, Morrison remains recklessly determined to carry over so-called “credits” from the overachievement of our 2020 Kyoto target to help meet Australia’s already lacklustre 2030 target under the new Paris regime.

No other government in the world is digging their heels in like this. None. It is nothing more than an accounting trick to allow Australia to do less. Perhaps the greatest irony is that this “overachievement” was also in large part because of the mitigation actions of our government.

That aside, these carbon credits also do nothing for the atmosphere. At worst, using them beyond 2020 could be considered illegal and only opens the back door for other countries to also do less by following Morrison’s lead.

This will come to a head at the next UN climate talks in Glasgow next year. While Australia has thus far been able to dig in against objections by most of the rest of the world, a Biden victory would only strengthen the hand of the UK hosts to simply ride over the top of any further Australian intransigence. Morrison would be foolhardy to believe that Boris Johnson’s government will burn its political capital at home and abroad to defend the indefensible Australian position.

Second, unlike 114 countries around the world, Morrison remains hell-bent on ignoring the central promise of Paris: that all governments increase their 2030 targets by the time they get to Glasgow. That’s because even if all those commitments were fully implemented, it would only give the planet one-third of what is necessary to keep average temperature increases within 1.5 degrees by 2100, as the Paris Agreement requires. This is why governments agreed to increase their ambition every five years as technologies improved, costs lowered and political momentum built.

In 2014, the Liberal government explained our existing Paris target on the basis that it was the same as what the Americans were doing. In reality, the Obama administration planned to achieve the same cut of 26 to 28 per cent on 2005 emissions by 2025 – not 2030 as we pledged and sought to disguise.

So based on the logic that what America does is this government’s benchmark for its global climate change commitments, if the US is prepared to increase its Paris target (as it will under Biden), so too should we. Biden himself has not just committed the US to the goal of net zero emissions by 2050, but has undertaken to embed it in legislation as countries such as Britain and New Zealand have done, and rally others to do the same.

Unsurprisingly, despite the decisions of 121 countries around the world, Morrison also refuses to even identify a timeline for achieving the Paris Agreement’s long-term goal to reach net zero emissions. As the science tells us, this needs to be by 2050 to have any shot of protecting the world’s most vulnerable populations – including in the Pacific – and saving Australia from a rolling apocalypse of weather-related disasters that will wreak havoc on our economy.

For our part, the government insists that it won’t “set a target without a plan”. But governments exist to do the hard work. And politically, it goes against the broad domestic support for a net zero by 2050 goal, including from the peak business, industry and union groups, the top bodies for energy and agriculture (two sectors that together account for almost half of our emissions), as well as our national airline, our two largest mining companies, every state and territory government, and even a majority of conservative voters.

As Tuvalu’s former prime minister Enele Sopoaga reminded us recently, the fact that Morrison himself looked Pacific island leaders in the eye last year and promised to develop such a long-term plan – a promise he reiterated at the G20 – also shows we risk being a country that does not do what we say. For those in the Pacific, this just rubs salt into the wound of Morrison’s decision to blindly follow Trump’s lead in halting payments to the Green Climate Fund (something Biden would also reverse), requiring them to navigate a bureaucratic maze of individual aid programs as a result.

Finally, Biden has undertaken to also align trade and climate policy by imposing carbon tariffs against those countries that fail to do their fair share in global greenhouse gas reductions. The EU is in the process of embracing the same approach. So if Morrison doesn’t act, he’s going to put our entire export sector at risk of punitive tariffs because the Liberals have so consistently failed to take climate change seriously.

Under Trump, Morrison has been able to get one giant leave pass for doing nothing on climate. But under Biden, he’ll be seen as nothing more than the climate change free-loader that he is. As he will by the rest of the world. And our economy will be punished as a result.

The post SMH: Scott Morrison is Yearning for a Donald Trump victory appeared first on Kevin Rudd.

Planet DebianSven Hoexter: Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management, and have customers which use MITM boxes to break up TLS 1.3 connections.

Be aware that right now some heuristic rules in the Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is non-human browser traffic". While technically correct - the TLS connection hitting the Cloudflare Edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare Edge, you might want to reconsider that at the moment and send a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still having some customers.

I have confirmation of this happening with a Cisco WSA, but it's likely also the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in as more enterprises roll out modern browsers.

You can now insert your own rant here about how bad the client-server internet of 2020 is, how bad it is that some of us rely on Cloudflare, and how they have accumulated far too big a market share. But the world is as it is. :(

LongNowTime-Binding and The Music History Survey

Musicologist Phil Ford, co-host of the Weird Studies podcast, makes an eloquent argument for the preservation of the “Chants to Minimalism” Western Music History survey—the standard academic curriculum for musicology students, akin to the “fish, frogs, lizards, birds” evolutionary spiral taught in bio classes—in an age of exponential change and an increased emphasis on “relevance” over the remembrance of canonical works:

Perhaps paradoxically, the rate of cultural change increases in proportional measure to the increase in cultural memory. Writing and its successor media of prosthetic memory enact a contradiction: the easy preservation of cultural memory enables us to break with the past, to unbind time. At its furthest extremes, this is manifested in the familiar and dismal spectacle of fascist and communist regimes, impelled by intellectual notions permitted by the intensified time-binding of literacy, imagining utopias that will ‘wipe the slate clean’ and trying to force people to live in a world entirely divorced from the bound time of social/cultural tradition.

See, for instance, Mao Zedong’s crusade to destroy Tibetan Buddhism by the erasure of context that held the Dalai Lama’s social role in place for fourteen generations. How is the culture to find a fifteenth Dalai Lama if no child can identify the relics of a prior Dalai Lama? Ironically this speaks to larger questions of the agency of landscapes and materials, and how it isn’t just our records as we understand them that help scaffold our identity; but that we are in some sense colonial organisms inextricable from, made by, our environments — whether built or wild. As recounted in countless change-of-station stories, monarchs and other leaders, like whirlpools or sea anemones, dissolve when pulled out of the currents that support them.

That said, the current isn’t just stability but change. Both novelty and structure are required to bind time. By pushing to extremes, modernity self-undermines, imperils its own basis for existence; and those cultures that slam on the brakes and dig into conservative tradition risk self-suffocation, or being ripped asunder by the friction of collision with the moving edge of history:

Modernity is (among other things) the condition in which time-binding is threatened by its own exponential expansion, and yet where it’s not clear exactly how we are to slow its growth.  Very modern people are reflexively opposed to anything that would slow down the acceleration: for them, the essence of the human is change. Reactionaries are reflexively opposed to anything that will speed up acceleration: for them, the essence of the human is continuity. Both are right!  Each side, given the opportunity to realize its imagined utopia of change or continuity, would make a world no sensible person would be caught dead in.

Ultimately, therefore, a conservative-yet-innovative balance must be found in the embrace of both new information technologies and their use for preservation of historic repertoires. When on a rocket into space, a look back at the whole Earth is essential to remember where we come from:

The best argument for keeping Sederunt in the classroom is that it is one of the nearly-infinite forms of music that the human mind has contrived, and the memory of those forms — time-binding — is crucial not only to the craft of musicians but to our continued sense of what it is to be a human being.

This isn’t just future-shocked reactionary work but a necessary integrative practice that enables us to reach beyond:

To tell the story of who we are is to engage in the scholar’s highest mission. It is the gift that shamans give their tribe.

Worse Than FailureCodeSOD: Sleep on It

If you're fetching data from a remote source, "retry until a timeout is hit" is a pretty standard pattern. And with that in mind, this C++ code from Auburus doesn't look like much of a WTF.

bool receiveData(uint8_t** data, std::chrono::milliseconds timeToWait) {
  start = now();
  while ((now() - start) < timeToWait) {
    if (/* successfully receive data */) {
      return true;
    }
    std::this_thread::sleep_for(100ms);
  }
  return false;
}

Track the start time. While the difference between the current time and the start is less than our timeout, try and get the data. If you don't, sleep for 100ms, then retry.

This all seems pretty reasonable, at first glance. We could come up with better ways, certainly, but that code isn't quite a WTF.

This code is:

// The ONLY call to that function
receiveData(&dataPtr, 100ms);

By calling this with a 100ms timeout, and because we hard-coded in a 100ms sleep, we've guaranteed that we will never retry. That may or may not be intentional, and that's what really bugs me about this code. Maybe they meant to do that (because they originally retried, and found it caused other bugs?). Maybe they didn't. But they didn't document it, either in the code or as a commit comment, so we'll never know.
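
If the intent really was to retry, the usual fix is to stop hard-coding the sleep against the timeout and instead sleep only for whatever time remains. Here's a minimal sketch of that pattern in Python rather than the original C++ (the fetch callable is a stand-in, not anything from the submitted code):

import time

def receive_data(fetch, timeout_s=0.1, retry_interval_s=0.1):
    # Deadline-based retry: always make at least one attempt, keep retrying
    # while time remains, and never sleep past the deadline.
    deadline = time.monotonic() + timeout_s
    while True:
        data = fetch()
        if data is not None:
            return data
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return None
        time.sleep(min(retry_interval_s, remaining))

With this shape, even a timeout equal to the retry interval still gets a second attempt before giving up, and the "never retries" behaviour can't sneak in silently.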

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.


LongNowStudy Group for Progress Launches with Discount for Long Now Members

Long Now Member Jason Crawford, founder of The Roots of Progress, is starting up a weekly learning group on progress with a steep discount for Long Now Members:

The Study Group for Progress is a weekly discussion + Q&A on the history, economics and philosophy of progress. Long Now members can get 50% off registration using the link below.


Each week will feature a special guest for Q&A. Confirmed speakers so far include top economists and historians such as Robert J. Gordon (Northwestern, The Rise & Fall of American Growth), Margaret Jacob (UCLA), Richard Nelson (Columbia), Patrick Collison, and Anton Howes. Readings from each author will be given out ahead of time. Participants will also receive a set of readings originally created for the online learning program Progress Studies for Young Scholars: a summary of the history of technology, including advances in materials and manufacturing, agriculture, energy, transportation, communication, and disease.


The group will meet weekly on Sundays at 4:00–6:30pm Pacific, from September 13 through December 13 (recordings available privately afterwards). See the full announcement here and register for 50% off with this link.

Planet DebianArturo Borrero González: Debconf 2020 online, summary

Debconf2020 logo

Debconf2020 took place when I was on personal vacation time. But anyway, I’m lucky enough that my company, the Wikimedia Foundation, paid the conference registration fee for me and allowed me to take the time (after my vacation) to watch recordings from the conference.

This is my first time attending (or watching) a full-online conference, and I was curious to see first hand how it would develop. I was greatly surprised to see it worked pretty nicely, so kudos to the organization, video team, volunteers, etc!

What follows is my summary of the conference, from the different sessions and talks I watched (again, none of them live but recordings).

The first thing I saw was the Welcome to Debconf 2020 opening session. It is obvious the video was made with lots of love, I found it entertaining and useful. I love it :-)

Then I watched the BoF Can Free Software improve social equality. It was introduced and moderated by Hong Phuc Dang. Several participants, about 10 people, shared their visions on the interaction between open source projects and communities. I’m pretty much aware of the interesting social advancement that FLOSS can enable in communities, but sometimes it is not so easy; it may also present challenges and barriers. The BoF was joined by many people from the Asia Pacific region, and for me, it was very interesting to take a step back from the usual western vision of this topic. Anyway, about the session itself, I have the feeling the participants may have spent too much time on presentations, sharing their local stories (which are interesting, don’t get me wrong), perhaps leaving little room for actual proposal discussions or the like.

Next I watched the Bits from the DPL talk. In the session, Jonathan Carter goes over several topics affecting the project, both internally and externally. It was interesting to know more about the status of the project from a high level perspective, as an organization, including subjects such as money, common project problems, future issues we are anticipating, the social aspect of the project, etc.

The Lightning Talks session grabbed my attention. It is usually very funny to watch and not as dense as other talks. I’m glad I watched this as it includes some interesting talks, ranging from HAM radios (I love them!), to personal projects to help in certain tasks, and even some general reflections about life.

Just as I’m writing this very sentence, the video for the Come and meet your Debian Publicity team! talk has been uploaded. This team does incredible work in keeping project information flowing, and social networks up-to-date and alive. Mind that the work of this team is mostly non-engineering, but it is still a vital part of the project. The folks in the session explain what the team does, and they also discuss how new people can contribute, the different challenges related to language barriers, etc.

I have to admit I also started watching a couple of other sessions that turned out not to be interesting to me (and therefore I didn’t finish the video). Also, I tried to watch a couple more sessions that didn’t publish their video recording just yet, for example the When We Virtualize the Whole Internet talk by Sam Hartman. Will check again in a couple of days.

It is a real pleasure that the video recordings from the conference are made available online. One can join the conference anytime (like I’m doing!) and watch the sessions at any pace at any time. The video archive is big, I won’t be able to go over all of it. I won’t lie, I still have some pending videos to watch from last year’s Debconf2019 :-)

Worse Than FailureCodeSOD: Classic WTF: Covering All Cases… And Then Some

It's Labor Day in the US, where we celebrate the labor movement and people who, y'know, do actual work. So let's flip back to an old story, which does a lot of extra work. Original -- Remy

Ben Murphy found a developer who liked to cover all of his bases ... then cover the dug-out ... then the bench. If you think this method to convert input (from 33 to 0.33) is a bit superfluous, you should see data validation.

Static Function ConvertPercent(v_value As Double)
  If v_value > 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value < 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = -1 Then
    ConvertPercent = v_value / 100
  Else 
    ConvertPercent = v_value
  End If 
End Function
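
Every branch that can actually be reached performs the same division (the = -1 branch can never run, since < 1 catches it first), so the whole routine boils down to one line. A quick Python sketch of the equivalent, purely for illustration:

def convert_percent(value):
    # 33 -> 0.33; every reachable branch of the original does this same division,
    # and the "= -1" case is already swallowed by the "< 1" branch above it.
    return value / 100.0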


The original article – from 2004! – featured Alex asking for a logo. Instead, let me remind you to submit your WTF. Our stories come from our readers. If nothing else, it's a great chance to anonymously vent about work.
[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Planet DebianEnrico Zini: Learning resources links

Cognitive bias cheat sheet has another elegant infographic summarising cognitive biases. On this subject, you might want to also check out 15 Insane Things That Correlate With Each Other.

Get started | Learning Music (Beta) has a nice interactive introduction to music making.

If you live in a block of flats and decide to learn music making, please use headphones when experimenting. Our neighbour, sadly, didn't.

You can also learn photography with Photography for Beginners (The Ultimate Guide in 2020) and, somewhat related, Understanding Aspect Ratios: A Comprehensive Guide.

Planet DebianJonathan Carter: DebConf 20 Online

This week, last week, last month, I attended DebConf 20 Online. It was the first DebConf to be held entirely online, but it’s the 7th DebConf I’ve attended from home.

My first one was DebConf7. Initially I mostly started watching the videos because I wanted to learn more about packaging. I had just figured out how to create binary packages by hand, and had read through the new maintainers’ guide, but a lot of it was still a mystery. By the end of DebConf7 my grasp of source packages was still a bit thin, but other than that, I ended up learning a lot more about Debian during DebConf7 than I had hoped for, and over the years, the quality of online participation for each DebConf has varied a lot.

I think having a completely online DebConf, where everyone was remote, helped raise awareness about how important it is to make the remote experience work well, and I hope that it will make people who run sessions at physical events in the future consider those who are following remotely a bit more.

During some BoF sessions, it was clear that some teams hadn’t talked to each other face to face in a while, and I heard at least 3 teams say “This was nice, we should do more regular video calls!”. Our usual communication methods of e-mail lists and IRC serve us quite well, for the most part, but sometimes having an actual conversation with the whole team present at the same time can do wonders for dealing with the many kinds of issues that are just always hard to deal with in text-based mediums.

There were three main languages used in this DebConf. We’ve had more than one language at a DebConf before, but as far as I know it’s the first time that we had multiple talks over 3 languages (English, Malayalam and Spanish).

It was also impressive how the DebConf team managed to send out DebConf t-shirts all around the world and in time before the conference! To my knowledge only 2 people didn’t get theirs in time due to customs.

I already posted about the new loop that we worked on for this DebConf. An unintended effect was that we ended up with lots of shout-outs, which gave this online DebConf a much warmer, more personal feel than it would otherwise have had. I’m definitely planning to keep improving on that for the future, for online and in-person events. There was also some other new stuff from the video team during this DebConf; we’ll try to co-ordinate a blog post about that once the dust has settled.

Thanks to everyone for making this DebConf special, even though it was virtual!

Planet DebianThorsten Alteholz: My Debian Activities in August 2020

FTP master

This month I accepted 159 packages and rejected 16. The overall number of packages that got accepted was 172.

Debian LTS

This was my seventy-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 2336-1] firejail security update for two CVEs
  • [DLA 2337-1] python2.7 security update for nine CVEs
  • [DLA 2353-1] bacula security update for one CVE
  • [DLA 2354-1] ndpi security update for one CVE
  • [DLA 2355-1] bind9 security update for two CVEs
  • [DLA 2359-1] xorg-server security update for five CVEs

I also started to work on curl but did not upload a fixed version yet. As usual, testing the package takes up some time.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-265-1 for python2.7
  • ELA-270-1 for bind9
  • ELA-272-1 for xorg-server

Like in LTS, I also started to work on curl and encountered the same problems as in LTS above.

Last but not least I did some days of frontdesk duties.

Other stuff

This month I found again some time for other Debian work and uploaded packages to fix bugs, mainly around gcc10:

I also uploaded new upstream versions of:

All packages called *osmo* are developed by the Osmocom project, which is about Open Source MObile COMmunication. They are really doing a great job and I apologize that my uploads of new versions are mostly far behind their development.

Some of the uploads are related to new packages:

Planet DebianDirk Eddelbuettel: inline 0.3.16: Now with system2()

A new minor release of the inline package just arrived on CRAN. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and stable, and can be considered to be in maintenance mode: Rcpp used it extensively in the very early days before Rcpp Attributes provided an even better alternative. Several other packages still rely on inline.

One of these packages is rstan, and Ben Goodrich updated our use of system() to system2() allowing for better error diagnostics. We also did a bit of standard maintenance to Travis CI and the README.md file.

See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.16 (2020-09-06)

  • Maintenance updates to README.md standardizing badges (Dirk).

  • Maintenance update to Travis CI setup (Dirk).

  • Switch to using system2() for better error diagnostics (Ben Goodrich in #12).

Courtesy of CRANberries, there is a comparison to the previous release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMolly de Blanc: NYU VPN

I needed to set up a VPN in order to access my readings for class. The instructions for Linux are located at: https://nyu.service-now.com/sp?id=kb_article_view&sysparm_article=KB0014932

After you download the VPN client of your choice (they recommend Cisco AnyConnect), connect to: vpn.nyu.edu.

It will ask for two passwords: your NYU username and password, and a multi-factor authentication (MFA) code from Duo. Use the Duo code. See below for stuff on Duo.

Hit connect and voilà, you can connect to the VPN.

Duo Authentication Setup

Go to: https://start.nyu.edu and follow the instructions for MFA. They’ll tell you that a smart phone is the most secure method of setting up. I am skeptical.

Install the Duo Authentication App on your phone, enter your phone number into the NYU web page, and it will send a thing to your phone to connect it.

Commentary

Okay, I have to complain at least a little bit about this. I had to guess what the VPN address was because the instructions are for NYU Shanghai. I also had to install the VPN client using the terminal. These sorts of things make it harder for people to use Linux. Boo.

Planet DebianBen Armstrong: Dronefly relicensed under copyleft licenses

To ensure Dronefly always remains free, the Dronefly project has been relicensed under two copyleft licenses. Read the license change and learn more about copyleft at these links.

I was prompted to make this change after a recent incident in the Red DiscordBot development community that made me reconsider my prior position that the liberal MIT license was best for our project. While on the face of it, making your license as liberal as possible might seem like the most generous and hassle-free way to license any project, I was shocked into the realization that its liberality was also its fatal flaw: all is well and good so long as everyone is being cooperative, but it does not afford any protection to developers or users should things suddenly go sideways in how a project is run. A copyleft license is the best way to avoid such issues.

In this incident – a sad story of conflict between developers I respect on both sides of the rift, and owe a debt to for what they’ve taught me – three cogs we had come to depend on suddenly stopped being viable for us to use due to changes to the license & the code. Effectively, those cogs became unsupported and unsupportable. To avoid any such future disaster with the Dronefly project, I started shopping for a new license that would protect developers and users alike from similarly losing support, or losing control of their contributions. I owe thanks to Dale Floer, a team member who early on advised me the AGPL might be a better fit, and later was helpful in selecting the doc license and encouraging me to follow through. We ran the new licenses by each contributor and arrived at this consensus: the AGPL is best suited for our server-based code, and CC-BY-SA is best suited for our documentation. The relicensing was made official this morning.

On Discord platform alternatives

You might well question what I, a Debian developer steeped in free software culture and otherwise in agreement with its principles, am doing encouraging a community to grow on the proprietary Discord platform! I have no satisfying answer to that. When I introduced my project here, I explained some of the backstory, but that’s more of an account of its beginnings than justification for it to continue on this platform. Honestly, all I can offer is a rather dissatisfying “it seemed like the right thing to do at the time”.

Time will tell whether we could successfully move off of it to a freedom-respecting and privacy-respecting alternative chat platform that is both socially and technically viable to migrate to. That platform would ideally:

  • not be under the control of a single, central commercial entity running proprietary code, so that users’ privacy is safeguarded and they are protected from disaster, should it become unattractive to remain on the platform;
  • have a vibrant and supportive open source third party extension development community;
  • support our image-rich content and effortless sharing of URLs with previews automatically provided from the page’s content (e.g. via OpenGraph tags);
  • be effortless to join regardless of what platform/device each user uses;
  • keep a history of messages so that future members joining the community can benefit from past conversations, and existing members can catch up on conversations they missed;
  • but above all else: be acceptable and compelling to the existing community to move over onto it.

I’m intrigued by Matrix and wonder if it provides some or all of the above in its current form. Are you a developer writing bots for this platform? If so, I especially want to hear from you in the comments about your experience. Or in any case, if you’ve been there before – if you’ve faced the same issue with your community and have a success story to share, I would love to hear from you.

Planet DebianJunichi Uekawa: Trying to boot Debian on Chromebook Kevin.

Trying to boot Debian on Chromebook Kevin. But I haven't yet found how to get the USB serial console device to work, and display is black. So I can't see anything.

Planet DebianRuss Allbery: September haul

So many good books, so little reading time.

Jairus Banaji — A Brief History of Commercial Capitalism (nonfiction)
Steven Brust — The Baron of Magister Valley (sff)
Micaiah Johnson — The Space Between Worlds (sff)
Ian McDonald — Luna: New Moon (sff)
Elizabeth Moon — Trading in Danger (sff)
Tamsyn Muir — Harrow the Ninth (sff)
Suzanne Palmer — Finder (sff)
Kit Rocha — Beyond Shame (sff)
Kit Rocha — Beyond Control (sff)
Kit Rocha — Beyond Pain (sff)
Arundhati Roy — Azadi (nonfiction)
Jeff VanderMeer — Authority (sff)
Jeff VanderMeer — Acceptance (sff)
K.B. Wagers — Behind the Throne (sff)
Jarrett Walker — Human Transit (nonfiction)

I took advantage of a few sales to get books I know I'm going to want to read eventually for a buck or two.


Planet DebianMike Gabriel: My Work on Debian LTS (August 2020)

In August 2020, I worked on the Debian LTS project for 16 hours (of 8 hours planned, plus another 8 hours that I carried over from July).

For ELTS, I have worked for another 8 hours (of 8 hours planned).

LTS Work

  • LTS frontdesk: triage wireshark, yubico-piv-tool, trousers, software-properties, qt4-x11, qtbase-opensource-src, openexr, netty and netty-3.9
  • upload to stretch-security: libvncserver 0.9.11+dfsg-1.3~deb9u5 (fixing 9 CVEs, DLA-2347-1 [1])
  • upload to stretch-security: php-horde-core 2.27.6+debian1-2+deb9u1 (1 CVE, DLA-2348 [2])
  • upload to stretch-security: php-horde 5.2.13+debian0-1+deb9u3 (fixing 1 CVE, DLA-2349-1 [3])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u1 (fixing 1 CVE, DLA-2350-1 [4])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u2 (fixing 1 more CVE, DLA-2351-1 [5])
  • upload to stretch-security: php-horde-gollem 3.0.10-1+deb9u2 (fixing 1 CVE, DLA-2352-1 [6])
  • upload to stretch-security: freerdp 1.1.0~git20140921.1.440916e+dfsg1-13+deb9u4 (fixing 14 CVEs, DLA-2356-1 [7])
  • prepare salsa MRs for gnome-shell (for gnome-shell in stretch [8] and buster [9])

ELTS Work

  • Look into open CVEs for Samba in Debian jessie ELTS. Revisit issues affecting the Samba AD code that have previously been considered as issues.

Other security related work for Debian

  • upload to buster (SRU): libvncserver 0.9.11+dfsg-1.3+deb10u4 (fixing 9 CVEs) [10]

References

Cryptogram Schneier.com is Moving

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.

Planet DebianElana Hashman: Three talks at DebConf 2020

This year has been a really unusual one for in-person events like conferences. I had already planned to take this year off from travel for the most part, attending just a handful of domestic conferences. But the pandemic has thrown those plans into chaos; I do not plan to attend large-scale in-person events until July 2021 at the earliest, per my employer's guidance.

I've been really sad to have turned down multiple speaking invitations this year. To try to set expectations, I added a note to my Talks page that indicates I will not be writing any new talks for 2020, but am happy to join panels or reprise old talks.

And somehow, with all that background, I still ended up giving three talks at DebConf 2020 this year. In part, I think it's because this is the first DebConf I've been able to attend since 2017, and I was so happy to have the opportunity! I took time off work to give myself enough space to focus on the conference. International travel is very difficult for me, so DebConf is generally challenging if not impossible for me to attend.

A panel a day keeps the FTP Team away?

On Thursday, August 27th, I spoke on the Leadership in Debian panel, where I discussed some of the challenges leadership in the project must face, including an appropriate response to the BLM movement and sustainability for volunteer positions that require unsustainable hours (such as DPL).

On Friday, August 28th, I hosted the Debian Clojure BoF, attended by members of the Clojure and Puppet teams. The Puppet team is working to package the latest versions of Puppet Server/DB, which involve significant Clojure components, and I am doing my best to help.

On Saturday, August 29th, I spoke on the Meet the Technical Committee panel. The Committee presented a number of proposals for improving how we work within the project. I was responsible for presenting our first proposal on allowing folks to engage the committee privately.


Planet DebianDirk Eddelbuettel: nanotime 0.3.2: Tweaks

Another (minor) nanotime release, now at version 0.3.2. This release brings an endianness correction which was kindly contributed in a PR, switches to using the API header exported by RcppCCTZ, and tweaks test coverage a little with respect to r-devel.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by co-author Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

The NEWS snippet adds full details.

Changes in version 0.3.2 (2020-09-03)

  • Correct for big endian (Elliott Sales de Andrade in #81).

  • Use the RcppCCTZ_API.h header (Dirk in #82).

  • Conditionally reduce test coverage (Dirk in #83).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Friday Squid Blogging: Morning Squid

Asa ika means “morning squid” in Japanese.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: We Must Explore the Null!

"Beyond the realm of Null, past the Black Stump, lies the mythical FILE_NOT_FOUND," writes Chris A.


"I know, Zwift, I should have started paying you 50 years ago," Jeff wrote, "But hey, thanks for still giving leeches like me a free ride!"


Drake C. writes, "Must...not...click...forbidden...newsletter!"


"I'm having a hard time picking between these 'Exclusives'. It's a shame they're both scheduled for the same time," wrote Rutger.


"Wait, so is this beer zero raised to hex FF? At that price I'll take 0x02!" wrote Tony B.


Kevin F. writes, "Some weed killers say they're powerful. This one backs up that claim!"


[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram Hacking AI-Graded Tests

The company Edgenuity sells AI systems for grading tests. Turns out that they just search for keywords without doing any actual semantic analysis.
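
To illustrate how little "AI" that kind of grading requires, here is a toy Python sketch of pure keyword matching; it is not Edgenuity's code, just the general technique described:

def keyword_grade(answer, keywords):
    # Score an answer by the fraction of expected keywords it contains,
    # with no attention paid to meaning, order, or correctness.
    words = set(answer.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords)

# A bag of buzzwords scores as well as a real explanation:
print(keyword_grade("mitochondria energy cell oxygen",
                    ["mitochondria", "energy", "cell"]))  # 1.0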

Planet DebianReproducible Builds (diffoscope): diffoscope 159 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 159. This version includes the following changes:

[ Chris Lamb ]
* Show "ordering differences only" in strings(1) output.
  (Closes: reproducible-builds/diffoscope#216)
* Don't alias output from "os.path.splitext" to variables that we do not end
  up using.
* Don't raise exceptions when cleaning up after a guestfs cleanup failure.

[ Jean-Romain Garnier ]
* Make "Command" subclass a new generic Operation class.

You can find out more by visiting the project homepage.


Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.900.3.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 769 other packages on CRAN.

A few days ago, Conrad released a new minor version 9.900.3 of Armadillo which we packaged and tested as usual. Given the incremental release character, we only tested the release itself and not a release candidate. No regressions were found, and, as usual, logs from reverse-depends runs are in the rcpp-logs repo.

All changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.900.3.0 (2020-09-02)

  • Upgraded to Armadillo release 9.900.3 (Nocturnal Misbehaviour)

    • More efficient code for initialising matrices with fill::zeros

    • Fixes for various error messages

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBen Hutchings: Debian LTS work, August 2020

I was assigned 16 hours of work by Freexian's Debian LTS initiative, but only worked 6.25 hours this month and have carried over the rest to September.

I finished my work to add Linux 4.19 to the stretch-security suite, providing an upgrade path for those previously installing it from stretch-backports (DLA-2323-1, DLA-2324-1). I also updated the firmware-nonfree package (DLA-2321-1) so that firmware needed by drivers in Linux 4.19 is also available in the non-free section of the stretch-security suite.

I also reviewed the report of the Debian LTS survey and commented on the presentation of results. This report was presented in the Debian LTS BoF at DebConf.

Planet DebianMolly de Blanc: “All Animals Are Equal,” Peter Singer

I recently read “Disability Visibility,” which opens with a piece by Harriet McBryde Johnson about debating Peter Singer. When I got my first reading for my first class and saw it was Peter Singer, I was dismayed because of his (heinous) stances on disability. I assumed “All Animals Are Equal” was one of Singer’s pieces about animal rights. While I agree with many of the principles Singer discusses around animal rights, I feel as though his work on this front is significantly diminished by his work around disability. To put it simply, I can’t take Peter Singer seriously.

Because of this I had a lot of trouble reading “All Animals Are Equal” and taking it in good faith. I judged everything from his arguments to his writing harshly. While I don’t disagree with his basic point (all animals have rights) I disagree with how he made the point and the argument supporting it.

One of the things I was told to ask when reading any philosophy paper is “What is the argument?” or “What are they trying to convince you of?” In this case, you could frame the answer as: Animals have (some of) the same rights people do. I think it would be more accurate though to frame it as “All animals (including humans) have (some of) the same rights” or even “Humans are equally as worthy of consideration as animals are.”

I think when we usually talk about animal rights, we do it from a perspective of wanting to elevate animals to human status. From one perspective, I don’t like this approach because I feel as though it turns the framing of rights as something you deserve or earn, privileges you get for being “good enough.” The point about rights is that they are inherent — you get them because they are.

The valuable thing I got out of “All Animals Are Equal” is that “rights” are not universal. When we talk about things like abortion, for example, we talk about the right to have an abortion. Singer asks whether people who cannot get pregnant have the right to an abortion. What he doesn’t dig into is that the “right to an abortion” is really just an extension of bodily autonomy — turning one facet of bodily autonomy into the legal right to have a medical procedure. I think this is worth thinking about more — turning high-level human rights into mundane rights, and acknowledging that not everyone can exercise them or needs them.

Worse Than FailureCodeSOD: Learning the Hard Way

If you want millions in VC funding, mumble the words “machine learning” and “disruption” and they’ll blunder out of the woods to just throw money at your startup.

At its core, ML is really about brute-forcing a statistical model. And today’s code from Norine could have possibly been avoided by applying a little more brute force to the programmer responsible.

This particular ML environment, like many, uses Python to wrap around lower-level objects. The ease of Python coupled with the speed of native/GPU-accelerated code. It has a collection of Model datatypes, and at runtime, it needs to decide which concrete Model type it should instantiate. If you come from an OO background in pretty much any other language, you’re thinking about factory patterns and abstract classes, but that’s not terribly Pythonic. Not that this developer’s solution is Pythonic either.

def choose_model(data, env):
  ModelBase = getattr(import_module(env.modelpath), env.modelname)
  
  class Model(ModelBase):
    def __init__(self, data, env):
      if env.data_save is None:
        if env.counter == 0:
          self.data = data
        else:
          raise ValueError("data unavailable with counter > 0")
      
      else:
        with open(env.data_save, "r") as df:
          self.data = json.load(df)
      ModelBase.__init__(self, **self.data)
  
  return Model(data, env)

This is an example of metaprogramming. We use import_module to dynamically load a module at runtime - potentially smart, because modules may take some time to load, so we shouldn’t load a module we don’t know that we’re going to use. Then, with getattr, we extract the definition of a class with whatever name is stored in env.modelname.

This is the model class we want to instantiate. But instead of actually instantiating it, we instead create a new derived class, and slap a bunch of logic and file loading into it.

Then we instantiate and return an instance of this dynamically defined derived class.

There are so many things that make me cringe. First, I hate putting file access in the constructor. That’s maybe more personal preference, but I hate constructors which can possibly throw exceptions. See also the raise ValueError, where we explicitly throw exceptions. That’s just me being picky, though, and it’s not like this constructor will ever get called from anywhere else.

More concretely bad, these kinds of dynamically defined classes can have some… unusual effects in Python. For example, in Python2 (which this is), each call to choose_model will tag the returned instance with the same type, regardless of which base class it used. Since this method might potentially be using a different base class depending on the env passed in, that’s asking for confusion. You can route around these problems, but they’re not doing that here.
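
Here's a minimal Python sketch (not the original code) of the kind of confusion this invites: every call mints a brand-new class, yet they all report the same name, so whichever base class was chosen is invisible to anything that goes by the type's name:

class SlowModel(object):
    pass

class FastModel(object):
    pass

def choose_model(base):
    # Each call defines a brand-new class named "Model" derived from base.
    class Model(base):
        pass
    return Model()

a = choose_model(SlowModel)
b = choose_model(FastModel)
print(type(a).__name__)          # 'Model' -- which base? The name won't tell you
print(type(b).__name__)          # 'Model'
print(type(a) is type(b))        # False   -- yet they are two distinct classes
print(isinstance(a, SlowModel))  # True
print(isinstance(b, SlowModel))  # False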

But far, far more annoying is that the super-class constructor, ModelBase.__init__, isn’t called until the end.

You’ll note that our child class manipulates self.data, and while it’s not pictured here, our base model classes? They also use a property called data, but for a different purpose. So our child class inits a child class property, specifically to build a dictionary of key/value pairs, which it then passes as kwargs, or keyword arguments (the ** operator) to the base class constructor… which then overwrites the self.data our child class was using.

So why do any of that?

Norine changed the code to this simpler, more reliable version, which doesn’t need any metaprogramming or dynamically defined classes:

def choose_model(data, env):
  Model = getattr(import_module(env.modelpath), env.modelname)
  
  if env.data_save is not None:
    with open(env.data_save, "r") as df:
      data = json.load(df)
  elif env.counter != 0:
    raise ValueError('if env.counter > 0 then must use data_save parameter')

  return Model(**data)

Norine adds:

I’m thinking of holding on to the original, and showing it to interviewees like a Rorschach test. What do you see in this? The fragility of a plugin system? The perils of metaprogramming? The hollowness of an overwritten argument? Do you see someone with more cleverness than sense? Or someone intelligent but bored? Or perhaps you see, in the way the superclass init is called, TRWTF: a Python 2 library written within the last 3 years.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Cryptogram 2017 Tesla Hack

Interesting story of a class break against the entire Tesla fleet.

Planet DebianNorbert Preining: KDE/Plasma Status Update 2020-09-03

Yesterday I updated my builds of Plasma for Debian to Plasma 5.19.5, which are now available from the usual sources; otherwise nothing has changed.

[Update 2020-09-03: KDE Apps 20.08.1 are also now available]

On a different front, there are good news concerning updates in Debian proper: Together with Scarlett Moore and Patrick Franz we are in the process of updating the official Debian packages. The first bunch of packages has been uploaded to experimental, and after NEW processing the next group will go there, too. This is still 5.19.4, but a great step forward. I expect that all of Plasma 5.19.4 will be available in experimental in the next weeks, and soon after also in Debian/unstable.

Again, thanks to Scarlett and Patrick for the good collaboration, this is very much appreciated!

Krebs on SecurityThe Joys of Owning an ‘OG’ Email Account

When you own a short email address at a popular email provider, you are bound to get gobs of spam, and more than a few alerts about random people trying to seize control over the account. If your account name is short and desirable enough, this kind of activity can make the account less reliable for day-to-day communications because it tends to bury emails you do want to receive. But there is also a puzzling side to all this noise: Random people tend to use your account as if it were theirs, and often for some fairly sensitive services online.

About 16 years ago — back when you actually had to be invited by an existing Google Mail user in order to open a new Gmail account — I was able to get hold of a very short email address on the service that hadn’t yet been reserved. Naming the address here would only invite more spam and account hijack attempts, but let’s just say the account name has something to do with computer hacking.

Because it’s a relatively short username, it is what’s known as an “OG” or “original gangster” account. These account names tend to be highly prized among certain communities, who busy themselves with trying to hack them for personal use or resale. Hence, the constant account takeover requests.

What is endlessly fascinating is how many people think it’s a good idea to sign up for important accounts online using my email address. Naturally, my account has been signed up involuntarily for nearly every dating and porn website there is. That is to be expected, I suppose.

But what still blows me away is the number of financial and other sensitive accounts I could access if I were of a devious mind. This particular email address has accounts that I never asked for at H&R Block, Turbotax, TaxAct, iTunes, LastPass, Dashlane, MyPCBackup, and Credit Karma, to name just a few. I’ve lost count of the number of active bank, ISP and web hosting accounts I can tap into.

I’m perpetually amazed by how many other Gmail users and people on similarly-sized webmail providers have opted to pick my account as a backup address if they should ever lose access to their inbox. Almost certainly, these users just lazily picked my account name at random when asked for a backup email — apparently without fully realizing the potential ramifications of doing so. At last check, my account is listed as the backup for more than three dozen Yahoo, Microsoft and other Gmail accounts and their associated file-sharing services.

If for some reason I ever needed to order pet food or medications online, my phantom accounts at Chewy, Coupaw and Petco have me covered. If any of my Weber grill parts ever fail, I’m set for life on that front. The Weber emails I periodically receive remind me of a piece I wrote many years ago for The Washington Post, about companies sending email from [companynamehere]@donotreply.com, without considering that someone might own that domain. Someone did, and the results were often hilarious.

It’s probably a good thing I’m not massively into computer games, because the online gaming (and gambling) profiles tied to my old Gmail account are innumerable.

For several years until recently, I was receiving the monthly statements intended for an older gentleman in India who had the bright idea of using my Gmail account to manage his substantial retirement holdings. Thankfully, after reaching out to him he finally removed my address from his profile, although he never responded to questions about how this might have happened.

On balance, I’ve learned it’s better just not to ask. On multiple occasions, I’d spend a few minutes trying to figure out if the email addresses using my Gmail as a backup were created by real people or just spam bots of some sort. And then I’d send a polite note to those that fell into the former camp, explaining why this was a bad idea and ask what motivated them to do so.

Perhaps because my Gmail account name includes a hacking term, the few responses I’ve received have been less than cheerful. Despite my including detailed instructions on how to undo what she’d done, one woman in Florida screamed in an ALL CAPS reply that I was trying to phish her and that her husband was a police officer who would soon hunt me down. Alas, I still get notifications anytime she logs into her Yahoo account.

Probably for the same reason the Florida lady assumed I was a malicious hacker, my account constantly gets requests from random people who wish to hire me to hack into someone else’s account. I never respond to those either, although I’ll admit that sometimes when I’m procrastinating over something the temptation arises.

Losing access to your inbox can open you up to a cascading nightmare of other problems. Having a backup email address tied to your inbox is a good idea, but obviously only if you also control that backup address.

More importantly, make sure you’re availing yourself of the most secure form of multi-factor authentication offered by the provider. These may range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

Maybe you’ve put off enabling multi-factor authentication for your important accounts, and if that describes you, please take a moment to visit twofactorauth.org and see whether you can harden your various accounts.

As I noted in June’s story, Turn on MFA Before Crooks Do It For You, people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Are you in possession of an OG email account? Feel free to sound off in the comments below about some of the more gonzo stuff that winds up in your inbox.


Planet DebianKees Cook: security things in Linux v5.6

Previously: v5.5.

Linux v5.6 was released back in March. Here’s my quick summary of various features that caught my attention:

WireGuard
The widely used WireGuard VPN has been out-of-tree for a very long time. After 3 1/2 years since its initial upstream RFC, Ard Biesheuvel and Jason Donenfeld finished the work getting all the crypto prerequisites sorted out for the v5.5 kernel. For this release, Jason has gotten WireGuard itself landed. It was a twisty road, and I’m grateful to everyone involved for sticking it out and navigating the compromises and alternative solutions.

openat2() syscall and RESOLVE_* flags
Aleksa Sarai has added a number of important path resolution “scoping” options to the kernel’s open() handling, covering things like not walking above a specific point in a path hierarchy (RESOLVE_BENEATH), disabling the resolution of various “magic links” (RESOLVE_NO_MAGICLINKS) in procfs (e.g. /proc/$pid/exe) and other pseudo-filesystems, and treating a given lookup as happening relative to a different root directory (as if it were in a chroot, RESOLVE_IN_ROOT). As part of this, it became clear that there wasn’t a way to correctly extend the existing openat() syscall, so he added openat2() (which is a good example of the efforts being made to codify “Extensible Syscall” arguments). The RESOLVE_* set of flags also cover prior behaviors like RESOLVE_NO_XDEV and RESOLVE_NO_SYMLINKS.
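Since glibc does not provide a wrapper yet, openat2() has to be called through the raw syscall interface. Here is a minimal sketch in Python; the syscall number (437) and the RESOLVE_BENEATH value are the x86_64 definitions from <linux/openat2.h> and are stated here as assumptions for illustration, not as portable constants.

import ctypes
import os

SYS_openat2 = 437        # x86_64 syscall number (assumption for illustration)
RESOLVE_BENEATH = 0x08   # from <linux/openat2.h>

class OpenHow(ctypes.Structure):
    # struct open_how: flags, mode and resolve are all 64-bit fields
    _fields_ = [("flags", ctypes.c_uint64),
                ("mode", ctypes.c_uint64),
                ("resolve", ctypes.c_uint64)]

libc = ctypes.CDLL(None, use_errno=True)

def openat2(dirfd, path, flags, resolve):
    how = OpenHow(flags=flags, mode=0, resolve=resolve)
    fd = libc.syscall(SYS_openat2, dirfd, path.encode(),
                      ctypes.byref(how), ctypes.c_size_t(ctypes.sizeof(how)))
    if fd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return fd

# Open "passwd" relative to /etc while refusing to walk above it;
# absolute paths or ".." escapes are rejected with EXDEV.
dirfd = os.open("/etc", os.O_RDONLY | os.O_DIRECTORY)
fd = openat2(dirfd, "passwd", os.O_RDONLY, RESOLVE_BENEATH)
os.close(fd)
os.close(dirfd)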

pidfd_getfd() syscall
In the continuing growth of the much-needed pidfd APIs, Sargun Dhillon has added the pidfd_getfd() syscall which is a way to gain access to file descriptors of a process in a race-less way (or when /proc is not mounted). Before, it wasn’t always possible to make sure that opening file descriptors via /proc/$pid/fd/$N was actually going to be associated with the correct PID. Much more detail about this has been written up at LWN.
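The same raw-syscall approach works for a quick experiment with pidfd_getfd(). A minimal sketch, assuming the x86_64 syscall numbers (434 for pidfd_open, 438 for pidfd_getfd) and permission to ptrace the target process:

import ctypes
import os

SYS_pidfd_open = 434    # x86_64 (assumption for illustration)
SYS_pidfd_getfd = 438   # x86_64 (assumption for illustration)

libc = ctypes.CDLL(None, use_errno=True)

def grab_fd(pid, target_fd):
    # Obtain a pidfd for the target process, then duplicate one of its
    # file descriptors into our own file descriptor table.
    pidfd = libc.syscall(SYS_pidfd_open, pid, 0)
    if pidfd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    fd = libc.syscall(SYS_pidfd_getfd, pidfd, target_fd, 0)
    err = ctypes.get_errno()
    os.close(pidfd)
    if fd < 0:
        raise OSError(err, os.strerror(err))
    return fd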

openat() via io_uring
With my “attack surface reduction” hat on, I remain personally suspicious of the io_uring() family of APIs, but I can’t deny their utility for certain kinds of workloads. Being able to pipeline reads and writes without the overhead of actually making syscalls is pretty great for performance. Jens Axboe has added the IORING_OP_OPENAT command so that existing io_urings can open files to be added on the fly to the mapping of available read/write targets of a given io_uring. While LSMs are still happily able to intercept these actions, I remain wary of the growing “syscall multiplexer” that io_uring is becoming. I am, of course, glad to see that it has a comprehensive (if “out of tree”) test suite as part of liburing.

removal of blocking random pool
After making algorithmic changes to obviate separate entropy pools for random numbers, Andy Lutomirski removed the blocking random pool. This simplifies the kernel pRNG code significantly without compromising the userspace interfaces designed to fetch “cryptographically secure” random numbers. To quote Andy, “This series should not break any existing programs. /dev/urandom is unchanged. /dev/random will still block just after booting, but it will block less than it used to.” See LWN for more details on the history and discussion of the series.

arm64 support for on-chip RNG
Mark Brown added support for the future ARMv8.5’s RNG (SYS_RNDR_EL0), which is, from the kernel’s perspective, similar to x86’s RDRAND instruction. This will provide a bootloader-independent way to add entropy to the kernel’s pRNG for early boot randomness (e.g. stack canary values, memory ASLR offsets, etc). Until folks are running on ARMv8.5 systems, they can continue to depend on the bootloader for randomness (via the UEFI RNG interface) on arm64.

arm64 E0PD
Mark Brown added support for the future ARMv8.5’s E0PD feature (TCR_E0PD1), which causes all memory accesses from userspace into kernel space to fault in constant time. This is an attempt to remove any possible timing side-channel signals when probing kernel memory layout from userspace, as an alternative way to protect against Meltdown-style attacks. The expectation is that E0PD would be used instead of the more expensive Kernel Page Table Isolation (KPTI) features on arm64.

powerpc32 VMAP_STACK
Christophe Leroy added VMAP_STACK support to powerpc32, joining x86, arm64, and s390. This helps protect against the various classes of attacks that depend on exhausting the kernel stack in order to collide with neighboring kernel stacks. (Another common target, the sensitive thread_info, had already been moved away from the bottom of the stack by Christophe Leroy in Linux v5.1.)

generic Page Table dumping
Related to RISCV’s work to add page table dumping (via /sys/kernel/debug/kernel_page_tables), Steven Price extracted the existing implementations from multiple architectures and created a common page table dumping framework (and then refactored all the other architectures to use it). I’m delighted to have this because I still remember when not having a working page table dumper for ARM delayed me for a while when trying to implement upstream kernel memory protections there. Anything that makes it easier for architectures to get their kernel memory protection working correctly makes me happy.

That’s it for now; let me know if there’s anything you think I missed. Next up: Linux v5.7.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

Planet DebianVincent Bernat: Syncing SSH keys on Cisco IOS-XR with a custom Ansible module

The cisco.iosxr collection from Ansible Galaxy provides an iosxr_user module to manage local users, along with their SSH keys. However, the module is quite slow, does not display a diff for changed SSH keys, never signals a change when a key is modified, and does not delete obsolete keys. Let’s write a custom Ansible module managing only the SSH keys while fixing these issues.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction.

How to add an SSH key to a user

Adding SSH keys to users in Cisco IOS-XR is quite undocumented. First, you need the key in the raw “ssh-rsa” wire format, like an OpenSSH public key, but without the base64 encoding:

$ awk '{print $2}' id_rsa.pub \
    | base64 -d \
    > publickey_vincent.raw

Then, you upload the key with SCP to harddisk:/publickey_vincent.raw and import it for the current user with the following IOS command:

crypto key import authentication rsa harddisk:/publickey_vincent.raw

However, if you want to import a key for another user, you need to be part of the root-system group:

username vincent
 group root-lr
 group root-system

With the following admin command, you can attach a key to another user:

admin crypto key import authentication rsa username cedric harddisk:/publickey_cedric.raw

Code

The module has the following signature. It installs the specified key for each user and removes keys from retired users—the ones we do not specify.

iosxr_users:
  keys:
    vincent: ssh-rsa AAAAB3NzaC1yc2EAA[…]ymh+YrVWLZMJR
    cedric:  ssh-rsa AAAAB3NzaC1yc2EAA[…]RShPA8w/8eC0n

Prerequisites

Unlike the iosxr_user module, our custom module only handles SSH keys, one per user. Therefore, the user definitions have to already exist in the running configuration.1 Moreover, the user defined in ansible_user needs to be in the root-system group. The cisco.iosxr collection must also be installed as the module relies on its code.

When running the module, ansible_connection needs to be set to network_cli and ansible_network_os to iosxr. These variables are usually defined in the inventory.
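For reference, a minimal sketch of what this can look like in a YAML inventory; the host name is only a placeholder:

all:
  hosts:
    rtr1.example.com:
      ansible_connection: network_cli
      ansible_network_os: iosxr
      ansible_user: vincent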

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    keys=dict(type='dict', elements='str', required=True),
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

result = dict(
    changed=False
)

Getting the installed keys

The next step is to retrieve the keys currently installed. This can be done with the following command:

# show crypto key authentication rsa all
Key label: vincent
Type     : RSA public key authentication
Size     : 2048
Imported : 16:17:08 UTC Tue Aug 11 2020
Data     :
 30820122 300D0609 2A864886 F70D0101 01050003 82010F00 3082010A 02820101
 00D81E5B A73D82F3 77B1E4B5 949FB245 60FB9167 7CD03AB7 ADDE7AFE A0B83174
 A33EC0E6 1C887E02 2338367A 8A1DB0CE 0C3FBC51 15723AEB 07F301A4 B1A9961A
 2D00DBBD 2ABFC831 B0B25932 05B3BC30 B9514EA1 3DC22CBD DDCA6F02 026DBBB6
 EE3CFADA AFA86F52 CAE7620D 17C3582B 4422D24F D68698A5 52ED1E9E 8E41F062
 7DE81015 F33AD486 C14D0BB1 68C65259 F9FD8A37 8DE52ED0 7B36E005 8C58516B
 7EA6C29A EEE0833B 42714618 50B3FFAC 15DBE3EF 8DA5D337 68DAECB9 904DE520
 2D627CEA 67E6434F E974CF6D 952AB2AB F074FBA3 3FB9B9CC A0CD0ADC 6E0CDB2A
 6A1CFEBA E97AF5A9 1FE41F6C 92E1F522 673E1A5F 69C68E11 4A13C0F3 0FFC782D
 27020301 0001

[…]

ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr contains a run_commands() function we can use:

command = "show crypto key authentication rsa all"
out = run_commands(module, command)
out = out[0].replace(' \n', '\n')

A common library to parse a command output is textfsm: a Python module using a template-based state machine for parsing semi-formatted text.

template = r"""
Value Required Label (\w+)
Value Required,List Data ([A-F0-9 ]+)

Start
 ^Key label: ${Label}
 ^Data\s+: -> GetData

GetData
 ^ ${Data}
 ^$$ -> Record Start
""".lstrip()

re_table = textfsm.TextFSM(io.StringIO(template))
got = {data[0]: "".join(data[1]).replace(' ', '')
       for data in re_table.ParseText(out)}

got is a dictionary associating key labels, considered as usernames, with a hexadecimal representation of the public key currently installed. It looks like this:

>>> pprint(got)
{'alfred': '30820122300D0609[…]6F0203010001',
 'cedric': '30820122300D0609[…]710203010001',
 'vincent': '30820122300D0609[…]270203010001'}

Comparing with the wanted keys

Let’s now build the wanted dictionary using the same structure. In module.params['keys'], we have a dictionary associating usernames to public SSH keys in the OpenSSH format:

>>> pprint(module.params['keys'])
{'cedric': 'ssh-rsa AAAAB3NzaC1yc2[…]',
 'vincent': 'ssh-rsa AAAAB3NzaC1yc2[…]'}

We need to convert these keys in the same hexadecimal representation used by Cisco above. The ssh-keygen command and some glue can do the conversion:2

$ ssh-keygen -f id_rsa.pub -e -mPKCS8 \
   | grep -v '^---' \
   | base64 -d \
   | hexdump -e '4/1 "%0.2X"'
30820122300D06092[…]782D270203010001

Assuming we have a ssh2cisco() function doing that, we can build the wanted dictionary:

wanted = {k: ssh2cisco(v)
          for k, v in module.params['keys'].items()}
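The article keeps ssh2cisco() out of the text; here is one possible sketch, not necessarily the implementation used in the repository, which simply mirrors the ssh-keygen pipeline above with subprocess and assumes a Linux host where ssh-keygen can read the key from /dev/stdin:

import base64
import subprocess

def ssh2cisco(sshkey):
    """Convert an OpenSSH public key to the hexadecimal form
    displayed by 'show crypto key authentication rsa all'."""
    proc = subprocess.run(
        ["ssh-keygen", "-f", "/dev/stdin", "-e", "-mPKCS8"],
        input=sshkey.encode(),
        capture_output=True,
        check=True)
    # Drop the PEM armor lines and decode the base64 body.
    body = "".join(line for line in proc.stdout.decode().splitlines()
                   if not line.startswith("---"))
    return base64.b64decode(body).hex().upper()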

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between got and wanted when not running with check mode. The part comparing got and wanted is taken verbatim from the skeleton module:

if got != wanted:
    result['changed'] = True
    result['diff'] = dict(
        before=yaml.safe_dump(got),
        after=yaml.safe_dump(wanted)
    )

if module.check_mode or not result['changed']:
    module.exit_json(**result)

Let’s copy the new or changed keys and attach them to their respective users. For this purpose, we reuse the get_connection() and copy_file() functions from ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr.

conn = get_connection(module)
for user in wanted:
    if user not in got or wanted[user] != got[user]:
        dst = f"/harddisk:/publickey_{user}.raw"
        with tempfile.NamedTemporaryFile() as src:
            decoded = base64.b64decode(
                module.params['keys'][user].split()[1])
            src.write(decoded)
            src.flush()
            copy_file(module, src.name, dst)
    command = ("admin crypto key import authentication rsa "
               f"username {user} {dst}")
    conn.send_command(command, prompt="yes/no", answer="yes")

Then, we remove obsolete keys:

for user in got:
    if user not in wanted:
        command = ("admin crypto key zeroize authentication rsa "
                   f"username {user}")
        conn.send_command(command, prompt="yes/no", answer="yes")

The complete code is available on GitHub. Compared to the iosxr_user module, this one displays a diff when running with --diff, correctly signals a change, is faster,3 and deletes unwanted SSH keys. However, it is unable to create users and cannot configure passwords or multiple SSH keys.


  1. In our environment, the Ansible playbook pushes a full configuration, including the user definitions. Then, it synchronizes the SSH keys. ↩︎

  2. Despite the argument provided to ssh-keygen, the format used by Cisco is not PKCS#8. This is the ASN.1 representation of a Subject Public Key Info structure, as defined in RFC 2459. Moreover, PKCS#8 is a format for a private key, not a public one. ↩︎

  3. The main factors for being faster are:

    • not creating users, and
    • not reuploading existing SSH keys.

    ↩︎

Planet DebianVincent Bernat: Syncing MySQL tables with a custom Ansible module

The community.mysql collection from Ansible Galaxy provides a mysql_query module to run arbitrary MySQL queries. Unfortunately, it supports neither check mode nor the --diff flag. It is also unable to tell if there was a change. Let’s write a specific Ansible module to work around these issues.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction.

Code

The module has the following signature and it executes the provided SQL statements in a single transaction. It needs a list of the affected tables to be able to detect and show the changes.

mysql_sync:
  sql: |
    DELETE FROM rules WHERE name LIKE 'CMDB:%';
    INSERT INTO rules (name, rule) VALUES
      ('CMDB: check for cats', ':is(object, "CAT")'),
      ('CMDB: check for dogs', ':is(object, "DOG")');
    REPLACE INTO webhooks (name, url) VALUES
      ('OpsGenie', 'https://opsgenie/something/token'),
      ('Slack', 'https://slack/something/token');
  user: monitoring
  password: Yooghah5
  database: monitoring
  tables:
    - rules
    - webhooks

Prerequisites

The module does not enforce idempotency, but it is expected you provide appropriate SQL queries. In the above example, idempotency is achieved because the content of the rules table is deleted and recreated from scratch while the rows in the webhooks table are replaced if they already exist.

You need the PyMySQL package.

Module definition

Starting from the skeleton described in the previous article, here is the module definition:

module_args = dict(
    sql=dict(type='str', required=True),
    user=dict(type='str', required=True),
    password=dict(type='str', required=True, no_log=True),
    database=dict(type='str', required=True),
    tables=dict(type='list', required=True, elements='str'),
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

The password is marked with no_log to ensure it won’t be displayed or stored, notably when ansible-playbook runs in verbose mode. There is no host option as the module is executed on the MySQL host. Strong authentication using certificates is not implemented either. This matches our goal with custom modules: only implement what you strictly need.

Getting the current rows

The next step is to retrieve the records currently in the database. The got dictionary is a mapping from table names to the list of rows they contain:

got = {}
tables = module.params['tables']

connection = pymysql.connect(
    user=module.params['user'],
    password=module.params['password'],
    db=module.params['database'],
    charset='utf8mb4',
    cursorclass=pymysql.cursors.DictCursor
)

with connection.cursor() as cursor:
    for table in tables:
        cursor.execute("SELECT * FROM {}".format(table))
        got[table] = cursor.fetchall()

Computing the changes

Let’s now build the wanted dictionary. The trick is to execute the SQL statements in a transaction without issuing a final commit. The changes will be invisible1 to other readers and we can compare the final rows with the rows collected in got:

wanted = {}
sql = module.params['sql']
statements = [statement.strip()
              for statement in sql.split(";\n")
              if statement.strip()]

with connection.cursor() as cursor:
    for statement in statements:
        try:
            cursor.execute(statement)
        except pymysql.OperationalError as err:
            code, message = err.args
            result['msg'] = "MySQL error for {}: {}".format(
                statement,
                message)
            module.fail_json(**result)
    for table in tables:
        cursor.execute("SELECT * FROM {}".format(table))
        wanted[table] = cursor.fetchall()

The first for loop executes each statement. On error, we return a helpful message containing the faulty one. The second for loop records the final rows of each table in wanted.

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between got and wanted when not running with check mode. The diff object is a bit more elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

if got != wanted:
    result['changed'] = True
    result['diff'] = [dict(
        before_header=table,
        after_header=table,
        before=yaml.safe_dump(got[table]),
        after=yaml.safe_dump(wanted[table]))
                      for table in tables
                      if got[table] != wanted[table]]

if module.check_mode or not result['changed']:
    module.exit_json(**result)

Applying the changes is quite trivial: just commit them! Otherwise, they are lost when the module exits.

connection.commit()

The complete code is available on GitHub. Compared to the mysql_query module, this one supports the check mode, signals correctly if there is a change and displays the differences. However, it should not be used with huge tables, as it would try to load them in memory.


  1. The tables need to use the InnoDB storage engine. Moreover, MySQL does not know how to use transactions with DDL statements: do not modify table definitions! ↩︎

Planet DebianVincent Bernat: Writing a custom Ansible module

Ansible ships a lot of modules you can combine for your configuration management needs. However, the quality of these modules may vary widely. Sometimes, it may be quicker and more robust to write your own module instead of shopping and assembling existing ones.1

In my opinion, a robust module exhibits the following characteristics:

  • idempotency,
  • diff support,
  • check mode compatibility,
  • correct change signaling, and
  • lifecycle management.

In a nutshell, it means the module can run with --diff --check and shows the changes it would apply. When run twice in a row, the second run won’t apply or signal changes. The last bullet point suggests the module should be able to delete outdated objects configured during previous runs.2

The module code should be minimal and tailored to your needs. Making the module generic for use by other users is a non-goal. Less code usually means fewer bugs and is easier to understand.

I do not cover testing here. It is undeniably a good practice, but it requires a significant effort. In my opinion, it is preferable to have a well written module matching the above characteristics rather than a module that is well tested but without them or a module requiring further (untested) assembly to meet your needs.

Module skeleton

Ansible documentation contains instructions to build a module, along with some best practices. As one of our non-goal is to distribute it, we choose to take some shortcuts and skip some of the boilerplate. Let’s assume we build a module with the following signature:

custom_module:
  user: someone
  password: something
  data: "some random string"

There are various locations you can put a module in Ansible. A common possibility is to include it into a role. In a library/ subdirectory, create an empty __init__.py file and a custom_module.py file with the following code:3

#!/usr/bin/python

import yaml
from ansible.module_utils.basic import AnsibleModule


def main():
    # Define options accepted by the module. ❶
    module_args = dict(
        user=dict(type='str', required=True),
        password=dict(type='str', required=True, no_log=True),
        data=dict(type='str', required=True),
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    result = dict(
        changed=False
    )

    got = {}
    wanted = {}

    # Populate both `got` and `wanted`. ❷
    # [...]

    if got != wanted:
        result['changed'] = True
        result['diff'] = dict(
            before=yaml.safe_dump(got),
            after=yaml.safe_dump(wanted)
        )

    if module.check_mode or not result['changed']:
        module.exit_json(**result)

    # Apply changes. ❸
    # [...]

    module.exit_json(**result)


if __name__ == '__main__':
    main()

The first part, in ❶, defines the module, with the accepted options. Refer to the documentation on argument_spec for more details.

The second part, in ❷, builds the got and wanted variables. got is the current state while wanted is the target state. For example, if you need to modify records in a database server, got would be the current rows while wanted would be the modified rows. Then, we compare got and wanted. If there is a difference, changed is switched to True and we prepare the diff object. Ansible uses it to display the differences between the states. If we are running in check mode or if no change is detected, we stop here.

The last part, in ❸, applies the changes. Usually, it means iterating over the two structures to detect the differences and create the missing items, delete the unwanted ones and update the existing ones.
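As a purely illustrative sketch of ❸, with hypothetical create_item(), update_item() and delete_item() helpers standing in for whatever API calls the target system needs:

for name, value in wanted.items():
    if name not in got:
        create_item(name, value)        # hypothetical helper
    elif got[name] != value:
        update_item(name, value)        # hypothetical helper
for name in got:
    if name not in wanted:
        delete_item(name)               # hypothetical helper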

Documentation

Ansible provides a fairly complete page on how to document a module. I advise you to take a more minimal approach by only documenting each option sparingly,4 skipping the examples and only documenting return values if it needs to. I usually limit myself to something like this:

DOCUMENTATION = """
---
module: custom_module.py
short_description: Pass provided data to remote service
description:
  - Mention anything useful for your workmate.
  - Also mention anything you want to remember in 6 months.
options:
  user:
    description:
      - user to identify to remote service
  password:
    description:
      - password for authentication to remote service
  data:
    description:
      - data to send to remote service
"""

Error handling

If you run into an error, you can stop the execution with module.fail_json():

module.fail_json(
    msg=f"remote service answered with {code}: {message}",
    **result
)

There is no requirement to intercept all errors. Sometimes, not swallowing an exception provides better information than replacing it with a generic message.

Returning additional values

A module may return additional information that can be captured to be used in another task through the register directive. For this purpose, you can add arbitrary fields to the result dictionary. Have a look at the documentation for common return values. You should try to add these fields before exiting the module when in check mode. The returned values can be documented.
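A small sketch, with made-up field names, of what this can look like just before exiting:

result['server_version'] = "5.7.30"   # hypothetical example value
result['rows_changed'] = 42           # hypothetical example value
module.exit_json(**result)

A task using register: custom_result could then read custom_result.rows_changed in a later task.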

Examples

Here are several examples of custom modules following the previous skeleton. Each example highlights why a custom module was written instead of assembling existing modules. ⚙️


  1. Also, when using modules from Ansible Galaxy, you introduce a dependency on a third party. This is not something that should be decided lightly: it may break later, it may only meet 80% of the needs, it may add bugs. ↩︎

  2. Some declarative systems, like Terraform, exhibit all these behaviors. ↩︎

  3. Do not worry about the shebang. It is hardcoded to /usr/bin/python. Ansible will modify it to match the chosen interpreter on the remote host. You can write Python 3 code if ansible_python_interpreter evaluates to a Python 3 interpreter. ↩︎

  4. The main issue I have with this non-programmatic approach to documentation is that it partly repeats the information contained in argument_spec. I think an auto-documenting structure would avoid this. ↩︎

Planet DebianElana Hashman: My term at the Open Source Initiative thus far

When I ran for the OSI board in early 2019, I set three goals for myself:

  • Grow the OSI's membership, and build a more representative organization.
  • Defend the Open Source Definition and FOSS commons.
  • Define the future of open source, as part of the larger community.

Now that the OSI has announced hiring an interim General Manager, I thought it would be a good time to publicly reflect on what I've accomplished and what I'd like to see next.

As I promised in my campaign pitch, I aim to be publicly accountable :)

Growing the OSI's membership

I have served as our Membership Committee Chair since the May 2019 board meeting, tasked with devising and supervising strategy to increase membership and deliver value to members.

As part of my election campaign last year, I signed up over 50 new individual members. Since May 2019, we've seen strong 33% growth in individual members, reaching a new all-time high of over 600 (638 when I last checked).

I see the OSI as a relatively neutral organization that occupies a unique position to build bridges among organizations within the FOSS ecosystem. In order to facilitate this, we need a representative membership, and we need to engage those members and provide forums for cross-pollination. As Membership Committee Chair, I have been running quarterly video calls on Jitsi for our affiliate members, where we can share updates between many global organizations and discuss challenges we all face.

But it's not enough just to hold the discussion; we also need to bring fresh new voices into the conversation. Since I've joined the board, I'm thrilled to say that 16 new affiliate members joined (in chronological order) for a total of 81:

I was also excited to run a survey of the OSI's individual and affiliate membership to help inform the future of the organization that received 58 long-form responses. The survey has been accepted by the board at our August meeting and should be released publicly soon!

Defending the Open Source Definition

When I joined the board, the first committee I joined was the License Committee, which is responsible for running the licence review process, making recommendations on new licenses, and maintaining our existing licenses.

Over the past year, under Pamela Chestek's leadership as Chair, the full board has approved the following licenses (with SPDX identifiers in brackets) on the recommendation of the License Committee:

We withheld approval of the following licenses:

I've also worked to define the scope of work for hiring someone to improve our license review process, which we have an open RFP for!

Chopping wood and carrying water

I joined the OSI with the goal of improving an organization I didn't think was performing up to its potential. Its membership and board were not representative of the wider open source community, its messaging felt outdated, and it seemed to be failing to rise to today's challenges for FOSS.

But before one can rise to meet these challenges, you need a strong foundation. The OSI needed the organizational structure, health, and governance in order to address such questions. Completing that work is essential, but not exactly glamorous—and it's a place where I thrive. Honestly, I don't (yet?) want to be the public face of the organization, and I apologize to those who've missed me at events like FOSDEM.

I want to talk a little about some of my behind-the-scenes activities that I've completed as part of my board service:

All of this work is intended to improve the organization's health and provide it with an excellent foundation for its mission.

Defining the future of open source

Soon after I was elected to the board, I gave a talk at Brooklyn.js entitled "The Future of Open Source." In this presentation, I pondered about the history and future of the free and open source software movement, and the ethical questions we must face.

In my election campaign, I wrote "Software licenses are a means, not an end, to open source software. Focusing on licensing is necessary but not sufficient to ensure a vibrant, thriving open source community. Focus on licensing to the exclusion of other serious community concerns is to our collective detriment."

My primary goal for my first term on the board was to ensure the OSI would be positioned to answer wider questions about the open source community and its future beyond licenses. Over the past two months, I supported Megan Byrd-Sanicki's suggestion to hold (and then participated in, with the rest of the board) organizational strategy sessions to facilitate our long-term planning. My contribution to help inform these sessions was providing the member survey on behalf of the Membership Committee.

Now, I think we are much better equipped to face the hard questions we'll have to tackle. In my opinion, the Open Source Initiative is better positioned than ever to answer them, and I can't wait to see what the future brings.

Hope to see you at our first State of the Source conference next week!

Cryptogram Insider Attack on the Carnegie Library

Greg Priore, the person in charge of the rare book room at the Carnegie Library, stole from it for almost two decades before getting caught.

It’s a perennial problem: trusted insiders have to be trusted.

Worse Than FailureBidirectional

Trung worked for a Microsoft and .NET framework shop that used AutoMapper to simplify object mapping between tiers. Their application's mapping configuration was performed at startup, as in the following C# snippet:

public void Configure(ConfigurationContext context)
{
    AutoMapper.Mapper.CreateMap<X, Y>().AfterMap(Map);
    ...
}

where the AfterMap() method's Map delegate was to map discrepancies that AutoMapper couldn't.

One day, a senior dev named Van approached Trung for help. He was repeatedly getting AutoMapper's "Missing type map configuration or unsupported mapping. Mapping types Y -> X ..." error.

Trung frowned a little, wondering what was mysterious about this problem. "You're ... probably missing mapping configuration for Y to X," he said.

"No, I'm not!" Van pointed to his monitor, at the same code snippet above.

Trung shook his head. "That mapping is one-way, from X to Y only. You can create the reverse mapping by using the Bidirectional() extension method. Here ..." He leaned over to type in the addition:

AutoMapper.Mapper.CreateMap<X, Y>()
    .AfterMap(Map)
    .Bidirectional();

This resolved Van's error. Both men returned to their usual business.

A few weeks later, Van approached Trung again, this time needing help with refactoring due to a base library change. While they huddled over Van's computer and dug through compilation errors, Trung kept seeing strange code within multiple AfterMap() delegates:

void Map(X src, Y desc)
{
    desc.QueueId = src.Queue.Id;
    src.Queue = Queue.GetById(desc.QueueId);
    ...
}

"Wait a minute!" Trung reached for the mouse to highlight two such lines and asked, "Why is this here?"

"The mapping is supposed to be bidirectional! Remember?" Van replied. "I’m copying from X to Y, then from Y to X."

Trung resisted the urge to clap a hand to his forehead or mutter something about CS101 and variable-swapping—not that this "swap" was necessary. "You realize you'd have nothing but X after doing that?"

The quizzical look on the senior developer's face assured Trung that Van hadn't realized any such thing.

Trung could only sigh and help Van trudge through the delegates he'd "fixed," working out a better mapping procedure for each.


Planet DebianJunichi Uekawa: Updated page to record how-to video.

Updated page to record how-to video. I don't need a big camera image, just a tiny image with probably my face on it. As a start, I tried extracting a rectangle.

Planet DebianNorbert Preining: Multiple GPUs for graphics and deep learning

For a long time I have been using a good old nvidia GeForce GTX 1050 for my display and deep learning needs. I reported a few times how to get Tensorflow running on Debian/Sid, see here and here. Later on I switched to an AMD GPU in the hope that an open source approach to both the GPU driver and deep learning (ROCm) would improve the general experience. Unfortunately it turned out that AMD GPUs are generally not ready for deep learning usage.

The problems with AMD and ROCm are far and wide. First of all, it seems that for anything more complicated than simple stuff, AMD’s flagship RX 5700(XT) and all GFX10 (Navi) based cards are not(!!!) supported in ROCm. Yes, you read that right … AMD does not support 5700(XT) cards in the ROCm stack. Some simple stuff works, but nothing for real computations.

Then, even IF they did support them, ROCm as distributed is currently a huge pain in the butt. The source code is a huge mess, and building usable packages from it is probably possible, but quite painful (I am a member of the ROCm packaging team in Debian, and have tried for many hours). And the packages provided by AMD are not installable on Debian/sid due to library incompatibilities.

So that left me with a bit of a problem: for work I need to train quite a few neural networks, do model selection, etc. Doing this on a CPU is a bit of a burden. So in the end I decided to put the nVidia card back into the computer (well, after moving it to a bigger case – but that is a different story to tell). Here are the steps I took to get both cards working for their respective targets: the AMD GPU for driving the console and X (and games!), and the nVidia card doing the deep learning stuff (tensorflow using the GPU).

Starting point

The starting point was a working AMD GPU installation. The AMD GPU is also the first GPU card (top slot) and thus the one that is used by the BIOS and the Linux console. If you want the video output on the second card you need to resort to tricks, and you probably lose console output, etc. So not a solution for me.

Installing libcuda1 and the nvidia kernel drivers

Next step was installing the libcuda1 package:

apt install libcuda1

This installs a lot of stuff, including the nvidia drivers, GLX libraries, alternatives setup, and update-glx tool and package.

The kernel module should be built and installed automatically for your kernel.

Installing CUDA

Follow more or less the instructions here and do

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-1

Warning! At the moment Tensorflow packages require CUDA 10.1, so don’t install the 10.0 version. This might change in the future!

This will install lots of libs into /usr/local/cuda-10.1 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-1.conf.

Install CUDA CuDNN

One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 10.1. To download these files one needs to have an NVIDIA developer account, which is quick and painless to create. After that go to the CuDNN page where one needs to select Archived releases and then Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.1 and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

At the moment (as of today) this will download a file libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb.

Updating the GLX setting

Here now comes the very interesting part – one needs to set up the GLX libraries. Reading the output of update-glx --help and then the output of update-glx --list glx:

$ update-glx --help
update-glx is a wrapper around update-alternatives supporting only configuration
of the 'glx' and 'nvidia' alternatives. After updating the alternatives, it
takes care to trigger any follow-up actions that may be required to complete
the switch.
 
It can be used to switch between the main NVIDIA driver version and the legacy
drivers (eg: the 304 series, the 340 series, etc).
 
For users with Optimus-type laptops it can be used to enable running the discrete
GPU via bumblebee.
 
Usage: update-glx <command>
 
Commands:
  --auto <name>            switch the master link <name> to automatic mode.
  --display <name>         display information about the <name> group.
  --query <name>           machine parseable version of --display <name>.
  --list <name>            display all targets of the <name> group.
  --config <name>          show alternatives for the <name> group and ask the
                           user to select which one to use.
  --set <name> <path>      set <path> as alternative for <name>.
 
<name> is the master name for this link group.
  Only 'nvidia' and 'glx' are supported.
<path> is the location of one of the alternative target files.
  (e.g. /usr/lib/nvidia)
 
$ update-glx --list glx
/usr/lib/mesa-diverted
/usr/lib/nvidia

I was tempted into using

update-glx --config glx /usr/lib/mesa-diverted

because at the end the Mesa GLX libraries should be used to drive the display via the AMD GPU.

Unfortunately, with this the nvidia kernel module was not loaded, nvidia-persistenced couldn’t run because the library libnvidia-cfg1 wasn’t found (not sure it was needed at all…), and with that there was also no way to run tensorflow on the GPU.

So what I did was try

update-glx --auto glx

(which is the same as update-glx --config glx /usr/lib/nvidia), and rebooted, and decided to check afterwards what is broken.

To my big surprise, the AMD GPU still worked out of the box, including direct rendering, and the games I tried (Overload, Supraland via Wine) all worked without a hitch.

Not that I really understand why the GLX libraries that are seemingly now in use are from nvidia but work the same (if anyone has an explanation, that would be great!), but since I haven’t had any problems till now, I am content.

Checking GPU usage in tensorflow

Make sure that you remove tensorflow-rocm and reinstall tensorflow with GPU support:

pip3 uninstall tensorflow-rocm
pip3 install --upgrade tensorflow-gpu

After that a simple

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2020-09-02 11:57:04.673096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3581 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
tf.Tensor(1093.4915, shape=(), dtype=float32)
$

should indicate that the GPU is used by tensorflow!
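Another quick check, assuming TensorFlow 2.1 or later where tf.config.list_physical_devices() is available, is to ask TensorFlow directly which GPUs it can see; a non-empty list means the nVidia card was picked up:

$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"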

The R Keras package should also work out of the box and pick up the system-wide tensorflow which in turn picks the GPU, see this post for example code to run for tests.

Conclusion

All in all it was easier than expected, despite the dances one has to do for nvidia to get the correct libraries. What still puzzles me is the selection option in update-glx, and there might need to be better support for secondary nvidia GPU cards.


Cory DoctorowGet Radicalized for a mere $2.99

The ebook of my 2019 book RADICALIZED — finalist for the Canada Reads award, LA Library book of the year, etc — is on sale today for $2.99 on all major platforms!

Books

There are a lot of ways to get radicalized in 2020, but this is arguably the cheapest.

Planet DebianUtkarsh Gupta: FOSS Activites in August 2020

Here’s my (eleventh) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 20th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Well, this month we had DebConf! \o/
(more about this later this week!)

Anyway, here are the following things I did in Debian this month:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored php-dasprid-enum and php-bacon-baconqrcode for William and ruby-unparser, ruby-morpher, and ruby-path-exapander for Cocoa.

Goodbye GSoC! \o/

In May, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The other 5 blogs can be found here:

Also, I log daily updates at gsocwithutkarsh2102.tk.

Since this is a wrap and whilst the daily updates are already available at the above site^, I’ll quickly mention the important points and links here.


Whilst working on Rubocop::Packaging, I contributed to more Ruby projects, refactoring their library a little bit and mostly fixing RuboCop issues and fixing issues that the Packaging extension reports as “offensive”.
Following are the PRs that I raised:


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my eleventh month as a Debian LTS and my second as a Debian ELTS paid contributor.
I was assigned 21.75 hours for LTS and 14.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 255-1, fixing CVE-2020-14344, for libx11.
    For Debian 8 Jessie, these problems have been fixed in version 2:1.6.2-3+deb8u3.
  • Issued ELA 259-1, fixing CVE-2020-10177, for pillow.
    For Debian 8 Jessie, these problems have been fixed in version 2.6.1-2+deb8u5.
  • Issued ELA 269-1, fixing CVE-2020-11985, for apache2.
    For Debian 8 Jessie, these problems have been fixed in version 2.4.10-10+deb8u17.
  • Started working on the clamAV update; it’s a major bump from v0.101.5 to v0.102.4. There were lots of moving parts. Contacted upstream maintainers to help reduce the risk of regression. Came up with a patch to loosen the libcurl version requirement. Hopefully, the update can be rolled out soon!

Other (E)LTS Work:

  • I spent an additional 11.15 hours working on compiling the responses of the LTS survey and preparing a gist of it for its presentation during the Debian LTS BoF at DebConf20.
  • Triaged qemu, pillow, gupnp, clamav, apache2, and uwsgi.
  • Marked CVE-2020-11538/pillow as not-affected for Stretch.
  • Marked CVE-2020-11984/apache2 as not-affected for Stretch.
  • Marked CVE-2020-10378/pillow as not-affected for Jessie.
  • Marked CVE-2020-11538/pillow as not-affected for Jessie.
  • Marked CVE-2020-3481/clamav as not-affected for Jessie.
  • Marked CVE-2020-11984/apache2 as not-affected for Jessie.
  • Marked CVE-2020-{9490,11993}/apache2 as not-affected for Jessie.
  • Hosted Debian LTS BoF at DebConf20. Recording here.
  • General discussion on LTS private and public mailing list.

Until next time.
:wq for today.

Planet DebianRussell Coker: BBB vs Jitsi

I previously wrote about how I installed the Jitsi video-conferencing system on Debian [1]. We used that for a few unofficial meetings of LUV to test it out. Then we installed Big Blue Button (BBB) [2]. The main benefit of Jitsi over BBB is that it supports live streaming to YouTube. The benefits of BBB are a better text chat system and a “whiteboard” that allows conference participants to draw shared diagrams. So if you have the ability to run both systems then it’s best to use Jitsi when you have so many viewers that a YouTube live stream is needed and to use BBB in all other situations.

One problem is with the ability to run both systems. Jitsi isn’t too hard to install if you are installing it on a VM that is not used for anything else. BBB is a major pain no matter what you do. The latest version of BBB is 2.2 which was released in March 2020 and requires Ubuntu 16.04 (which was released in 2016 and has “standard support” until April next year) and doesn’t support Ubuntu 18.04 (released in 2018 and has “standard support” until 2023). The install script doesn’t check for correct apt repositories and breaks badly with no explanation if you don’t have Ubuntu Multiverse enabled.

I expect that they rushed a release because of the significant increase in demand for video conferencing this year. But that’s no reason for demanding the 2016 version of Ubuntu, why couldn’t they have developed on version 18.04 for the last 2 years? Since that release they have had 6 months in which they could have released a 2.2.1 version supporting Ubuntu 18.04 or even 20.04.

The dependency list for BBB is significant; among other things, it uses LibreOffice for the whiteboard. This adds to the pain of installing and maintaining it. It wouldn’t surprise me if some of the interactions between all the different components have security issues.

Conclusion

If you want something that’s not really painful to install and run then use Jitsi.

If you need YouTube live streaming use Jitsi.

If you need whiteboards and a good text chat system or if you generally need to run things like a classroom then BBB is a good option. But only if you can manage it, know someone who can manage it for you, or are happy to pay for a managed service provider to do it for you.

Planet DebianSylvain Beucler: Debian LTS and ELTS - August 2020

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of my 30 max; all done) and 14.25h for ELTS (out of my 20 max; all done).

We had a Birds of a Feather videoconf session at DebConf20, sadly with varying quality for participants (from very good to unusable), where we shared the first results of the LTS survey.

There were also discussions about evaluating our security reactivity, which proved surprisingly hard to estimate (neither CVE release dates nor criticality metrics are accurate or easily available), and about when it is appropriate to use public naming in procedures.

Interestingly ELTS gained new supported packages, thanks to a new sponsor -- so far I'd seen the opposite, because we were close to the EOL.

As always, there were opportunities to de-dup work through mutual cooperation with the Debian Security team, and LTS/ELTS similar updates.

ELTS - Jessie

  • Fresh build VMs
  • rails/redmine: investigate issue, initially no-action as it can't be reproduced on Stretch and isn't supported in Jessie; follow-up when it's supported again
  • ghostscript: global triage: identify upstream fixed version, distinguish CVEs fixed within a single patch, bisect non-reproducible CVEs, reference missing commit (including at MITRE)
  • ghostscript: fix 25 CVEs, security upload ELA-262-1
  • ghostscript: cross-check against the later DSA-4748-1 (almost identical)
  • software-properties: jessie triage: mark back for update, at least for consistency with Debian Stretch and Ubuntu (all suites)
  • software-properties: security upload ELA-266-1
  • qemu: global triage: update status and patch/regression/reproducer links for 6 pending CVEs
  • qemu: jessie triage: fix 4 'unknown' lines for qemu following changes in package attribution for XSA-297, work continue in September

LTS - Stretch

  • sane-backends: global triage: sort and link patches for 7 CVEs
  • sane-backends: fix dep-8 test and notify the maintainer,
  • sane-backends: security upload DLA-2332-1
  • ghostscript: security upload DLA 2335-1 (cf. common ELTS work)
  • ghostscript: rebuild ("give back") on armhf, blame armhf, get told it was a concurrency / build system issue -_-'
  • software-properties: security upload DLA 2339-1 (cf. common ELTS work)
  • wordpress: global triage: reference regression for CVE-2020-4050
  • wordpress: stretch triage: update past CVE status, work continues in September with probably an upstream upgrade 4.7.5 -> 4.7.18
  • nginx: cross-check my July update against the later DSA-4750-1 (same fix)
  • DebConf BoF + IRC follow-up

Documentation/Scripts

Kevin Rudd2GB: Morrison’s Retirement Rip-Off

E&OE TRANSCRIPT
RADIO INTERVIEW
BEN FORDHAM LIVE
2GB, SYDNEY

Topics: Superannuation; Cheng Lei consular matter

Ben Fordham
Now, superannuation to increase or not? Right now 9.5% of your wage goes towards super, and it sits there until you retire. From July 1 next year, the compulsory rate is going up. It will climb by half a per cent every year until it hits 12% in 2025. So it’s slowly going from 9.5% to 12%. Now that was legislated long before Coronavirus. Now we are in recession, and the government is hinting strongly that it’s ready to dump or delay the policy to increase super contributions. Now I reckon this is a genuine barbecue stopper. It’s not a question of Labor versus Liberal or left versus right. Some want their money now to help them out of hardship. Others say no, we have super for a reason and that is to save for the future. Former Prime Minister Kevin Rudd has got a strong view on this and he joins us on the line. Kevin Rudd, good morning to you.

Kevin Rudd
Morning, Ben. Thanks for having me on the program.

Ben Fordham
No problem. You want Scott Morrison and Josh Frydenberg to leave super alone.

Kevin Rudd
That’s right. And Mr Morrison did promise to maintain this policy which we brought in when he went to the people at the last election. And remember, Ben, back in 2014, they already deferred this for five years. Otherwise, this thing would be done and dusted and it’d be all the way up to 12 by now. I’m just worried we’re going to find one excuse after another to kick this into the Never Never Land. And the result is that working families, those people listening to your program this morning, are not going to have a decent nest egg for their retirement.

Ben Fordham
All right, most of those hard-working Aussies are telling me that they would like the option of having the money right now.

Kevin Rudd
Well, the problem with super is that if you open the floodgates and allow people what Morrison calls as ‘early access’, then what happens is they hollow out and then if you take out $10,000 now as a 35-year-old, by the time you retire you’re going to be $65,000 to $130,000 worse off. That’s how it builds up. So I’m really worried about that. And also, you know Ben, then we’re living longer. Once upon a time, we used to retire at 65 and we’d all be dead by 70. Guess what, that’s not the case anymore. People are living to 80, 90 and the young people listen to your program, or a large number of them, are going to be around until they’re 100. So what we have for retirement income is really important, otherwise you’re back on the age pension which, despite changes I made in office, is not hugely generous.

Ben Fordham
I’m sure you respect the view of the Reserve Bank governor Philip Lowe. Philip Lowe says lifting the super guarantee would reduce wages, cut consumer spending and cost jobs. So he’s got a very different view to you.

Kevin Rudd
Well, I’ve actually had a look at what Governor Lowe had to say. I’ve been reading his submission in the last 24 hours or so. On the question of the impact on wages, yes, he says it would be a potential deferral of wages, but he doesn’t express a view one way or the other to whether that is good or bad. But on employment and the argument used by the government that this is somehow some negative effect on employment, it just doesn’t stack up. By the way, Ben, remember, if this logic held that somehow if we don’t have the superannuation guarantee levy going up, that wages would increase; well, after the government deferred this for five years, starting from 2014, guess what, working people got no increase in their super, but also their wages have flatlined as well. I’m just worried about how this all lands at the end for working people wanting to have a decent retirement.

Ben Fordham
Okay, but don’t we need to be aware of the times that we’re living in? You said earlier, you’re concerned that the government’s looking for excuses to put this thing off or kill this thing off. Well, we do have a global health pandemic at the moment. Isn’t that the ultimate reason why we should be adjusting our position?

Kevin Rudd
There’s always a crisis. I took the country through the global financial crisis, which threw every economy in the world, every major one, into recession. We managed to avoid it here in Australia through a combination of good policy and some other factors as well. It didn’t cross our mind to kill super during that period of time, or superannuation increases. It was simply not in our view the right approach, because we were concerned about keeping the economy going in here and now, but also making proper preparations for the future. But then here’s the rub. If 9% is good enough for everybody, or 9.5% where it is at the moment, then why the politicians and their staffers currently on 15.4%? Very generous for them. Not so generous for working families. That’s what worries me.

Ben Fordham
We know that you keep a keen eye on China. We wake up this morning to the news that Chinese authorities have detained an Australian journalist, Cheng Lei, without charge. Is the timing of this at all suspicious?

Kevin Rudd
You know, Ben, I don’t know enough of the individual circumstances surrounding this case. I don’t want to say anything which jeopardizes the individual concerned. All I’d say is, the Australian Government has a responsibility to look after any Australian — Chinese Australian, Anglo Saxon Australian, whoever Australian — if they get into strife abroad. And I’m sure, knowing the professionalism of the Australian Foreign Service, that they’re doing everything physically possible at present to try and look after this person.

Ben Fordham
Yeah, we know that Marise Payne’s doing that this morning. We appreciate you jumping on the phone and talking to us.

Kevin Rudd
Thanks, Ben. Appreciate it.

Ben Fordham
Former Prime Minister Kevin Rudd, I reckon this is one of these issues where you can’t just put a line down the middle of the page and say, ‘okay, Labor supporters are going to think this way and Liberal supporters are going to think that way’. I think there are two schools of thought and it depends on your age, it depends on your circumstance, it depends on your attitude. Some say ‘give me the money now, it’s my money, not yours’. Others say ‘no, we have super for a reason, it’s there for our retirement’. Where do you stand? It’s 7.52 am.

The post 2GB: Morrison’s Retirement Rip-Off appeared first on Kevin Rudd.

Kevin RuddSunrise: Protecting Australian Retirees

E&OE TRANSCRIPT
TELEVISION INTERVIEW
SUNRISE, SEVEN NETWORK
1 SEPTEMBER 2020

Journalist
Now two former Labor prime ministers have taken aim at the government, demanding it go ahead with next year’s planned increase to compulsory super. Paul Keating introduced the scheme back in 1992 and says workers should not miss out.

Paul Keating
[Recording] They want to gyp ordinary people by two and a half per cent of their income for the rest of their life. I mean, the gall of it. I mean, the heartlessness of it.

Journalist
Kevin Rudd, who moved to increase super contributions as well, says the rise to 12% in the years ahead should not be stalled.

Kevin Rudd
[Recording] This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

Journalist
The government is yet to make an official decision. Joining me now is the former prime minister Kevin Rudd. Kevin Rudd, good morning to you. Rather than being a cruel assault by the federal government, is it an acknowledgment that we’re going into the worst recession since the Depression?

Kevin Rudd
Well, you know, Kochie, there’s always been an excuse not to do super and not to continue with super. And what we’ve seen in the past is the Liberal Party at various stages just trying to kill this scheme which Paul Keating got going for the benefit of working Australians all those years ago. They had no real excuse for deferring this move from nine to 12% when they did it back in 2014, and this would be a further deferral. Look, what’s really at stake here, Kochie, is just working families watching your program this morning having a decent retirement. That’s why Paul brought it in.

Journalist
Sure.

Kevin Rudd
That’s why we both decided to come and speak out.

Journalist
I absolutely agree with it, but it’s a matter of timing. What do you say to all the small business owners out there who are just trying to keep afloat? To say, hey gang, you’re gonna have to pay an extra half a per cent in super, that you’re going to have to pay on a quarterly basis, to add to your bills again, to try and survive this.

Kevin Rudd
Well, what Mr Morrison is saying to those small business folks is that the reason we don’t want to do this super increase is because it’s going to get in the road of a wage increase, and you can’t have this both ways, mate. Either you’ve got an employer adding 0.5% by way of a wage increase, or by way of super; that’s the bottom line here.

Journalist
OK.

Kevin Rudd
You can’t simply argue that this is all going to disappear into some magic pudding. The bottom line is: the reason we did it this way, and Paul before me, was a small increment each year.

Journalist
Right.

Kevin Rudd
But it builds up as you know, you’re a finance guy Kochie, into a ginormous nest egg for people.

Journalist
Absolutely.

Kevin Rudd
And for the country.

Journalist
I do not disagree with the overall theory of it. It’s just in the timing. So what you’re saying to Australian bosses around the country is to go to your staff and say, ‘no, you’re not going to get a pay increase, because I’m going to put more into your super, and you’ve got to like it or lump it’.

Kevin Rudd
Well, Kochie, if that was the case, why is it that we’ve had no super increase in the guarantee levy over the last five or six years, and wages growth has been absolutely doodly-squat over that period of time? In other words, the argument for the last five years is we couldn’t do an SGL increase from nine to 12 because it would impact on wages. Guess what, we got no increase in super and no increase in real wages. And it just doesn’t hold, mate.

Journalist
The Reserve Bank is saying don’t do it. Social services groups are saying don’t do it.

Kevin Rudd
Well, mate, if you look carefully at what the governor of the RBA says, he says, on the impact on wages, yes, it is, in his language, a wage deferral, on which he does not express an opinion. And as for the employment, the jobs impact, he says he does not have a view. I think we need to be very careful in reading the detail of what Governor Lowe has had to say. Our argument is just: what’s decent for working families? And why are the pollies and their staffers getting 15.4% and yet working families, who Paul was trying to look after with this massive reform 30 years ago, are stuck at nine? I don’t think that’s fair. It’s a double standard.

Journalist
Yep. I absolutely agree with you on that as well.

The post Sunrise: Protecting Australian Retirees appeared first on Kevin Rudd.

Planet DebianJunichi Uekawa: September.

September. Recently I've been writing and reading more golang code and I feel more comfortable with it. However, every day I feel frustrated by the lack of features.

Worse Than FailureCodeSOD: Unknown Purpose

Networks are complex beasts, and as they grow, they get more complicated. Diagnosing and understanding problems on networks rapidly gets hard, which is why there is a market for network diagnostic tools. “Fortunately” for the world, IniTech ships one of those tools.

Leonore works on IniTech’s protocol analyzer. As you might imagine, a protocol analyzer gathers a lot of data. In the case of IniTech’s product, the lowest level of data acquisition is frequently sampled voltage measurements over time. And it’s a lot of samples- depending on the protocol in question, it might need samples on the order of nanoseconds.

In Leonore’s case, those raw voltage samples are the “primary data”. Now, there are all sorts of cool things that you can do with that primary data, but those computations become expensive. If your goal is to be able to provide realtime updates to the UI, you can’t do most of those computations- you do those outside of the UI update loop.

But you can do some of them. Things like level crossings and timing information can be built quickly enough for the UI. These values are “secondary data”.

As data is collected, there are a number of other sections of the application which need to be notified: the UI and the various high-level analysis components. Architecturally, Leonore’s team took an event-driven approach to doing this. As data is collected, a DataUpdatedEvent fires. The DataUpdatedEvent fires twice: once for the “primary data” and once for the “secondary data”. These two events always happen in lockstep, and they happen so closely together that, for all other modules in the application, they can safely be considered simultaneous, and no components in the application ever only care about one- they always want to see both the primary and the secondary data.

So, to review: the data collection module outputs a pair of data updated events, one containing primary data, one containing secondary data, and can never do anything else, and these two events could basically be viewed as the same event by everything else in the application.

Which raises a question about this C++/COM enum, used to tag the different events:

  enum DataUpdatedEventType 
  {
    [helpstring("Unknown data type.")] UnknownDataType = 0, 
    [helpstring("Primary data.")] PrimaryData = 1,
    [helpstring("Secondary data.")] SecondaryData = 2,
  };

As stated, the distinction between primary/secondary events is unnecessary. In fact, sending two events makes all the consuming code more complicated, because in many cases, they can’t start working until they’ve received the secondary data, and thus have to cache the primary data until the next event arrives.
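
To make that caching burden concrete, here is a minimal sketch of what a consumer ends up doing, assuming a plain C++ rendering of the enum above; the class, payload type, and method names are hypothetical illustrations, not code from Leonore’s actual product:

  #include <vector>

  // Plain C++ rendering of the IDL enum above (illustrative only).
  enum DataUpdatedEventType { UnknownDataType = 0, PrimaryData = 1, SecondaryData = 2 };

  // Hypothetical consumer: the primary payload has to be stashed until the
  // matching secondary event arrives a moment later; only then can work start.
  class AnalysisConsumer
  {
  public:
    void OnDataUpdated(DataUpdatedEventType type, const std::vector<double>& payload)
    {
      if (type == PrimaryData)
      {
        pendingPrimary = payload;   // nothing useful can happen yet
        hasPrimary = true;
      }
      else if (type == SecondaryData && hasPrimary)
      {
        Process(pendingPrimary, payload);   // both halves finally available
        hasPrimary = false;
      }
    }

  private:
    void Process(const std::vector<double>& primary, const std::vector<double>& secondary)
    {
      // ... level crossings, timing analysis, UI updates, etc. ...
    }

    std::vector<double> pendingPrimary;
    bool hasPrimary = false;
  };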

But that’s minor. The UnknownDataType is never used. It can never be used. There is no case in which the data collection module will ever output that. There’s no reason why it would ever need to output that. None of the consumers are prepared to handle that- sending an UnknownDataType would almost certainly cause a crash in most configurations.

So why is it there? I’ll let Leonore explain:

The only answer I can give is this: When this was written, half of us didn’t know what we were doing most of the time, and most of us didn’t know what we were doing half of the time. So now there’s an enum in the code base that has never been used and, I would submit, CAN never be used. Or maybe I ought to say SHOULD never be used. I would just delete it, but I’ve never quite been able to bring myself to do so.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Cryptogram North Korea ATM Hack

The US Cybersecurity and Infrastructure Security Agency (CISA) published a long and technical alert describing a North Korea hacking scheme against ATMs in a bunch of countries worldwide:

This joint advisory is the result of analytic efforts among the Cybersecurity and Infrastructure Security Agency (CISA), the Department of the Treasury (Treasury), the Federal Bureau of Investigation (FBI) and U.S. Cyber Command (USCYBERCOM). Working with U.S. government partners, CISA, Treasury, FBI, and USCYBERCOM identified malware and indicators of compromise (IOCs) used by the North Korean government in an automated teller machine (ATM) cash-out scheme — referred to by the U.S. Government as “FASTCash 2.0: North Korea’s BeagleBoyz Robbing Banks.”

The level of detail is impressive, as seems to be common in CISA’s alerts and analysis reports.

Planet DebianRuss Allbery: Review: Riot Baby

Review: Riot Baby, by Tochi Onyebuchi

Publisher: Tor.com
Copyright: January 2020
ISBN: 1-250-21476-9
Format: Kindle
Pages: 176

From Ella's childhood, she sees visions of the future. They come at first with nose bleeds and other physical symptoms, but their worst aspect is that they're sad and dark. Ella is black, as are those around her, and their futures are full of shootings and gangs, death and trouble. As she grows older, she develops her Thing: powers that let her bend, move, and destroy things with her mind, and later to become invisible, teleport, and reshape the world. Ella has superpowers.

Ella is not the viewpoint character of most of Riot Baby, however. That is Kev, her younger brother, the riot baby of the title, born in South Central on the day of the Rodney King riots. Kev grows up in Harlem where they move after the destruction from the riots: keeping Ella's secret, making friends, navigating gang politics, watching people be harassed by the cops. Growing up black in the United States. Then Ella sees something awful in the future and disappears, and some time afterwards Kev ends up in Rikers Island.

One of the problems with writing reviews of every book I read is that sometimes I read books that I am utterly unqualified to review. This is one of those books. This novella is about black exhaustion and rage, about the experience of oppression, about how it feels to be inside the prison system. It's also a story in dialogue with an argument that isn't mine, between the patience and suffering of endurance and not making things worse versus the rage of using all the power that one has to force a change. Some parts of it sat uncomfortably and the ending didn't work for me on the first reading, but it's not possible for me to separate my reactions to the novella from being a white man and having a far different experience of the world.

I'm writing a review anyway because that's what I do when I read books, but even more than normal, take this as my personal reaction expressed in my quiet corner of the Internet. I'm not the person whose opinion of this story should matter.

In many versions of this novella, Ella would be the main character, since she's the one with superpowers. She does get some viewpoint scenes, but most of the focus is on Kev even when the narrative is following Ella. Kev trying to navigate the world, trying to survive prison, seeing his friends murdered by the police, and living as the target of oppression that Ella can escape. This was an excellent choice. Ella wouldn't have been as interesting of a character if the story were more focused on her developing powers instead of on the problems that she cannot solve.

The writing is visceral, immediate, and very evocative. Onyebuchi builds the narrative with a series of short and vividly-described moments showing the narrowing of Kev's life and Ella's exploration of her growing anger and search for a way to support and protect him.

This is not a story about nonviolent resistance or about the arc of the universe bending towards justice. Ella confronts this directly in a memorable scene in a church towards the end of the novella that for me was the emotional heart of the story. The previous generations, starting with Kev and Ella's mother, preach the gospel of endurance and survival and looking on the good side. The prison system eventually provides Kev a path to quiet and a form of peace. Riot Baby is a story about rejecting that approach to the continuing cycle of violence. Ella is fed up, tired, angry, and increasingly unconvinced that waiting for change is working.

I wasn't that positive on this story when I finished it, but it's stuck with me since I read it and my appreciation for it has grown while writing this review. It uses the power fantasy both to make a hard point about the problems power cannot solve and to recast the argument about pacifism and nonviolence in a challenging way. I'm still not certain what I think of it, but I'm still thinking about it, which says a lot. It deserves the positive attention that it's gotten.

Rating: 7 out of 10

Planet DebianPaul Wise: FLOSS Activities August 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian: restarted RAM eating service
  • Debian wiki: unblock IP addresses, approve accounts

Sponsors

The cython-blis/preshed/thinc/theano bugs and smart-open/python-importlib-metadata/python-pyfakefs/python-zipp/python-threadpoolctl backports were sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianChris Lamb: Free software activities in August 2020

Here is another monthly update covering what I have been doing in the free software world during August 2020 (previous month):

  • Filed a pull request against django-enumfield, a library that provides an enumeration-like model field for the Django web development framework. The classproperty helper has been moved to django.utils.functional in newer versions of Django. [...]

  • Transferred the maintainership of my Strava Enhancement Suite Chrome extension to improve the user experience on the Strava athletic tracker to Pavel Dolecek.

  • As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions, etc.

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily manipulate JSON objects from C in order to make the documentation build reproducibly. [...]

  • Reviewed and merged some changes from Dan Palmer to my django-auto-one-to-one library for Django (which automatically creates and destroys associated model instances) to not configure signals for models that aren't installed and to honour INSTALLED_APPS during model setup. [...]

  • Merged a pull request from Michael K. to clean up the codebase after dropping support for Python 2 and Django 1.x [...] in my django-slack library, which provides a convenient wrapper between projects using Django and the Slack chat platform.

  • Updated my django-staticfiles-dotd utility that concatenates Debian .d-style directories containing Javascript and CSS to drop unquote usage from the six compatibility library. [...]

I uploaded Lintian versions 2.86.0, 2.87.0, 2.88.0, 2.89.0, 2.90.0, 2.91.0 and 2.92.0, as well as made the following changes:

  • New features:

    • Check for StandardOutput= and StandardError= fields that use the deprecated syslog or syslog-console systemd facilities. (#966617)
    • Add support for clzip as an alternative for lzip. (#967083)
    • Check for User=nobody and Group=nogroup in systemd .service files. (#966623)
  • Bug fixes:

  • Reporting/interface:

  • Misc:

    • Add justification for the use of the lzip dependency in a previous debian/changelog entry. (#966817)
    • Update the generate-tag-summary release script to reflect the change of the tag definition filename extension from .desc to .tag. [...]
    • Revert a change to the spelling-error-in-rules-requires-root tag's severity; this is not a "spelling" check in the sense that it does not use our dictionary. [...]
    • Drop an unused $skip_tag argument in the extract_service_file_values routine. [...]

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily construct JSON objects from C in order to make the documentation build reproducibly. [...]

  • In Debian, I:

  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

  • Filed a build-failure bug against the muroar package that was discovered while doing reproducibility testing. (#968189)

  • We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, I updated the self-serve package rescheduler to use HTML <pre> tags when dumping any debugging data. [...]


§


diffoscope

I made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data of PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. [...]
  • Bug fixes:

    • Don't raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. [...]
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files, as file(1) returns "data". (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. [...]
    • Don't use Python's repr(object) output in "Calling external command" messages. [...]
    • Include the filename in the "... not identified by any comparator" message. [...]
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. [...]
    • Drop some unused imports [...], drop an unnecessary dictionary comprehension [...] and some unnecessary control flow [...].
    • Correct typo of "output" in a comment. [...]
  • Release process:

    • Move generation of debian/tests/control to an external script. [...]
    • Add some URLs for the site that will appear on PyPI.org. [...]
    • Update "author" and "author email" in setup.py for PyPI.org and similar. [...]
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124) [...][...][...]
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. [...]
    • Add an assert_diff helper that loads and compares a fixture output. [...][...][...][...]
  • Misc:


§


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged chrony [...], golang-1.8 [...], golang-go.crypto [...], golang-golang-x-net-dev [...], icingaweb2 [...], lua5.3 [...], mongodb [...], net-snmp, php7.0 [...], qt4-x11, qtbase-opensource-src, ruby-actionpack-page-caching [...], ruby-doorkeeper [...], ruby-json-jwt [...], ruby-kaminari [...], ruby-kaminari [...], ruby-rack-cors [...], shiro [...] & squirrelmail.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, attending the Debian LTS BoF at DebConf20 etc.

  • Issued DLA 2313-1 and ELA-257-1 to fix a privilege escalation vulnerability in Net-SNMP.

  • Issued ELA-263-1 for qtbase-opensource-src and ELA-261-1 for qt4-x11, two components of the Qt cross-platform C++ application framework. A specially-crafted XBM image file could have caused a buffer overread.

  • Issued ELA-268-1 to address unsafe serialisation vulnerabilities that were discovered in the PHP-based squirrelmail webmail client.

  • Issued DLA 2311-1 for zabbix, the PHP-based monitoring system, to fix a potential cross-site scripting vulnerability via <iframe> HTML elements.

  • Issued DLA 2334-1 to fix a denial of service vulnerability in ruby-websocket-extensions, a library for managing long-lived HTTP 'WebSocket' connections.

  • Issued DLA 2345-1 for PHP 7.0 as it was discovered that there was a use-after-free vulnerability when parsing PHAR files, a method of putting entire PHP applications into a single file.

  • I also updated the Extended LTS website to pluralise the "Related CVEs" text in announcement emails [...] and dropped some trailing whitespace [...].

You can find out more about the project via the following video:


§


Uploads to Debian

  • memcached:

    • 1.6.6-2 — Enable TLS capabilities by default. (#968603)
    • 1.6.6-3 — Add libio-socket-ssl-perl to test TLS support and perform a general package refresh.
  • python-django:

    • 2.2.15-1 (unstable) — New upstream bugfix release
    • 3.1-1 (experimental) — New upstream release.
    • 2.2.15-2 (unstable) & 3.1-2 (experimental) — Set the same PYTHONPATH when executing the runtime tests as we do in the package build. (#968577)
  • docbook-to-man:

    • 2.0.0-43 — Refresh packaging, and upload some changes from the Debian Janitor.
    • 2.0.0-44 — Fix compatibility with GCC 10, restoring the missing /usr/bin/instant binary. (#968900)
  • hiredis (1.0.0-1) — New upstream release to experimental.

LongNowMichael McElligott, A Staple of San Francisco Art and Culture, Dies at 50

It is with great sadness that we share the news that Michael McElligott, an event producer, thespian, writer, long-time Long Now staff member, and relentless promoter of the San Francisco avant-garde, has died. He was 50 years old.

Michael battled an aggressive form of brain cancer over the past year. He kept his legendary sense of humor throughout his challenging treatment. He died surrounded by family, friends, and his long-time partner, Danielle Engelman, Long Now’s Director of Programs.

Most of the Long Now community knew Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

An event producer since his college days, Michael was active in San Francisco’s art and theater scene as a performer and instigator of unusual events for more than 20 years. From 01999 to 02003 Michael hosted and co-produced The Tentacle Sessions, a monthly series spotlighting accomplished individuals in the San Francisco Bay Area scene—including writers, artists, and scientists. He produced and performed in numerous alternative theater venues over the years including Popcorn Anti-Theater, The EXIT, 21 Grand, Stage Werx, and the late, great Dark Room Theater. He also produced events for Speechless LIVE and The Battery SF.

Michael was a long-time blogger (usually under his nom de kunst mikl-em) for publications including celebrated arts magazine Hi Fructose and award-winning internet culture site Laughing Squid. His writing can be found in print in the Hi Fructose Collected Edition books and Tales of The San Francisco Cacophony Society in which he recounted some of his adventures with that noted countercultural group.

Beginning in the late 01990s as an employee at Hot Wired, Michael worked in technology in various marketing, technical and product roles. He worked at both startups and tech giants; helped launch products for both consumers and enterprise; and worked with some of the best designers and programmers in the industry.

Originally from Richmond, Virginia, he co-founded a college radio station and an underground art space before moving to San Francisco. In the Bay Area, he was involved with myriad artistic projects and creative ventures, including helping start the online radio station Radio Valencia.

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds, run social media, and design and produce the Conversations at The Interval lecture series.

To honor Michael’s role in helping to fundraise for and build The Interval, we have finally completed the design on the donor wall which will be dedicated to him.  It should be completed in the next few weeks and will bear a plaque remembering Michael. You can watch a playlist of all of Michael’s Interval talks here. Below, you can find a compilation of moments from the more than a hundred talks that Michael hosted over the years.

“This moment is really both an end and a beginning. And like the name The Interval, which is a measure of time, and a place out of time, this is that interval.”

Michael McElligott

Kevin RuddPress Conference: Morrison’s Assault on Superannuation

E&OE TRANSCRIPT
PRESS CONFERENCE
31 AUGUST 2020
BRISBANE

Kevin Rudd
The reason I’m speaking to you this afternoon, here in Brisbane, is that Paul Keating, former Prime Minister of Australia, and myself, have a deep passion for the future of superannuation, retirement income adequacy for working families for the future, the future of our national savings and the national economy. So former prime minister Paul Keating is speaking to the media now in Sydney, and I’m speaking to national media now in Brisbane. And I don’t think Paul and I have ever done a joint press conference before, albeit socially distanced between Brisbane and Sydney. But the reason we’re doing it today is because this is a major matter of public importance for the country.

Let me tell you why. Keating is the architect of our national superannuation policy. This was some 30 years ago. And as a result of his efforts, we now have the real possibility of decent retirement income policy for working families for the first time in this country’s history. And on top of that, we’ve accumulated something like $3 trillion worth of national savings. If you ask the question today, why is it that Australia still has a triple-A credit rating around the world, it’s because we have a bucketload of national savings. And so Paul Keating should be thanked for that, not just for the macroeconomy though, but also for delivering this enormous dividend to working families and giving them retirement dignity. Of course, what we did in government was announce that we would move the superannuation guarantee level from 9% to 12%. And we legislated to that effect. And prior to the last election, Mr Morrison said that that was also Liberal and National Party policy as well. What Mr Keating and I are deeply concerned about is whether, in fact, this core undertaking to Australian working families is now in the process of being junked.

There are two arguments, which I think we need to bear in mind. The first is that already we’ve had the Morrison Government rip out $40 billion-plus from people’s existing superannuation accounts. And the reason why they’ve done that is because they haven’t had an economic policy alternative other than to say to working families, if you’re doing it tough as a result of the COVID crisis, then you can go and raid your super. Well, that’s all very fine and dandy, but when those working people then go to retire in the decades ahead, they will have gutted their retirement income. And that’s because this government has allowed them to do that, and in fact forced them to do that, in the absence of an economic policy alternative. Therefore, we’ve had this slug taken to the existing national superannuation pile. But furthermore, the second big slug is this indication increasingly from both Mr Morrison and Mr Frydenberg that they’re now going to betray the Australian people, betray working families, by repudiating their last pre-election commitment by abandoning the increase from 9.5% where it is now to 12%. This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

The argument which the Australian Government seems to be advancing to justify this most recent assault on retirement income policy is that they say that if we go ahead with increasing the superannuation guarantee level from 9.5% to 10, to 10.5, to 11, to 11.5, to 12 in the years that are to come, that that will somehow depress natural wages growth in the Australian economy. Pigs might fly. That is the biggest bullshit argument I have ever heard against going ahead with decent provision for people’s superannuation savings for the future. There is no statistical foundation for it. There is no logical foundation for it. There is no data-based argument to sustain it. This is an increment of half-a-percent a year out for the next several years until we get to 12%. What is magic about 12%? It’s fundamental in terms of the calculations that have been done to provide people with decent superannuation adequacy, retirement income adequacy, when they stop working. That’s why we’re doing it. But the argument that somehow by not proceeding with the increase from 9.5 to 12%, we’re going to deny people a proper increase in wages in the period ahead is an absolute nonsense. There is no basis to that argument whatsoever.

And what does it mean for an average working family? If you’re currently on $70,000 a year and superannuation is frozen at 9.5%, and not increased to 12, by the time you retire, you’re going to be at least $70,000 worse off than would otherwise be the case. Why have we, in successive Labor governments, been so passionate about superannuation policy? Because we believe that every Australian, every working family, should have the opportunity for some decency, dignity and independence in their retirement. And guess what: as we live longer, we’re going to spend longer in retirement and this is going to mean more and more for the generations to come. Of course, what’s the alternative if we don’t have superannuation adequacy, and if this raid on super continues under cover of COVID again? Well, it means that Mr Morrison and Mr Frydenberg in the future are going to be forcing more and more people onto the age pension, and my challenge to Australians is simply this: do you really trust your future and your retirement to Mr Morrison’s generosity in years to come on the age pension? It’s a bit like saying that you trust Mr Morrison in terms of his custodianship of the aged care system in this country. Successive conservative governments have never supported effective increases to the age pension, and they’ve never properly supported the aged care sector either.

But the bottom line is this: if you deny people dignity and independence through the superannuation system, and the measures which the current conservative government are undertaking and foreshadowing take us further in that direction, then there’s only one course left for people when they retire, and that’s to go onto the age pension. One of the things I’m proudest of in our period of government was that we brought about the biggest single adjustment to the age pension in its history. It was huge, something like $65. And we made that as a one-off adjustment which was indexed to the future. But let me tell you, that would never happen under a conservative government. And therefore to entrust people’s future retirement to the future generosity of whichever conservative government might be around at the time is frankly folly. The whole logic of us having a superannuation system is that every working Australian can have their own independent dignity in their own retirement. That’s what it’s about.

So my appeal to Mr Morrison and Mr Frydenberg today is: Scotty, Joshy, think about it again. This is a really bad idea. My appeal to them as human beings as they look to the retirement of people who are near and dear to them in the future is: don’t take a further meataxe to the retirement income of working families for the future. It’s just un-Australian. Thank you.

Journalist
Well, what do you think of the argument that delaying the superannuation guarantee increase would actually give people more money in their take home pay? I know you’ve used fairly strong language.

Kevin Rudd
Well, it is a fraudulent argument. There’s nothing in the data to suggest that that would happen. Let me give you one small example. In the last seven or eight years, we’ve had significant productivity growth in the Australian economy, in part because of some of the reforms we brought about in the economy during our own period in government. These things flow through. But if you look at productivity growth on the one hand, and look at the negligible growth in real wage levels over that same period of time, there is no historical argument to suggest that somehow by sacrificing superannuation increases that you’re going to generate an increase in average wages and average income. There’s simply nothing in the argument whatsoever.

So therefore, I can only conclude that this is a made-up argument by Mr Morrison using COVID cover, when in fact, what is their motivation? The Liberal Party have never liked the compulsory superannuation scheme, ever. They’ve opposed it all the way through. And I can only think that the reason for that is because Mr Keating came up with the idea in the first place. And on top of it, that because we now have such large industry superannuation funds around Australia, and $3 trillion therefore worth of muscle in the superannuation industry, that somehow represents a threat to their side of politics. But the argument that this somehow is going to effect wage increases for future average Australians is simply without logical foundation.

Journalist
Sure, but you’re comparing historical data with something that’s not exactly like-for-like, given we’re now in a recession and the immediate future will be deeply in recession. So, in terms of the argument that delaying [inaudible] will end up increasing take-home pay packets, do you admit that, you know, by looking at historical data and looking at the current trajectory it’s not like for like?

Kevin Rudd
The bottom line is we’ve had relatively flat growth in the economy in the last several years, and I have seen so many times in recent decades conservative parties [inaudible] that somehow, by increasing superannuation, we’re going to depress average income levels. Remember, the conservatives have already delayed the implementation of this increase of 2.5% since they came to power in 2013-14, whatever excuses they managed to marshal at the time in so doing. But the bottom line is, as this data indicates, that hasn’t resulted in some significant increase in wages. In fact, the data suggests the reverse.

So what I’m suggesting to you is: for them to argue that a 0.5% a year increase in the superannuation guarantee level is going to send a torpedo amidships into the prospects of wage increases for working Australians makes no sense. What does make sense is the accumulation of those savings over a lifetime. If Paul Keating hadn’t done what he did back then, there’d be no $3 trillion worth of Australian national savings. Paul had the vision to do it. Good on him. We tried to complete that vision by going from 9 to 12. And this mob have tried to stop it. But the real people who miss out are your parents, your parents, and, I’m sorry to tell you both, you’ll both get older, and you too in terms of the adequacy of your retirement income when the day comes.

Journalist
So if it’s so important then, why did you only increase it by 0.5% during your six years in government, sharing that period of course with Julia Gillard?

Kevin Rudd
Well, the bottom line is: we decided to increase it gradually, so that we would not present any one-off assault to the ability of employers and employees to enjoy reasonable wage increases. It was a small increase every year and, guess what: it continues to be a very small increase every year until we get to 12. The other thing I’d say, which I haven’t raised so far in our discussion today, is that for most of the last five years, I’ve been in the United States. I run an American think tank. When I’ve traveled around the world and people know of my background in Australian politics, I am always asked this question: how did you guys come up with such a brilliant national savings policy? Very few, if any other countries in the world have this. But what we have done is a marvelous piece of long-term planning for generations of Australians. And with great macroeconomic benefit for the Australian economy in terms of this pool of national savings. We’re the envy of the world.

And yet what are we doing? Turning around and trashing it. So the reason we were gradual about it was to be responsible, not give people a sudden 3% hit, to tailor it over time, and we did so, just like Paul did with the original move from zero, first to 3, then 6 to 9. It happened gradually. But the cumulative effect of this over time for people retiring in 10, 20, 30, 40 years’ time is enormous. And that’s why these changes are so important to the future. As you know, I rarely call a press conference. Paul doesn’t call many press conferences either, but he and I are angry as hell that this mob have decided, it seems, to take a meataxe to this important part of our national economic future and our social wellbeing. That’s what it’s about.

Journalist
So we know that [inaudible] super accounts have been wiped completely. What damage do you think that would do if it’s extended? So that people can continue to access their super?

Kevin Rudd
The damage it does for individual working Australians, as I said before, is that it throws them back onto the age pension. And the age pension is simply the absolute basic backbone, the absolute basic provision, for people’s retirement for the future, if no other options exist. And as I said, in office, we undertook a fundamental reform to take it from below poverty level to above poverty level. But if you want, for the future, for folks who are retiring to look at that as their option, well, if you continue to destroy this nation’s superannuation nest egg, that’s exactly where you’re going to end up. I can’t understand the logic of this. I thought conservatives were supposed to favour thrift. I thought conservatives were supposed to favour saving. Their accusation against those of us who come from the Labor side of politics apparently is that we love to spend; actually, we like to save, and we do it through a national savings policy. Good for working families and good for the national economy.

And I think it’s just wrong that people have as their only option there for the future to be thrown back on the age pension and on that point, apart from the wellbeing of individual families, think about the impact in the future on the national budget. Most countries say to me that they envy our national savings policy because it takes pressure off the national budget in the future. Why do you think so many of the ratings agencies are marking economies down around the world? Because they haven’t made adequate future provision for retirement. They haven’t made adequate provision for the future superannuation entitlements of government employees as well. So what we have with the Future Fund, which I concede readily was an initiative of the conservative government, but supported by us on a bipartisan basis, is dealing with that liability in terms of the retirement income needs of federal public servants. But in terms of the rest of the nation, that’s what our national superannuation policy was about. Two arms to it. So I can’t understand why a conservative government would want to take the meataxe to [inaudible].

Journalist
Following on from your comments in 2018 when you said national Labor should look at distancing themselves from the CFMEU, do you think that’s something Queensland Labor should do given the events of last week?

Kevin Rudd
Who are you from by the way?

Journalist
The Courier-Mail.

Kevin Rudd
Well, when the Murdoch media ask me a question, I’m always skeptical in terms of why it’s been asked. So I don’t know the context of this particular question. I simply stand by my historical comments.

Journalist
Do you think so in light of what happened last week? Michael Ravbar came out quite strongly against Queensland Labor, saying that they have no economic plan and that the left faction was a bit out of touch with what everyone was normally thinking. So I just wanted to know whether that’s something you think should happen at the state level?

Kevin Rudd
What I know about the Murdoch media is that you have no interest in the future of the Labor government and no interest in the future of the Labor Party. What you’re interested in is a headline in tomorrow’s Courier-Mail which attacks the Palaszczuk government. I don’t intend to provide that for you. I kind of know what the agenda is here. I’ve been around for a long time and I know what instructions you’re going to get.

But let me say this about the Palaszczuk government: the Palaszczuk government has a strong economic record. The Palaszczuk government has handled the COVID crisis well. The Palaszczuk government is up against an LNP opposition led by Frecklington which has repeatedly called for Queensland’s borders to be opened. For those reasons, the state opposition has no credibility. And for those reasons, Annastacia Palaszczuk has bucketloads of credibility. So as for the internal debates, I will leave them to you and all the journalists who will follow them from the Curious Mail.

Journalist
Do you think Labor will do well at the election, Mr Rudd?

Kevin Rudd
That’s a matter for the Queensland people but Annastacia Palaszczuk, given all the challenges that state premiers are facing right now, is doing a first-class job in very difficult circumstances. I used to work for state government. I was Wayne Goss’s chief of staff. I used to be director-general of the Cabinet Office. And I do know something about how state governments operate. And I think she should be commended given the difficult choices which are available to her at this time for running a steady ship. [inaudible] Thanks very much.

The post Press Conference: Morrison’s Assault on Superannuation appeared first on Kevin Rudd.

Cory DoctorowHow to Destroy Surveillance Capitalism

For this week’s podcast, I read an excerpt from “How to Destroy Surveillance Capitalism,” a free short book (or long pamphlet, or “nonfiction novella”) I published with Medium’s Onezero last week. HTDSC is a long critical response to Shoshana Zuboff’s book and paper on the subject, which re-centers the critique on monopolism and the abusive behavior it abets, while expressing skepticism that surveillance capitalists are really as good at manipulating our behavior as they claim to be. It is a gorgeous online package, and there’s a print/ebook edition following.

MP3

Planet DebianJonathan Carter: Free Software Activities for 2020-08

Debian packaging

2020-08-07: Sponsor package python-sabyenc (4.0.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package gpxpy (1.4.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package python-jellyfish (0.8.2-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package django-ipwire (3.0.0-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package python-mongoengine (0.20.0-1) for Debian unstable (Python team request).

2020-08-08: Review package pdfminer (20191020+dfsg-3) (Needs some more work) (Python team request).

2020-08-08: Upload package bundlewrap (4.1.0-1) to Debian unstable.

2020-08-09: Sponsor package pdfminer (20200726-1) for Debian unstable (Python team request).

2020-08-09: Sponsor package spyne (2.13.15-1) for Debian unstable (Python team request).

2020-08-09: Review package mod-wsgi (4.6.8-2) (Needs some more work) (Python team request).

2020-08-10: Sponsor package nfoview (1.28-1) for Debian unstable (Python team request).

2020-08-11: Sponsor package pymupdf (1.17.4+ds1-1) for Debian unstable (Python team request).

2020-08-11: Upload package calamares (3.2.28-1) to Debian unstable.

2020-08-11: Upload package xabacus (8.2.9-1) to Debian unstable.

2020-08-11: Upload package bashtop (0.9.25-1~bpo10+1) to Debian buster-backports.

2020-08-11: Upload package live-tasks (11.0.3) to Debian unstable (Closes: #942834, #965999, #956525, #961728).

2020-08-12: Upload package calamares-settings-debian (10.0.20-1+deb10u4) to Debian buster (Closes: #968267, #968296).

2020-08-13: Upload package btfs (2.22-1) to Debian unstable.

2020-08-14: Upload package calamares (3.2.28.2-1) to Debian unstable.

2020-08-14: Upload package bundlewrap (4.1.1-1) to Debian unstable.

2020-08-19: Upload package gnome-shell-extension-dash-to-panel (38-2) to Debian unstable (Closes: #968613).

2020-08-19: Sponsor package mod-wsgi (4.7.1-1) for Debian unstable (Python team request).

2020-08-19: Review package tqdm (4.48.2-1) (Needs some more work) (Python team request).

2020-08-19: Sponsor package tqdm (4.48.2-1) to unstable (Python team request).

2020-08-19: Upload package calamares (3.2.28.3-2) to unstable (Python team request).

Worse Than FailureThoroughly Tested

Zak S worked for a retailer which, as so often happens, got swallowed up by Initech's retail division. Zak's employer had a big, ugly ERP system. Initech had a bigger, uglier ERP, and once the acquisition happened, they all needed to play nicely together.

These kinds of marriages are always problematic, but this particular one was made more challenging: Zak's company ran their entire ERP system from a cluster of Solaris servers- running on SPARC CPUs. Since upgrading that ERP system to run in any other environment was too expensive to seriously consider, the existing services were kept on life-support (with hardware replacements scrounged from the Vintage Computing section of eBay), while Zak's team was tasked with rebuilding everything- point-of-sale, reporting, finance, inventory and supply chain- atop Initech's ERP system.

The project was launched with the code name "Cold Stone", with Glenn as new CTO. At the project launch, Glenn stressed that, "This is a high impact project, with high visibility throughout the organization, so it's on us to ensure that the deliverables are completed on time, on budget, to provide maximum value to the business and to that end, I'll be starting a series of meetings to plan the meetings and checkpoints we'll use to ensure that we have an action-plan that streamlines our…"

"Cold Stone" launched with a well defined project scope, but about 15 seconds after launch, that scope exploded. New "business critical" systems were discovered under every rock, and every department in the company had a moment of, "Why weren't we consulted on this plan? Our vital business process isn't included in your plan!" Or, "You shouldn't have included us in this plan, because our team isn't interested in a software upgrade, we're going to continue using the existing system until the end of time, thank you very much."

The expanding scope required expanding resources. Anyone with any programming experience more complex than "wrote a cool formula in Excel" was press-ganged into the project. You know how to script sending marketing emails? Get on board. You wrote a shell script to purge old user accounts? Great, you're going to write a plugin to track inventory at retail stores.

The project burned through half a dozen business analysts and three project managers, and that's before the COVID-19 outbreak forced the company to quickly downsize, and squish together several project management roles into one person.

"Fortunately" for Initech, that one person was Edyth, who was one of those employees who has given their entire life over to the company, and refuses to sotp working until the work is done. She was the sort of manager who would schedule meetings at 12:30PM, because she knew no one else would be scheduling meetings during the lunch hour. Or, schedule a half hour meeting at 4:30PM, when the workday ends at 5PM, then let it run long, "Since we're all here anyway, let's keep going." She especially liked to abuse video conferencing for this.

As the big ball of mud grew, the project slowly, slowly eased its way towards completion. And as that deadline approached, Edyth started holding meetings which focused on testing. Which is where Edyth started to raise some concerns.

"Lucy," Edyth said, "I noticed that you've marked the test for integration between the e-commerce site and the IniRewards™ site as not-applicable?"

"Well, yeah," Lucy said. "It says to test IniRewards™ signups on the e-commerce site, but our site doesn't do that. Signups entirely happen on the IniRewards™ site. There isn't really any integration."

"Oh," Edyth said. "So that sounds like it's a Zak thing?"

Zak stared at his screen for a moment. He was responsible for the IniRewards™ site, a port of their pre-acquisition customer rewards system to work with Initech's rewards system. He hadn't written it, but somewhere along the way, he became the owner of it, for reasons which remain murky. "Uh… it's a static link."

Edyth nodded, as if she understood what that meant. "So how long will that take to test? A day? Do you need any special setup for this test?"

"It's… a link. I'll click it."

"Great, yes," Edyth said. "Why don't you write up the test plan document for this user story, and then we'll schedule the test for… next week? Can you do it any earlier?"

"I can do it right now," Zak said.

"No, no," Edyth said. "We need to schedule these tests in advance so you're not interacting with anyone else using the test environment. I'll set up a followup meeting to review your test plan."

Test plans, of course, had a template which needed to be filled out. It was a long document, loaded with boilerplate, for the test to be, "Click the 'Rewards Signup' link in the e-commerce site footer. Expected behavior: the browser navigates to the IniRewards™ home page."

Zak added the document to the project document site, labelled "IniRewards Hyper-Link Test", and waited for the next meeting with Edyth to discuss schedule. This time, Glenn, the CTO was in the meeting.

"This 'Hyper-Link' test sounds very important," Glenn said. He enunciated "hyper-link" like it was a word in a foreign language. "Can we move that up in the schedule? I'd like that done tomorrow."

"I… can do it right now," Zak said. "It won't interact with other tests-"

"No, we shouldn't rush things." Glenn's eyes shifted towards another window as he reviewed the testing schedule. "It looks like there's nothing scheduled for testing between 10AM and 2PM tomorrow. Do you think four hours is enough time? Yes? Great, I'll block that off for you."

Suffice to say, the test passed, and was verified quite thoroughly.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram Seny Kamara on "Crypto for the People"

Seny Kamara gave an excellent keynote talk this year at the (online) CRYPTO Conference. He talked about solving real-world crypto problems for marginalized communities around the world, instead of crypto problems for governments and corporations. Well worth watching and listening to.


Planet Linux AustraliaSimon Lyall: Audiobooks – August 2020

Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster by Allan J. McDonald

The author was a senior manager in the booster team who cooperated more fully with the investigation than NASA or his company’s bosses would have preferred. Mostly accounts of meetings, hearings & coverups with plenty of technical details. 3/5

The Ascent of Money: A Financial History of the World by Niall Ferguson

A quick tour though the rise of various financial concepts like insurance, bonds, stock markets, bubbles, etc. Nice quick intro and some well told stories. 4/5

The Other Side of the Coin: The Queen, the Dresser and the Wardrobe by Angela Kelly

An authorized book from the Queen’s dresser. Some interesting stories. Behind-the-scenes on typical days and regular events. Okay even without photos. 3/5

Second Wind: A Sunfish Sailor, an Island, and the Voyage That Brought a Family Together by Nathaniel Philbrick

A writer takes up competitive sailing after a gap of 15 years, training on winter ponds in prep for the Nationals. A nice read. 3/5

Spitfire Pilot by Flight-Lieutentant David M. Crook, DFC

An account of the Author’s experiences as a pilot during the Battle of Britain. Covering air-combat, missions, loss of friends/colleagues and off-duty life. 4/5

Wild City: A Brief History of New York City in 40 Animals by Thomas Hynes

A chapter on each species. Usually information about incidents they were involved in (see “Tigers”) or the growth, decline, and comeback of their population & habitat. 3/5

Fire in the Sky: Cosmic Collisions, Killer Asteroids, and the Race to Defend Earth by Gordon L. Dillow

A history of the field and some of the characters. Covers space missions, searchers, discovery, movies and the like. Interesting throughout. 4/5

The Long Winter: Little House Series, Book 6 by Laura Ingalls Wilder

The family move into their store building in town for the winter. Blizzard after blizzard sweeps through the town over the next few months and starvation or freezing threatens. 3/5

The Time Traveller’s Almanac Part 1: Experiments edited by Anne and Jeff VanderMeer

First of 4 volumes of short stories. 14 stories, many by well-known names (e.g. Silverberg, Le Guin). A good collection. 3/5

A Long Time Ago in a Cutting Room Far, Far Away: My Fifty Years Editing Hollywood Hits—Star Wars, Carrie, Ferris Bueller’s Day Off, Mission: Impossible, and More by Paul Hirsch

Details of the editing profession & technology. Lots of great stories. 4/5

My Scoring System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all


Planet Linux AustraliaSimon Lyall: Linkedin putting pressure on users to enable location tracking

I got this email from Linkedin this morning. It is telling me that they are going to change my location from “Auckland, New Zealand” to “Auckland, Auckland, New Zealand“.

Email from Linkedin on 30 August 2020

Since “Auckland, Auckland, New Zealand” sounds stupid to New Zealanders (Auckland is pretty much a big city with a single job market and is not a state or similar), I clicked on the link and opened the application to stick with what I currently have.

Except the problem is that the pulldown doesn’t offer me any other locations.

The only way to change the location is to click “use Current Location” and then allow Linkedin to access my device’s location.

According to the help page:

By default, the location on your profile will be suggested based on the postal code you provided in the past, either when you set up your profile or last edited your location. However, you can manually update the location on your LinkedIn profile to display a different location.

but it appears the manual method is disabled. I am guessing they have a fixed list of locations in my postcode and this can’t be changed.

So it appears that my options are to accept Linkedin’s crappy name for my location (Other NZers have posted problems with their location naming) or to allow Linkedin to spy on my location and it’ll probably still assign the same dumb name.

This basically appears to be a way for LinkedIn to push users into enabling location tracking, while at the same time forcing its own ideas of how New Zealand locations work onto users.


,

Planet Linux AustraliaChris Smart: How to create bridges on bonds (with and without VLANs) using NetworkManager

Some production systems you face might make use of bonded network connections that you need to bridge in order to get VMs onto them. That bond may or may not have a native VLAN (in which case you bridge the bond), or it might have VLANs on top (in which case you want to bridge the VLANs), or perhaps you need to do both.

Let’s walk through an example where we have a bond that has a native VLAN, that also has the tagged VLAN 123 on top (and maybe a second VLAN 456), all of which need to be separately bridged. This means we will have the bond (bond0) with a matching bridge (br-bond0), plus a VLAN on the bond (bond0.123) with its matching bridge (br-vlan123). It should look something like this.

+------+   +---------+                           +---------------+
| eth0 |---|         |          +------------+   |  Network one  |
+------+   |         |----------|  br-bond0  |---| (native VLAN) |
           |  bond0  |          +------------+   +---------------+
+------+   |         |                                            
| eth1 |---|         |                                            
+------+   +---------+                           +---------------+
            | |   +---------+   +------------+   |  Network two  |
            | +---| vlan123 |---| br-vlan123 |---| (tagged VLAN) |
            |     +---------+   +------------+   +---------------+
            |                                                     
            |     +---------+   +------------+   +---------------+
            +-----| vlan456 |---| br-vlan456 |---| Network three |
                  +---------+   +------------+   | (tagged VLAN) |
                                                 +---------------+

To make it more complicated, let’s say that the native VLAN on the bond needs a static IP and to operate at an MTU of 1500, while the tagged VLANs use DHCP and need an MTU of 9000.

OK, so how do we do that?

Start by creating the bridge, then create the interface that attaches to that bridge. VLANs are created on the bond, but then attached as slaves to their bridges.

Create the bridge for the bond

First, let’s create the bridge for our bond. We’ll export some variables to make scripting easier, including the name, the value for spanning tree protocol (STP) and the MTU. Note that in this example the bridge will have an MTU of 1500 (but the bond itself will be 9000 to support other VLANs at that MTU size).

BRIDGE=br-bond0
BRIDGE_STP=yes
BRIDGE_MTU=1500

OK so let’s create the bridge for the native VLAN on the bond (which doesn’t exist yet).

nmcli con add ifname "${BRIDGE}" type bridge con-name "${BRIDGE}"
nmcli con modify "${BRIDGE}" bridge.stp "${BRIDGE_STP}"
nmcli con modify "${BRIDGE}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

By default this will look for an address with DHCP. If you don’t want that you can either set it manually:

nmcli con modify "${BRIDGE}" ipv4.method static ipv4.address 192.168.0.123/24 ipv6.method ignore

Or disable IP addressing:

nmcli con modify "${BRIDGE}" ipv4.method disabled ipv6.method ignore

Finally, bring up the bridge. Yes, we don’t have anything attached to it yet, but that’s OK.

nmcli con up "${BRIDGE}"

You should be able to see it with nmcli and brctl tools (if available on your distro), although note that there is no device attached to this bridge yet.

nmcli con
brctl show

Next, we create the bond to attach to the bridge.

Create the bond and attach to the bridge

Let’s create the bond. In my example I’m using active-backup (mode 1) but your bond may use balance-rr (round robin, mode 0) or, depending on your switching, perhaps something like link aggregation control protocol (LACP) which is 802.3ad (mode 4).

Let’s say that your bond (which we’re going to call bond0) has two interfaces, eth0 and eth1. Note that in this example, although the native interface on this bond wants an MTU of 1500, the VLANs which sit on top of the bond need a higher MTU of 9000. Thus, we set the bridge to 1500 in the previous step, but we need to set the bond and its interfaces to 9000. Let’s export those now to make scripting easier.

BOND=bond0
BOND_SLAVE0=eth0
BOND_SLAVE1=eth1
BOND_MODE=active-backup
BOND_MTU=9000

Now we can go ahead and create the bond, setting the options and the slave devices.

nmcli con add type bond ifname "${BOND}" con-name "${BOND}"
nmcli con modify "${BOND}" bond.options mode="${BOND_MODE}"
nmcli con modify "${BOND}" 802-3-ethernet.mtu "${BOND_MTU}"
nmcli con add type ethernet con-name "${BOND}-slave-${BOND_SLAVE0}" ifname "${BOND_SLAVE0}" master "${BOND}"
nmcli con add type ethernet con-name "${BOND}-slave-${BOND_SLAVE1}" ifname "${BOND_SLAVE1}" master "${BOND}"
nmcli con modify "${BOND}-slave-${BOND_SLAVE0}" 802-3-ethernet.mtu "${BOND_MTU}"
nmcli con modify "${BOND}-slave-${BOND_SLAVE1}" 802-3-ethernet.mtu "${BOND_MTU}"

OK at this point you have a bond specified, great! But now we need to attach it to the bridge, which is what will make the bridge actually work.

nmcli con modify "${BOND}" master "${BRIDGE}" slave-type bridge

Note that before we bring up the bond (or afterwards) we need to disable or delete any existing network connections for the individual interfaces. Check this with nmcli con and delete or disable those connections. Be aware that this may disconnect you, so make sure you have a console to the machine.
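
For example, if the interfaces came up with automatically created profiles, something like this should clear them out (the connection names below are just a guess; use whatever nmcli con actually reports on your system):

nmcli con show
nmcli con delete "Wired connection 1"
nmcli con delete "Wired connection 2"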

Now, we can bring the bond up which will also activate our interfaces.

nmcli con up "${BOND}"

We can check that the bond came up OK.

cat /proc/net/bonding/bond0

And this bond should also now be on the network, via the bridge which has an IP set.

Now if you look at the bridge you can see there is an interface (bond0) attached to it (your distro might not have brctl).

nmcli con
ls /sys/class/net/br-bond0/brif/
brctl show

Bridging a VLAN on a bond

Now that we have our bond, we can create the bridges for our tagged VLANs (remember that the bridge connected to the bond carries the native VLAN, so it didn’t need a VLAN interface).

Create the bridge for the VLAN on the bond

Create the new bridge, which for our example is going to use VLAN 123 which will use MTU of 9000.

VLAN=123
BOND=bond0
BRIDGE=br-vlan${VLAN}
BRIDGE_STP=yes
BRIDGE_MTU=9000

OK let’s go! (This is the same as the first bridge we created.)

nmcli con add ifname "${BRIDGE}" type bridge con-name "${BRIDGE}"
nmcli con modify "${BRIDGE}" bridge.stp "${BRIDGE_STP}"
nmcli con modify "${BRIDGE}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

Again, this will look for an address with DHCP, so if you don’t want that, then disable it or set an address manually (as per first example). Then you can bring the device up.

nmcli con up "${BRIDGE}"

Create the VLAN on the bond and attach to bridge

OK, now that we have the bridge, we can create the VLAN on top of bond0 and then attach it to the bridge we just created.

nmcli con add type vlan con-name "${BOND}.${VLAN}" ifname "${BOND}.${VLAN}" dev "${BOND}" id "${VLAN}"
nmcli con modify "${BOND}.${VLAN}" master "${BRIDGE}" slave-type bridge
nmcli con modify "${BOND}.${VLAN}" 802-3-ethernet.mtu "${BRIDGE_MTU}"

If you look at bridges now, you should see the one you just created, attached to a VLAN device (note, your distro might not have brctl).

nmcli con
brctl show

And that’s about it! Now you can attach VMs to those bridges and have them on those networks. Repeat the process for any other VLANs you need on top of the bond; a sketch for the second tagged VLAN from the diagram follows below.
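
For example, bridging VLAN 456 from the diagram is just the same steps again with different values. This is only a sketch, assuming that network also wants DHCP and an MTU of 9000 (the final command simply activates the VLAN connection if NetworkManager didn’t do so automatically):

VLAN=456
BOND=bond0
BRIDGE=br-vlan${VLAN}
BRIDGE_STP=yes
BRIDGE_MTU=9000

nmcli con add ifname "${BRIDGE}" type bridge con-name "${BRIDGE}"
nmcli con modify "${BRIDGE}" bridge.stp "${BRIDGE_STP}"
nmcli con modify "${BRIDGE}" 802-3-ethernet.mtu "${BRIDGE_MTU}"
nmcli con up "${BRIDGE}"

nmcli con add type vlan con-name "${BOND}.${VLAN}" ifname "${BOND}.${VLAN}" dev "${BOND}" id "${VLAN}"
nmcli con modify "${BOND}.${VLAN}" master "${BRIDGE}" slave-type bridge
nmcli con modify "${BOND}.${VLAN}" 802-3-ethernet.mtu "${BRIDGE_MTU}"
nmcli con up "${BOND}.${VLAN}"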

,

Sam VargheseManaging a relationship is hard work

For many years, Australia has been trading with China, apparently in the belief that one can do business with a country for yonks without expecting the development of some sense of obligation. The attitude has been that China needs Australian resources and the relationship needs to go no further than the transfer of sand dug out of Australia and sent to China.

Those in Beijing, obviously, haven’t seen the exchange this way. There has been an expectation that there would be some obligation for the relationship to go further than just the impersonal exchange of goods for money. Australia, in true colonial fashion, has expected China to know its place and keep its distance.

This is similar to the attitude the Americans took when they pushed for China’s admission to the World Trade Organisation: all they wanted was a means of getting rid of their manufacturing so their industries could grow richer and an understanding that China would agree to go along with the American diktat to change as needed to keep the US on top of the trading world.

But then you cannot invite a man into your house for a dinner party and insist that he eat only bread. Once inside, he is free to choose what he wants to consume. It appears that the Americans do not understand this simple rule.

Both Australia and the US have forgotten they are dealing with the oldest civilisation in the world. A culture that plays the long waiting game. The Americans read the situation completely wrong for the last 70 years, assuming initially that the Kuomintang would come out on top and that the Communists would be vanquished. In the interim, the Americans obtained most of the money used for the early development of their country by selling opium to the Chinese.

China has not forgotten that humiliation.

There was never a thought given to the very likely event that China would one day want to assert itself and ask to be treated as an equal. Which is what is happening now. Both Australia and the US are feigning surprise and acting as though they are completely innocent in this exercise.

Fast forward to 2020 when the Americans and the Australians are both on the warpath, asserting that China is acting aggressively and trying to intimidate Australia while refusing to bow to American demands that it behave as it is expected to. There are complaints about Chinese demands for technology transfers, completely ignoring the fact that a developing country can ask for such transfers under WTO rules.

There are allegations of IP theft by the Americans, completely forgetting that they stole IP from Britain in the early days of the colonies; the name Samuel Slater should ring a bell in this context. Many educated Americans have themselves written about Slater.

Racism is one trait that defines the Australian approach to China. The Asian nation has been expected to confine itself to trade and never ask for more. And Australia, in condescending fashion, has lauded its approach, never understanding that it is seen as an American lapdog and no more. China has been waiting for the day when it can level scores.

It is difficult to comprehend why Australia genuflects before the US. There has been an attitude of veneration going back to the time of Harold Holt who is well known for his “All the way with LBJ” line, referring to the fact that Australian soldiers would be sent to Vietnam to serve as cannon fodder for the Americans and would, in short, do anything as long as the US decided so. Exactly what fight Australia had with Vietnam is not clear.

At that stage, there was no seminal action by the US that had put the fear of God into Australia; this came later, in 1975, when the CIA manipulated Australian politics and influenced the sacking of prime minister Gough Whitlam by the governor-general, Sir John Kerr. There is still resistance from Australian officialdom and its toadies to this version of events, but the evidence is incontrovertible; Australian journalist Guy Rundle has written two wonderful accounts of how the toppling took place.

Whitlam’s sins? Well, he had cracked down on the Australian Security Intelligence Organisation, an agency that spied on Australians and conveyed information to the CIA, when he discovered that it was keeping tabs on politicians. His attorney-general, Lionel Murphy, even ordered the Australian Federal Police to raid the ASIO, a major affront to the Americans who did not like their client being treated this way.

Whitlam also hinted that he would not renew a treaty for the Americans to continue using a base at Pine Gap as a surveillance centre. This centre was offered to the US, with the rent being one peppercorn for 99 years.

Of course, this was pure insolence coming from a country which the Americans — as they have with many other nations — treated as a vassal state and one only existing to do their bidding. So Whitlam was thrown out.

On China, too, Australia has served the role of American lapdog. In recent days, the Australian Prime Minister Scott Morrison has made statements attacking China soon after he has been in touch with the American leadership. In other words, the Americans are using Australia to provoke China. It’s shameful to be used in this manner, but then once a bootlicker, always a bootlicker.

Australia’s subservience to the US is so great that it even co-opted an American official, former US Secretary of Homeland Security Kirstjen Nielsen, to play a role in developing a cyber security strategy. There are a large number of better qualified people in the country who could do a much better job than Nielsen, who is a politician and not a technically qualified individual. But the slave mentality has always been there and will remain.

Cryptogram Friday Squid Blogging: How Squid Survive Freezing, Oxygen-Deprived Waters

Lots of interesting genetic details.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecuritySendgrid Under Siege from Hacked Accounts

Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid’s parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.

Image: Wikipedia

Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid’s systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.

But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid’s systems to sail through their spam-filtering systems.

To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.

Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid’s servers.

Rob McEwen is CEO of Invaluement.com, an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that’s been emanating from Sendgrid accounts lately.

“As far as the nasty criminal phishes and viruses, I think there’s not even a close second in terms of how bad it’s been with Sendgrid over the past few months,” he said.

Trying to filter out bad emails coming from a major email provider that so many legitimate companies rely upon to reach their customers can be a dicey business. If you filter the emails too aggressively you end up with an unacceptable number of “false positives,” i.e., benign or even desirable emails that get flagged as spam and sent to the junk folder or blocked altogether.

But McEwen said the incidence of malicious spam coming from Sendgrid has gotten so bad that he recently launched a new anti-spam block list specifically to filter out email from Sendgrid accounts that have been known to be blasting large volumes of junk or malicious email.

“Before I implemented this in my own filtering system a week ago, I was getting three to four phone calls or stern emails a week from angry customers wondering why these malicious emails were getting through to their inboxes,” McEwen said. “And I just am not seeing anything this egregious in terms of viruses and spams from the other email service providers.”

In an interview with KrebsOnSecurity, Sendgrid parent firm Twilio acknowledged the company had recently seen an increase in compromised customer accounts being abused for spam. While Sendgrid does allow customers to use multi-factor authentication (also known as two-factor authentication or 2FA), this protection is not mandatory.

But Twilio Chief Security Officer Steve Pugh said the company is working on changes that would require customers to use some form of 2FA in addition to usernames and passwords.

“Twilio believes that requiring 2FA for customer accounts is the right thing to do, and we’re working towards that end,” Pugh said. “2FA has proven to be a powerful tool in securing communications channels. This is part of the reason we acquired Authy and created a line of account security products and services. Twilio, like other platforms, is forming a plan on how to better secure our customers’ accounts through native technologies such as Authy and additional account level controls to mitigate known attack vectors.”

Requiring customers to use some form of 2FA would go a long way toward neutralizing the underground market for compromised Sendgrid accounts, which are sold by a variety of cybercriminals who specialize in gaining access to accounts by targeting users who re-use the same passwords across multiple websites.

One such individual, who goes by the handle “Kromatix” on several forums, is currently selling access to more than 400 compromised Sendgrid user accounts. The pricing attached to each account is based on volume of email it can send in a given month. Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400.

“I have a large supply of cracked Sendgrid accounts that can be used to generate an API key which you can then plug into your mailer of choice and send massive amounts of emails with ensured delivery,” Kromatix wrote in an Aug. 23 sales thread. “Sendgrid servers maintain a very good reputation with [email service providers] so your content becomes much more likely to get into the inbox so long as your setup is correct.”

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid’s 2FA plans are long overdue, noting that the company bought Authy back in 2015.

“Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we’re seeing,” Schwartzman said.

“I understand that it’s a task to invoke 2FA, and given the volume of customers Sendgrid has that’s something to consider because there’s going to be a lot of customer overhead involved,” he continued. “But it’s not like your bank, social media account, email and plenty of other places online don’t already insist on it.”

Schwartzman said if Twilio doesn’t act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

“There is a tipping point after which receiving firms start to lose patience and start to more aggressively filter this stuff,” he said. “If seeing a Sendgrid email according to machine learning becomes a sign of abuse, trust me the machines will make the decisions even if the people don’t.”

Worse Than FailureError'd: Don't Leave This Page

"My Kindle showed me this for the entire time I read this book. Luckily, page 31 is really exciting!" writes Hans H.

 

Tim wrote, "Thanks JustPark, I'd love to verify my account! Now...how about that button?"

 

"I almost managed to uninstall Viber, or did I?" writes Simon T.

 

Marco wrote, "All I wanted to do was to post a one-time payment on a reputable cloud provider. Now I'm just confused."

 

Brinio H. wrote, "Somehow I expected my muscles to feel more sore after walking over 382 light-years on one day."

 

"Here we have PowerBI failing to dispel the perception that 'Business Intelligence' is an oxymoron," writes Craig.

 


,

Krebs on SecurityConfessions of an ID Theft Kingpin, Part II

Yesterday’s piece told the tale of Hieu Minh Ngo, a hacker the U.S. Secret Service described as someone who caused more material financial harm to more Americans than any other convicted cybercriminal. Ngo was recently deported back to his home country after serving more than seven years in prison for running multiple identity theft services. He now says he wants to use his experience to convince other cybercriminals to use their skills for good. Here’s a look at what happened after he got busted.

Hieu Minh Ngo, 29, in a recent photo.

Part I of this series ended with Ngo in handcuffs after disembarking a flight from his native Vietnam to Guam, where he believed he was going to meet another cybercriminal who’d promised to hook him up with the mother of all consumer data caches.

Ngo had been making more than $125,000 a month reselling ill-gotten access to some of the biggest data brokers on the planet. But the Secret Service discovered his various accounts at these data brokers and had them shut down one by one. Ngo became obsessed with restarting his business and maintaining his previous income. By this time, his ID theft services had earned roughly USD $3 million.

As this was going on, Secret Service agents used an intermediary to trick Ngo into thinking he’d trodden on the turf of another cybercriminal. From Part I:

The Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” the Secret Service’s Matt O’Neill recalled.

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

In an interview with KrebsOnSecurity, Ngo said he spent about two months in a Guam jail awaiting transfer to the United States. A month passed before he was allowed a 10-minute phone call to his family to explain what he’d gotten himself into.

“This was a very tough time,” Ngo said. “They were so sad and they were crying a lot.”

First stop on his prosecution tour was New Jersey, where he ultimately pleaded guilty to hacking into MicroBilt, the first of several data brokers whose consumer databases would power different iterations of his identity theft service over the years.

Next came New Hampshire, where another guilty plea forced him to testify in three different trials against identity thieves who had used his services for years. Among them was Lance Ealy, a serial ID thief from Dayton, Ohio who used Ngo’s service to purchase more than 350 “fullz” — a term used to describe a package of everything one would need to steal someone’s identity, including their Social Security number, mother’s maiden name, birth date, address, phone number, email address, bank account information and passwords.

Ealy used Ngo’s service primarily to conduct tax refund fraud with the U.S. Internal Revenue Service (IRS), claiming huge refunds in the names of ID theft victims who first learned of the fraud when they went to file their taxes and found someone else had beat them to it.

Ngo’s cooperation with the government ultimately led to 20 arrests, with a dozen of those defendants lured into the open by O’Neill and other Secret Service agents posing as Ngo.

The Secret Service had difficulty pinning down the exact amount of financial damage inflicted by Ngo’s various ID theft services over the years, primarily because those services only kept records of what customers searched for — not which records they purchased.

But based on the records they did have, the government estimated that Ngo’s service enabled approximately $1.1 billion in new account fraud at banks and retailers throughout the United States, and roughly $64 million in tax refund fraud with the states and the IRS.

“We interviewed a number of Ngo’s customers, who were pretty open about why they were using his services,” O’Neill said. “Many of them told us the same thing: Buying identities was so much better for them than stolen payment card data, because card data could be used once or twice before it was no good to them anymore. But identities could be used over and over again for years.”

O’Neill said he still marvels at the fact that Ngo’s name is practically unknown when compared to the world’s most infamous credit card thieves, some of whom were responsible for stealing hundreds of millions of cards from big box retail merchants.

“I don’t know of anyone who has come close to causing more material harm than Ngo did to the average American,” O’Neill said. “But most people have probably never heard of him.”

Ngo said he wasn’t surprised that his services were responsible for so much financial damage. But he was utterly unprepared to hear about the human toll. Throughout the court proceedings, Ngo sat through story after dreadful story of how his work had ruined the financial lives of people harmed by his services.

“When I was running the service, I didn’t really care because I didn’t know my customers and I didn’t know much about what they were doing with it,” Ngo said. “But during my case, the federal court received like 13,000 letters from victims who complained they lost their houses, jobs, or could no longer afford to buy a home or maintain their financial life because of me. That made me feel really bad, and I realized I’d been a terrible person.”

Even as he bounced from one federal detention facility to the next, Ngo always seemed to encounter ID theft victims wherever he went, including prison guards, healthcare workers and counselors.

“When I was in jail at Beaumont, Texas I talked to one of the correctional officers there who shared with me a story about her friend who lost her identity and then lost everything after that,” Ngo recalled. “Her whole life fell apart. I don’t know if that lady was one of my victims, but that story made me feel sick. I know now that what I was doing was just evil.”

Ngo’s former ID theft service usearching[.]info.

The Vietnamese hacker was released from prison a few months ago, and is now finishing up a mandatory three-week COVID-19 quarantine in a government-run facility near Ho Chi Minh City. In the final months of his detention, Ngo started reading everything he could get his hands on about computer and Internet security, and even authored a lengthy guide written for the average Internet user with advice about how to avoid getting hacked or becoming the victim of identity theft.

Ngo said while he would like to one day get a job working in some cybersecurity role, he’s in no hurry to do so. He’s already had at least one job offer in Vietnam, but he turned it down. He says he’s not ready to work yet, but is looking forward to spending time with his family — and specifically with his dad, who was recently diagnosed with Stage 4 cancer.

Longer term, Ngo says, he wants to mentor young people and help guide them on the right path, and away from cybercrime. He’s been brutally honest about his crimes and the destruction he’s caused. His LinkedIn profile states up front that he’s a convicted cybercriminal.

“I hope my work can help to change the minds of somebody, and if at least one person can change and turn to do good, I’m happy,” Ngo said. “It’s time for me to do something right, to give back to the world, because I know I can do something like this.”

Still, the recidivism rate among cybercriminals tends to be extremely high, and it would be easy for him to slip back into his old ways. After all, few people know as well as he does how best to exploit access to identity data.

O’Neill said he believes Ngo probably will keep his nose clean. But he added that Ngo’s service, if it existed today, probably would be even more successful and lucrative given the sheer number of scammers involved in using stolen identity data to defraud states and the federal government out of pandemic assistance loans and unemployment insurance benefits.

“It doesn’t appear he’s looking to get back into that life of crime,” O’Neill said. “But I firmly believe the people doing fraudulent small business loans and unemployment claims cut their teeth on his website. He was definitely the new coin of the realm.”

Ngo maintains he has zero interest in doing anything that might send him back to prison.

“Prison is a difficult place, but it gave me time to think about my life and my choices,” he said. “I am committing myself to do good and be better every day. I now know that money is just a part of life. It’s not everything and it can’t bring you true happiness. I hope those cybercriminals out there can learn from my experience. I hope they stop what they are doing and instead use their skills to help make the world better.”

Worse Than FailureCodeSOD: Win By Being Last

I’m going to open with just one line, just one line from Megan D, before we dig into the story:

public static boolean comparePasswords(char[] password1, char[] password2)

A long time ago, someone wrote a Java 1.4 application. It’s all about getting data out of data files, like CSVs and Excel and XML, and getting it into a database, where it can then be turned into plots and reports. Currently, it has two customers, but boy, there’s a lot of technology invested in it, so the pointy-hairs decided that it needed to be updated so they could sell it to new customers.

The developers played a game of “Not It!” and Megan lost. It wasn’t hard to see why no one wanted to touch this code. The UI section was implemented in code generated by an Eclipse plugin that no longer exists. There was UI code which wasn’t implemented that way, but there were no code paths that actually showed it. The project didn’t have one “do everything” class of utilities- it had many of them.

The real magic was in Database.java. All the data got converted into strings before going into the database, and data got pulled back out as lists of strings- one string per row, prepended with the number of columns in that row. The string would get split up and converted back into the actual real datatypes.

Getting back to our sample line above, Megan adds:

No restrictions on any data in the database, or even input cleaning - little Bobby Tables would have a field day. There are so many issues that the fact that passwords are plaintext barely even registers as a problem.

A common convention used in the database layer is “loop and compare”. Want to check if a username exists in the database? SELECT username FROM users WHERE username = 'someuser', loop across the results, and if the username in the result set matches 'someuser', set a flag to true (set it to false otherwise). Return the flag. And if you're wondering why they need to look at each row instead of just seeing a non-zero number of matches, so am I.

Usernames are not unique, but the username/group combination should be.
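
For contrast, here is roughly what the non-WTF version of that check could look like in modern Java. This is only a sketch, not code from the submitted application: the table and column names are invented, and it simply lets the database count matches for the username/group pair instead of looping over rows.

import java.sql.*;

public static boolean userExists(Connection conn, String username, String group) throws SQLException
{
  // Let the database count matching rows instead of looping over them.
  String sql = "SELECT COUNT(*) FROM users WHERE username = ? AND usergroup = ?";
  try (PreparedStatement stmt = conn.prepareStatement(sql))
  {
    stmt.setString(1, username);
    stmt.setString(2, group);
    try (ResultSet rs = stmt.executeQuery())
    {
      rs.next();
      return rs.getInt(1) > 0; // true if the username/group combination exists
    }
  }
}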

Similarly, if you’re logging in, it uses a “loop and compare”. Find all the rows for users with that username. Then, find all the groups for that username. Loop across all the groups and check if any of them match the user trying to log in. Then loop across all the stored plaintext passwords and see if they match.

But that raises the question: how do you tell if two strings match? Just use an equality comparison? Or a .equals? Of course not.

We use “loop and compare” on sequences of rows, so we should also use “loop and compare” on sequences of characters. What could be wrong with that?

/**
   * Compares two given char arrays for equality.
   * 
   * @param password1
   *          The first password to compare.
   * @param password2
   *          The second password to compare.
   * @return True if the passwords are equal false otherwise.
   */
  public static boolean comparePasswords(char[] password1, char[] password2)
  {
    // assume false until prove otherwise
    boolean aSameFlag = false;
    if (password1 != null && password2 != null)
    {
      if (password1.length == password2.length)
      {
        for (int aIndex = 0; aIndex < password1.length; aIndex++)
        {
          aSameFlag = password1[aIndex] == password2[aIndex];
        }
      }
    }
    return aSameFlag;
  }

If the passwords are both non-null, if they’re both the same length, compare them one character at a time. For each character, set the aSameFlag to true if they match, false if they don’t.

Return the aSameFlag.

The end result of this is that only the last letter matters, so from the perspective of this code, there’s no difference between the word “ship” and a more accurate way to describe this code.
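
For comparison, a minimal fix (not from the submitted codebase) keeps the “null never matches” behaviour and then lets Arrays.equals do the length check and compare every character, not just the last one:

import java.util.Arrays;

public static boolean comparePasswords(char[] password1, char[] password2)
{
  // Nulls never match, mirroring the original behaviour.
  if (password1 == null || password2 == null)
  {
    return false;
  }
  // Arrays.equals checks lengths and compares every character, not just the last.
  return Arrays.equals(password1, password2);
}

(And, of course, the passwords shouldn’t be stored in plaintext in the first place.)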


,

Planet Linux AustraliaDavid Rowe: Open IP over VHF/UHF 2

The goal of this project is to develop a “100 kbit/s IP link” for VHF/UHF using just a Pi and RTLSDR hardware, and open source modem software. Since the first post in this series there’s been quite a bit of progress:

  1. I have created a GitHub repo with build scripts, a project plan and command lines on how to run some basic tests.
  2. I’ve built some integrated applications, e.g. rtl_fsk.c – that combines rtl_sdr, a CSDR decimator, and the Codec 2 fsk_demod in one handy application.
  3. Developed a neat little GUI system so we can see what’s going on. I’ve found real time GUIs invaluable for physical layer experimentation. That’s one thing you don’t get with a chipset.

Spectral Purity

Bill and I have built and tested (on a spec-an) Tx filters for our Pis that ensure the transmitted signal meets Australian Ham spurious requirements (which are aligned with international ITU requirements). I also checked the phase noise at 1MHz offset and measured -90dBm/Hz, similar to figures I have located online for my FT-817 at a 5W output level (e.g. the DF9IC website quotes +37dBm-130dBc=-93dBm/Hz).

While there are no regulations for Ham phase noise in Australia, my Tx does appear to be compliant with ETSI EN 300 220-1 which deals with short range ISM band devices (maximum of -36dBm in a 100kHz bandwidth at 1MHz offset). Over an ideal 10km link, a -90dBm/Hz signal would be attenuated down to -180dBm/Hz, beneath the thermal noise floor of -174dBm/Hz.

I set up an experiment to pulse the Pi Tx signal off and on at 1 second intervals. Listening on a SSB Rx at +/-50kHz and +/-1MHz, about 50m away and in a direct line of sight to the roof mounted Pi Tx antenna, I can hear no evidence of interference from phase noise.

I am satisfied my Pi based Tx won’t be interfering with anyone.

However, as Mark VK5QI suggested, I would not recommend amplifying the Tx level to the several watt level. If greater power is required there are some other Tx options. For example, the FSK transmitters in chipset-based radios work quite well, and some have better phase noise specs.

Over the Air Tests

Bill and I spent a few afternoons attempting to send packets at various bit rates. We measured our path loss at 135dB over a 10km, non-line of sight suburban path. Using our FT817s, this path is a noisy copy on SSB using a few watts.

Before I start any tests I ask myself “what would we expect to see?”. Well, with 12dBm Tx power that’s +12 – 135 = -123dBm into the receiver. Re-arranging for Eb/No, and using Rb=1000 bits/s and a RTL-SDR noise figure of 7dB:

Rx    = Eb/No + 10log10(Rb) + NF - 174
Eb/No = Rx - 10log10(Rb) - NF + 174
      = -123 - 10*log10(1000) - 7 + 174
      = 14 dB

Here is a plot of Eb/No versus BER generated by the mfsk.m script:

Looking up our 2FSK Bit Error Rate (BER) for Eb/No of 14dB, we should get better than 1E-4 (it’s off the graph – but actually about 2E-6). So at 1000 bit/s, we expect practically no errors.

I was disappointed with the real world OTA results: I received packets at 1000 bit/s with 8% BER (equivalent to an Eb/No of 5.5dB) which suggests we are losing 8.5dB somewhere. Our receiver seems a bit deaf.

Here’s a screenshot of the “dashboard” with a 4FSK signal sent by Bill (we experimented with 4FSK as well as 2FSK):

You can see a bunch of other signals (perhaps local EMI) towering over our little 4FSK signal. The red crosses show the frequency estimates of the demodulator – they should lock onto each of the FSK tones (4 in this case).

One risk with low cost SDRs (in a city) is strong signal interference causing the receiver to block and fall over entirely. When I connected my RTL-SDR to my antenna, I had to back the gain off about 3dB, not too bad. However Bill needed to back his gain off 20dB. So that’s one real-world factor we need to deal with.

Still we did it – sending packets 10km across town, through 135dB worth of trees and buildings with just a 12mW Pi and a RTLSDR! That’s a start, and no one said this would be easy! Time to dig in and find some missing dBs.

Over the Bench Testing

I decided to drill down into the MDS performance and system noise figure. After half a day's wrangling with command line utilities I had a HackRF rigged up to play FSK signals. The HackRF has the benefit of a built in attenuator, so the output level can be quite low (-68dBm). This makes it much easier to reliably set levels in combination with an external switched attenuator, compared to using the relatively high (+14dBm) output of the Pi which gets into everything and requires lots of shielding. Low levels out of your Tx greatly simplify “over the bench” testing.

After a few days of RF and DSP fiddling I tracked down a problem with the sample rate. I was running the RTL-SDR at a sample rate of 240 kHz, using its internal hardware to handle the sample rate conversion. Sampling at 1.8MHz and reducing the sample rate externally improved the performance by 3dB. I'm guessing this is due to the internal fixed point precision of the RTL-SDR; it may have significant quantisation noise with weak signals.

OK, so now I was getting system noise figures between 7.5 and 9dB when tested “over the bench”. Close, but a few dB above what I would expect. I eventually traced that to a very subtle measurement bug. In the photo you can see a splitter at the output of the switched attenuator. One side feeds to the RTL-SDR, the other to my spec-an. To calibrate the system I play a single freq carrier from the HackRF, as this makes measurement easier on the spec-an using power averaging.

Turns out, the RTL-SDR input port is only terminated when RTL-SDR software is running, i.e. the dongle is active. I often had the software off when measuring levels, so the levels were high by about 1dB, as one port of the splitter was un-terminated!

The following table maps the system noise figure for various gain settings:

g    Rx (dBm)   BER     Eb/No Est   NF
 0   -128.0     0.01    9.0          7.0
49   -128.0     0.015   8.5          7.5
46   -128       0.06    6           10.0
40   -126.7     0.018   8           12
35   -123       0.068   6           15
30   -119       0.048   7           18

When repeating the low level measurements with -g 0 I obtained 8, 6.5, 7.0, 7.7, so there is some spread. The automatic gain (-g 0) seems about 0.5dB ahead of maximum manual gain (-g 49).

These results are consistent with those reported in this fine report, which measured the NF of the SDRs directly. I have also previously measured RTLSDR noise figures at around 7dB, although on an earlier model.

This helps us understand the effect of receiver gain. Lowering it is bad, especially lowering it a lot. However we may need to run at a lower gain setting, especially if the receiver is being overloaded by strong signals. At least this lets us engineer the link, and understand the effect of gain on system performance.

For fun I hooked up a LNA and using 4FSK I managed 2% BER at -136dBm, which works out to a 2.5dB system noise figure. This is a little higher than I would expect, however I could see some evidence of EMI on the dashboard. Such low levels are difficult on the bench without a Faraday cage of some sort.

Engineering the Physical Layer

With this project the physical layer (modem and radio) is wide open for us to explore. With chipset based approaches you get a link or you don’t, and perhaps a little diagnostic information like a received signal strength. Then again they “just work” most of the time so that’s probably OK! I like looking inside the black boxes and pushing up against the laws of physics, rather than the laws of business.

It gets interesting when you can measure the path loss. You have a variety of options to improve your bit error rate or increase your bit rate:

  1. Add transmit power
  2. Reposition your antenna above the terrain, or out of multipath nulls
  3. Lower loss coax run to your antenna, or mount your Pi and antenna together on top of the mast
  4. Use an antenna with a higher gain like Bill’s Yagi
  5. Add a low noise amplifier to reduce your noise figure
  6. Adjust your symbol/bit rate to spread that energy over fewer (or more) bits/s
  7. Use a more efficient modulation scheme, e.g. 4FSK performs 3dB better than 2FSK at the same bit rate
  8. Move to the country where the ambient RF and EMI is hopefully lower

Related Projects

There are some other cool projects in the “200kHz channel/several 100 kbit/s/IP” UHF Ham data space:

  1. New packet radio NPR70 project – very cool GFSK/TDMA system using a chipset modem approach. I’ve been having a nice discussion with the author Guillaume F4HDK around modem sensitivity and FEC. This project is very well documented.
  2. HNAP – a sophisticated OFDM/QAM/TDMA system that is being developed on Pluto SDR hardware as part of a master's thesis.

I’m a noob when it comes to protocols, so have a lot to learn from these projects. However I am pretty focussed on modem/FEC performance which is often sub-optimal (or delegated to a chipset) in the Ham Radio space. There are many dB to be gained from good modem design, which I prefer over adding a PA.

Next Steps

OK, so we have worked through a few bugs and can now get results consistent with the estimated NF of the receiver. It doesn’t explain the entire 8.5dB loss we experienced over the air, but it’s a step in the right direction. The bugs tend to reveal themselves one at a time…

One possible reason for reduced sensitivity is EMI or ambient RF noise. There are some signs of this in the dashboard plot above. This is more subtle than strong signal overload, but could be increasing the effective noise figure on our link. All our calculations above assume no additional noise being fed into the antenna.

I feel it’s time for another go at Over The Air (OTA) tests. My goal is to get a solid 1000 bit/s link in both directions over our path, and understand any impact on performance such as strong signals or a raised noise floor. We can then proceed to the next steps in the project plan.

Reading Further

GitHub repo for this project with build scripts, a project plan and command lines on how to run some basic tests.
Open IP over VHF/UHF – first post in this series
Bill and I are documenting our OTA tests in this Pull Request
4FSK on 25 Microwatts low bit rate packets with sophisticated FEC at very low power levels. We’ll use the same FEC on this project.
Measuring SDR Noise Figure
Measuring SDR Noise Figure in Real Time
Evaluation of SDR Boards V1.0 – A fantastic report on the performance of several SDRs
NPR70 – New Packet Radio Web Site
HNAP Web site

Krebs on SecurityConfessions of an ID Theft Kingpin, Part I

At the height of his cybercriminal career, the hacker known as “Hieupc” was earning $125,000 a month running a bustling identity theft service that siphoned consumer dossiers from some of the world’s top data brokers. That is, until his greed and ambition played straight into an elaborate snare set by the U.S. Secret Service. Now, after more than seven years in prison Hieupc is back in his home country and hoping to convince other would-be cybercrooks to use their computer skills for good.

Hieu Minh Ngo, in his teens.

For several years beginning around 2010, a lone teenager in Vietnam named Hieu Minh Ngo ran one of the Internet’s most profitable and popular services for selling “fullz,” stolen identity records that included a consumer’s name, date of birth, Social Security number and email and physical address.

Ngo got his treasure trove of consumer data by hacking and social engineering his way into a string of major data brokers. By the time the Secret Service caught up with him in 2013, he’d made over $3 million selling fullz data to identity thieves and organized crime rings operating throughout the United States.

Matt O’Neill is the Secret Service agent who in February 2013 successfully executed a scheme to lure Ngo out of Vietnam and into Guam, where the young hacker was arrested and sent to the mainland U.S. to face prosecution. O’Neill now heads the agency’s Global Investigative Operations Center, which supports investigations into transnational organized criminal groups.

O’Neill said he opened the investigation into Ngo’s identity theft business after reading about it in a 2011 KrebsOnSecurity story, “How Much is Your Identity Worth?” According to O’Neill, what’s remarkable about Ngo is that to this day his name is virtually unknown among the pantheon of infamous convicted cybercriminals, the majority of whom were busted for trafficking in huge quantities of stolen credit cards.

Ngo’s businesses enabled an entire generation of cybercriminals to commit an estimated $1 billion worth of new account fraud, and to sully the credit histories of countless Americans in the process.

“I don’t know of any other cybercriminal who has caused more material financial harm to more Americans than Ngo,” O’Neill told KrebsOnSecurity. “He was selling the personal information on more than 200 million Americans and allowing anyone to buy it for pennies apiece.”

Freshly released from the U.S. prison system and deported back to Vietnam, Ngo is currently finishing up a mandatory three-week COVID-19 quarantine at a government-run facility. He contacted KrebsOnSecurity from inside this facility with the stated aim of telling his little-known story, and to warn others away from following in his footsteps.

BEGINNINGS

Ten years ago, then 19-year-old hacker Ngo was a regular on the Vietnamese-language computer hacking forums. Ngo says he came from a middle-class family that owned an electronics store, and that his parents bought him a computer when he was around 12 years old. From then on out, he was hooked.

In his late teens, he traveled to New Zealand to study English at a university there. By that time, he was already an administrator of several dark web hacker forums, and between his studies he discovered a vulnerability in the school’s network that exposed payment card data.

“I did contact the IT technician there to fix it, but nobody cared so I hacked the whole system,” Ngo recalled. “Then I used the same vulnerability to hack other websites. I was stealing lots of credit cards.”

Ngo said he decided to use the card data to buy concert and event tickets from Ticketmaster, and then sell the tickets at a New Zealand auction site called TradeMe. The university later learned of the intrusion and Ngo’s role in it, and the Auckland police got involved. Ngo’s travel visa was not renewed after his first semester ended, and in retribution he attacked the university’s site, shutting it down for at least two days.

Ngo said he started taking classes again back in Vietnam, but soon found he was spending most of his time on cybercrime forums.

“I went from hacking for fun to hacking for profits when I saw how easy it was to make money stealing customer databases,” Ngo said. “I was hanging out with some of my friends from the underground forums and we talked about planning a new criminal activity.”

“My friends said doing credit cards and bank information is very dangerous, so I started thinking about selling identities,” Ngo continued. “At first I thought well, it’s just information, maybe it’s not that bad because it’s not related to bank accounts directly. But I was wrong, and the money I started making very fast just blinded me to a lot of things.”

MICROBILT

His first big target was a consumer credit reporting company in New Jersey called MicroBilt.

“I was hacking into their platform and stealing their customer database so I could use their customer logins to access their [consumer] databases,” Ngo said. “I was in their systems for almost a year without them knowing.”

Very soon after gaining access to MicroBilt, Ngo says, he stood up Superget[.]info, a website that advertised the sale of individual consumer records. Ngo said initially his service was quite manual, requiring customers to request specific states or consumers they wanted information on, and he would conduct the lookups by hand.

Ngo’s former identity theft service, superget[.]info

“I was trying to get more records at once, but the speed of our Internet in Vietnam then was very slow,” Ngo recalled. “I couldn’t download it because the database was so huge. So I just manually search for whoever need identities.”

But Ngo would soon work out how to use more powerful servers in the United States to automate the collection of larger amounts of consumer data from MicroBilt’s systems, and from other data brokers. As I wrote of Ngo’s service back in November 2011:

“Superget lets users search for specific individuals by name, city, and state. Each “credit” costs USD$1, and a successful hit on a Social Security number or date of birth costs 3 credits each. The more credits you buy, the cheaper the searches are per credit: Six credits cost $4.99; 35 credits cost $20.99, and $100.99 buys you 230 credits. Customers with special needs can avail themselves of the “reseller plan,” which promises 1,500 credits for $500.99, and 3,500 credits for $1000.99.

“Our Databases are updated EVERY DAY,” the site’s owner enthuses. “About 99% nearly 100% US people could be found, more than any sites on the internet now.”

Ngo’s intrusion into MicroBilt eventually was detected, and the company kicked him out of their systems. But he says he got back in using another vulnerability.

“I was hacking them and it was back and forth for months,” Ngo said. “They would discover [my accounts] and fix it, and I would discover a new vulnerability and hack them again.”

COURT (AD)VENTURES, AND EXPERIAN

This game of cat and mouse continued until Ngo found a much more reliable and stable source of consumer data: A U.S. based company called Court Ventures, which aggregated public records from court documents. Ngo wasn’t interested in the data collected by Court Ventures, but rather in its data sharing agreement with a third-party data broker called U.S. Info Search, which had access to far more sensitive consumer records.

Using forged documents and more than a few lies, Ngo was able to convince Court Ventures that he was a private investigator based in the United States.

“At first [when] I sign up they asked for some documents to verify,” Ngo said. “So I just used some skill about social engineering and went through the security check.”

Then, in March 2012, something even more remarkable happened: Court Ventures was purchased by Experian, one of the big three major consumer credit bureaus in the United States. And for nine months after the acquisition, Ngo was able to maintain his access.

“After that, the database was under control by Experian,” he said. “I was paying Experian good money, thousands of dollars a month.”

Whether anyone at Experian ever performed due diligence on the accounts grandfathered in from Court Ventures is unclear. But it wouldn’t have taken a rocket surgeon to figure out that this particular customer was up to something fishy.

For one thing, Ngo paid the monthly invoices for his customers’ data requests using wire transfers from a multitude of banks around the world, but mostly from new accounts at financial institutions in China, Malaysia and Singapore.

O’Neill said Ngo’s identity theft website generated tens of thousands of queries each month. For example, the first invoice Court Ventures sent Ngo in December 2010 was for 60,000 queries. By the time Experian acquired the company, Ngo’s service had attracted more than 1,400 regular customers, and was averaging 160,000 monthly queries.

More importantly, Ngo’s profit margins were enormous.

“His service was quite the racket,” he said. “Court Ventures charged him 14 cents per lookup, but he charged his customers about $1 for each query.”

By this time, O’Neill and his fellow Secret Service agents had served dozens of subpoenas tied to Ngo’s identity theft service, including one that granted them access to the email account he used to communicate with customers and administer his site. The agents discovered several emails from Ngo instructing an accomplice to pay Experian using wire transfers from different Asian banks.

TLO

Working with the Secret Service, Experian quickly zeroed in on Ngo’s accounts and shut them down. Aware of an opportunity here, the Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” O’Neill recalled.

The U.K. cybercriminal, acting at the behest of the Secret Service and U.K. authorities, told Ngo that if he wanted to maintain his access, he could agree to meet up in person. But Ngo didn’t immediately bite on the offer.

Instead, he weaseled his way into another huge data store. In much the same way he’d gained access to Court Ventures, Ngo got an account at a company called TLO, another data broker that sells access to extremely detailed and sensitive information on most Americans.

TLO’s service is accessible to law enforcement agencies and to a limited number of vetted professionals who can demonstrate they have a lawful reason to access such information. In 2014, TLO was acquired by TransUnion, another of the big three U.S. consumer credit reporting bureaus.

And for a short time, Ngo used his access to TLO to power a new iteration of his business — an identity theft service rebranded as usearching[.]info. This site also pulled consumer data from a payday loan company that Ngo hacked into, as documented in my Sept. 2012 story, ID Theft Service Tied to Payday Loan Sites. Ngo said the hacked payday loans site gave him instant access to roughly 1,000 new fullz records each day.

Ngo’s former ID theft service usearching[.]info.

BLINDED BY GREED

By this time, Ngo was a multi-millionaire: His various sites and reselling agreements with three Russian-language cybercriminal stores online had earned him more than USD $3 million. He told his parents his money came from helping companies develop websites, and even used some of his ill-gotten gains to pay off the family’s debts (its electronics business had gone belly up, and a family member had borrowed but never paid back a significant sum of money).

But mostly, Ngo said, he spent his money on frivolous things, although he says he’s never touched drugs or alcohol.

“I spent it on vacations and cars and a lot of other stupid stuff,” he said.

When TLO locked Ngo out of his account there, the Secret Service used it as another opportunity for their cybercriminal mouthpiece in the U.K. to turn the screws on Ngo yet again.

“He told Ngo he’d locked him out again, and that he could do this all day long,” O’Neill said. “And if he truly wanted lasting access to all of these places he used to have access to, he would agree to meet and form a more secure partnership.”

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. Ngo says he understood at the time that Guam is an unincorporated territory of the United States, but that he discounted the chances that this was all some kind of elaborate law enforcement sting operation.

“I was so desperate to have a stable database, and I got blinded by greed and started acting crazy without thinking,” Ngo said. “Lots of people told me ‘Don’t go!,’ but I told them I have to try and see what’s going on.”

But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

This is Part I of a multi-part series. Part II in this series is available at this link.

Worse Than FailureCodeSOD: Where to Insert This

If you run a business of any size, you need some sort of resource-management/planning software. Really small businesses use Excel. Medium businesses use Excel. Enterprises use Excel. But in addition to that, the large businesses also pay through the nose for a gigantic ERP system, like Oracle or SAP, that they can wire up to Excel.

Small and medium businesses can’t afford an ERP, but they might want to purchase a management package in the niche realm of “SMB software”- small and medium business software. Much like their larger cousins, these SMB tools have… a different idea of code quality.

Cassandra’s company had deployed such a product, and with it came a slew of tickets. The performance was bad. There were bugs everywhere. While the company provided support, Cassandra’s IT team was expected to also do some diagnosing.

While digging around in one nasty performance problem, Cassandra found that one button in the application would generate and execute this block of SQL code using a SQLCommand object in C#.

DECLARE @tmp TABLE (Id uniqueidentifier)

--{ Dynamic single insert statements, may be in the hundreds. }

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)
BEGIN
    INSERT INTO SomeTable (PK, SomeDate) SELECT Id, getdate() as SomeDate FROM @tmp 
END
ELSE 
BEGIN
    UPDATE st
        SET SomeDate = getdate()
        FROM @tmp t
        LEFT JOIN SomeTable AS st ON t.Id = st.PK AND SomeDate = NULL
END

At its core, the purpose of this is to take a temp-table full of rows and perform an “upsert” for all of them: insert if a record with that key doesn’t exist, update if a record with that key does. Now, this code is clearly SQL Server code, and SQL Server’s MERGE statement handles exactly that kind of upsert in a single pass.
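For reference, a minimal sketch of the same upsert done with MERGE, reusing the table and column names from the snippet above (this is not the vendor's code, and the original's apparent intent of only touching rows where SomeDate is still empty could be expressed with an extra AND st.SomeDate IS NULL on the matched branch):

-- Sketch only: single-pass upsert from the temp table into SomeTable
MERGE SomeTable AS st
USING @tmp AS t
    ON st.PK = t.Id
WHEN MATCHED THEN
    UPDATE SET st.SomeDate = getdate()
WHEN NOT MATCHED THEN
    INSERT (PK, SomeDate) VALUES (t.Id, getdate());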

But okay, maybe they’re trying to be as database agnostic as possible, and don’t want to use something that, while widely supported, has some dialect differences across databases. Fine, but there’s another problem here.

Whoever built this understood that in SQL Server land, cursors are frowned upon, so they didn’t want to iterate across every row. But here’s their problem: some of the records may exist, some of them may not, so they need to check that.

As you saw, this was their approach:

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)

This is wrong. This will be true only if none of the rows in the dynamically generated INSERT statements exist in the base table. If some of the rows exist and some don’t, you aren’t going to get the results you were expecting, because this code only goes down one branch: it either inserts or updates.

There are other things wrong with this code. For example, SomeDate = NULL is going to have different behavior based on whether the ANSI_NULLS database flag is OFF (in which case it works), or ON (in which case it doesn’t). There’s a whole lot of caveats about whether you set it at the database level, on the connection string, during your session, but in Cassandra’s example, ANSI_NULLS was ON at the time this ran, so that also didn’t work.
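For what it's worth, the way to write that predicate so it behaves the same regardless of the ANSI_NULLS setting is the standard null test, something like:

LEFT JOIN SomeTable AS st ON t.Id = st.PK AND st.SomeDate IS NULL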

There are other weird choices and performance problems with this code, but the important thing is that this code doesn’t work. This is in a shipped product, installed by over 4,000 businesses (the vendor is quite happy to cite that number in their marketing materials). And it ships with code that can’t work.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Kevin RuddAFR: How Mitch Hooke Axed the Mining Tax and Climate Action

Published by The Australian Financial Review on 25 August 2020

The Australian political arena is full of reinventions.

Tony Abbott has gone from pushing emissions cuts under the Paris climate agreement to demanding Australia withdraw from the treaty altogether. And Scott Morrison, who accused Labor of presiding over “crippling” debt, now binges on wasteful debt-fuelled spending that makes our government’s stimulus look like a rounding error.

However, neither of these metamorphoses comes close to the transformation of Mitch Hooke, the former Minerals Council chief and conservative political operative, who now pretends he is a lifelong evangelist of carbon pricing.

Writing in The Australian Financial Review, (Ken Henry got it wrong on climate wars, mining tax on August 11) Hooke said he supported emissions trading throughout the mid-2000s until my government came to power in 2007.

I then supposedly “trashed that consensus” by using the proceeds of a carbon price to compensate motorists, low-income households and trade-exposed industries.

How dreadful to help those most impacted by a carbon price! The very point of an emissions trading scheme is that it can change consumers’ behaviour without making people on low to middle incomes worse off. That’s why you increase the price of emissions-intensive goods and services (relative to less polluting alternatives) then give that money back to people through the tax or benefits system so they’re no worse off. But they are then able to choose a more climate-friendly product.

The alternative is the government just pockets the cash – thereby defeating the entire purpose of a market-based scheme. Obviously this is pure rocket science for Mitch.

Hooke also seems to have forgotten that such compensation was not only appropriate, but it was exactly what Malcolm Turnbull was demanding in exchange for Liberal support for our proposal in the Senate. Without it, any emissions trading scheme would be a non-starter.

When that deal was tested in the Liberal party room, it was defeated by a single vote. Even so, enough Liberal senators crossed the floor to give the Green political party the balance of power.

Showing their true colours, Bob Brown’s senators sided with Tony Abbott and Barnaby Joyce to kill the legislation. The Green party has, to this day, been unable to adequately explain its decision to voters. If they hadn’t, Australia would now be 10 years down the path of steady decarbonisation.

For Hooke, the reality is that he never wanted an emissions trading scheme if he could avoid one. But rather than state this outright, he just insists on impossible preconditions. As for Hooke’s most beloved Howard government, John Winston would in all probability have gone even further than Labor in compensating people affected by his own proposed emissions trading scheme, given Howard’s legendary ability to bake middle-class welfare into any national budget. Just ask Peter Costello.

Hooke has, like Abbott, been one of the most destructive voices in Australian national climate change action. He also expresses zero remorse for his deceptive campaign of misinformation, in partnership with those wonderful corporate citizens at Rio, targeting my government’s efforts to introduce a profits-based tax for minerals, mirroring the petroleum resource rent tax implemented by the Hawke government in the 1980s.

Our Resource Super Profits Tax would have funded new infrastructure to address looming capacity constraints affecting the sector as well as an across-the-board company tax cut to 28 per cent. Most importantly it sought to fairly spread the proceeds of mining profits when they vastly exceeded the industry norms – such as during commodity price booms – with the broader Australian public. Lest we forget, they actually own those resources. Rio just rents them.

In response, Hooke and his mates at Rio and BHP accumulated a $90 million war chest and $22.2 million of shareholders’ funds were poured into a political advertising campaign over six weeks.

Another $1.9 million was tipped into Liberal and National party coffers to keep conservative politicians on side. All to keep Rio and BHP happy, while ignoring the deep structural interests of the rest of our mining sector, many of whom supported our proposal.

At their height, Hooke’s television ads were screening around 33 times per day on free-to-air channels. Claims the tax would be a “hand grenade” to retirement savings were blasted by the Australian Institute of Superannuation Trustees which referred the “irresponsible” and “scaremongering” campaign to regulators.

This was not an exercise in public debate to refine aspects of the tax’s design; it was a systematic effort to use the wealth of two multinational mining companies to bludgeon the government into submission.

And when Gillard and Swan capitulated as the first act of their new government, they essentially turned over the drafting pen to Hooke to write a new rent tax that collected almost zero revenue.

The industry, however, was far from unified. Fortescue Metals Group chairman Andrew “Twiggy” Forrest understood what we were trying to achieve, having circumvented Hooke’s spin machine to deal directly with my resources minister Martin Ferguson.

We ultimately agreed that Forrest would stand alongside me and pledge to support the tax. The next day, Gillard and Swan struck. And Hooke has been a happy man ever since, even though Australia is the poorer for it.

It doesn’t matter where you sit on the political spectrum, everyone involved in public debate should hope that they’ve helped to improve the lives of ordinary people.

That is not Hooke’s legacy. Nor his interest. However much he may now seek to rationalise his conduct, Hooke’s stock-in-trade was brutal, destructive politics in direct service of BHP, Rio and the carbon lobby.

He was paid handsomely to thwart climate change action and ensure wealthy multinationals didn’t pay a dollar more in tax than was absolutely necessary. He succeeded. But I’m not sure his grandchildren will be all that proud of his destructive record.

Congratulations, Mitch.

The post AFR: How Mitch Hooke Axed the Mining Tax and Climate Action appeared first on Kevin Rudd.

LongNowThe Alchemical Brothers: Brian Eno & Roger Eno Interviewed

Long Now co-founder Brian Eno on time, music, and contextuality in a recent interview, rhyming on Gregory Bateson’s definition of information as “a difference that makes a difference”:

If a Martian came to Earth and you played her a late Beethoven String Quartet and then another written by a first-year music student, it is unlikely that she would a) understand what the point of listening to them was at all, and b) be able to distinguish between them.

What this makes clear is that most of the listening experience is constructed in our heads. The ‘beauty’ we hear in a piece of music isn’t something intrinsic and immutable – like, say, the atomic weight of a metal is intrinsic – but is a product of our perception interacting with that group of sounds in a particular historical context. You hear the music in relation to all the other experiences you’ve had of listening to music, not in a vacuum. This piece you are listening to right now is the latest sentence in a lifelong conversation you’ve been having. What you are hearing is the way it differs from, or conforms to, the rest of that experience. The magic is in our alertness to novelty, our attraction to familiarity, and the alchemy between the two.

The idea that music is somehow eternal, outside of our interaction with it, is easily disproven. When I lived for a few months in Bangkok I went to the Chinese Opera, just because it was such a mystery to me. I had no idea what the other people in the audience were getting excited by. Sometimes they’d all leap up from their chairs and cheer and clap at a point that, to me, was effectively identical to every other point in the performance. I didn’t understand the language, and didn’t know what the conversation had been up to that point. There could be no magic other than the cheap thrill of exoticism.

So those poor deluded missionaries who dragged gramophones into darkest Africa because they thought the experience of listening to Bach would somehow ‘civilise the natives’ were wrong in just about every way possible: in thinking that ‘the natives’ were uncivilised, in not recognising that they had their own music, and in assuming that our Western music was culturally detachable and transplantable – that it somehow carried within it the seeds of civilisation. This cultural arrogance has been attached to classical music ever since it lost its primacy as the popular centre of the Western musical universe, as though the soundtrack of the Austro-Hungarian Empire in the 19th Century was somehow automatically universal and superior.

Worse Than FailureCodeSOD: Wait a Minute

Hanna's co-worker implemented a new service, got it deployed, and then left for vacation someplace where there's no phones or Internet. So, of course, Hanna gets a call from one of the operations folks: "That new service your team deployed keeps crashing on startup, but there's nothing in the log."

Hanna took it on herself to check into the VB.Net code.

Public Class Service
    Private mContinue As Boolean = True
    Private mServiceException As System.Exception = Nothing
    Private mAppSettings As AppSettings

    '// ... snip ... //

    Private Sub DoWork()
        Try
            Dim aboutNowOld As String = ""
            Dim starttime As String = DateTime.Now.AddSeconds(5).ToString("HH:mm")
            While mContinue
                Threading.Thread.Sleep(1000)
                Dim aboutnow As String = DateTime.Now.ToString("HH:mm")
                If starttime = aboutnow And aboutnow <> aboutNowOld Then
                    '// ... snip ... //
                    starttime = DateTime.Now.AddMinutes(mAppSettings.pollingInterval).ToString("HH:mm")
                End If
                aboutNowOld = aboutnow
            End While
        Catch ex As Exception
            mServiceException = ex
        End Try
        If mServiceException IsNot Nothing Then
            EventLog.WriteEntry(mServiceException.ToString, Diagnostics.EventLogEntryType.Error)
            Throw mServiceException
        End If
    End Sub
End Class

Presumably whatever causes the crash is behind one of those "snip"s, but Hanna didn't include that information. Instead, let's focus on our unique way to idle.

First, we pick our starttime to be the minute 5 seconds into the future. Then we enter our work loop. Sleep for one second, and then check which minute we're on. If that minute is our starttime and this loop hasn't run during this minute before, we can get into our actual work (snipped), and then calculate the next starttime, based on our app settings.

If there are any exceptions, we break the loop, log and re-throw it- but don't do that from the exception handler. No, we store the exception in a member variable and then if it IsNot Nothing we log it out.

Hanna writes: "After seeing this I gave up immediately before I caused a time paradox. Guess we'll have to wait till she's back from the future to fix it."

It's not quite a paradox, but it's certainly far more complex than it ever needs to be. First, we have the stringly-typed date handling. That's just awful. Then, we have the once-per-second polling, but we expect pollingInterval to be in minutes. But AddMinutes takes doubles, so it could be seconds, expressed as fractional minutes. But wait, if we know how long we want to wait between executions, couldn't we just Sleep that long? Why poll every second? Does this job absolutely have to run in the first second of every minute? Even if it does, we could easily calculate that sleep time with reasonable accuracy if we actually looked at the seconds portion of the current time.
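A minimal sketch of what that simpler idle loop could look like, reusing the names from the snippet above and assuming pollingInterval really is expressed in minutes (this is not the original code):

While mContinue
    '// do the actual work here ... //
    'Sleep for the whole polling interval instead of waking every second to compare strings
    Threading.Thread.Sleep(TimeSpan.FromMinutes(mAppSettings.pollingInterval))
End While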

The developer who wrote this saw the problem of "execute this code once every polling interval" and just called it a day.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet Linux AustraliaLev Lafayette: Process Locally, Backup Remotely

Recently, a friend expressed a degree of shock that I could pull old, even very old, items of conversation from emails, Facebook messenger, etc., with apparent ease. "But I wrote that 17 years ago". They were even dismayed when I revealed that this was all just stored as plain-text files, suggesting that perhaps I was like a spy, engaging in some sort of data collection on them by way of mutual conversations.

For my own part, I was equally shocked by their reaction. Another night of fitful sleep, where feelings of self-doubt percolate. Is this yet another example that I have some sort of alien psyche? But of course, this is not the case, as keeping old emails and the like as local text files is completely normal in computer science. All my work and professional colleagues do this.

What is the cause of this disparity between the computer scientist and the ubiquitous computer user? Once I realised that the disparity of expected behaviour was not personal, but professional, there was clarity. Essentially, the convenience of cloud technologies and their promotion of applications through Software as a Service (SaaS) has led to some very poor computational habits among general users that have significant real-world inefficiencies.

Webmail

The earliest example of a SaaS application that is convenient and inefficient is webmail. Early providers such as Hotmail and Yahoo! offered the advantage of being able to access one's email from anywhere, on any device with a web browser, and that convenience far outweighed the more efficient method of processing email with a POP (Post Office Protocol) client, because POP would delete the email from the mail server as part of the transfer.

Most people opted for various webmail implementations for convenience. POP did have some advantages, such as storage being limited only by the local computer's capacity, the speed at which one could process emails being independent of the local Internet connection, and the security of having emails transferred locally rather than kept on a remote server. But these weren't enough to outweigh the convenience, and en masse people adopted cloud solutions, which have since been integrated very easily into the corporate world.

During all this time a different email protocol, IMAP (Internet Message Access Protocol), became more common. IMAP provided numerous advantages over POP. Rather than deleting emails from the server, it copied them to the client and kept them on the server (although some still saw this as a security issue). IMAP clients stay connected to the server whilst active, rather than the short retrieve connection used by POP. IMAP allowed for multiple simultaneous connections, message state information, and multiple mail-boxes. Effectively, IMAP provided the local processing performance, but also the device and location convenience of webmail.

The processing performance that one suffers from webmail is effectively two-fold. One's ability to use and process webmail is dependent on the speed of the Internet connection to the mail server and the spare capacity of that mail server to perform tasks. For some larger providers (e.g., Gmail) it will only be the former that is really an issue. But even then, the more complex the regular expression in the search the harder it is, as basic RegEx tools (grep, sed, awk, perl) typically aren't available. With local storage of emails, kept in plain text files, the complexity of the search criteria depends only on the competence of the user and the performance of the local system. That is why pulling up an archaic email with specific regular-expression search terms is a relatively trivial task for computer professionals who keep their old emails stored locally, but less so for computer users who store them in a webmail application.
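For example, a local search over an email archive can be as simple as the following sketch (the path and pattern are hypothetical; it assumes messages are stored as plain-text files under ~/Mail):

# list every file under ~/Mail whose contents match the pattern, case-insensitively
grep -rli "conference invoice" ~/Mail/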

Messenger Systems

Where would we be without our various messenger systems? We love the instant-message convenience; it's like SMS on steroids. Whether Facebook Messenger, WhatsApp, Instagram, Zoom, WeChat, Discord, or any range of similar products, one is confronted with the very same issue as with webmail, because each operates as a SaaS application. Your ability to process data will depend on the speed of your Internet connection and the spare computational capacity on the receiving server. Yeah, good luck with that on your phone. Have you ever tried to scroll through a few thousand Facebook Messenger posts? It's absolutely hopeless.

But when something like an extensive Facebook message chat is downloaded as a plain HTML file, searching and scrolling become trivial and blindingly fast in comparison. Google Chrome, for example, has a pretty decent Facebook Message/Chat Downloader. WhatsApp backs up chats automatically to one's mobile device and can be set up to periodically copy them to Google Drive, from which they can be moved to a local device (regular expressions on 'phones are not the greatest, yet), and Slack (with some restrictions) offers a download feature. For Discord, there is an open-source chat exporter. Again, the advantages of local copies become clear: speed of processing, and the capability of conducting complex searches.

Supercomputers and the Big Reveal

What could this discussion of web-apps and local processing possibly have to do with the operations of big iron supercomputers? Quite a lot actually, as the example illustrates the issue at scale. Essentially the issue involved a research group that had a very large dataset that was stored interstate. The normal workflow in such a situation is to transfer the dataset as needed to the supercomputer and conduct the operations there. Finding this inconvenient, they made a request to have mount points on each of the compute nodes so the data could stay interstate, the computation could occur on the supercomputer, and they could ignore the annoying requirement of actually transferring the data. Except physics gets in the way, and physics always wins.

The problem was that the computational task primarily involved the use of the Burrows-Wheeler Aligner (BWA) which aligns relatively short nucleotide sequences against a long reference sequence such as the human genome. In carrying out this activity this program makes very extensive use of disk read-writes. Now, you can imagine the issue of the data being sent interstate, the computation being run on the supercomputer, and then the data being sent back to the remote disk, thousands of times per second. The distance between where the data is and where the processing is carried out is meaningful. It is much more efficient to conduct the computational processing close to where the data is. As Grace Hopper, peace-be-upon-her said: "Mind your nanoseconds!"

I like to hand out c. 30cm pieces of string at the start of introductory HPC training workshops, telling researchers that they should hang on to the piece of string to remind themselves to think of where their data is, and where their compute is. And that is the basic rule: keep the data that you want processed, whether the task is as trivial as searching email or as complex as aligning relatively short nucleotide sequences, close to the processor itself, and the closer the better.

What then is the possible advantage of cloud-enabled services? Despite their ubiquity, when it comes to certain performance-related tasks there isn't much, although centralised services do provide many advantages in themselves. Perhaps the best processing use is treating the cloud as "somebody else's computer", that is, a resource where one can offload data so that certain computational tasks can be conducted on the remote system. Rather like how an increasing number of researchers find themselves going to HPC facilities when they discover that their datasets and computational problems are too large or too complex for their personal systems.

There is also one extremely useful advantage of cloud services from the user perspective, and that is, of course, off-site backup. Whilst various local NAS systems are very good for serving datasets within, say, a household, especially for the provision of media, they are perhaps not the wisest choice for the ultimate level of backup. Eventually, disks will fail, and with disk failure there is quite a cost in data recovery. Large cloud storage providers, such as Google, offer massive levels of redundancy which ensure that data loss is extremely improbable, and with tools such as Rclone, synchronisation and backups of local data to remote services can be easily automated (a minimal sketch follows below). In a nutshell: process data close to the CPU, backup remotely.
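To illustrate, a rough sketch of such an automated backup, assuming a remote named gdrive has already been configured with rclone config and that the data to protect lives in ~/data:

# mirror the local directory to the cloud remote; run from cron for regular backups
rclone sync ~/data gdrive:backup/data --progress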

,

Rondam RamblingsThey knew. They still know.

Never forget what conservatives were saying about Donald Trump before he cowed them into submission. (Sorry about the tiny size of the embedded video. That's the default that Blogger gave me and I can't figure out how to adjust the size. If it bothers you, click on the link above to see the original.)

Rondam RamblingsRepublicans officially endorse a Trump dictatorship

The Republican party has formally decided not to adopt a platform this year, instead passing a resolution that says essentially, "we will support whatever the Dear Leader says".  Since the resolution calls out the media for its biased reporting, I will quote the resolution here in its entirety, with the salient portions highlighted: WHEREAS, The Republican National Committee (RNC) has

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 14)

Here’s part fourteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Planet Linux AustraliaSimon Lyall: Sidewalk Delivery Robots: An Introduction to Technology and Vendors

At the start of 2011 Uber was in one city (San Francisco). Just 3 years later it was in hundreds of cities worldwide including Auckland and Wellington. Dockless Electric Scooters took only a year from their first launch to reach New Zealand. In both cases the quick rollout in cities left the public, competitors and regulators scrambling to adapt.

Delivery Robots could be the next major wave to rollout worldwide and disrupt existing industries. Like driverless cars these are being worked on by several companies but unlike driverless cars they are delivering real packages for real users in several cities already.

Note: I plan to cover other aspects of Sidewalk Delivery Robots, including their impact on society, in a followup article.

What are Delivery Robots?

Delivery Robots are driverless vehicles/drones that cover the last mile. They are loaded with a cargo and then will go to a final destination where they are unloaded by the customer.

Indoor Robots are designed to operate within a building. An example of these is The Keenon Peanut. These deliver items to guests in hotels or restaurants. They allow delivery companies to leave food and other items with the robot at the entrance/lobby of a building rather than going all the way to a customer’s room or apartment.

Keenon Peanut

Flying Delivery Drones are being tested by several companies. Wing, which is owned by Google’s parent company Alphabet, is testing in Canberra, Australia. Amazon also had a product called Amazon Prime Air which appears to have been shelved.

Wing Flying Robot

The next size up are sidewalk delivery robots, which I’ll be concentrating on in this article. Best known of these is Starship Technologies, but there are also Kiwi and Amazon Scout. These are designed to drive at slow speeds on the footpaths rather than mix with cars and other vehicles on the road. They cross roads at standard crossings.

KiwiBot
Starship Delivery Robot
Amazon Scout

Finally some companies are rolling out Car sized Delivery Robots designed to drive on roads and mix with normal vehicles. The REV-1 from Refraction AI is at the smaller end, with company videos showing it using both car and bike lanes. Larger is the small-car sized Nuro.

REV-1
Nuro

Sidewalk Delivery Robots

I’ll concentrate most on Sidewalk Delivery Robots in this article because I believe they are the most mature and likeliest to have an effect on society in the short term (next 2-10 years).

  • In-building bots are a fairly niche product that most people won’t interact with regularly.
  • Flying Drones are close to working but it seems it will be some time before they can function safely and autonomously in a built-up environment. Cargo capacity is currently limited in most models and larger units will bring new problems.
  • Car (or motorbike) sized bots have the same problems as driverless cars. They have to drive fast and be fully autonomous in all sorts of road conditions. There is no time to ask for human help; a vehicle stopped on the road will at best block traffic and at worst be involved in an accident. These stringent requirements mean widespread deployment is probably at least 10 years away.

Sidewalk bots are much further along in their evolution and they have simpler problems to solve.

  • A small vehicle that can carry a takeaway or small grocery order is buildable using today’s technology and not too expensive.
  • Footpaths exist most places they need to go.
  • Walking pace (up to 6km/h) is fast enough to be good enough even for hot food.
  • Ubiquitous wireless connectivity enables the robots to be controlled remotely if they cannot handle a situation automatically.
  • Everything unfolds slower on the sidewalk. If a sidewalk bot encounters a problem it can just slow to a stop and wait for remote help. If that process takes 20 seconds then it is usually no problem.

Starship Technologies

Starship are the best known and most advanced vendor in the sector. They launched in 2015 and have a good publicity team.

In late 2019 Starship announced a major rollout to US university campuses “with their abundance of walking paths, well-defined boundaries, and smartphone-using, delivery-minded student bodies”. Campuses include The University of Mississippi and Bowling Green State University.

The push into college campuses was unluckily timed, with many being closed in 2020 due to Covid-19. Starship has increased delivery areas outside of campus in some places to try and compensate. It has also seen a doubling of demand in Milton Keynes. However, the company laid off some workers in March 2020.

Kiwibot

Kiwibot

Kiwibot is one of the few other companies that has gone beyond the prototype stage to servicing actual customers. It is some way behind Starship with the robots being less autonomous and needing more onsite helpers.

  • Based in Colombia with a major deployment in Berkeley, California around the UCB campus area
  • Robots cost $US 3,500 each
  • Smaller than Starship with just 1 cubic foot of capacity. Range and speed reportedly lower
  • Guided by remote control using way-points by operators in Medellín, Colombia. Each operator can control up to 3 bots.
  • On-site operators in Berkeley maintain robots (and rescue them when they get stuck).
  • Some orders delivered wholly or partially by humans
  • Concentrating on the Restaurant delivery market
  • Costs for the Business
    • Lease/rent starts at $20/day per robot
    • Order capacity 6-18/day per Robot depending on demand & distance.
    • Order fees are $1.99/order with 1-4 Kiwibots leased
    • or $0.99/order if you have 5-10 Kiwibots leased
  • Website, Kiwibot Campus Vision Video , Kiwibot end-2019 post

An interesting feature is that Kiwibot publish their prices for businesses and provide a calculator with which you can calculate break-even points for robot delivery.
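To make that concrete: at the published rates, a single leased robot handling ten orders in a day would cost roughly $20 + 10 × $1.99 ≈ $40, or about $4 per delivery, before counting the partner's own staff time for loading and any human-completed deliveries.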

As with Starship, Kiwibot was hit by Covid-19 closing college campuses. In July 2020 they announced a rollout in the city of San Jose, California in partnership with Shopify and Ordermark. The company is trying to pivot towards just building the robot infrastructure and partnering with companies that already have that [marketplace] in mind. They are also trying to crowdfund for investment money.

Amazon Scout

Amazon Scout

Amazon are only slowly rolling out their Scout Robots. The Scout is similar in appearance to the Starship robot but is larger.

  • Announced in January 2019
  • Weight “about 100 pounds” (50 kg). No further specs available.
  • A video of the development team at Amazon
  • Initially delivering to Snohomish County, Washington near Amazon HQ
  • Added Irvine, California in August 2019 but still supervised by human
  • In July 2020 announced rollouts in Atlanta, Georgia and Franklin, Tennessee, but still “initially be accompanied by an Amazon Scout Ambassador”.

Other Companies

There are several other companies also working on Sidewalk Delivery Robots. The most advanced is restaurant delivery company Postmates (now owned by Uber), which has its own robot called Serve, currently in early testing. Video of it on the street.

Several other companies have also announced projects. None appear to be near rolling out to live customers though.

Business Model and Markets

At present Starship and Kiwi are mainly targeting the restaurant delivery market against established players such as Uber Eats. Reasons for going for this market include

  • Established market, not something new
  • Short distances and small cargo
  • Customers unload the product quickly so there is no waiting around
  • Charges by existing players quite high. Ballpark costs of $5 to the customer (plus a tip in some countries) and the restaurant also being charged 30% of the bill
  • Even with the high charges drivers end up making only around minimum wage.
  • The current business model is only just working. While customers find it convenient and the delivery cost reasonable, restaurants and drivers are struggling to make money.

Starship and Amazon are also targeting the general delivery market. This requires higher capacity and also customers may not be home when the vehicle arrives. However it may be the case that if vehicles are cheap enough they could just wait till the customer gets home.

Still more to cover

This article is just a quick introduction to the Sidewalk Delivery Robots out there. I hope to do a later post covering more, including what the technology will mean for the delivery industry and for other sidewalk users, as well as society in general.


Worse Than FailureCodeSOD: Sudon't

There are a few WTFs in today's story. Let's get the first one out of the way: Jan S downloaded a shell script and ran it as root, without reading it. Now, let's be fair, that's honestly a pretty mild WTF; we've all done something similar, and popular software tools still tell you to install them with a curl … | sh, and then sudo themselves extra permissions in the script.

The software being installed in this case is a tool for accessing Bitlocker encrypted drives from Linux. And the real WTF for this one is the install script, which we'll dig into in a moment. This is not, however, some small scale open source project thrown together by hobbyists, but instead released by Initech's "Data Recovery" business. In this case, this is the open source core of a larger data recovery product- if you're willing to muck around with low level commands and configs, you can do it for free, but if you want a vaguely usable UI, get ready to pony up $40.

With that in mind, let's take a look at the script. We're going to do this in chunks, because nearly everything is wrong. You might think I'm exaggerating, but here's the first two lines of the script:

#!/bin/bash
home_dir="/home/"${USER}"/initech.bitlocker"

That is not how you find out the user's home directory. We'll usually use ${HOME}, or since the shebang tells us this is definitely bash, we could just use ~. Jan also points out that while a username probably shouldn't have a space, it's possible, and since the ${USER} isn't in quotes, this breaks in that case.
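That is, something like the following (a sketch, not the vendor's script) would be both shorter and safe for usernames containing spaces:

# quote the whole expansion so a space in the path can't split the assignment
home_dir="${HOME}/initech.bitlocker"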

echo ${home_dir}
install_dir=$1
if [ ! -d "${install_dir}" ]; then
install_dir=${home_dir}
if [ ! -d "${install_dir}" ]; then
echo "create dir : "${install_dir}
mkdir ${install_dir}

Who wants indentation in their scripts? And if a script supports arguments, should we tell the user about it? Of course not! Just check to see if they supplied an argument, and if they did, we'll treat that as the install directory.

As a bonus, the mkdir line protects people like Jan who run this script as root, at least if their home directory is /root, which is common. When it tries to mkdir /home/root/initech.bitlocker, the script fails there.

echo "Install software to ${install_dir}" cp -rf ./* ${install_dir}"/"

Once again, the person who wrote this script doesn't seem to understand what the double quotes in Bash are for, but the real magic is the next segment:

echo "Copy runtime environment ..." sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/ sudo cp -f ./libssl.so.1.0.0 /usr/lib64 sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/ sudo cp -f ./libssl.so.1.0.0 /usr/lib64

Did you have libssl already installed in your system? Well now you have this version! Hope that's okay for you. We like our version of libssl and libcrypto so much we're copying them into your library directories twice. They probably meant to copy libcrypto and libssl to both lib and lib64, but messed up.

Well, that is assuming you already have a lib64 directory, because if you don't, you now have a lib64 file which contains the data from libssl.so.1.0.0.

This is the installer for a piece of software which has been released as part of a product that Initech wants to sell, and they don't successfully install it.

sudo ln -s ${install_dir}/mount.bitlocker /usr/bin/mount.bitlocker
sudo ln -s ${install_dir}/bitlocker.creator /usr/bin/create.bitlocker
sudo ln -s ${install_dir}/activate.sh /usr/bin/initech.bitlocker.active
sudo ln -s ${install_dir}/initech.mount.sh /usr/bin/initech.bitlocker.mount
sudo ln -s ${install_dir}/initech.bitlocker.sh /usr/bin/initech.bitlocker

Hey, here's an install step with no critical mistakes, assuming that no other package or tool has tried to claim those names in /usr/bin, which is probably true (Jan actually checked this using dpkg -S … to see if any packages wanted to use that path).

source /etc/os-release
case $ID in
    debian|ubuntu|devuan)
        echo "Installing dependent package - curl ..."
        sudo apt-get install curl -y
        echo "Installing dependent package - openssl ..."
        sudo apt-get install openssl -y
        echo "Installing dependent package - fuse ..."
        sudo apt-get install fuse -y
        echo "Installing dependent package - gksu ..."
        sudo apt-get install gksu -y
        ;;

Here's the first branch of our case. They've learned to indent. They've chosen to slap the -y flag on all the apt-get commands, which means the user isn't going to get a choice about installing these packages, which is mildly annoying. It's also worth noting that sourcing /etc/os-release can be considered harmful, but clearly "not doing harm" isn't high on this script's agenda.

    centos|fedora|rhel)
        yumdnf="yum"
        if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
        yumdnf="dnf"
        fi
        echo "Installing dependent package - curl ..."
        sudo $yumdnf install -y curl
        echo "Installing dependent package - openssl ..."
        sudo $yumdnf install -y openssl
        echo "Installing dependent package - fuse ..."
        sudo $yumdnf install -y fuse3-libs.x86_64
        ;;

So, maybe they just don't think if supports additional indentation? They indent the case fine. I'm not sure what their thinking is.

Speaking of if, look closely at that version check: test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0.

Now, this is almost clever. If your Linux version number uses decimal values, like 18.04, you can't do a simple if [ "$VERSION_ID" -ge 22]…: you'd get an integer expression expected error. So using bc does make sense…ish. It would be good to check if, y'know, bc were actually installed- it probably is, but you don't know- and it might be better to actually think about the purpose of the check.

They don't actually care what version of Redhat Linux you're running. What they're checking is if your version uses yum for package management, or its successor dnf. A more reliable check would be to simply see if dnf is a valid command, and if not, fallback to yum.
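A sketch of that kind of check (not from the vendor's script):

# prefer dnf when it exists on the PATH, otherwise fall back to yum
if command -v dnf >/dev/null 2>&1; then
    yumdnf="dnf"
else
    yumdnf="yum"
fi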

Let's finish out the case statement:

    *)
        exit 1
        ;;
esac

So if your system doesn't use an apt based package manager or a yum/dnf based package manager, this just bails at this point. No error message, just an error number. You know it failed, and you don't know why, and it failed after copying a bunch of crap around your system.

So first it mostly installs itself, then it checks to see if it can successfully install all of its dependencies. And if it fails, does it clean up the changes it made? You better believe it doesn't!

echo "" echo "Initech BitLocker Loader has been installed to "${install_dir}" successfully." echo "Run initech.bitlocker --help to learn more about Initech BitLocker Loader"

This is a pretty optimistic statement, and while yes, it has theoretically been installed to ${install_dir}, assuming that we've gotten this far, it's really installed to your /usr/bin directory.

The real extra super-special puzzle to me is that it interfaces with your package manager to install dependencies. But it also installs its own versions of libcrypto and libssl, which don't come from your package manager. Ignoring the fact that it probably installs them into the wrong places, it seems bad. Suspicious, bad, and troubling.

Jan didn't send us the uninstall script, and honestly, I assume there isn't one. But if there is one, you know it probably tries to do rm -rf /${SOME_VAR_THAT_MIGHT_BE_EMPTY} somewhere in there. Which, in consideration, is probably the safest way to uninstall this software anyway.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Krebs on SecurityFBI, CISA Echo Warnings on ‘Vishing’ Threat

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on Thursday issued a joint alert to warn about the growing threat from voice phishing or “vishing” attacks targeting companies. The advisory came less than 24 hours after KrebsOnSecurity published an in-depth look at a crime group offering a service that people can hire to steal VPN credentials and other sensitive data from employees working remotely during the Coronavirus pandemic.

“The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification,” the alert reads. “In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting — with the end goal of monetizing the access.”

As noted in Wednesday’s story, the agencies said the phishing sites set up by the attackers tend to include hyphens, the target company’s name, and certain words — such as “support,” “ticket,” and “employee.” The perpetrators focus on social engineering new hires at the targeted company, and impersonate staff at the target company’s IT helpdesk.

The joint FBI/CISA alert (PDF) says the vishing gang also compiles dossiers on employees at the specific companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research. From the alert:

“Actors first began using unattributed Voice over Internet Protocol (VoIP) numbers to call targeted employees on their personal cellphones, and later began incorporating spoofed numbers of other offices and employees in the victim company. The actors used social engineering techniques and, in some cases, posed as members of the victim company’s IT help desk, using their knowledge of the employee’s personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee.”

“The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA [2-factor authentication] or OTP [one-time passwords]. The actor logged the information provided by the employee and used it in real-time to gain access to corporate tools using the employee’s account.”

The alert notes that in some cases the unsuspecting employees approved the 2FA or OTP prompt, either accidentally or believing it was the result of the earlier access granted to the help desk impersonator. In other cases, the attackers were able to intercept the one-time codes by targeting the employee with SIM swapping, which involves social engineering people at mobile phone companies into giving them control of the target’s phone number.

The agencies said crooks use the vished VPN credentials to mine the victim company databases for their customers’ personal information to leverage in other attacks.

“The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed,” the alert reads. “The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cashout scheme.”

The advisory includes a number of suggestions that companies can implement to help mitigate the threat from these vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network where a second factor is used to authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.

Worse Than FailureError'd: Just a Suggestion

"Sure thing Google, I guess I'll change my language to... let's see...Ah, how about English?" writes Peter G.

 

Marcus K. wrote, "Breaking news: tt tttt tt,ttt!"

 

Tim Y. writes, "Nothing makes my day more than someone accidentally leaving testing mode enabled (and yes, the test number went through!)"

 

"I guess even thinning brows and psoriasis can turn political these days," Lawrence W. wrote.

 

Strahd I. writes, "It was evident at the time that King George VI should have asked for a V12 instead."

 

"Well, gee, ZDNet, why do you think I enabled this setting in the first place?" Jeroen V. writes.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

LongNowPeople slept on comfy grass beds 200,000 years ago

The oldest beds known to science now date back nearly a quarter of a million years: traces of silicate from woven grasses found in the back of Border Cave (in South Africa, which has a nearly continuous record of occupation dating back to 200,000 BCE).

Ars Technica reports:

Most of the artifacts that survive from more than a few thousand years ago are made of stone and bone; even wooden tools are rare. That means we tend to think of the Paleolithic in terms of hard, sharp stone tools and the bones of butchered animals. Through that lens, life looks very harsh—perhaps even harsher than it really was. Most of the human experience is missing from the archaeological record, including creature comforts like soft, clean beds.

Given recent work on the epidemic of modern orthodontic issues caused in part by sleeping with “bad oral posture” due to too-soft bedding, it seems like the bed may be another frontier for paleo re-thinking of high-tech life. (See also the controversies over barefoot running, prehistoric diets, and countless other forms of atavism emerging from our future-shocked society.) When technological innovation shuffles the “pace layers” of human existence, changing the built environment faster than bodies can adapt, sometimes comfort’s micro-scale horizon undermines the longer, slower beat of health.

Another plus to making beds of grass is their disposability and integration with the rest of ancient life:

Besides being much softer than the cave floor, these ancient beds were probably surprisingly clean. Burning dirty bedding would have helped cut down on problems with bedbugs, lice, and fleas, not to mention unpleasant smells. [Paleoanthropologist Lyn] Wadley and her colleagues suggest that people at Border Cave may even have raked some extra ashes in from nearby hearths ‘to create a clean, odor-controlled base for bedding.’

And charcoal found in the bedding layers includes bits of an aromatic camphor bush; some modern African cultures use another closely related camphor bush in their plant bedding as an insect repellent. The ash may have helped, too; Wadley and her colleagues note that ‘several ethnographies report that ash repels crawling insects, which cannot easily move through the fine powder because it blocks their breathing and biting apparatus and eventually leaves them dehydrated.’

Finding beds as old as Homo sapiens itself revives the (not quite as old) debate about what makes us human. Defining our humanity as “artists” or “ritualists” seems to weave together modern definitions of technology and craft, ceremony and expression, just as early people wove together sedges for a place to sleep. At least, they are the evidence of a much more holistic, integrated way of life — one that found every possible synergy between day and night, cooking and sleeping:

Imagine that you’ve just burned your old, stale bedding and laid down a fresh layer of grass sheaves. They’re still springy and soft, and the ash beneath is still warm. You curl up and breathe in the tingly scent of camphor, reassured that the mosquitoes will let you sleep in peace. Nearby, a hearth fire crackles and pops, and you stretch your feet toward it to warm your toes. You nudge aside a sharp flake of flint from the blade you were making earlier in the day, then drift off to sleep.

Worse Than FailureCodeSOD: A Backwards For

Aurelia is working on a project where some of the code comes from a client. In this case, it appears that the client has very good reasons for hiring an outside vendor to actually build the application.

Imagine you have some Java code which needs to take an array of integers and iterate across them in reverse, to concatenate a string. Oh, and you need to add one to each item as you do this.

You might be thinking about some combination of a map/reverse/String.join operation, or maybe a for loop with an i-- type decrementer.

I’m almost certain you aren’t thinking about this.

public String getResultString(int numResults) {
	StringBuffer sb = null;
	
	for (int result[] = getResults(numResults); numResults-- > 0;) {
		int i = result[numResults];
		if( i == 0){
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}else{
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}
	}
	return sb.toString();
}

I really, really want you to look at that for loop: for (int result[] = getResults(numResults); numResults-- > 0;)

Just look at that. It’s… not wrong. It’s not… bad. It’s just written by an alien wearing a human skin suit. Our initializer actually populates the array we’re going to iterate across. Our bounds check also contains the decrement operation. We don’t have a decrement clause.

Then, if i == 0 we’ll do the exact same thing as if i isn’t 0, since our if and else branches contain the same code.

Increment i, and store the result in j. Why we don’t use the ++i or some other variation to be in-line with our weird for loop, I don’t know. Maybe they were done showing off.

Then, if our StringBuffer is null, we create one, otherwise we append a ",". This is one solution to the concatenator’s comma problem. Again, it’s not wrong, it’s just… unusual.

But this brings us to the thing which is actually, objectively, honestly bad. The indenting.

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);

Look at that last line. Does that make you angry? Look more closely. Look for the curly brackets. Oh, you don’t see any? Very briefly, when I was looking at this code, I thought, “Wait, does this discard the first item?” No, it just eschews brackets and then indents wrong to make sure we’re nice and confused when we look at the code.

It should read:

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
                        sb.append(j);
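
For what it’s worth, here is a minimal sketch of the more conventional approach hinted at above, using a plain backwards loop and a StringJoiner to handle the commas. It assumes, as the original appears to, that getResults(numResults) returns an int[]; it is an illustration, not the fix that shipped.

public String getResultString(int numResults) {
    // Requires java.util.StringJoiner (Java 8+).
    int[] results = getResults(numResults);
    StringJoiner joined = new StringJoiner(",");
    for (int i = results.length - 1; i >= 0; i--) {
        joined.add(Integer.toString(results[i] + 1)); // add one to each item, as the original does
    }
    return joined.toString();
}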


Krebs on SecurityVoice Phishers Targeting Corporate VPNs

The COVID-19 epidemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.

According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.

And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.

“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”

TARGET: NEW HIRES

A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.

The employee phishing page bofaticket[.]com. Image: urlscan.io

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.

Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.

“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”

SPEAR VISHING

The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.

Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.

Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.

But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.

A phishing page (helpdesk-att[.]com) targeting AT&T employees. Image: urlscan.io

Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.

And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.

Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.

“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.

NOW YOU SEE IT, NOW YOU DON’T

All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.

“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.

More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.

This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.

“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”

A phishing page (github-ticket[.]com) aimed at siphoning credentials for a target organization’s access to the software development platform Github. Image: urlscan.io

SCHOOL OF HACKS

Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.

Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.

Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.

“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”

A phishing page (vzw-employee[.]com) targeting employees of Verizon. Image: DomainTools

PROPER ADULT MONEY-LAUNDERING

While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.

“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”

For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.

Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.

“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”

WHAT CAN COMPANIES DO?

Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.

Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.

A U2F device made by Yubikey, plugged into the USB port on a computer.

One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
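
To make that origin binding concrete, here is a rough browser-side sketch of the WebAuthn/U2F login step (names like storedCredentialId and the rpId value are placeholders, not any particular vendor’s API). The browser itself attaches the page’s real origin to the request, so a lookalike phishing domain cannot obtain an assertion the legitimate site will accept.

// Inside an async login handler on the legitimate site.
// The browser binds this request to the page's true origin, so the same call
// made from a phishing domain cannot yield an assertion the real site accepts.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
    rpId: "example.com",                    // placeholder relying-party ID
    allowCredentials: [{
      type: "public-key",
      id: storedCredentialId,               // credential ID saved at enrollment (placeholder)
    }],
    userVerification: "discouraged",        // U2F-style: user presence (button press) only
  },
});
// assertion.response is then sent back to the server for signature verification.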

In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.

Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer Mac OS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].

Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.

“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”

Kevin RuddWashington Post: China’s thirst for coal is economically shortsighted and environmentally reckless

First published in the Washington Post on 19 August 2020

Carbon emissions have fallen in recent months as economies have been shut down and put into hibernation. But whether the world will emerge from the pandemic in a stronger or weaker position to tackle the climate crisis rests overwhelmingly on the decisions that China will take.

China, as part of its plans to restart its economy, has already approved the construction of new coal-fired power plants accounting for some 17 gigawatts of generating capacity this year, sending a collective shiver down the spines of environmentalists. This is more coal plants than it approved in the previous two years combined, and the total capacity now under development in China is larger than the remaining fleet operating in the United States.

At the same time, China has touted investments in so-called “new infrastructure,” such as electric-vehicle charging stations and rail upgrades, as integral to its economic recovery. But frankly, none of this will matter much if these new coal-fired power plants are built.

To be fair, the decisions to proceed with these coal projects largely rest in the hands of China’s provincial and regional governments and not in Beijing. However, this does not mean the central government has no power, nor that it won’t wear the reputational damage if the plants become a reality.

First, it is hard to see how China could meet one of its own commitments under the 2015 Paris climate agreement to peak its emissions by 2030 if these new plants are built. The pledge relies on China retiring much of its existing and relatively young coal fleet, which has been operational only for an average of 14 years. Bringing yet more coal capacity online now is therefore either economically shortsighted or environmentally reckless.

It would also put at risk the world’s collective long-term goal under the Paris agreement to keep temperature increases within 1.5 degrees Celsius, which the Intergovernmental Panel on Climate Change has said requires halving of global emissions between 2018 and 2030 and reaching net-zero emissions by the middle of the century.

It also is completely contrary to China’s own domestic interests, including President Xi Jinping’s desire to grow the economy, improve energy security and clean up the environment (or, as he says, to “make our skies blue again”).

But perhaps most importantly for the geopolitical hard heads in Beijing, it also risks unravelling the goodwill China has built up in recent years for staying the course on the fight against climate change in the face of the Trump administration’s retreat. This will especially be the case in the eyes of many vulnerable developing countries, including the world’s lowest-lying island nations that could face even greater risks if these plants are built.

For his part, former vice president Joe Biden has already got China’s thirst for coal in his sights. He speaks of the need for the United States to focus on how China is “exporting more dirty coal” through its support of carbon-intensive projects in its Belt and Road Initiative. Studies have found a Chinese role in more than 100 gigawatts of additional coal plants under construction across Asia and Africa, and even in Eastern Europe. It is hard to see how the first few months of a Biden administration would not make this an increasingly uncomfortable reality for Beijing at precisely the time the world would be welcoming with open arms the return of U.S. climate leadership.

As a new paper published by the Asia Society Policy Institute highlights, China’s decisions on coal will also be among the most closely watched as it finalizes its next five-year plan, due out in 2021, as well as its mid-century decarbonization strategy and enhancements to its Paris targets ahead of the 2021 United Nations Climate Change Conference in Glasgow, Scotland. And although China may also have an enormously positive story to tell — continuing to lead the world in the deployment of renewable energy in 2019 — it is China’s decisions on coal that will loom large.

(Photo: Gwendolyn Stansbury/IFPRI)


Worse Than FailureCodeSOD: A Shallow Perspective

There are times where someone writes code which does nothing. There are times where someone writes code which does something, but nothing useful. This is one of those times.

Ray H was going through some JS code, and found this “useful” method.

mapRowData (data) {
  if (isNullOrUndefined(data)) return null;
  return data.map(x => x);
}

Technically, this isn’t a “do nothing” method. It converts undefined values to null, and it returns a shallow copy of an array, assuming that you passed in an array.

The fact that it can return a null value or an array is one of those little nuisances that we accept, but probably should code around (without more context, it’s probably fine if this returned an empty array on bad inputs, for example).

But Ray adds: “Where this is used, it could just use the array data directly and get the same result.” Yes, it’s used in a handful of places, and in each of those places, there’s no functional difference between the original array and the shallow copy.
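
For what it’s worth, a minimal sketch of the defensive variant the article hints at, returning an empty array instead of null (purely illustrative; whether the call sites actually want this behavior is another question):

mapRowData (data) {
  // Normalize bad inputs to an empty array so callers never have to null-check;
  // otherwise hand back the array as-is, since the shallow copy buys nothing here.
  return isNullOrUndefined(data) ? [] : data;
}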


Rondam RamblingsHere we go again

Here is a snapshot of the current map of temporary flight restrictions (TFRs) issued by the FAA across the western U.S.: Almost every one of those red shapes is a major fire burning. Compare that to a similar snapshot taken two years ago at about this same time of year. The regularity of these extreme heat and fire events is starting to get really scary.


Kevin RuddMonocle 24 Radio: The Big Interview

INTERVIEW AUDIO
MONOCLE 24 RADIO
‘THE BIG INTERVIEW’
RECORDED LATE 2019
BROADCAST AUGUST 2020


Kevin RuddABC Late Night Live: US-China Relations

INTERVIEW AUDIO
RADIO INTERVIEW
ABC
LATE NIGHT LIVE
17 AUGUST 2020

Main topic: Foreign Affairs article ‘Beware the Guns of August — in Asia’

 

Image: The USS Ronald Reagan steams through the San Bernardino Strait, July 3, 2020, crossing from the Philippine Sea into the South China Sea. (Navy Petty Officer 3rd Class Jason Tarleton)


Worse Than FailureCodeSOD: Carbon Copy

I avoid writing software that needs to send emails. It's just annoying code to build, interfacing with mailservers is shockingly frustrating, and honestly, users don't tend to like the emails that my software tends to send. Once upon a time, it was a system which would tell them it was time to calibrate a scale, and the business requirements were basically "spam them like three times a day the week a scale comes due," which shockingly everyone hated.

But Krista inherited some code that sends email. The previous developer was a "senior", but probably could have had a little more supervision and maybe some mentoring on the C# language.

One commit added this method, for sending emails:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

That's not so bad, as these things go, though one has to wonder about parameters like fileName1 and fileName2. Do they only ever send exactly two files? Well, maybe when this method was written, but a few commits later, an overloaded version gets added:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

And then, a few commits later, someone decided that they needed to send four files, sometimes.

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3, String fileName4)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
        }
        else
        {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress)
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            if (File.Exists(fileName4)) mailMsg.Attachments.Add(new Attachment(fileName4));
            send(mailMsg);
            mailMsg.Dispose();
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

Each time someone discovered a new case where they wanted to include a different number of attachments, the previous developer copy/pasted the same code, with minor revisions.

Krista wrote a single version using a params array, which replaced all of these versions (and any other possible versions) without changing the calling semantics; a sketch of the idea follows.
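
A hedged sketch of what such a params-based version might look like (the names ExportData, WriteToLog and send are taken from the snippets above; this illustrates the approach, it is not Krista's actual commit):

private void SendEmail(ExportData exportData, String subject, params string[] fileNames)
{
    try
    {
        if (String.IsNullOrEmpty(exportData.Email))
        {
            WriteToLog("No email address - message not sent");
            return;
        }
        using (MailMessage mailMsg = new MailMessage())
        {
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.IsBodyHtml = true;
            // (BodyEncoding deliberately left at the default; see the note below about ASCII.)
            if (!String.IsNullOrEmpty(exportData.EmailCC))
            {
                foreach (string address in exportData.EmailCC.Split(';'))
                {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            // Attach every file that exists, however many were passed in.
            foreach (string fileName in fileNames)
            {
                if (File.Exists(fileName)) mailMsg.Attachments.Add(new Attachment(fileName));
            }
            send(mailMsg);
        }
    }
    catch (Exception ex)
    {
        WriteToLog(ex.ToString());
    }
}

Because a params parameter accepts the same comma-separated arguments, the existing call sites keep compiling unchanged.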

Though the real WTF is probably still forcing the BodyEncoding to be ASCII at this point in time. There's a whole lot of assumptions about your dataset which are probably not true, or at least not reliably true.



Rondam RamblingsIrit Gat, Ph.D. 25 November 1966 - 11 August 2020

With a heavy heart I bear witness to the untimely passing of Dr. Irit Gat last Tuesday at the age of 53. Irit was the Dean of Behavioral and Social Sciences at Antelope Valley College in Lancaster, California. She was also my younger sister. She died peacefully of natural causes. I am going to miss her. A lot. I'm going to miss her smile. I'm going to miss the way she said "Hey bro" when we

Worse Than FailureCodeSOD: Perls Can Change

Tomiko* inherited some web-scraping/indexing code from Dennis. The code started out just scanning candidate profiles for certain keywords, but grew, mutated, and eventually turned into something that also needed to download their CVs.

Now, Dennis was, as Tomiko puts it, "an interesting engineer". "Any agreed upon standard, he would aggressively oppose, and this can be seen in this code."

"This code" also happens to be in Perl, the "best" language for developers who don't like standards. And, it also happens to be connected to this infrastructure.

So let's start with the code, because this is the rare CodeSOD where the code itself isn't the WTF:

foreach my $n (0 .. @{$lines} - 1) {
    next if index($lines->[$n], 'RT::Spider::Deepweb::Controller::Factory->make(') == -1;

    # Don't let other cv_id survive.
    $lines->[$n] =~ s/,\s*cv_id\s*=>[^,)]+//;
    $lines->[$n] =~ s/,\s*cv_type\s*=>[^,)]+// if defined $cv_type;

    # Insert the new options.
    $lines->[$n] =~ s/\)/$opt)/;
}

Okay, so it’s a pretty standard for-each loop. We skip lines if they contain… wait, that looks like a Perl expression: RT::Spider::Deepweb::Controller::Factory->make('? Well, let’s hold onto that thought, but keep trucking on.

Next, we do a few find-and-replace operations to ensure that we Don't let other cv_id survive. I'm not really sure what exactly that's supposed to mean, but Tomiko says, "Dennis never wrote a single meaningful comment".

Well, the regexes are pretty standard character-salad expressions; ugly, but harmless. If you take this code in isolation, it's not good, but it doesn't look terrible. Except, there's that next if line. Why are we checking to see if the input data contains a Perl expression?

Because our input data is a Perl script. Dennis was… efficient. He already had code that would download the candidate profiles. Instead of adding new code to download CVs, instead of refactoring the existing code so that it was generic enough to download both, Dennis instead decided to load the profile code into memory, scan it with regexes, and then eval it.

As Tomiko says: "You can't get more Perl than that."

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Krebs on SecurityMicrosoft Put Off Fixing Zero Day for 2 Years

A security flaw in the way Microsoft Windows guards users against malicious files was actively exploited in malware attacks for two years before last week, when Microsoft finally issued a software update to correct the problem.

One of the 120 security holes Microsoft fixed on Aug. 11’s Patch Tuesday was CVE-2020-1464, a problem with the way every supported version of Windows validates digital signatures for computer programs.

Code signing is the method of using a certificate-based digital signature to sign executable files and scripts in order to verify the author’s identity and ensure that the code has not been changed or corrupted since it was signed by the author.

Microsoft said an attacker could use this “spoofing vulnerability” to bypass security features intended to prevent improperly signed files from being loaded. Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.

In fact, CVE-2020-1464 was first spotted in attacks used in the wild back in August 2018. And several researchers informed Microsoft about the weakness over the past 18 months.

Bernardo Quintero is the manager at VirusTotal, a service owned by Google that scans any submitted files against dozens of antivirus services and displays the results. On Jan. 15, 2019, Quintero published a blog post outlining how Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer files (those ending in .MSI) signed by any software developer.

Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.

“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows,” Quintero wrote.

But according to Quintero, while Microsoft’s security team validated his findings, the company chose not to address the problem at the time.

“Microsoft has decided that it will not be fixing this issue in the current versions of Windows and agreed we are able to blog about this case and our findings publicly,” his blog post concluded.

Tal Be’ery, founder of Zengo, and Peleg Hadar, senior security researcher at SafeBreach Labs, penned a blog post on Sunday that pointed to a file uploaded to VirusTotal in August 2018 that abused the spoofing weakness, which has been dubbed GlueBall. The last time that August 2018 file was scanned at VirusTotal (Aug 14, 2020), it was detected as a malicious Java trojan by 28 of 59 antivirus programs.

More recently, others would likewise call attention to malware that abused the security weakness, including this post in June 2020 from the Security-in-bits blog.

Image: Securityinbits.com

Be’ery said the way Microsoft has handled the vulnerability report seems rather strange.

“It was very clear to everyone involved, Microsoft included, that GlueBall is indeed a valid vulnerability exploited in the wild,” he wrote. “Therefore, it is not clear why it was only patched now and not two years ago.”

Asked to comment on why it waited two years to patch a flaw that was actively being exploited to compromise the security of Windows computers, Microsoft dodged the question, saying Windows users who have applied the latest security updates are protected from this attack.

“A security update was released in August,” Microsoft said in a written statement sent to KrebsOnSecurity. “Customers who apply the update, or have automatic updates enabled, will be protected. We continue to encourage customers to turn on automatic updates to help ensure they are protected.”

Update, 12:45 a.m. ET: Corrected attribution on the June 2020 blog article about GlueBall exploits in the wild.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 13)

Here’s part thirteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


LongNowKathryn Cooper’s Wildlife Movement Photography

Amazing wildlife photography by Kathryn Cooper reveals the brushwork of birds and their flocks through sky, hidden by the quickness of the human eye.

“Staple Newk” by Kathryn Cooper.

Ever since Eadweard Muybridge’s pioneering photography of animal locomotion in 01877 and 01878 (including the notorious “horse shot by pistol” frames from an era less concerned with animal experiments), the trend has been to unpack our lived experience of movement into serial, successive frames. The movie camera smears one pace layer out across another, lets the eye scrub over one small moment.

“UFO” by Kathryn Cooper.

In contrast, time-lapse and long exposure camerawork implodes the arc of moments, an integral calculus that gathers the entire gesture. Cooper’s flock photography is less the autopsy of high-speed video and more the graceful ensō drawn by a zen master.

Learn More

LongNowPuzzling artifacts found at Europe’s oldest battlefield

Bronze-Age crime scene forensics: newly discovered artifacts only deepen the mystery of a 3,300-year-old battle. What archaeologists previously thought to be a local skirmish looks more and more like a regional conflict that drew combatants in from hundreds of kilometers away…but why?

Much like the total weirdness of the Ediacaran fauna of 580 million years ago, this oldest Bronze-Age battlefield is the earliest example of its kind in the record…and firsts are always difficult to analyze:

Among the stash are also three bronze cylinders that may have been fittings for bags or boxes designed to hold personal gear—unusual objects that until now have only been discovered hundreds of miles away in southern Germany and eastern France.

‘This was puzzling for us,’ says Thomas Terberger, an archaeologist at the University of Göttingen in Germany who helped launch the excavation at Tollense and co-authored the paper. To Terberger and his team, that lends credence to their theory that the battle wasn’t just a northern affair.

Anthony Harding, an archaeologist and Bronze Age specialist who was not involved with the research, is skeptical: ‘Why would a warrior be going round with a lot of scrap metal?’ he asks. To interpret the cache—which includes distinctly un-warlike metalworking gear—as belonging to warriors is ‘a bit far-fetched to me,’ he says.


Krebs on SecurityMedical Debt Collection Firm R1 RCM Hit in Ransomware Attack

R1 RCM Inc. [NASDAQ:RCM], one of the nation’s largest medical debt collection companies, has been hit in a ransomware attack.

Formerly known as Accretive Health Inc., Chicago-based R1 RCM brought in revenues of $1.18 billion in 2019. The company has more than 19,000 employees and contracts with at least 750 healthcare organizations nationwide.

R1 RCM acknowledged taking down its systems in response to a ransomware attack, but otherwise declined to comment for this story.

The “RCM” portion of its name refers to “revenue cycle management,” an industry which tracks profits throughout the life cycle of each patient, including patient registration, insurance and benefit verification, medical treatment documentation, and bill preparation and collection from patients.

The company has access to a wealth of personal, financial and medical information on tens of millions of patients, including names, dates of birth, Social Security numbers, billing information and medical diagnostic data.

It’s unclear when the intruders first breached R1’s networks, but the ransomware was unleashed more than a week ago, right around the time the company was set to release its 2nd quarter financial results for 2020.

R1 RCM declined to discuss the strain of ransomware it is battling or how it was compromised. Sources close to the investigation tell KrebsOnSecurity the malware is known as Defray.

Defray was first spotted in 2017, and its purveyors have a history of specifically targeting companies in the healthcare space. According to Trend Micro, Defray usually is spread via booby-trapped Microsoft Office documents sent via email.

“The phishing emails the authors use are well-crafted,” Trend Micro wrote. For example, in an attack targeting a hospital, the phishing email was made to look like it came from a hospital IT manager, with the malicious files disguised as patient reports.

Email security company Proofpoint says the Defray ransomware is somewhat unusual in that it is typically deployed in small, targeted attacks as opposed to large-scale “spray and pray” email malware campaigns.

“It appears that Defray may be for the personal use of specific threat actors, making its continued distribution in small, targeted attacks more likely,” Proofpoint observed.

A recent report (PDF) from Corvus Insurance notes that ransomware attacks on companies in the healthcare industry have slowed in recent months, with some malware groups even dubiously pledging they would refrain from targeting these firms during the COVID-19 pandemic. But Corvus says that trend is likely to reverse in the second half of 2020 as the United States moves cautiously toward reopening.

Corvus found that while services that scan and filter incoming email for malicious threats can catch many ransomware lures, an estimated 75 percent of healthcare companies do not use this technology.

Planet Linux AustraliaRussell Coker: Jitsi on Debian

I’ve just setup an instance of the Jitsi video-conference software for my local LUG. Here is an overview of how to set it up on Debian.

Firstly create a new virtual machine to run it. Jitsi is complex and has lots of inter-dependencies. Its packages want to help you by dragging in other packages and configuring them. This is great if you have a blank slate to start with, but if you already have one component installed and running then it can break things. It wants to configure the Prosody Jabber server and a web server, and my first attempt at an install failed when it tried to reconfigure the running instances of Prosody and Apache.

Here’s the upstream install docs [1]. They cover everything fairly well, but I’ll document the configuration I wanted (basic public server with password required to create a meeting).

Basic Installation

The first thing to do is to get a short DNS name like j.example.com. People will type that every time they connect and will thank you for making it short.

Using Certbot for certificates is best. It seems that you need them for j.example.com and auth.j.example.com.

apt install curl certbot
/usr/bin/letsencrypt certonly --standalone -d j.example.com,auth.j.example.com -m you@example.com
curl https://download.jitsi.org/jitsi-key.gpg.key | gpg --dearmor > /etc/apt/jitsi-keyring.gpg
echo "deb [signed-by=/etc/apt/jitsi-keyring.gpg] https://download.jitsi.org stable/" > /etc/apt/sources.list.d/jitsi-stable.list
apt-get update
apt-get -y install jitsi-meet

When apt installs jitsi-meet and its dependencies you get asked many questions for configuring things. Most of it works well.

If you get the nginx certificate wrong or don’t have the full chain, then phone clients will abort connections for no apparent reason. It seems that you need to edit /etc/nginx/sites-enabled/j.example.com.conf to use the following ssl configuration:

ssl_certificate /etc/letsencrypt/live/j.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/j.example.com/privkey.pem;

Then you have to edit /etc/prosody/conf.d/j.example.com.cfg.lua to use the following ssl configuration:

key = "/etc/letsencrypt/live/j.example.com/privkey.pem";
certificate = "/etc/letsencrypt/live/j.example.com/fullchain.pem";

It seems that you need to have an /etc/hosts entry with the public IP address of your server and the names “j.example.com j auth.j.example.com”. Jitsi also appears to use the names “speakerstats.j.example.com conferenceduration.j.example.com lobby.j.example.com conference.j.example.com internal.auth.j.example.com” but they aren’t required for a basic setup; I guess you could add them to /etc/hosts to avoid the possibility of strange errors due to it not finding an internal host name. There are optional features of Jitsi which require some of these names, but so far I’ve only used the basic functionality.
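
For example, a minimal sketch of such an entry (203.0.113.10 is a placeholder for your server’s public address):

203.0.113.10 j.example.com j auth.j.example.com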

Access Control

This section describes how to restrict conference creation to authenticated users.

The secure-domain document [2] shows how to restrict access, but I’ll summarise the basics.

Edit /etc/prosody/conf.avail/j.example.com.cfg.lua and use the following line in the main VirtualHost section:

        authentication = "internal_hashed"

Then add the following section:

VirtualHost "guest.j.example.com"
        authentication = "anonymous"
        c2s_require_encryption = false
        modules_enabled = {
            "turncredentials";
        }

Edit /etc/jitsi/meet/j.example.com-config.js and add the following line:

        anonymousdomain: 'guest.j.example.com',

Edit /etc/jitsi/jicofo/sip-communicator.properties and add the following line:

org.jitsi.jicofo.auth.URL=XMPP:j.example.com

Then run commands like the following to create new users who can create rooms:

prosodyctl register admin j.example.com

Then restart most things (Prosody at least, maybe parts of Jitsi too), I rebooted the VM.

Now only the accounts you created on the Prosody server will be able to create new meetings. You should be able to add, delete, and change passwords for users via prosodyctl while it’s running once you have set this up.
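
For example (chairperson is a placeholder account name; prosodyctl prompts for the password where one is needed):

prosodyctl adduser chairperson@j.example.com
prosodyctl passwd chairperson@j.example.com
prosodyctl deluser chairperson@j.example.com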

Conclusion

Once I gave up on the idea of running Jitsi on the same server as anything else it wasn’t particularly difficult to set up. Some bits were a little fiddly and hopefully this post will be a useful resource for people who have trouble understanding the documentation. Generally it’s not difficult to install if it is the only thing running on a VM.

LongNowHow to Be in Time

Photograph: Scott Thrift.

“We already have timepieces that show us how to be on time. These are timepieces that show us how to be in time.”

– Scott Thrift

Slow clocks are growing in popularity, perhaps as a tonic for or revolt against the historical trend of ever-faster timekeeping mechanisms.

Given that bell tower clocks were originally used to keep monastic observances of the sacred hours, it seems appropriate to restore some human agency in timing and give kairos back some of the territory it lost to the minute and second hands so long ago…

Scott Thrift’s three conceptual timepieces measure with only one hand each, counting 24 hour, one-month, and one-year cycles with each revolution. Not quite 10,000 years, but it’s a consumer-grade start.

“Right now we’re living in the long-term effects of short-term thinking. I don’t think it’s possible really for us to commonly think long term if the way that we tell time is with a short-term device that just shows the seconds, minutes, and hours. We’re precluded to seeing things in the short term.”

-Scott Thrift

Worse Than FailureError'd: New Cat Nullness

"Honest! If I could give you something that had a 'cat' in it, I would!" wrote Gordon P.

 

"You'd think Outlook would hage told me sooner about these required updates," Carlos writes.

 

Colin writes, "Asking for a friend, does balsamic olive oil still have to be changed every 3,000 miles?"

 

"I was looking for Raspberry Pi 4 cases on my local Amazon.co.jp when I stumbled upon a pretty standard, boring WTF. Desparate to find an actual picture of the case I was after, I changed to Amazon.com and I guess I got what I wanted," George wrote. (Here are the short versions: https://www.amazon.co.jp/dp/B07TFDFGZFhttps://www.amazon.com/dp/B07TFDFGZF)

 

Kevin wrote, "Ah, I get it. Shiny and blinky ads are SO last decade. Real container advertisers nowadays get straight to the point!"

 

"I noticed this in the footer of an email from my apartment management company and well, I'm intrigued at the possibility of 'rewards'," wrote Peter C.

 



Planet Linux AustraliaDavid Rowe: M17 Open Source Radio

In 2019 Wojciech Kaczmarski (SP5WWP), started working on the M17 open source radio project. Since then the project has made great progress – assembling a team of Hams involved, a project website, mailing list – and steady progress on the development of the TR9 radio, the first to be built to the M17 standard.

For the RF deck, the TR9 is using the chipset approach and an RRC filtered 4FSK waveform, similar to DMR. For my own work, I like to optimise low SNR performance by building the physical layer, but I’m a bit of a modem nerd, and it complicates the RF side. Compared to a low cost SDR/custom RF design, the RF chipset approach will get them on the air quickly, and the filtered 4FSK will have a narrow on-air bandwidth (12.5kHz channel spacing). Sensitivity will be similar to analog FM/DMR radios.

Links

M17 Project Web site
M17 Twitter Feed
M17 Spec


Krebs on SecurityWhy & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.

YOUR CREDIT FILES

First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.

YOUR FINANCIAL INSTITUTIONS

I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers because if you don’t someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.

YOUR GOVERNMENT

Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were massively registering people at the IRS’s website to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state driver’s licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to learn when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.

YOUR HOME

Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or nctue.com. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800-number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.


Cory DoctorowTerra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

LongNowScientists Have a Powerful New Tool to Investigate Triassic Dark Ages

The time-honored debate between catastrophists and gradualists (those who believe major Earth changes were due to sudden violent events, and those who believe they happened over long periods of time, respectively) has everything to do with the coarse grain of the geological record. When paleontologists only have a series of thousand-year flood deposits to study, it’s almost impossible to say what was really going on at shorter timescales. So many of the great debates of natural history hinge on the resolution at which data can be collected, and boil down to something like, “Was it a meteorite impact that caused this extinction, or the inexorable climate changes caused by continental drift?”

One such gap in our understanding is in the Late Triassic — a geological shadow during which major regime changes in terrestrial fauna took place, setting the stage for The Age of Dinosaurs. But the curtains were closed during that scene change…until, perhaps, now:

By determining the age of the rock core, researchers were able to piece together a continuous, unbroken stretch of Earth’s history from 225 million to 209 million years ago. The timeline offers insight into what has been a geologic dark age and will help scientists investigate abrupt environmental changes from the peak of the Late Triassic and how they affected the plants and animals of the time.

Cool new detective work on geological “tree rings” from the Petrified Forest National Park (where I was lucky enough to do some revolutionary paleontological reconstruction work under Dr. Bill Parker back in 2005).


LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin a layer surface life exists in. As with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal’s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.

Planet Linux AustraliaFrancois Marier: Setting the default web browser on Debian and Ubuntu

If you are wondering what your default web browser is set to on a Debian-based system, there are several things to look at:

$ xdg-settings get default-web-browser
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/http
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser.desktop

$ ls -l /etc/alternatives/x-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/x-www-browser -> /usr/bin/brave-browser-stable*

$ ls -l /etc/alternatives/gnome-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/gnome-www-browser -> /usr/bin/brave-browser-stable*

Debian-specific tools

The contents of /etc/alternatives/ are system-wide defaults and must therefore be set as root:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser
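
If you already know the full path of the browser you want, update-alternatives can also be set non-interactively (the path below is just an example, taken from the ls output earlier):

sudo update-alternatives --set x-www-browser /usr/bin/brave-browser-stable
sudo update-alternatives --set gnome-www-browser /usr/bin/brave-browser-stable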

The sensible-browser tool (from the sensible-utils package) will use these to automatically launch the most appropriate web browser depending on the desktop environment.
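
As a quick sanity check, you can invoke sensible-browser yourself; it should open whichever browser the alternatives above resolve to, unless the BROWSER environment variable overrides them (the URL here is only a placeholder):

sensible-browser https://example.com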

Standard MIME tools

The others can be changed as a normal user. Using xdg-settings:

xdg-settings set default-web-browser brave-browser-beta.desktop

will also change what the two xdg-mime commands return:

$ xdg-mime query default x-scheme-handler/http
brave-browser-beta.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser-beta.desktop

since it puts the following in ~/.config/mimeapps.list:

[Default Applications]
text/html=brave-browser-beta.desktop
x-scheme-handler/http=brave-browser-beta.desktop
x-scheme-handler/https=brave-browser-beta.desktop
x-scheme-handler/about=brave-browser-beta.desktop
x-scheme-handler/unknown=brave-browser-beta.desktop

Note that if you delete these entries, the system-wide defaults defined in /etc/mailcap (as provided by the mime-support package) will be used instead.
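
To see what those fallback entries look like on your system (assuming the mime-support package has installed /etc/mailcap), you can simply grep for the relevant type:

grep -i 'text/html' /etc/mailcap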

Changing the x-scheme-handler/http (or x-scheme-handler/https) association directly using:

xdg-mime default brave-browser-nightly.desktop x-scheme-handler/http

will only change that particular one. I suppose this means you could have one browser for insecure HTTP sites (hopefully with HTTPS Everywhere installed) and one for HTTPS sites, though I'm not sure why anybody would want that.
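
If you did want that kind of split, a sketch of it would look like this (the .desktop file names are only examples; use whichever browsers you actually have installed):

# hypothetical: a different browser per scheme
xdg-mime default brave-browser-nightly.desktop x-scheme-handler/http
xdg-mime default brave-browser.desktop x-scheme-handler/https

# confirm the associations took effect
xdg-mime query default x-scheme-handler/http
xdg-mime query default x-scheme-handler/https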

Summary

In short, if you want to set your default browser everywhere (using Brave in this example), do the following:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser
xdg-settings set default-web-browser brave-browser.desktop

Planet Linux AustraliaBen Martin: Small 1/4 inch socket set into a nicer walnut tray

 I was recently thinking about how I could make a selection of 1/4 inch drive bits easier to use. It seems I am not alone in the crowd of people who leave the bits in the case they came in; some folks do that for many decades. Apart from being trapped into what "was in the set", this also creates an issue when you have some 1/4 inch parts in a case that includes many more 3/8 inch drive bits. I originally marked the smaller drive parts and thought about leaving them in the blow-molded case, as is the common approach.

The CNC fiend in me eventually got the better of me and the below is the result. I cut a prototype in pine first, knowing that getting it all as I wanted on the first try was not impossible, but not probable either. Version 1 is shown below.

 

 The advantage is that now that I have the design in Fusion 360, I can cut it again in about an hour. So if I want to add a bunch of deep sockets to the set, I can do that for the time cost of mostly gluing up a panel, fixturing it, and a little sanding and shellac. Not a trivial endeavour, but the result I think justifies the means.

Below is the board still fixtured in the CNC machine. I think I will make a jig with some sliding toggle clamps so I can fix panels to the jig and then bolt the jig into the CNC instead of directly using hold-down clamps.

I plan to use a bandsaw to cut a profile around the tools and may end up with some handle(s) on the tray. That part is something I have to think more about. The thinking about how I want the tools to be stored and accessed is an interesting side project.



 

 


LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-rising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.