Planet Russell


Worse Than Failure: News Roundup: We're Going to Need a Bigger Boat


You’ll have to bear with me on this post, since the point I will eventually get to is really the confluence of a number of different threads that have been going through my head the past few weeks.

Let’s start with my time in business school from 2007 to 2009. Charles Dickens couldn’t have penned a better dichotomy between the beginning and end of my time in school. In short: 2007 = the economy couldn’t be better, 2009 = the economy couldn’t be worse.

As my time in school was coming to a close and my dream of a career in finance seemed further and further from reality, I took a chance on a product development class for my last semester. It was the first time the concept of ‘user experience’ was introduced to me, and judging from Google Trends, interest in user experience was at a real low point.

graph of searches for the term 'user experience' over time

The class was really an exercise in patience for our professor, as we students continually frustrated him with our complete lack of creativity. For our final project, we were randomly assigned to teams to find a real user problem and design an elegant solution for it. My team followed a pretty standard process:

  1. Brainstorm problems we observed
  2. Do user research to validate the problem and identify pain points
  3. Design solutions to the pain points
  4. Test out the solutions against potential customers
  5. Optimize the product based on feedback
  6. Repeat steps 4 and 5

Our team consisted of 5 of the most passive, easy-going individuals you could find - which was not conducive to the sort of analytical and critical thinking necessary to build a great product. After waffling for weeks over an appropriate problem, one of the teammates revealed that his wife was about to give birth, and that a recent concern had been finding a mobile baby unit to hang over their crib that would be both fun and educational.

A little aside here: A point of emphasis from our professor was Henry Ford’s famous quote (which he may not have actually said), “If I had asked people what they wanted, they would have said faster horses”. People seem to take out of this quote what they want to hear, and it made little impression on me at the time, but I understand it to mean the following: “Linear solutions to complex problems don’t result in novel solutions for customer problems. They result in old solutions with more bells and whistles. What is oftentimes required is starting from scratch in building user-friendly solutions that solve for customer needs, even if they are unstated.”

Mobile baby units were already fun and engaging, so it was the educational part that hung us up. I’ll cut to the chase: our final deliverable was a monstrosity that combined every feature of every current mobile baby unit into one ugly mass. Think the chandelier from ‘The Phantom of the Opera’ with all of the theatrics of it swinging through the crowd (in fact our prototype fell from the ceiling into a crib we were using for testing). We fell into the trap of a linear solution to a complex problem, if it even was a problem to begin with! We took a solution already in place, added more functionality, and got a failing grade on our final project.

This whole experience helped me relate to a product that was recently brought to my attention. Even though interest in user experience is at its apex according to Google Trends, people continue to create linear solutions to complex problems. Case in point: The Expanscape, the all-in-one security operations workstation; a product that purports to solve every problem a security operations analyst may face.

a photo of a laptop with a comically ridiculous number of screens hanging off of it. How about designed for no one?

The specs advertise 7 screens, but the machine has only reached 60% of its 10 kg weight goal. What does getting a percentage of a weight goal even mean? Does it mean that it weighs 16.67 kg, since 60% of that weight makes 10 kg?

Honestly, the weight itself isn’t even that egregious, since it’s in line with most gaming machines nowadays. The problem being solved for is quite straightforward: “Design and build a proper mobile Security Operations Center.” It is indeed mobile, with its ability to “fold down compactly to facilitate travel”. But is it proper? I think not.

This all-in-one, mobile bundle is trying to solve every problem linearly, by adding more features to an existing laptop. Judging by the sheer number of screens, I wonder if any real user testing was done. I can’t imagine any single human not feeling anxiety just looking at this machine. The problem being solved was not to increase efficiency and shrink a team to just one person; it was to design and build a proper security operations center! Who can focus on this many screens and this much information at once?

I don’t want to be harsh on the Expanscape; history is filled with examples of linear and poorly-executed tech solutions. Who can forget the Nokia N-Gage, which was a mash-up of every phone feature at the time? Or Google Glass, which was trying to allow users to engage with the world without even needing to look at a phone, forcing the wrong solution onto its stated problem? Or the Apple Newton, which, while arguably ahead of its time, focused too heavily on functionality over user experience?

I’m left thinking of the famous quote by Ian Malcolm. They are good words for us all to live by:

Ian Malcolm's quote: You were so preoccupied with whether or not you could, you didn't stop to think if you should.



Long Now: Podcast: The Transformation | Peter Leyden

The Long Now Foundation · Peter Leyden – The Transformation: A Future History of the World from 02020 to 02050

A compelling case can be made that we are in the early stages of another tech and economic boom, one that will play out over the next 30 years, help solve our era’s biggest challenges like climate change, and lead to a societal transformation that will be understood as civilizational change by the year 02100.

Peter Leyden has built the case for this extremely positive yet plausible scenario of the period from 02020 to 02050 as a sequel to the Wired cover story and book he co-authored with Long Now cofounder Peter Schwartz 25 years ago called The Long Boom: The Future History of the World 1980 to 2020.

His latest project, The Transformation, is an optimistic analysis of what lies ahead, based on deep interviews with 25 world-class experts looking at new technologies and long-term trends that are largely positive and could come together in surprisingly synergistic ways.

Listen on Apple Podcasts.

Listen on Spotify.

Krebs on Security: Checkout Skimmers Powered by Chip Cards

Easily the most sophisticated skimming devices made for hacking terminals at retail self-checkout lanes are a new breed of PIN pad overlay combined with a flexible, paper-thin device that fits inside the terminal’s chip reader slot. What enables these skimmers to be so slim? They draw their power from the low-voltage current that gets triggered when a chip-based card is inserted. As a result, they do not require external batteries, and can remain in operation indefinitely.

A point-of-sale skimming device that consists of a PIN pad overlay (top) and a smart card skimmer (a.k.a. “shimmer”). The entire device folds onto itself, with the bottom end of the flexible card shimmer fed into the mouth of the chip card acceptance slot.

The overlay skimming device pictured above consists of two main components. The one on top is a regular PIN pad overlay designed to record keypresses when a customer enters their debit card PIN. The overlay includes a microcontroller and a small data storage unit (bottom left).

The second component, which is wired to the overlay skimmer, is a flexible card skimmer (often called a “shimmer”) that gets fed into the mouth of the chip card acceptance slot. You’ll notice neither device contains a battery, because there simply isn’t enough space to accommodate one.

Virtually all payment card terminals at self-checkout lanes now accept (if not also require) cards with a chip to be inserted into the machine. When a chip card is inserted, the terminal reads the data stored on the smart card by sending an electric current through the chip.

Incredibly, this skimming apparatus is able to siphon a small amount of that power (a few milliamps) to record any data transmitted during the payment terminal transaction, along with the PIN pad presses. When the terminal is no longer in use, the skimming device remains dormant.

The skimmer pictured above does not stick out of the payment terminal at all when it’s been seated properly inside the machine. Here’s what the fake PIN pad overlay and card skimmer look like when fully inserted into the card acceptance slot and viewed head-on:

The insert skimmer fully ensconced inside the compromised payment terminal. Image: KrebsOnSecurity.com

Would you detect an overlay skimmer like this? Here’s what it looks like when attached to a customer-facing payment terminal:

The PIN pad overlay and skimmer, fully seated on a payment terminal.

REALLY SMART CARDS

The fraud investigators I spoke with about this device (who did so on condition of anonymity) said initially they couldn’t figure out how the thieves who plant these devices go about retrieving the stolen data from the skimmer. Normally, overlay skimmers relay this data wirelessly using a built-in Bluetooth circuit board. But that also requires the device to have a substantial internal power supply, such as a somewhat bulky cell phone battery.

The investigators surmised that the crooks would retrieve the stolen data by periodically revisiting the compromised terminals with a specialized smart card that — when inserted — instructs the skimmer to dump all of the saved information onto the card. And indeed, this is exactly what investigators ultimately found was the case.

“Originally it was just speculation,” the source told KrebsOnSecurity. “But a [compromised] merchant found a couple of ‘white’ smartcards with no markings on them [that] were left at one of their stores. They informed us that they had a lab validate that this is how it worked.”

Some readers might reasonably ask why the card acceptance slot on any chip-based payment terminal would be tall enough to accommodate both a chip card and a flexible skimming device such as this.

The answer, as with many aspects of security systems that decrease in effectiveness over time, has to do with allowances made for purposes of backward compatibility. Most modern chip-based cards are significantly thinner than the average payment card was just a few years ago, but the design specifications for these terminals state that they must be able to allow the use of older, taller cards — such as those that still include embossing (raised numbers and letters). Embossing is a practically stone-age throwback to the way credit cards were originally read, through the use of manual “knuckle-buster” card imprint machines and carbon-copy paper.

“The bad guys are taking advantage of that, because most smart cards are way thinner than the specs for these machines require,” the source explained. “In fact, these slots are so tall that you could fit two cards in there.”

IT’S ALL BACKWARDS

Backward compatibility is a major theme in enabling many types of card skimming, including devices made to compromise automated teller machines (ATMs). Virtually all chip-based cards (at least those issued in the United States) still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s smartcard-enabled terminal has malfunctioned.

Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But thieves are adept at exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.

Many people believe that skimmers are mainly a problem in the United States, where some ATMs still do not require more secure chip-based cards that are far more expensive and difficult for thieves to clone. However, it’s precisely because some U.S. ATMs lack this security requirement that skimming remains so prevalent in other parts of the world.

Mainly for reasons of backward compatibility to accommodate American tourists, a great number of ATMs outside the U.S. allow non-chip-based cards to be inserted into the cash machine. What’s more, many chip-based cards issued by American and European banks alike still have cardholder data encoded on a magnetic stripe in addition to the chip.

When thieves skim non-U.S. ATMs, they generally sell the stolen card and PIN data to fraudsters in Asia and North America. Those fraudsters in turn will encode the card data onto counterfeit cards and withdraw cash at older ATMs here in the United States and elsewhere.

Interestingly, even after most U.S. banks put in place fully chip-capable ATMs, the magnetic stripe will still be needed because it’s an integral part of the way ATMs work: Most ATMs in use today require a magnetic stripe for the card to be accepted into the machine. The main reason for this is to ensure that customers are putting the card into the slot correctly, as embossed letters and numbers running across odd spots in the card reader can take their toll on the machines over time.

And there are the tens of thousands of fuel pumps here in the United States that still allow chip-based card accounts to be swiped. The fuel pump industry has for years won delay after delay in implementing more secure payment requirements for cards (primarily by flexing their ability to favor their own fuel-branded cards, which largely bypass the major credit card networks).

Unsurprisingly, the past two decades have seen the emergence of organized gas theft gangs that take full advantage of the single weakest area of card security in the United States. These thieves use cloned cards to steal hundreds of gallons of gas at multiple filling stations. The gas is pumped into hollowed-out trucks and vans, which ferry the fuel to a giant tanker truck. The criminals then sell and deliver the gas at cut rate prices to shady and complicit fuel station owners and truck stops.

A great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk, and I pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

The next skimmer post here will examine an inexpensive and ingenious analog device that helps retail workers quickly check whether their payment terminals have been tampered with by bad guys.

Sociological Images: Happy Birthday, W. E. B. Du Bois!

“W. E. B. Du Bois and his Atlanta School of Sociology pioneered scientific sociology in the United States.”

– Dr. Aldon Morris

I had the good fortune to see Dr. Morris give a version of this talk a few years ago, and it is one of my favorites. If you haven’t seen it before, take a few minutes today and check it out.

Also, go check out the #DuBoisChallenge on Twitter! Data visualization nerds are re-making Du Bois’ pioneering charts and graphs on race and inequality in the United States.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than Failure: CodeSOD: A Lack of Progress

Progress bars and throbbers are, in theory, tools that let your user know that a process is working. It's important to provide feedback when your program needs to do some long-running task.

Hegel inherited a rather old application, written in early versions of VB.Net. When you kicked off a long-running process, it would update the status bar with a little animation, cycling from ".", to "..", to "...".

Private Sub StatusRunningText(ByVal Str As String)
    If ServRun = True Then
        Select Case Me.Tim.Now.Second
            Case 0, 3, 6, 9, 12, 16, 19, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 59
                Me.StatusBarPanel1.Text = Str + "."
            Case 1, 4, 7, 10, 13, 17, 20, 22, 25, 28, 31, 34, 37, 40, 43, 46, 49, 52, 55, 58
                Me.StatusBarPanel1.Text = Str + ".."
            Case Else
                Me.StatusBarPanel1.Text = Str + "..."
        End Select
    End If
End Sub

Now, you or I might have been tempted to use modulus here. Second % 3 + 1 is the exact number of periods this outputs. But how cryptic is that? It involves math. This developer has lovingly handcrafted their animation, specifying what should be displayed on each and every frame.
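
For contrast, a minimal modulus-based sketch (an illustration only; note that the handcrafted table above disagrees with it around seconds 15 through 20 and at 59, which is rather the point):

Private Sub StatusRunningText(ByVal Str As String)
    If ServRun = True Then
        ' One, two, or three dots, cycling with the current second
        Me.StatusBarPanel1.Text = Str + New String("."c, Me.Tim.Now.Second Mod 3 + 1)
    End If
End Sub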

Modular arithmetic is math, but this code, this is art.

Bad art, but still, art.


Planet Debian: Dima Kogan: feedgnuplot: labelled bar charts and a guide

I just released feedgnuplot 1.57, which includes two new pieces that I've long thought about adding:

Labelled bar charts

I've thought about adding these for a while, but had no specific need for them. Finally, somebody asked for it, and I wrote the code. Now that I can, I will probably use these all the time. The new capability can override the usual numerical tic labels on the x axis, and instead use text from a column in the data stream.

The most obvious use case is labelled bar graphs:

echo "# label value
      aaa     2
      bbb     3
      ccc     5
      ddd     2" | \
feedgnuplot --vnl \
            --xticlabels \
            --with 'boxes fill solid border lt -1' \
            --ymin 0 --unset grid

xticlabels-basic.svg

But the usage is completely generic. All --xticlabels does is accept a data column as labels for the x-axis tics. Everything else that's supported by feedgnuplot and gnuplot works as before. For instance, I can give a domain, and use a style that takes y values and a color:

echo "# x label y color
        5 aaa   2 1
        6 bbb   3 2
       10 ccc   5 4
       11 ddd   2 1" | \
feedgnuplot --vnl --domain \
            --xticlabels \
            --tuplesizeall 3 \
            --with 'points pt 7 ps 2 palette' \
            --xmin 4 --xmax 12 \
            --ymin 0 --ymax 6 \
            --unset grid

xticlabels-points-palette.svg

And we can use gnuplot's support for clustered histograms:

echo "# x label a b
        5 aaa   2 1
        6 bbb   3 2
       10 ccc   5 4
       11 ddd   2 1" | \
vnl-filter -p label,a,b | \
feedgnuplot --vnl \
            --xticlabels \
            --set 'style data histogram' \
            --set 'style histogram cluster gap 2' \
            --set 'style fill solid border lt -1' \
            --autolegend \
            --ymin 0 --unset grid

xticlabels-clustered.svg

Or we can stack the bars on top of one another:

echo "# x label a b
        5 aaa   2 1
        6 bbb   3 2
       10 ccc   5 4
       11 ddd   2 1" | \
vnl-filter -p label,a,b | \
feedgnuplot --vnl \
            --xticlabels \
            --set 'style data histogram' \
            --set 'style histogram rowstacked' \
            --set 'boxwidth 0.8' \
            --set 'style fill solid border lt -1' \
            --autolegend \
            --ymin 0 --unset grid

xticlabels-stacked.svg

This is gnuplot's "row stacking". It also supports "column stacking", which effectively transposes the data, and it's not obvious to me that this makes sense in the context of feedgnuplot. Similarly, it can label y and/or z axes; I can't think of a specific use case, so I don't have a realistic usage in mind, and I don't support that yet. If anybody can think of a use case, email me.

Notes and limitations:

  • Since with --domain you can pass in both an x value and a tic label, it is possible to give it conflicting tic labels for the same x value. gnuplot itself has this problem too, and it just takes the last label it has for a given x. This is probably good enough.
  • feedgnuplot uses whitespace-separated columns with no escape mechanism, so the field labels cannot contain whitespace. Fixing this is probably not worth the effort.
  • These tic labels do not count towards the tuplesize
  • I really need to add a similar feature to gnuplotlib. This will happen when I need it or when somebody asks for it, whichever comes first.

A feedgnuplot guide

This fills in a sorely needed missing part of the documentation: the main feedgnuplot website now has a page containing examples and corresponding graphical output. This serves as a tutorial and a gallery demonstrating some usages. It's somewhat incomplete, since it can't show streaming plots, or real-world interfacing with stuff that produces data: some of those usages remain in the README. It's a million times better than what I had before though, which was nothing.

Internally this is done just like the gnuplotlib guide: the thing is an org-mode document with org-babel snippets that are evaluated by emacs to make the images. There's some fancy emacs lisp to tie it all together. Works great!


Planet Debian: Dirk Eddelbuettel: pkgKitten 0.2.1: Now with roxygen2 support


A new release 0.2.1 of pkgKitten hit CRAN earlier today, and has been uploaded to Debian as well. pkgKitten makes it simple to create new R packages via a single function invocation. A wrapper kitten.r exists in the littler package to make it even easier.

This release builds on the support for tinytest we added in release 0.2.0 by adding more optional support, this time for roxygen2. It also corrects a minor documentation snafu, and updates the CI use.
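
A minimal sketch of the package-creation call in R (the bunny argument name comes from the release notes below; the exact signature is an assumption, not checked against the package docs):

library(pkgKitten)

## Create a new package skeleton in the current directory; 'bunny' is
## assumed to be the logical switch for the new roxygen2 support.
kitten("myPackage", bunny = TRUE)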

Changes in version 0.2.1 (2021-02-22)

  • A small documentation error was corrected (David Dalpiaz in #15).

  • A new option ‘bunny’ adds support for roxygen2.

  • Continuous integration now uses run.sh from r-ci.

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Chaotic Idealism: Should you tell your date you’re asexual?

If you’re not familiar with asexuality, here’s a brief definition: Asexuals are people who aren’t sexually attracted to anybody. Many asexuals don’t want sex; some are outright disgusted by the idea of having sex, while others merely find it boring.

Some asexuals will have sex with their partners, the way you might attend a football game with your sports-fan partner even if you don’t like sports yourself; some asexuals are sex-positive, meaning they don’t feel sexually attracted to anybody, but do enjoy having sex when they get the opportunity. For demisexuals, sexual attraction emerges only once they are already deeply connected, emotionally, to another person. Asexuality is a sexual orientation, just like being bi, straight, or gay.

So… should you tell your date you’re asexual? And if so, when?

First, and most importantly: No asexual should ever have to feel like they have to disclose their sexual orientation just to protect themselves from being forced to have sex before they feel they’re ready. “I didn’t know they were asexual” is not a valid reason for your date to push you into sex, because there is never a valid reason to do that. If you say “no” and your partner pressures you anyway, that’s a huge red flag that they don’t respect you; that’s not the sort of person you want as a partner. Dump them, and don’t look back.

Obviously, if you’re the sort of asexual who finds sex disgusting or so boring you’d rather watch paint dry, and your date is looking for a relationship that, if successful, will eventually become sexual, then you need to tell them right away, preferably before you’re on a date to begin with–otherwise you’re wasting your time and theirs.

But things get more complicated for non-sex-repulsed aces and demisexuals. If you’re open to sex, then you aren’t going to be automatically incompatible with someone who wants sex, so you wouldn’t be wasting their time by not telling them immediately. Once you have a more mature relationship, it’ll be natural to tell them everything about yourself, including your asexuality. Or you can tell them right away (and I recommend it, because I think it’s good to have everything in the open at once, whether that’s asexuality, or disability, or religion, or your desire to have six children or no children at all)–but you are not obligated to do so.

If your friends are the sort who start having sex while dating only casually, then you might not realize how common it is for people to wait until they feel deeply attached or have formalized their commitment. Even allosexuals don’t all jump right into bed with one another. Some wait for marriage, or for deep, true love. Some simply don’t enjoy casual sex. Before birth control, waiting was held up as the universal ideal, to prevent couples from having a baby without a family to raise one; but just because we have birth control doesn’t mean we have to rush right into sex. There are many valid emotional, social, philosophical, and religious reasons to want to wait.

Those who want to wait to have sex are often shamed as being “prudish” because they turn down sex when it’s offered; or they’re told they’re “admirable” for waiting for marriage, as though it were the default to want to have sex, and anyone who said “no” must be denying themselves. That can be hard to deal with, especially in a world where sex is wedged into every storyline, used to sell advertisements, and seen as a “basic human need” right up there with oxygen.

You can tell them right away that you are ace, and that your attraction to them isn’t sexual–it’s romantic, or perhaps platonic. If you are demisexual, you can tell them that you won’t feel like having sex unless you have a deep connection. You can put it right in your dating profile or on your social-media accounts. Or you can wait until the topic of sex comes up.

If you get the impression that the other person expects a hookup for casual sex, and that’s not what you’re looking for, then make sure you’re on the same page. If the other person looks to be trying to initiate a sexual relationship, then tell them. You can use words like “demisexual” or “sex-positive asexual”, if you like, or you can just explain it by describing what you personally need to feel comfortable with sex. Just remember that if a relationship is respectful and mature, as it should be, nobody will be forcing anyone into anything they don’t want.

Planet Debian: Charles Plessy: Containers

I was using a container for a bioinformatics tool released two weeks ago, but my shell script wrapping the tool could not run because the container was built around an old version of Debian (Jessie) that was released in 2015. I was asked to use a conda-based container for bioinformatics, and found one that distributes coreutils, but it did not include a real version of sed. I then tried Debian's docker image. No luck: it does not contain ps, which my workflow manager needs. But fortunately I eventually figured out that Ubuntu's Docker image contains coreutils, sed and ps together! In the world of containers, this sounds like a little miracle.
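
A sketch of the kind of check this boils down to (image tags are illustrative):

for img in debian:stable-slim ubuntu:20.04; do
    echo "== $img =="
    for tool in ps sed ls; do
        # command -v prints the path if the tool exists inside the image
        docker run --rm "$img" sh -c "command -v $tool" \
            || echo "$tool: missing in $img"
    done
done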

Cory Doctorow: Privacy Without Monopoly: Data Protection and Interoperability (Part 2)

This week on my podcast, Part Two of “Privacy Without Monopoly: Data Protection and Interoperability,” a major new EFF paper by my colleague Bennett Cyphers and me.

It’s a paper that tries to resolve the tension between demanding that tech platforms gather, retain and mine less of our data, and the demand that platforms allow alternatives (nonprofits, co-ops, tinkerers, startups) to connect with their services.

MP3

Worse Than Failure: CodeSOD: Self-Documented

Molly's company has a home-grown database framework. It isn't just big piles of string concatenation; it has a bunch of internal checks to make sure things happen safely. But it still involves a lot of hardcoded SQL strings.

Recently, Molly was reviewing a pull request, and found a Java block which looked like this:

if (Strings.isNullOrEmpty(getFilter_status())) {
    sql.append(" and review in ('g','t','b','n','w','c','v','e','x','')");
} else if (!"a".equals(getFilter_status())) {
    sql.append(" and review = '"+getFilter_status()+"'");
} else {
    limit_results=true;
}

"Hey, I get that the database schema is a little rough, but that big block of options in that in clause is incomprehensible. Instead of magic characters, maybe an enum, or at the very least, could you give us a comment?"

So, the developer responsible went back and helpfully added a comment:

if (Strings.isNullOrEmpty(getFilter_status())) {
    sql.append(" and review in ('g','t','b','n','w','c','v','e','x','')"); // f="Resolved", s="Resolved - 1st Call"
} else if (!"a".equals(getFilter_status())) {
    sql.append(" and review = '"+getFilter_status()+"'");
} else {
    limit_results=true;
}

This, of course, helpfully clarifies the meaning of the f flag, and the s flag, two flags which don't appear in the in clause.

Before Molly could reply back, someone else on the team approved and merged the pull request.
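
For what it's worth, the enum Molly asked for might have looked something like this sketch (the status names are invented for illustration; their real meanings are exactly what remains undocumented):

import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical labels for the one-letter review flags.
enum ReviewStatus {
    GOOD("g"), TRIAGED("t"), BLOCKED("b"), NEW("n"), WAITING("w"),
    CLOSED("c"), VERIFIED("v"), ESCALATED("e"), EXPIRED("x"), NONE("");

    private final String code;

    ReviewStatus(String code) {
        this.code = code;
    }

    // Builds " and review in ('g','t',...)" from the enum, so the magic
    // characters live in exactly one documented place.
    static String inClause() {
        return Arrays.stream(values())
                .map(s -> "'" + s.code + "'")
                .collect(Collectors.joining(",", " and review in (", ")"));
    }
}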


Planet Debian: John Goerzen: Recovering Our Lost Free Will Online: Tools and Techniques That Are Available Now

As I’ve been thinking and writing about privacy and decentralization lately, I had a conversation with a colleague this week, and he commented about how loss of privacy is related to loss of agency: that is, loss of our ability to make our own choices, pursue our own interests, and be master of our own attention.

In terms of telecommunications, we have never really been free, though in terms of Internet and its predecessors, there have been times where we had a lot more choice. Many are too young to remember this, and for others, that era is a distant memory.

The irony is that our present moment is one of enormous consolidation of power, and yet also one of a proliferation of technologies that let us wrest back some of that power. In this post, I hope to enlighten or remind us of some of the choices we have lost — and also talk about the ways in which we can choose to regain them, already, right now.

I will talk about the possibilities, the big dreams that are possible now, and then go into more detail about the solutions.

The Problems & Possibilities

The limitations of “online”

We make the assumption that we must be “online” to exchange data. This is reinforced by many “modern” protocols; Twitter clients, for instance, don’t tend to let you make posts by relaying them through disconnected devices.

What would it be like if you could fully participate in global communities without a constant Internet connection? If you could share photos with your friends, read the news, read your email, etc. even if you don’t have a connection at present? Even if the device you use to do that never has a connection, but can route messages via other devices that do?

Would it surprise you to learn that this was once the case? Back in the days of UUCP, much email and Usenet news — a global discussion forum that didn’t require an Internet connection — was relayed via occasional calls over phone lines. This technology remains with us, and has even improved.

Sadly, many modern protocols make no effort in this regard. Some email clients will let you compose messages offline to send when you get online later, but the assumption always is that you will be connected to an IP network again soon.

NNCP, on the other hand, lets you relay messages over TCP, a radio, a satellite, or a USB stick. Email and Usenet, since they were designed in an era where store-and-forward was valued, can actually still be used in an entirely “offline” fashion (without ever touching an IP-based network). All it takes is for someone to care to make it happen. You can even still do it over UUCP if you like.
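
As a rough sketch of what that store-and-forward flow looks like in practice (these are the NNCP tool names; exact flags and node configuration are omitted):

nncp-file ./report.pdf remote:    # queue a file for the node "remote"
nncp-xfer /mnt/usbstick           # write queued packets onto a USB stick
# ...carry the stick to the other machine...
nncp-xfer /mnt/usbstick           # import the packets on the far side
nncp-toss                         # process the inbound spool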

The physical and data link layers

Many of us just accept that we communicate in a few ways: Wifi for short distances, and then cable modems or DSL for our local Internet connection, and then many people are fuzzy about what happens after that. Or, alternatively, we have 4G phones that are the local Internet connection, and the same “fuzzy” things happen after.

Think about this for a moment. Which of these do you control in any way? Sometimes just wifi, sometimes maybe you have choices of local Internet providers. After that, your traffic is handled by enormous infrastructure companies.

There is choice here.

People in ham radio have been communicating digitally over long distances without the support of the traditional Internet for decades, but the technology to do this is now more accessible to anyone. Long-distance radio has had tremendous innovation in the last decade; cheap radios can now communicate over several miles/km without any other infrastructure at all. We all carry around radios (Wifi and Bluetooth) in our pockets that don’t have to be used as mere access points to the Internet or as drivers of headphones, but can also form their own networks directly (Briar).

Meshtastic is an example; it’s an instant messenger that can form a mesh over many miles/km and requires no IP infrastructure at all. Briar is similar. XBee radios form a mesh in hardware, allowing peers to reach each other (also over many miles/km) with a serial or framed protocol.

Loss of peer-to-peer

Back in the late 90s, I worked at a university. I had a 386 on my desk for a workstation – not a powerful computer even then. But I put the boa webserver on it and could just serve pages on the Internet. I didn’t have to get permission. Didn’t have to pay a hosting provider. I could just DO it.

And of course that is because the university had no firewall and no NAT. Every PC at the university was a full participant on the Internet as much as the servers at Microsoft or DEC. All I needed was a DNS entry. I could run my own SMTP server if I wanted, run a web or Gopher server, and that was that.

There are many reasons why this changed. Nowadays most residential ISPs will block SMTP for their customers, and if they didn’t, others would; large email providers have decided not to federate with IPs in residential address spaces. Most people have difficulty even getting a static IP address in the first place. Many are behind firewalls, NATs, or both, meaning that incoming connections of any kind are problematic.

Do you see what that means? It has weakened the whole point of the Internet being a network of peers. While IP still acts that way, as a practical matter, there are clients that are prevented from being servers by administrative policy they have no control over.

Imagine if you, a person with an Internet connection to your laptop or phone, could just decide to host a website, or a forum on it. For moderate levels of load, they are certainly capable of this. The only thing in the way is the network management policies you can’t control.

Elaborate technologies exist to try to bridge this divide, and some, like Tor or cjdns, can work quite well. More on this below.

Expense of running something popular

Related to the loss of peer-to-peer infrastructure is the very high cost of hosting something popular. Do you want to share videos with lots of people? That almost certainly is going to require expensive equipment and bandwidth.

There is a reason that there are only a small handful of popular video streaming sites online. It requires a ton of money to host videos at scale.

What if it didn’t? What if you could achieve economies of scale so much that you, an individual, could compete with the likes of YouTube? You wouldn’t necessarily have to run ads to support the service. You wouldn’t have to have billions of dollars or billions of viewers just to make it work.

This technology exists right now. Of course many of you are aware of how Bittorrent leverages the swarm for files. But projects like IPFS, Dat, and Peertube have taken this many steps further to integrate it into a global ecosystem. And, at least in the case of Peertube, this is a thing that works right now in any browser already!

Application-level “walled gardens”

I was recently startled at how much excitement there was when Github introduced “dark mode”. Yes, Github now offers two colors on its interface. Already back in the 80s and 90s, many DOS programs had more options than that.

Git is a decentralized protocol, but Github has managed to make it centralized.

Email is a decentralized protocol — pick your own provider, and they all communicate — but Facebook and Twitter aren’t. You can’t just pick your provider for Facebook. It’s Facebook or nothing.

There is a profit motive in locking others out; these networks want to keep you using their platforms because their real customers are advertisers, and they want to keep showing you ads.

Is it possible to have a world where you get to pick your own app for sharing photos, and it works even if your parents use a different one? Yes, yes it is.

Mastodon and the Fediverse are fantastic examples for social media. Pixelfed is specifically designed for photos, Mastodon for short-form communication, there’s Pleroma for more long-form communication, and they all work together. You can use Mastodon to read Pleroma content or look at Pixelfed photos, and there are many (free) providers of each.

Freedom from manipulation

I recently wrote about the dangers of the attention economy, so I won’t go into a lot of detail here. Fundamentally, you are not the customer of Facebook or Google; advertisers are. They optimize their site to keep you on it as much as possible so that they can show you as many ads as possible which makes them as much money as possible. Ads, of course, are fundamentally seeking to manipulate your behavior (“buy this product”).

By lowering the cost of running services, we can give a huge boost to hobbyists and nonprofits that want to do so without an ultimate profit motive. For-profit companies benefit also, with a dramatically reduced cost structure that frees them to pursue their mission instead of so many ads.

Freedom from snooping (privacy and anonymity)

These days, it’s not just government snooping that people think about. It’s data stolen by malware, spies at corporations (whether human or algorithmic), and even things like basic privacy of one’s own security footage. Here the picture is improving; encryption in transit, at least at a basic level, has become much more common with TLS being a standard these days. Sadly, end-to-end encryption (E2EE) is not nearly as much, perhaps because corporations have a profit motive to have access to your plaintext and metadata.

Closely related to privacy is anonymity: that is, being able to do things in an anonymous fashion. The two are not necessarily equal: you could send an encrypted message but reveal who the correspondents are, as with email; or, you could send a plaintext message over a Tor exit node that hides who the correspondents are. It is sometimes difficult to achieve both.

Nevertheless, numerous answers exist here that tackle one or both problems, from the Signal messenger to Tor.

Solutions That Exist Today

Let’s dive in to some of the things that exist today.

One concept you’ll see in many of these is integrated encryption with public keys used for addressing. In other words, your public key is akin to an IP address (and in some cases, is literally your IP address.)

Data link and networking technologies (some including P2P)

  • Starting with the low-power and long-distance technologies, I’ve written quite a bit about LoRA, a family of low-power, long-distance radios. They can easily achieve several miles/km while still using much less than 1W of power. LoRA is a common building block of mesh off-the-grid messenger systems such as meshtastic, which forms an ad-hoc mesh of LoRA devices with days-long battery life and miles-long communication abilities. LoRA trades speed for range; in its longest-distance modes, it may operate at 300bps or less. That is not a typo. Some LoRAWAN devices have battery life measured in years (usually one-way sensors and such). Also, the Pine64 folks are working to integrate LoRA on nearly all their product line, which includes single-board computers, phones, and laptops.
  • Similar to LoRA is XBee SX from Digi. While not quite as long-distance as LoRA, it does still do quite a bit with low power and also goes many miles. XBee modules have automatic mesh routing in firmware, and can be used in either frame mode or “serial cable emulation” mode in which they act as if they’re a serial cable. Unlike plain LoRA, XBee radios do hardware retransmit. They also run faster, at up to about 150Kbps – though that is still a lot slower than wifi.
  • I’ve written about secure mesh messengers recently. One of them, Briar, particularly stands out in that it is able to form an ad-hoc mesh using phone’s Bluetooth radios. It can also route messages over the public Internet, which it does exclusively using Tor.
  • I’ve also written a lot about NNCP, a sort of modernized UUCP. NNCP is completely different than the others here in that it is a store-and-forward network. NNCP has easy built-in support for routing packets using USB drives, clean serial interfaces, TCP, basically anything you can pipe to, even broadcast satellite and such. And you don’t even have to pick one; you can use all of the above: Internet when it’s available, USB sticks or portable hard drives when not, etc. It uses Tor-like onion routing with E2EE. You’re not going to run TCP over NNCP, but files (including videos), backups, email, even remote execution are all possible. It is the most “Unixy” of the modern delay-tolerant networks and makes an excellent choice for a number of use cases where store-and-forward and extreme flexibility in transportation make a lot of sense.
  • Moving now into the range of speeds and technologies we’re more used to, there is a lot of material out there on building mesh networks on Wifi or Wifi-adjacent technology. Amateur radio operators have been active in this area for years, and even if you aren’t a licensed ham and don’t necessarily flash amateur radio firmware onto your access points, a lot of the ideas and concepts they cover could be of interest. For instance, the Amateur Radio Emergency Data Network covers both permanent and ad-hoc meshs, and this AREDN video covers device selection for AREDN — which also happens to be devices that would be useful for quite a few other mesh or long-distance point-to-point setups.
  • Once you have a physical link of some sort, cjdns and the Hyperboria network have the goals of literally replacing the Internet – but are fully functional immediately. cjdns assigns each node an IPv6 address based on its public key. The network uses DHT for routing between nodes. It can run directly atop Ethernet (and Wifi) as its own native protocol, without an IP stack underneath. It can also run as a layer atop the current Internet. And it can optionally be configured to let nodes find an exit node to reach the current public Internet, which they can do opportunistically if given permission. All traffic is E2EE. One can run an isolated network, or join the global Hyperboria network. The idea is that local meshes could be formed, and then geographically distant meshes can be linked together by simply using the current public Internet as a dumb transport. This, actually, strongly resembles the early days of Internet buildout under NSFNet. The Toronto Mesh is a prominent user of cjdns, and they publish quite a bit of information online. cjdns as a standalone identity is in decline, but forms the basis of the pkt network, which is designed to foster an explosion in WISPs.
  • Similar in concept to cjdns is Yggdrasil, which uses a different routing algorithm. It is now more active than cjdns and has active participants and developers.
  • Althea is a startup in this space, hoping to encourage communities to build meshes whose purpose is to provide various routes to access to the traditional Internet, including digital currency micropayments. This story documents how one rural community is using it.
  • Tor is a somewhat interesting case. While it doesn’t provide kernel-level routing, it does provide a SOCKS5 proxy. Traditionally, Tor is used to achieve anonymity while browsing the public Internet via an exit node. However, you can stay entirely in-network by using onion services (basically ports that are open to Tor). All Tor traffic is onion-routed so that the originating IP cannot be discovered. Data within Tor is E2EE, though if you are using an exit node to the public Internet, that of course can’t apply there.
  • GNUnet is a large suite of tools for P2P communication. It includes file downloading, Tor-like IP over the network, a DNS replacement, and facilitates quite a few of the goals discussed here. (Added in a 2021-02-22 update)

P2P Infrastructure

While some of the technologies above, such as cjdns, explicitly facilitate peer-to-peer communication, there are some other application-level technologies to look at.

  • IPFS has been having a lot of buzz lately, since the Brave browser integrated support. IPFS headlines as “powers the distributed web”, but it is actually more than that; various other apps layer atop it. The core idea is that content you request gets reshared by your node for some period of time, somewhat akin to Bittorrent. IPFS runs atop the regular Internet and is typically accessed through an app.
  • The Dat Protocol is somewhat similar in concept to IPFS, though the approach is somewhat different; it emphasizes efficient distribution of updates at the expense of requiring a git-like history.
  • IPFS itself is based on libp2p, which is designed to be a generic infrastructure for adding P2P capabilities to your own code. It is probably fair to say libp2p is still quite complex compared to ordinary TCP, and the language support is in its infancy, but nevertheless it is quite an exciting development to watch.
  • Of course almost all of us are familiar with Bittorrent, the software that first popularized the idea of a distributed mesh sharing knowledge about which chunks of a dataset they have in order to maximize the efficiency of distributing the whole thing. Bittorrent is still in wide use (and, despite its reputation, that wide use includes legitimate users such as archive.org and Debian).
  • I recently wrote about building a delay-tolerant offline-capable mesh with Syncthing. Syncthing, on its surface, is something like an open source Dropbox. But look into it a bit and you realize it’s fully P2P, serverless, and can support various network topologies, including intermittent connectivity between network parts. My article dives into that in more detail. If your needs are mostly related to files, Syncthing can make a fine mesh infrastructure that is auto-healing and is equally at home on the public Internet, a local wifi access point with no Internet at all, a private mesh like cjdns, etc.
  • Also showing some promise is Secure Scuttlebutt (SSB). Its most well-known application is a social network, but in my opinion some of the other applications atop SSB are more interesting. SSB is designed to be offline-friendly, can do things like automatically exchange data with peers on the same Wifi (eg, a coffee shop), etc., though it is an append-only log that can be unwieldy on mobile sometimes.

Instant Messengers and Chat

I won’t go into a lot of detail here since I recently wrote a roundup of secure mesh messengers and also a followup article about Signal and some hidden drawbacks of P2P. Please refer to those articles for some interesting things that are happening in this space.

Matrix is a distributed IM platform similar in concept to Slack or IRC, but globally distributed in a mesh. It supports optional E2EE.

Social Media

I wrote recently about how to join the Fediverse, which covered joining Mastodon, a federated, decentralized social network. Mastodon is the largest of these, with several million users, and is something of a much nicer version of Twitter.

Mastodon is also part of what is known as the “Fediverse”, which are applications that are loosely joined together by their support of the ActivityPub protocol. Other popular Fediverse applications include Pixelfed (similar to Instagram) and Peertube for sharing video. Peertube is particularly interesting in that it supports Webtorrent for efficiently distributing popular videos. Webtorrent is akin to Bittorrent running efficiently inside your browser.

Concluding Remarks

Part of my goal with this is encouraging people to dream big, to ask questions like:

What could you do if offline were easy?

What is possible if you have freedom in the physical and data link layers? Dream big.

We’re so used to thinking that it’s quite difficult for two devices on the Internet to talk to each other. What would be possible if this were actually quite easy?

The assumption that costs rise dramatically as popularity increases is also baked into our thought processes. What if that weren’t the case — could you take on Youtube from your garage? Would lowering barriers to entry lower the ad economy and let nonprofits have more equal footing with large corporations?

We have so many walled gardens, from Github to Facebook, that we almost forget it doesn’t have to be that way.

So having asked these questions, my secondary point is to suggest that these aren’t pie-in-the-sky notions. These possibilities are with us right now.

You’ll notice from this list that virtually every one of these technologies is ad-free at its heart (though some would be capable of serving ads). They give you back your attention. Many preserve privacy, anonymity, or both. Many dramatically improve your freedom of association and communication. Technologies like IPFS and Bittorrent ease the burden of running something popular.

Some are quite easy to use (Mastodon or Peertube) while others are much more complex (libp2p or the lower-level mesh network systems).

Clearly there is still room for improvement in many areas.

But my fundamental point is this: good technology is here, right now. Technical people can vote with their feet and wallets and start using it. Early adopters will help guide the way for the next set of improvements. Join us!

Planet Debian: Russ Allbery: Review: Finder

Review: Finder, by Suzanne Palmer

Series: Finder Chronicles #1
Publisher: DAW Books
Copyright: 2019
ISBN: 0-7564-1511-X
Format: Kindle
Pages: 391

Fergus Ferguson is a repo man, or professional finder as he'd prefer. He locates things taken by people who don't own them and returns them to their owners. In this case, the thing in question is a sentient starship, and the person who stole it is Arum Gilger, a warlord in a wired-together agglomeration of space habitats and mined-out asteroids named Cernekan. Cernee, as the locals call it, is in the backwaters of human space near the Gap between the spiral arms of the galaxy.

One of Fergus's first encounters in Cernee is with an old lichen farmer named Mattie Vahn who happens to take the same cable car between stations that he does. Bad luck for Fergus, since that's also why Gilger's men first disable and then blow up the cable car, leaving Mattie dead and Fergus using the auto-return feature of Mattie's crates to escape to the Vahns' home station. The Vahns are not a power in Cernee, not exactly, but they do have some important alliances and provide an inroads for Fergus to get the lay of the land and map out a plan to recover the Venetia's Sword.

This is a great hook. I would happily read a whole series about an interstellar repo man, particularly one like Fergus who only works for the good guys and recovers things from petty warlords. Fergus is a thoughtful, creative loner whose style is improvised plans, unexpected tactics, and thinking on his feet rather than either bluster or force (although there is a fair bit of death in this book, some of which is gruesome). About two-thirds of the book is in roughly that expected shape. Fergus makes some local contacts, maps out the political terrain, and maneuvers himself towards his target through a well-crafted slum of wired-together habitats and working-class miners. Also, full points for the creative security system on the starship that tries to solve a nearly impossible problem (a backdoor supplementing pre-shared keys with a cultural authentication scheme that can't be vulnerable to brute force or database searches).

Halfway through, though, Palmer throws a curve ball at the reader that involves the unexplained alien presence that's been lurking around the system. That part of the plot shifts focus somewhat abruptly from the local power struggle Fergus has been navigating to something far more intrusive and personal. Fergus has to both reckon with a drastic change in his life and deal with memories of his early life on an Earth drowning in climate change, his abusive childhood, and his time spent in the Martian resistance.

This is also a fine topic for an SF novel, but I think Finder suffered a bit from falling between two stools. The fun competence drama of the lone repossession agent striking back against petty tyrants by taking away their toys is derailed by the sudden burst of introspection and emotional processing, but the processing is not deep or complex enough to carry the story on its own. Fergus had an awful and emotionally alienated childhood followed by some nasty trauma, to which he has responded by carefully never getting close to anyone so that he never hurts anyone who relies on him. And yet, he's a fundamentally decent person and makes friends despite himself, and from there you can probably write the rest of the arc yourself. There's nothing wrong with this as the emotional substrate of a book that's primarily focused on an action plot, but the screeching change of focus threw me off.

The good news is that the end of the book returns to the bits that I liked about the first half. The mixed news is that I thought the political situation in Cernee resolved much too easily and much too straightforwardly. I would have preferred the twisty alliances to stay twisty, rather than collapse messily into a far simpler moral duality. I will also speak on behalf of all the sentient starship lovers out there and complain that the Venetia's Sword was woefully underused. It had better show up in a future volume!

This unsteadiness and a few missed opportunities make Finder a good book rather than a great one, but I was still happily entertained and willing to write that off as first-novel unevenness. There are a lot of background elements left unresolved for a future volume, but Finder comes to a satisfying conclusion. Recommended if you're looking for an undemanding space action story with a quick pace and decent, if not very deep, characters.

Followed by Driving the Deep.

Rating: 7 out of 10

Dave Hall: Parameter Store vs Secrets Manager

Which AWS managed service is best for storing and managing your secrets?


Planet Debian: Erich Schubert: My first Rust crate: faster kmedoids clustering

I have written my first Rust crate: kmedoids.

Python users can use the wrapper package kmedoids.

It implements k-medoids clustering, and includes our new FasterPAM algorithm that drastically reduces the computational overhead. As long as you can afford to compute the distance matrix of your data set, clustering it with k-medoids is now feasible even for large k. (If your data is continuous and you are interested in minimizing squared errors, k-means surely remains the better choice!)
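
A short usage sketch of the Python wrapper (assuming fasterpam() takes a precomputed dissimilarity matrix plus k, and returns an object with loss, labels, and medoids; treat the exact API as illustrative):

import numpy as np
import kmedoids

X = np.random.rand(500, 3)  # toy data set
# Full pairwise Euclidean distance matrix -- the part you must be able to afford
diss = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
result = kmedoids.fasterpam(diss, 5)  # FasterPAM with k = 5
print(result.loss, result.medoids)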

My take on Rust so far:

  • Pedantic. Which is good if you want quality code. Which is bad if you want others to contribute.
  • Run time was very fast, I liked that. The pedanticness gives the compiler additional information to optimize better, of course.
  • Tooling is okay, but can be improved. Compilers give good error messages, but the color scheme assumes a black background terminal.
  • I’d prefer to have it properly integrated into my OS, rather than having yet another package manager in the form of rustup. It is the road to madness when everything brings its own package manager; this should be part of the operating system.
  • The python module generation with PyO3 is crazy shit, but cool to have.
  • I like the exception handling and optionals so far. And with Rust you know that it will be optimized out very well. With Java you know pretty well that it won’t be when you’d most need it…
  • It is a pity that there seems to be a secret Rust convention to never document internal functions or code, only APIs. Java overdid it in the other direction with the convention of documenting stupid getters and setters, but there ought to be a middle ground.
  • They overdid it with making everything as few characters as possible. Code does not get better if it’s shorter. I have never been a fan of omitting “return” statements (it saves just 6 characters)! But Rust is not the worst here, because at least it has strong typing. Implicit returns are error-prone.
  • A simple for i in 0..n { already causes a clippy warning; the clippy rule clearly overshoots its own description, as it fails to detect whether the index i is actually needed. The suggested alternative is for (i, item) in list.iter().enumerate() { (see the sketch after this list). And apparently there is some weird reason why iterators are even faster than a range for loop?!?
  • My first interactions with the Rust community were not particularly welcoming.
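
To make the range-loop complaint concrete, here is a small sketch: clippy’s needless_range_loop lint fires on the first variant even though the index is genuinely needed, and suggests the second.

// The index i is genuinely used in the computation, yet
// clippy::needless_range_loop still fires on the indexed form.
fn weighted_indexed(list: &[f64]) -> f64 {
    let mut total = 0.0;
    for i in 0..list.len() {
        total += i as f64 * list[i];
    }
    total
}

// The style clippy suggests instead; the iterator form can also
// elide bounds checks, which is presumably why it can be faster.
fn weighted_enumerate(list: &[f64]) -> f64 {
    let mut total = 0.0;
    for (i, item) in list.iter().enumerate() {
        total += i as f64 * item;
    }
    total
}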

Will I use it more?

I don’t know. Probably if I need extreme performance, but I likely would not want to do everything myself in a pedantic language. So community is key, and I do not see Rust shine there.

Planet DebianEnrico Zini: Software development links

Next time we'll iterate on Himblick design and development: Raspberry Pi 4 can now run plain standard Debian, which should make a lot of things easier and cleaner when developing products based on it.

Somewhat related to nspawn-runner, random links somehow related to my feeling that nspawn comes from an ecosystem which gives me a bigger sense of focus on security and solidity than Docker:

I did a lot of work on A38, a Python library to deal with FatturaPA electronic invoicing, and it was a wonderful surprise to see a positive review spontaneously appear! ♥: Fattura elettronica, come visualizzarla con python | TuttoLogico

A beautiful, hands-on explanation of git internals, as a step by step guide to reimplementing your own git: Git Internals - Learn by Building Your Own Git

I recently tried meson and liked it a lot. I then gave unity builds a try, since it supports them out of the box, and found myself with doubts. I found I wasn't alone, and I liked The Evils of Unity Builds as a summary of the situation.

A point of view I liked on technological debt: Technical debt as a lack of understanding

Finally, a classic, and a masterful explanation for a question that keeps popping up: RegEx match open tags except XHTML self-contained tags

Planet DebianDmitry Shachnev: ReText turns 10 years

Exactly ten years ago, in February 2011, the first commit in the ReText git repository was made. It was just a single 364-line Python file back then (now the project has more than 6000 lines of Python code).

Since 2011, the editor migrated from SourceForge to GitHub, gained a lot of new features, and — most importantly — now there is an active community around it, which includes both long-time contributors and newcomers who create their first issues or pull requests. I don’t always have enough time to reply to issues or implement new features myself, but the community members help me with this.

Earlier this month, I made a new release (7.2), which adds a side panel with a directory tree (contributed by Xavier Gouchet), an option to fully highlight wrapped lines (contributed by nihillum), the ability to search in preview mode, and much more — see the release page on GitHub.

Side panel in ReText

Also a new version of the PyMarkups module was released, which contains all the code for processing various markup languages. It now supports markdown-extensions.yaml files, which allow specifying complex extension options, and adds initial support for MathJax 3.

Also check out the release notes for 7.1 which was not announced on this blog.

Future plans include making at least one more release this year to add support for Qt 6. Qt 5 support will last for at least one more year.

Planet DebianMatthew Garrett: Making hibernation work under Linux Lockdown

Linux draws a distinction between code running in kernel (kernel space) and applications running in userland (user space). This is enforced at the hardware level - in x86-speak[1], kernel space code runs in ring 0 and user space code runs in ring 3[2]. If you're running in ring 3 and you attempt to touch memory that's only accessible in ring 0, the hardware will raise a fault. No matter how privileged your ring 3 code, you don't get to touch ring 0.

Kind of. In theory. Traditionally this wasn't well enforced. At the most basic level, since root can load kernel modules, you could just build a kernel module that performed any kernel modifications you wanted and then have root load it. Technically user space code wasn't modifying kernel space code, but the difference was semantic rather than practical. But it got worse - root could also map memory ranges belonging to PCI devices[3], and if the device could perform DMA you could just ask the device to overwrite bits of the kernel[4]. Or root could modify special CPU registers ("Model Specific Registers", or MSRs) that alter CPU behaviour via the /dev/msr interface, and compromise the kernel boundary that way.

It turns out that there were a number of ways root was effectively equivalent to ring 0, and the boundary was more about reliability (ie, a process running as root that ends up misbehaving should still only be able to crash itself rather than taking down the kernel with it) than security. After all, if you were root you could just replace the on-disk kernel with a backdoored one and reboot. Going deeper, you could replace the bootloader with one that automatically injected backdoors into a legitimate kernel image. We didn't have any way to prevent this sort of thing, so attempting to harden the root/kernel boundary wasn't especially interesting.

In 2012 Microsoft started requiring vendors ship systems with UEFI Secure Boot, a firmware feature that allowed[5] systems to refuse to boot anything without an appropriate signature. This not only enabled the creation of a system that drew a strong boundary between root and kernel, it arguably required one - what's the point of restricting what the firmware will stick in ring 0 if root can just throw more code in there afterwards? What ended up as the Lockdown Linux Security Module provides the tooling for this, blocking userspace interfaces that can be used to modify the kernel and enforcing that any modules have a trusted signature.

But that comes at something of a cost. Most of the features that Lockdown blocks are fairly niche, so the direct impact of having it enabled is small. Except that it also blocks hibernation[6], and it turns out some people were using that. The obvious question is "what does hibernation have to do with keeping root out of kernel space", and the answer is a little convoluted and is tied into how Linux implements hibernation. Basically, Linux saves system state into the swap partition and modifies the header to indicate that there's a hibernation image there instead of swap. On the next boot, the kernel sees the header indicating that it's a hibernation image, copies the contents of the swap partition back into RAM, and then jumps back into the old kernel code. What ensures that the hibernation image was actually written out by the kernel? Absolutely nothing, which means a motivated attacker with root access could turn off swap, write a hibernation image to the swap partition themselves, and then reboot. The kernel would happily resume into the attacker's image, giving the attacker control over what gets copied back into kernel space.
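
(For the curious: the marker in question is the signature field in the last 10 bytes of the swap area's first page. Here is a rough userspace sketch of reading it, assuming 4 KiB pages and the SWAPSPACE2/S1SUSPEND marker strings used by the kernel's swsusp code; those details are from memory and worth double-checking against kernel/power/swap.c.)

use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// Read the signature stored in the last 10 bytes of the first page
// of a swap device and classify what is currently on it.
fn swap_state(device: &str) -> std::io::Result<&'static str> {
    let mut f = File::open(device)?;
    let mut sig = [0u8; 10];
    f.seek(SeekFrom::Start(4096 - 10))?; // assumes 4 KiB pages
    f.read_exact(&mut sig)?;
    Ok(if &sig == b"SWAPSPACE2" {
        "plain swap"
    } else if sig.starts_with(b"S1SUSPEND") {
        "hibernation image" // written by the kernel at hibernation time
    } else {
        "unknown"
    })
}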

This is annoying, because normally when we think about attacks on swap we mitigate it by requiring an encrypted swap partition. But in this case, our attacker is root, and so already has access to the plaintext version of the swap partition. Disk encryption doesn't save us here. We need some way to verify that the hibernation image was written out by the kernel, not by root. And thankfully we have some tools for that.

Trusted Platform Modules (TPMs) are cryptographic coprocessors[7] capable of doing things like generating encryption keys and then encrypting things with them. You can ask a TPM to encrypt something with a key that's tied to that specific TPM - the OS has no access to the decryption key, and nor does any other TPM. So we can have the kernel generate an encryption key, encrypt part of the hibernation image with it, and then have the TPM encrypt it. We store the encrypted copy of the key in the hibernation image as well. On resume, the kernel reads the encrypted copy of the key, passes it to the TPM, gets the decrypted copy back and is able to verify the hibernation image.

That's great! Except root can do exactly the same thing. This tells us the hibernation image was generated on this machine, but doesn't tell us that it was done by the kernel. We need some way to be able to differentiate between keys that were generated in kernel and ones that were generated in userland. TPMs have the concept of "localities" (effectively privilege levels) that would be perfect for this. Userland is only able to access locality 0, so the kernel could simply use locality 1 to encrypt the key. Unfortunately, despite trying pretty hard, I've been unable to get localities to work. The motherboard chipset on my test machines simply doesn't forward any accesses to the TPM unless they're for locality 0. I needed another approach.

TPMs have a set of Platform Configuration Registers (PCRs), intended for keeping a record of system state. The OS isn't able to modify the PCRs directly. Instead, the OS provides a cryptographic hash of some material to the TPM. The TPM takes the existing PCR value, appends the new hash to that, and then stores the hash of the combination in the PCR - a process called "extension". This means that the new value of the PCR depends not only on the value of the new data, it depends on the previous value of the PCR - and, in turn, that previous value depended on its previous value, and so on. The only way to get to a specific PCR value is to either (a) break the hash algorithm, or (b) perform exactly the same sequence of writes. On system reset the PCRs go back to a known value, and the entire process starts again.
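
Mechanically, extension is just hash chaining. As a toy model (not TPM code; assumes a SHA-256 PCR bank):

use sha2::{Digest, Sha256};

// New PCR value = H(old PCR value || H(new data)).
fn extend(pcr: [u8; 32], data: &[u8]) -> [u8; 32] {
    let measurement = Sha256::digest(data);
    let mut h = Sha256::new();
    h.update(pcr);
    h.update(measurement);
    h.finalize().into()
}

fn main() {
    let reset = [0u8; 32]; // PCRs return to a known value on reset
    let after_boot = extend(reset, b"bootloader");
    let after_kernel = extend(after_boot, b"kernel");
    // Reaching a given value requires replaying exactly the same
    // sequence of extends from the reset state.
    assert_ne!(after_boot, after_kernel);
}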

Some PCRs are different. PCR 23, for example, can be reset back to its original value without resetting the system. We can make use of that. The first thing we need to do is to prevent userland from being able to reset or extend PCR 23 itself. All TPM accesses go through the kernel, so this is a simple matter of parsing the write before it's sent to the TPM and returning an error if it's a sensitive command that would touch PCR 23. We now know that any change in PCR 23's state will be restricted to the kernel.

When we encrypt material with the TPM, we can ask it to record the PCR state. This is given back to us as metadata accompanying the encrypted secret. Along with the metadata is an additional signature created by the TPM, which can be used to prove that the metadata is both legitimate and associated with this specific encrypted data. In our case, that means we know what the value of PCR 23 was when we encrypted the key. That means that if we simply extend PCR 23 with a known value in-kernel before encrypting our key, we can look at the value of PCR 23 in the metadata. If it matches, the key was encrypted by the kernel - userland can create its own key, but it has no way to extend PCR 23 to the appropriate value first. We now know that the key was generated by the kernel.

But what if the attacker is able to gain access to the encrypted key? Let's say a kernel bug is hit that prevents hibernation from resuming, and you boot back up without wiping the hibernation image. Root can then read the key from the partition, ask the TPM to decrypt it, and then use that to create a new hibernation image. We probably want to prevent that as well. Fortunately, when you ask the TPM to encrypt something, you can ask that the TPM only decrypt it if the PCRs have specific values. "Sealing" material to the TPM in this way allows you to block decryption if the system isn't in the desired state. So, we define a policy that says that PCR 23 must have the same value at resume as it did on hibernation. On resume, the kernel resets PCR 23, extends it to the same value it did during hibernation, and then attempts to decrypt the key. Afterwards, it resets PCR 23 back to the initial value. Even if an attacker gains access to the encrypted copy of the key, the TPM will refuse to decrypt it.

And that's what this patchset implements. There's one fairly significant flaw at the moment, which is simply that an attacker can just reboot into an older kernel that doesn't implement the PCR 23 blocking and set up state by hand. Fortunately, this can be avoided using another aspect of the boot process. When you boot something via UEFI Secure Boot, the signing key used to verify the booted code is measured into PCR 7 by the system firmware. In the Linux world, the Shim bootloader then measures any additional keys that are used. By either using a new key to tag kernels that have support for the PCR 23 restrictions, or by embedding some additional metadata in the kernel that indicates the presence of this feature and measuring that, we can have a PCR 7 value that verifies that the PCR 23 restrictions are present. We then seal the key to PCR 7 as well as PCR 23, and if an attacker boots into a kernel that doesn't have this feature the PCR 7 value will be different and the TPM will refuse to decrypt the secret.

While there's a whole bunch of complexity here, the process should be entirely transparent to the user. The current implementation requires a TPM 2, and I'm not certain whether TPM 1.2 provides all the features necessary to do this properly - if so, extending it shouldn't be hard, but also all systems shipped in the past few years should have a TPM 2, so that's going to depend on whether there's sufficient interest to justify the work. And we're also at the early days of review, so there's always the risk that I've missed something obvious and there are terrible holes in this. And, well, given that it took almost 8 years to get the Lockdown patchset into mainline, let's not assume that I'm good at landing security code.

[1] Other architectures use different terminology here, such as "supervisor" and "user" mode, but it's broadly equivalent
[2] In theory rings 1 and 2 would allow you to run drivers with privileges somewhere between full kernel access and userland applications, but in reality we just don't talk about them in polite company
[3] This is how graphics worked in Linux before kernel modesetting turned up. XFree86 would just map your GPU's registers into userland and poke them directly. This was not a huge win for stability
[4] IOMMUs can help you here, by restricting the memory PCI devices can DMA to or from. The kernel then gets to allocate ranges for device buffers and configure the IOMMU such that the device can't DMA to anything else. Except that region of memory may still contain sensitive material such as function pointers, and attacks like this can still cause you problems as a result.
[5] This describes why I'm using "allowed" rather than "required" here
[6] Saving the system state to disk and powering down the platform entirely - significantly slower than suspending the system while keeping state in RAM, but also resilient against the system losing power.
[7] With some handwaving around "coprocessor". TPMs can't be part of the OS or the system firmware, but they don't technically need to be an independent component. Intel have a TPM implementation that runs on the Management Engine, a separate processor built into the motherboard chipset. AMD have one that runs on the Platform Security Processor, a small ARM core built into their CPU. Various ARM implementations run a TPM in Trustzone, a special CPU mode that (in theory) is able to access resources that are entirely blocked off from anything running in the OS, kernel or otherwise.


Planet DebianRuss Allbery: Review: The Fated Sky

Review: The Fated Sky, by Mary Robinette Kowal

Series: Lady Astronaut #2
Publisher: Tor
Copyright: August 2018
ISBN: 0-7653-9893-1
Format: Kindle
Pages: 380

The Fated Sky is a sequel to The Calculating Stars, but you could start with this book if you wanted to. It would be obvious you'd missed a previous book in the series, and some of the relationships would begin in medias res, but the story is sufficiently self-contained that one could puzzle through.

Mild spoilers follow for The Calculating Stars, although only to the extent of confirming that book didn't take an unexpected turn, and nothing that wouldn't already be spoiled if you had read the short story "The Lady Astronaut of Mars" that kicked this series off. (The short story takes place well after all of the books.) Also some minor spoilers for the first section of the book, since I have to talk about its outcome in broad strokes in order to describe the primary shape of the novel.

In the aftermath of worsening weather conditions caused by the Meteor, humans have established a permanent base on the Moon and are preparing a mission to Mars. Elma is not involved in the latter at the start of the book; she's working as a shuttle pilot on the Moon, rotating periodically back to Earth. But the political situation on Earth is becoming more tense as the refugee crisis escalates and the weather worsens, and the Mars mission is in danger of having its funding pulled in favor of other priorities. Elma's success in public outreach for the space program as the Lady Astronaut, enhanced by her navigation of a hostage situation when an Earth re-entry goes off course and is met by armed terrorists, may be the political edge supporters of the mission need.

The first part of this book is the hostage situation and other ground-side politics, but the meat of this story is the tense drama of experimental, pre-computer space flight. For those who aren't familiar with the previous book, this series is an alternate history in which a huge meteorite hit the Atlantic seaboard in 1952, potentially setting off runaway global warming and accelerating the space program by more than a decade. The Calculating Stars was primarily about the politics surrounding the space program. In The Fated Sky, we see far more of the technical details: the triumphs, the planning, and the accidents and other emergencies that each could be fatal in an experimental spaceship headed towards Mars. If what you were missing from the first book was more technological challenge and realistic detail, The Fated Sky delivers. It's edge-of-your-seat suspenseful and almost impossible to put down.

I have more complicated feelings about the secondary plot. In The Calculating Stars, the heart of the book was an incredibly well-told story of Elma learning to deal with her social anxiety. That's still a theme here but a lesser one; Elma has better coping mechanisms now. What The Fated Sky tackles instead is pervasive sexism and racism, and how Elma navigates that (not always well) as a white Jewish woman.

The centrality of sexism is about the same in both books. Elma's public outreach is tied closely to her gender and starts as a sort of publicity stunt. The space program remains incredibly sexist in The Fated Sky, something that Elma has to cope with but can't truly fix. If you found the sexism in the first book irritating, you're likely to feel the same about this installment.

Racism is more central this time, though. In The Calculating Stars, Elma was able to help make things somewhat better for Black colleagues. She has a much different experience in The Fated Sky: she ends up in a privileged position that hurts her non-white colleagues, including one of her best friends. The merits of taking a stand on principle are ambiguous, and she chooses not to. When she later tries to help Black astronauts, she does so in a way that's focused on her perceptions rather than theirs and is therefore more irritating than helpful. The opportunities she gets, in large part because she's seen as white, unfairly hurt other people, and she has to sit with that. It's a thoughtful and uncomfortable look at how difficult it is for a white person to live with discomfort they can't fix and to not make it worse by trying to wave it away or point out their own problems.

That was the positive side of this plot, although I'm still a bit wary and would like to read a review by a Black reviewer to see how well this plot works from their perspective. There are some other choices that I thought landed oddly. One is that the most racist crew member, the one who sparks the most direct conflict with the Black members of the international crew, is a white man from South Africa, which I thought let the United States off the hook too much and externalized the racism a bit too neatly. Another is that the three ships of the expedition are the Niña, the Pinta, and the Santa Maria, and no one in the book comments on this. Given the thoughtful racial themes of the book, I can't imagine this is an accident, and it is in character for the United States of this novel to pick those names, but it was an odd intrusion of an unremarked colonial symbol. This may be part of Kowal's attempt to show that Elma is embedded in a racist and sexist world, has limited room to maneuver, and can't solve most of the problems, which is certainly a theme of the series. But it left me unsettled on whether this book was up to fully handling the fraught themes Kowal is invoking.

The other part of the book I found a bit frustrating is that it never seriously engaged with the political argument against Mars colonization, instead treating most of the opponents of space travel as either deluded conspiracy believers or cynical villains. Science fiction is still arguing with William Proxmire even though he's been dead for fifteen years and out of office for thirty. The strong argument against a Mars colony in Elma's world is not funding priorities; it's that even if it's successful, only a tiny fraction of well-connected elites will escape the planet to Mars. This argument is made in the book and Elma dismisses it as a risk she's trying to prevent, but it is correct. There is no conceivable technological future that leads to evacuating the Earth to Mars, but The Fated Sky declines to grapple with the implications of that fact.

There's more that I haven't remarked on, including an ongoing excellent portrayal of the complicated and loving relationship between Elma and her husband, and a surprising development in her antagonistic semi-friendship with the sexist test pilot who becomes the mission captain. I liked how Kowal balanced technical problems with social problems on the long Mars flight; both are serious concerns and they interact with each other in complicated ways.

The details of the perils and joys of manned space flight are excellent, at least so far as I can tell without having done the research that Kowal did. If you want a fictionalized Apollo 13 with higher stakes and less ground support, look no further; this is engrossing stuff. The interpersonal politics and sociology were also fascinating and gripping, but unsettling, in both good ways and bad. I like the challenge that Kowal presents to a white reader, although I'm not sure she was completely in control of it.

Cautiously recommended, although be aware that you'll need to grapple with a sexist and racist society while reading it. Also a content note for somewhat graphic gastrointestinal problems.

Followed by The Relentless Moon.

Rating: 8 out of 10

,

Planet DebianLouis-Philippe Véronneau: dput-ng or: How I Learned to Stop Worrying and Love the Hooks

As my contributions to Debian continue to grow in number, I find myself uploading to the archive more and more often.

Although I'm pretty happy with my current sbuild-based workflow, twice in the past few weeks I inadvertently made a binary upload instead of a source-only one.1

As it turns out, I am not the only DD who has had this problem before. As Nicolas Dandrimont kindly pointed out to me, dput-ng supports pre- and post-upload hooks that can be used to lint your uploads. Even better, it also ships with a check-debs hook that lets you block binary uploads.

Pretty neat, right? In a perfect world, enabling the hook would only be a matter of adding it to the hook list of /etc/dput.d/metas/debian.json and using the following defaults:

"check-debs": {
    "enforce": "source",
    "skip": false
},

Sadly, bug #983160 currently makes this whole setup more complex than it should be and forces me to use two different dput-ng profiles pointing to two different files in /etc/dput.d/metas: a default source-only one (ftp-master) and a binary upload one (ftp-master-binary).

Otherwise, one could use a single profile that disallows binary uploads and when needed, override the hook using something like this:

$ dput --override "check-debs.enforce=debs" foo_1.0.0-1_amd64.changes

I did start debugging the --override issue in dput-ng, but I'm not sure I'll have time to submit a patch anytime soon. In the meantime, I'm happy to report I shouldn't be uploading the wrong .changes file by mistake again!


  1. Thanks to Holger Levsen and Adrian Bunk for catching those and notifying me. 

Planet DebianKentaro Hayashi: Tokyo area Debian meeting Feb, 2021 was held online

I gave a short presentation - WAF on Debian.

Especially, I talked about usage of ModSecurity-nginx.

slide.rabbit-shocker.org

,

Planet DebianJonathan Dowland: Wrist Watches

red strap

This is everything I have to say about watches (or time pieces, or chronometers, if you prefer: I don't).

I've always worn a watch, and still do; but I've never really understood the appeal of the kind of luxury watches you see advertised here, there and everywhere, with their chunky cases, over-complicated faces and enormous price-tags. So the world of watch-appreciation was closed to me, until my 30th birthday (a while ago) when my wife bought me a Mondaine Evo "Big Date" quartz watch.

It's not an automatic watch nor an "heirloom timepiece", neither of which are properties that matter to me. The large face has almost nothing extraneous on it, although my model includes day-of-the-month. I like it very much.

And so I cracked open the door a little onto the world of watches and watch fashion and had a short spell of interest in some other styles, types, and the like. This drew to a close with buying a selection of cheap, coloured nylon fabric "nato"-style straps. Now whenever I feel the itch for a change, I just change the strap.

Smart Watches have never appealed to me. I can see some of their advantages, but the last thing I need is another gadget to regularly charge, or another avenue to check my email.

I appreciate that wearing a wrist watch at all is anachronistic (sorry), and I did wonder whether it's a habit I could get out of. A few weeks ago, during our endless Lockdown, my watch battery ran out, so I spent a couple of weeks un-learning my reliance on a wristwatch to orient myself. I've managed to get it replaced now (some watch repair places being considered Essential Services) and I'm comfortably back in my default mode of wearing and relying upon it.

Planet DebianSteinar H. Gunderson: plocate LWN post

My debian-devel thread about getting plocate in standard didn't turn into anything in Debian, but evidently, it turned into an LWN post!

My favorite quote from the comments: “It's funny that some people argue that updatedb is too costly while others argue that "find /" (which costs hardly less) is fast enough.”

Krebs on SecurityMexican Politician Removed Over Alleged Ties to Romanian ATM Skimmer Gang

The leader of Mexico’s Green Party has been removed from office following allegations that he received money from a Romanian ATM skimmer gang that stole hundreds of millions of dollars from tourists visiting Mexico’s top tourist destinations over the past five years. The scandal is the latest fallout stemming from a three-part investigation into the organized crime group by KrebsOnSecurity in 2015.

One of the Bluetooth-enabled PIN pads pulled from a compromised ATM in Mexico. The two components on the left are legitimate parts of the machine. The fake PIN pad, made to be slipped under the legit PIN pad on the machine, is the orange component, top right. The Bluetooth and data storage chips are in the middle.

Jose de la Peña Ruiz de Chávez, who leads the Green Ecologist Party of Mexico (PVEM), was dismissed this month after it was revealed that his were among 79 bank accounts seized as part of an ongoing law enforcement investigation into a Romanian organized crime group that owned and operated an ATM network throughout the country.

In 2015, KrebsOnSecurity traveled to Mexico’s Yucatan Peninsula to follow up on reports about a massive spike in ATM skimming activity that appeared centered around some of the nation’s primary tourist areas.

That three-part series concluded that Intacash, an ATM provider owned and operated by a group of Romanian citizens, had been paying technicians working for other ATM companies to install sophisticated Bluetooth-based skimming devices inside cash machines throughout the Quintana Roo region of Mexico, which includes Cancun, Cozumel, Playa del Carmen and Tulum.

Unlike most skimmers — which can be detected by looking for out-of-place components attached to the exterior of a compromised cash machine — these skimmers were hooked to the internal electronics of ATMs operated by Intacash’s competitors by authorized personnel who’d reportedly been bribed or coerced by the gang.

But because the skimmers were Bluetooth-based — allowing thieves periodically to collect stolen data just by strolling up to a compromised machine with a mobile device — KrebsOnSecurity was able to detect which ATMs had been hacked using nothing more than a cheap smart phone.

In a series of posts on Twitter, De La Peña denied any association with the Romanian organized crime gang, and said he was cooperating with authorities.

But it is likely the scandal will ensnare a number of other important figures in Mexico. According to a report in the Mexican publication Expansion Politica, the official list of bank accounts frozen by the Mexican Ministry of Finance include those tied to the notary Naín Díaz Medina; the owner of the Quequi newspaper, José Alberto Gómez Álvarez; the former Secretary of Public Security of Cancun, José Luis Jonathan Yong; his father José Luis Yong Cruz; and former governors of Quintana Roo.

In May 2020, the Mexican daily Reforma reported that the skimming gang enjoyed legal protection from a top anti-corruption official in the Mexican attorney general’s office.

The following month, my reporting from 2015 emerged as the primary focus of a documentary published by the Organized Crime and Corruption Reporting Project (OCCRP) into Intacash and its erstwhile leader — 44-year-old Florian “The Shark” Tudor. The OCCRP’s series painted a vivid picture of a highly insular, often violent transnational organized crime ring (referred to as the “Riviera Maya Gang“) that controlled at least 10 percent of the $2 billion annual global market for skimmed cards.

It also details how the group laundered their ill-gotten gains, and is alleged to have built a human smuggling ring that helped members of the crime gang cross into the U.S. and ply their skimming trade against ATMs in the United States. Finally, the series highlights how the Riviera Maya gang operated with impunity for several years by exploiting relationships with powerful anti-corruption officials in Mexico.

In 2019, police in Mexico arrested Tudor for illegal weapons possession, and raided his various properties there in connection with an investigation into the 2018 murder of his former bodyguard, Constantin Sorinel Marcu.

According to prosecution documents, Marcu and The Shark spotted my reporting shortly after it was published in 2015, and discussed what to do next on a messaging app:

The Shark: Krebsonsecurity.com See this. See the video and everything. There are two episodes. They made a telenovela.

Marcu: I see. It’s bad.

The Shark: They destroyed us. That’s it. Fuck his mother. Close everything.

The intercepted communications indicate The Shark also wanted revenge on whoever was responsible for leaking information about their operations.

The Shark: Tell them that I am going to kill them.

Marcu: Okay, I can kill them. Any time, any hour.

The Shark: They are checking all the machines. Even at banks. They found over 20.

Marcu: Whaaaat?!? They found? Already??

Since the OCCRP published its investigation, KrebsOnSecurity has received multiple death threats. One was sent from an email address tied to a Romanian programmer and malware author who is active on several cybercrime forums. It read:

“Don’t worry.. you will be killed you and your wife.. all is matter of time amigo :)”

Kevin RuddABC Radio National: On Media Diversity and the Need for a Murdoch Royal Commission

RADIO INTERVIEW
ABC RADIO
RADIO NATIONAL
WITH STEVE CANNANE
19 FEBRUARY 2021

Steve Cannane
Well, the shockwaves from Facebook’s decision to block Australians’ access to news are reverberating at home and overseas. Small media publishers and millions of Australians on Facebook are caught in a battle over who should pay for news. Facebook’s actions were a response to the media bargaining code which is currently going through the parliament. It’s designed to make Google and Facebook pay news publishers for displaying their content. It’s legislation which has been backed by the major parties and the major news organizations, including Rupert Murdoch’s News Corp, which is also the subject of a senate inquiry on media diversity, which is starting today. The former Labor Prime Minister Kevin Rudd has been campaigning for a royal commission into the Murdoch media, and will appear at that senate inquiry into media diversity this morning. Kevin Rudd, thanks very much for joining us.

Kevin Rudd
Thanks for having me on the program.

Steve Cannane
As someone who’s currently arguing the case for taking on powerful media monopolies, are you backing the Morrison government in this fight to make big tech companies pay for news?

Kevin Rudd
No, I don’t agree with the current legislative formula, which the Morrison government, I think has developed largely in collaboration with the Murdoch media monopoly, to deal with what is, however, a continuing challenge to diversity in this country, which is also the monopoly powers of the new digital platforms. Right now we’ve got a conflict between two major sets of media monopolies one traditional one, that’s called the Murdoch media monopoly, and the emerging media monopolies, dominated by the digital platforms, Facebook in particular. So that’s one of the reasons why we’ve called for a royal commission because this is a highly complex matter. Instead, what the Morrison government has done in effect is side with one monopoly against the other. I don’t think that is good for the future of democracy in this country.

Steve Cannane
Okay. You just said that this legislation was developed in collaboration with the Murdoch media. What’s your evidence of that? Wasn’t it designed in collaboration with the ACCC?

Kevin Rudd
Well, the ACCC on the question of media diversity in this country has its own chequered record. Remember, it’s a competition watchdog; it is not designed to be a media regulator. I simply make one point: the ACCC approved Murdoch’s takeover of Australian provincial newspapers in 2016-17. The result? Murdoch, under the cover of COVID darkness, shut down 126 regional local newspapers right across the country last year. So frankly, I don’t trust the ACCC in terms of the model it produces, either on, as it were, the diversity of opinion in traditional print media, or in the formula that’s recommended for the future either. This is more complex than simply, as it were, using the powers of the Parliament to side with one traditional media monopoly against a new and emerging monopoly. Australia deserves better than this.

Steve Cannane
So you must be disappointed then that your party the Labor Party has voted for it to pass through the House of Representatives.

Kevin Rudd
Let’s see what happens in the Senate. I understand the debates associated with this are highly complex, but you asked me a bald-faced question, which was, Do I support Morrison’s approach? And my answer is no.

Steve Cannane
And so, therefore, surely you’re disappointed that your party is passing it through the House of Reps?

Kevin Rudd
Our party is not in government. At the moment, this government has initiated this legislation; it has sought Rod Sims’ advice in putting together the digital media bargaining code. I do not diminish the significance of the potential abuse of monopoly powers by the big digital platforms. It’s one of the reasons why, in the terms of reference suggested for the royal commission that I have put forward, and which has now been signed as a petition by more than half a million Australians, we ask the Royal Commission to examine both the traditional media monopoly in Murdoch’s hands, plus the emerging monopolies. Because monopoly as a matter of principle is bad for an economy. It’s bad for politics, and it’s bad for media diversity.

Steve Cannane
So talking of monopolies. Facebook, do you classify the actions of Facebook in the last 24 hours as a form of bullying, in the same way that you’ve labelled the Murdoch media bullies?

Kevin Rudd
Absolutely. I think both these actions by bullying enterprises, whether they are on the new digital platforms or in the traditional media platforms, are unacceptable. But let’s be clear about the facts when we look at the coverage of this today. Look at The Daily Telegraph: five pages of coverage today on Facebook’s actions yesterday, two lines of which actually deal with Facebook’s response to the position being put by News Corp. And so what you have classically today, in the coverage provided by The Daily Telegraph, is them, that is News Corp, merging news with opinion across the five pages of its newspaper to advance that media monopoly’s commercial and political interests in this country. That goes to the heart of the problem that I seek to explore through my own submission to the senate inquiry, which is that one of the grave dangers in the abuse of the current media monopoly by Murdoch is this fusing of editorial opinion with what’s supposed to be balanced news coverage; this has become one and the same thing in the Murdoch universe. And we see it writ large in the first five pages of The Telegraph today.

Steve Cannane
It’s 20 to eight on RN Breakfast. We’re talking to former prime minister Kevin Rudd. We just heard the treasurer, Josh Frydenberg, on AM saying the media code has been successful so far: money has been raised from Google, even though they haven’t got it from Facebook, and that money will be invested back into public interest journalism, including, for example, $30 million a year for the Nine Media Group over the next five years. Isn’t that a sign it’s working?

Kevin Rudd
Well, here’s a little challenge for Josh, the treasurer. What about the future of the Australian Associated Press, which Nine Media and Murdoch have effectively done everything they can to shut down? This is a source of independent news around the country, which has existed since the 1930s, where they’ve just sought to undermine the funding model that would allow AAP to continue, and instead, in the case of News, substituted its own news agency or wire service around the country. So my challenge to Frydenberg is: what are you going to do about the future of AAP? I think many, many Australians, particularly those at the very limited number of surviving local newspapers around the country, including the national broadcaster, would be all ears to hear the answer to that.

Steve Cannane
Okay. Well, let’s talk about this media diversity issue that’s before the Senate today. It’s looking into, amongst other things, the Murdoch media’s dominance in Australia. Do you think the money that’s meant to be raised from this new media bargaining code from Google, and potentially Facebook if they play ball, could help enhance media diversity?

Kevin Rudd
The question is, how do you enhance public journalism, as it were, in the broadest and traditional sense of the term. And that is independent news reporting, investigative reporting by multiple news organizations over time, for which there may not be a sufficient funding base at present, given the pressures on the industry. That I think is the policy objective here. But what I see is a large slab of cash being delivered off the back of the Frydenberg reforms in a very Murdoch-friendly package: a bucket of cash now being delivered out of Google, for example, to Murdoch. Murdoch’s approach to, as it were, public-interest journalism is zero. Murdoch’s approach to public-interest journalism is to prosecute, you know, the viciously biased form of reporting that he engages in and has done so for at least the last decade. For your listeners, it’s just important to understand this: 70% of the print readership of this country is currently owned by Murdoch; in my state of Queensland, virtually 100% of the print readership is controlled by Murdoch. Sky News is now beginning to dominate the online space. This is unhealthy for our democracy.

Steve Cannane
All right, you’re criticizing Josh Frydenberg’s model. What model would you be pursuing if you were a Prime Minister?

Kevin Rudd
Why I have called for a royal commission is that these are massively complex matters. Where both monopolies, the current traditional monopoly held by Murdoch, which has been used in the last 19 federal and state elections to campaign viciously in support of one side of politics and viciously against the other. To deal with that simultaneously with all the issues which arise from the emerging monopolies of both Google and Facebook. What instead you have with the Morrison response, has been to side with one monopoly against the other. And frankly, I don’t think that provides us with any solution for the future. It compounds the problem.

Steve Cannane
Okay, I want to move on to another very important issue. There’s been awful news this week of the alleged rape of a former Liberal staffer, Brittany Higgins in Parliament House. When you were Prime Minister, did you have processes in place that if so that if something as serious as this occurred, that you would have heard about it?

Kevin Rudd
In my period as prime minister, I certainly don’t recall any report to me of an act of sexual violence against any government staffer or associated with any government minister. We’re talking about events of more than 10 years ago.

Steve Cannane
But I’m talking about the processes in place.

Kevin Rudd
I’m coming to that, my friend. That’s the first point, because it relates to the second, which is the process that would normally be in place: if such an event had occurred in the government which I led, then the Prime Minister’s Chief of Staff, as the chief staffer of the entire operation, would have been mandatorily advised.

Steve Cannane
So what do you think it says about the processes in place in the current government, that Prime Minister Scott Morrison wasn’t told about these allegations?

Kevin Rudd
Well, I was in question time yesterday for the first time in a year or so, listening to the series of questions posed by Anthony Albanese and by Tanya Plibersek to the Prime Minister on this. I’ve got to say I found the Prime Minister’s responses less than persuasive. For me, it’s a bit like what Malcolm Turnbull said earlier: it doesn’t ring true that, when you have a case of such gravity, involving a young woman alleging rape in a ministerial office, this would not have immediately been reported to the Prime Minister’s chief of staff.

Steve Cannane
So just finally, we’ve heard complaints about the culture in Parliament House for years now. How does it need to change?

Kevin Rudd
There has been an important submission put together by a number of staffers and former staffers, on which I’ve been copied, which goes to the particular needs of young women in the building. What we need, frankly, is not just a code of conduct, but a new culture in the Parliament House building, which takes the side of young women who, unfortunately, seem to have been the subject of predatory behaviour. That, I think, would go a long way towards turning the corner on this. What apparently goes down in Parliament House today would not be acceptable in any other professional workforce or workplace in the country. It seems to have got to a stage over the last decade whereby the culture in Parliament House has seen itself as a culture apart. I think that is unacceptable. Young women in particular, who occupy a number of staffing positions right across the building for all political parties, frankly need to be properly respected, and, in terms of codes of conduct and the culture of the building, properly protected as well.

Steve Cannane
Kevin Rudd, we’ll have to leave it there. Thanks very much for joining us this morning.

Kevin Rudd
Good to be with you.


The post ABC Radio National: On Media Diversity and the Need for a Murdoch Royal Commission appeared first on Kevin Rudd.

Worse Than FailureError'd: The Timing is Off

Drew W discovers that the Daytona 500 is a different kind of exciting than we ever thought.

XXX Wins the Daytona 500

It feels like the past year has been a long one, and based on this graph's dates, it's going to get a lot longer.

Integer Underflow by Quarters

But time really does fly. Look how much earlier Scott Lewis's package is arriving.

Even earlier than 2/15, 2/15

Which, with the way time flies, it's good that this limited time offer on a video game will stick around long enough that Denilson will get a chance to take advantage of it.

On sale for 1000 years


Planet DebianReproducible Builds (diffoscope): diffoscope 167 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 167. This version includes the following changes:

* Temporary directory handling:
  - Ensure we cleanup our temporary directory by avoiding confusion between
    the TemporaryDirectory instance and the underlying directory.
    (Closes: #981123)
  - Use a potentially-useful suffix to our temporary directory based on the
    command-line passed to diffoscope.
  - Fix some tempfile/weakref interaction in Python 3.7 (ie. Debian buster).
    (Closes: reproducible-builds/diffoscope#239)
  - If our temporary directory does not exist anymore (eg. it has been
    cleaned up in tests, signal handling or reference counting),  make sure
    we recreate it.

* Bug fixes:
  - Don't rely on magic.Magic(...) to have an identical API between file's
    magic.py and PyPI's "python-magic" library.
    (Closes: reproducible-builds/diffoscope#238)
  - Don't rely on dumpimage returning an appropriate exit code; check that
    the file actually exists after we call it.

* Codebase changes:
  - Set a default Config.extended_filesystem_attributes.
  - Drop unused Config.acl and Config.xattr attributes.
  - Tidy imports in diffoscope/comparators/fit.py.

* Tests:
  - Add u-boot-tools to test dependencies so that salsa.debian.org pipelines
    actually test the new FIT comparator.
  - Strip newlines when determining Black version to avoid "requires black
    >= 20.8b1 (18.9b0\n detected)" in test output (NB. embedded newline).
  - Gnumeric is back in testing so re-add to test dependencies.
  - Use assert_diff (over get_data, etc.) in the FIT and APK comparators.
  - Mark test_apk.py::test_android_manifest as being allowed to fail for now.
  - Fix the FIT tests in buster and unstable.

You can find out more by visiting the project homepage.

,

Planet DebianDirk Eddelbuettel: td 0.0.2 on CRAN: Updated and Expanded

The still very recent td package for accessing the twelvedata API for financial data has been updated and is now at version 0.0.2.

The time_series access point is now vectorised: supply a vector of symbols, and you receive a list of data.frame (or xts) objects. See this tweet teasing the earliest support for this new feature, and showing a quick four-securities plot. We also added simpler accessors get_quote() and get_price(), rounding out the basic API support.

A first bug report alerted us to the fact that our use of RcppSimdJson requires an additional sanitizing of the temporary filename if used on Windows. We will fix that properly soon in the new release 0.1.5 of that package; in the meantime you can get the hot-fix binary 0.1.4.1 for Windows via install.packages("RcppSimdJson", repos="https://ghrr.github.io/drat") from the ghrr drat.

The NEWS entry follows.

Changes in version 0.0.2 (2021-02-18)

  • The time_series is now vectorised and can return a list of return objects when given a vector of symbols

  • The use of tools::R_user_dir() is now conditional on having R 4.0.0 or later, older versions can use env.var for api key

  • New helper function store_key to save api key.

  • New simple accessors get_quote and get_price

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Twelve-Year-Old Vulnerability Found in Windows Defender

Researchers found, and Microsoft has patched, a vulnerability in Windows Defender that has been around for twelve years. There is no evidence that anyone has used the vulnerability during that time.

The flaw, discovered by researchers at the security firm SentinelOne, showed up in a driver that Windows Defender — renamed Microsoft Defender last year — uses to delete the invasive files and infrastructure that malware can create. When the driver removes a malicious file, it replaces it with a new, benign one as a sort of placeholder during remediation. But the researchers discovered that the system doesn’t specifically verify that new file. As a result, an attacker could insert strategic system links that direct the driver to overwrite the wrong file or even run malicious code.

It isn’t unusual that vulnerabilities lie around for this long. They can’t be fixed until someone finds them, and people aren’t always looking.

Cryptogram Dependency Confusion: Another Supply-Chain Vulnerability

Alex Birsan writes about being able to install malware into proprietary corporate software by naming the code files to be identical to internal corporate code files. From a ZDNet article:

Today, developers at small or large companies use package managers to download and import libraries that are then assembled together using build tools to create a final app.

This app can be offered to the company’s customers or can be used internally at the company as an employee tool.

But some of these apps can also contain proprietary or highly-sensitive code, depending on their nature. For these apps, companies will often use private libraries that they store inside a private (internal) package repository, hosted inside the company’s own network.

When apps are built, the company’s developers will mix these private libraries with public libraries downloaded from public package portals like npm, PyPI, NuGet, or others.

[…]

Researchers showed that if an attacker learns the names of private libraries used inside a company’s app-building process, they could register these names on public package repositories and upload public libraries that contain malicious code.

The “dependency confusion” attack takes place when developers build their apps inside enterprise environments, and their package manager prioritizes the (malicious) library hosted on the public repository instead of the internal library with the same name.

The research team said they put this discovery to the test by searching for situations where big tech firms accidentally leaked the names of various internal libraries and then registered those same libraries on package repositories like npm, RubyGems, and PyPI.

Using this method, researchers said they successfully loaded their (non-malicious) code inside apps used by 35 major tech firms, including the likes of Apple, Microsoft, PayPal, Shopify, Netflix, Yelp, Uber, and others.

Clever attack, and one that has netted him $130K in bug bounties.
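
The underlying failure is the resolution rule itself: if the only tie-breaker across indexes is that the highest version wins, then whoever publishes the internal name on a public index with an inflated version number gets picked. A toy model of that rule (hypothetical code, not any particular package manager's resolver):

#[derive(Debug)]
struct Candidate {
    index: &'static str,
    version: (u32, u32, u32),
}

// Pick the highest version, ignoring which index it came from;
// this is exactly the behaviour that enables dependency confusion.
fn resolve(mut candidates: Vec<Candidate>) -> Candidate {
    candidates.sort_by(|a, b| b.version.cmp(&a.version));
    candidates.into_iter().next().expect("no candidates")
}

fn main() {
    let picked = resolve(vec![
        Candidate { index: "internal", version: (1, 2, 0) },
        Candidate { index: "public", version: (99, 0, 0) }, // attacker's upload
    ]);
    println!("resolved {:?} from the {} index", picked.version, picked.index);
}

Mitigations generally pin which index a given name may come from (namespaced or scoped packages, or a single trusted index) rather than relying on version comparison.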

More news articles.

Cryptogram GPS Vulnerabilities

Really good op-ed in the New York Times about how vulnerable the GPS system is to interference, spoofing, and jamming — and potential alternatives.

The 2018 National Defense Authorization Act included funding for the Departments of Defense, Homeland Security and Transportation to jointly conduct demonstrations of various alternatives to GPS, which were concluded last March. Eleven potential systems were tested, including eLoran, a low-frequency, high-power timing and navigation system transmitted from terrestrial towers at Coast Guard facilities throughout the United States.

“China, Russia, Iran, South Korea and Saudi Arabia all have eLoran systems because they don’t want to be as vulnerable as we are to disruptions of signals from space,” said Dana Goward, the president of the Resilient Navigation and Timing Foundation, a nonprofit that advocates for the implementation of an eLoran backup for GPS.

Also under consideration by federal authorities are timing systems delivered via fiber optic network and satellite systems in a lower orbit than GPS, which therefore have a stronger signal, making them harder to hack. A report on the technologies was submitted to Congress last week.

GPS is a piece of our critical infrastructure that is essential to a lot of the rest of our critical infrastructure. It needs to be more secure.

Cryptogram Router Security

This report is six months old, and I don’t know anything about the organization that produced it, but it has some alarming data about router security.

Conclusion: Our analysis showed that Linux is the most used OS running on more than 90% of the devices. However, many routers are powered by very old versions of Linux. Most devices are still powered with a 2.6 Linux kernel, which is no longer maintained for many years. This leads to a high number of critical and high severity CVEs affecting these devices.

Since Linux is the most used OS, exploit mitigation techniques could be enabled very easily. Anyhow, they are used quite rarely by most vendors except the NX feature.

A published private key provides no security at all. Nonetheless, all but one vendor spread several private keys in almost all firmware images.

Mirai used hard-coded login credentials to infect thousands of embedded devices in the last years. However, hard-coded credentials can be found in many of the devices and some of them are well known or at least easy crackable.

However, we can tell for sure that the vendors prioritize security differently. AVM does better job than the other vendors regarding most aspects. ASUS and Netgear do a better job in some aspects than D-Link, Linksys, TP-Link and Zyxel.

Additionally, our evaluation showed that large scale automated security analysis of embedded devices is possible today utilizing just open source software. To sum it up, our analysis shows that there is no router without flaws and there is no vendor who does a perfect job regarding all security aspects. Much more effort is needed to make home routers as secure as current desktop of server systems.

One comment on the report:

One-third ship with Linux kernel version 2.6.36, which was released in October 2010. You can walk into a store today and buy a brand new router powered by software that’s almost 10 years out of date! This outdated version of the Linux kernel has 233 known security vulnerabilities registered in the Common Vulnerability and Exposures (CVE) database. The average router contains 26 critically-rated security vulnerabilities, according to the study.

We know the reasons for this. Most routers are designed offshore, by third parties, and then private labeled and sold by the vendors you’ve heard of. Engineering teams come together, design and build the router, and then disperse. There’s often no one around to write patches, and most of the time router firmware isn’t even patchable. The way to update your home router is to throw it away and buy a new one.

And this paper demonstrates that even the new ones aren’t likely to be secure.

Planet DebianJulian Andres Klode: APT 2.2 released

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series.

Let’s have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute, will already have seen most of those changes.

New features

  • Various patterns related to dependencies, such as ?depends, are now available (2.1.16); a brief usage sketch follows this list
  • The Protected field is now supported. It replaces the previous Important field and is like Essential, but only for installed packages (with some minor differences, possibly in terms of install ordering).
  • The update command has gained an --error-on=any option that makes it error out on any failure, not just what it considers persistent ones.
  • The rred method can now be used as a standalone program to merge pdiff files
  • APT now implements phased updates. Phasing is used in Ubuntu to slow down and control the roll out of updates in the -updates pocket, but has previously only been available to desktop users using update-manager.
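
To make the pattern and --error-on additions concrete, here is a sketch (the package name is a placeholder and apt-patterns(5) has the authoritative syntax, so treat these invocations as illustrative rather than canonical):

  # list installed packages that depend on something matching "libssl" (hypothetical name)
  apt list '?and(?installed, ?depends(?name(libssl)))'

  # fail apt update on any error, not just persistent ones
  apt update --error-on=any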

Other behavioral changes

  • The kernel autoremoval helper code has been rewritten from shell into C++ and now runs at run-time, rather than at kernel install time, in order to correctly protect the kernel that is running now, rather than the kernel that was running when we were installing the newest one.

    It also now protects only up to 3 kernels instead of up to 4, which is what was originally intended and was the case before the 1.1 series. This keeps /boot partitions from running out of space, especially on Ubuntu, which has boot partitions sized for the original spec.

Performance improvements

  • The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
  • The hash table size has been increased

Bug fixes

  • * wildcards work normally again (since 2.1.0)
  • The cache file now includes all translation files in /var/lib/apt/lists, so multi-user systems with different locales correctly show translated descriptions now.
  • URLs are no longer dequoted on redirects only to be requoted again, fixing some redirects where servers did not expect different quoting.
  • Immediate configuration is now best-effort, and failure is no longer fatal.
  • Various changes to solver marking, leading to different/better results in some cases (since 2.1.0)
  • The lower level I/O bits of the HTTP method have been rewritten to hopefully improve stability
  • The HTTP method no longer infinitely retries downloads on some connection errors
  • The pkgnames command no longer accidentally includes source packages
  • Various fixes from fuzzing efforts by David

Security fixes

  • Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
  • Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)

(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities

  • N/A - there were no breaking changes in apt 2.2 that we are aware of.

Deprecations

  • apt-key(1) is scheduled for removal in Q2/2022, and several new warnings have been added.

    apt-key was made obsolete in version 0.7.25.1, released in January 2010, by /etc/apt/trusted.gpg.d becoming a supported place to drop additional keyring files, and was since then only intended for deleting keys in the legacy trusted.gpg keyring.

    Please manage files in trusted.gpg.d yourself; or place them in a different location such as /etc/apt/keyrings (or make up your own, there’s no standard location) or /usr/share/keyrings, and use signed-by in the sources.list.d files (a sketch of such an entry follows this list).

    The legacy trusted.gpg keyring still works, but will also stop working eventually. Please make sure you have all your keys in trusted.gpg.d. Warnings might be added in the upcoming months when a signature cannot be verified using just trusted.gpg.d.

    Future versions of APT might switch away from GPG.

  • As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0). They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.
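
To make the signed-by advice concrete, here is a sketch of a sources.list.d entry pinned to a dedicated keyring; the repository URL, suite, component and keyring path are made-up placeholders:

  # /etc/apt/sources.list.d/example.list (hypothetical repository)
  deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://deb.example.com/debian stable main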

Planet DebianJonathan McDowell: Hacking and Bricking the EE Osprey 2 Mini

I’ve mentioned in the past my twisted EE network setup from when I moved in to my current house. The 4GEE WiFi Mini (also known as the EE Osprey 2 Mini or the EE40VB, and actually a rebadged Alcatel Y853VB) has been sitting unused since then, so I figured I’d see about trying to get a shell on it.

TL;DR: Of course it’s running Linux, there’s a couple of test points internally which bring out the serial console, but after finding those and logging in I discovered it’s running ADB on port 5555 quite happily available without authentication both via wifi and the USB port. So if you have physical or local network access, instant root shell. Well done, folks. And then I bricked it before I could do anything more interesting.
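
To spell that out with commands (a sketch: the device’s LAN address is an assumption, so substitute whatever address your unit actually uses):

  # connect to the unauthenticated ADB service over wifi (192.168.1.1 is an assumed address)
  adb connect 192.168.1.1:5555
  # and you are dropped straight into a root shell
  adb shell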

There’s a lack of information about this device out there - most of the links I can find are around removing the SIM lock - so I thought I’d document the pieces I found just in case anyone else is trying to figure it out. It’s based around a Qualcomm MDM9607 SoC, paired with 64M RAM and 256M NAND flash. Wifi is via an RTL8192ES. Kernel is 3.18.20. Busybox is v1.23.1. It’s running dnsmasq but I didn’t grab the version. Of course there’s no source or offer of source provided.

Taking it apart is fairly easy. There’s a single screw to remove, just beside the SIM slot. The coloured rim can then be carefully pried away from the back, revealing the battery. There are then 4 screws in the corners which need to be removed in order to lift out the actual PCB and gain access to the serial console test points.

EE40VB PCB serial console test points

My mistake was poking around trying to figure out where the updates are downloaded from - I know I’m running a slightly older release than what’s current, and the device can do an automatic download + update. Top tip: don’t run Jrdrecovery. It’ll error on failing to find /cache/update.zip and wipe the main partition anyway. That’ll leave you in a boot loop where the device boots the recovery partition, which tries to install /cache/update.zip, which of course still doesn’t exist.

So. Where next? First, I need to get the device into a state where I can actually do something other than watch it boot into recovery, fail to flash and reboot. Best guess at present is to try and get it to enter the Qualcomm EDL (Emergency Download) mode. That might be possible with a custom USB cable that grounds D+ on boot. Alternatively I need to probe some of the other test points on the PCB and see if grounding any of those helps enter EDL mode. I then need a suitable “firehose” OEM-signed programmer image. And then I need to actually get hold of a proper EE40VB firmware image, either via one of the OTA update files or possibly via an Alcatel ADSU image (though no idea how to get hold of one, other than by posting to a random GSM device forum and hoping for the kindness of strangers). More updates if/when I make progress…

Qualcomm bootloader log
Format: Log Type - Time(microsec) - Message - Optional Info
Log Type: B - Since Boot(Power On Reset),  D - Delta,  S - Statistic
S - QC_IMAGE_VERSION_STRING=BOOT.BF.3.1.2-00053
S - IMAGE_VARIANT_STRING=LAATANAZA
S - OEM_IMAGE_VERSION_STRING=linux3
S - Boot Config, 0x000002e1
B -    105194 - SBL1, Start
D -     61885 - QSEE Image Loaded, Delta - (451964 Bytes)
D -     30286 - RPM Image Loaded, Delta - (151152 Bytes)
B -    459330 - Roger:boot_jrd_oem_main
B -    461526 - Welcome to key_check_poweron!!!
B -    466436 - REG0x00, rc=47
B -    469120 - REG0x01, rc=1f
B -    472018 - REG0x02, rc=1c
B -    474885 - REG0x03, rc=47
B -    477782 - REG0x04, rc=b2
B -    480558 - REG0x05, rc=
B -    483272 - REG0x06, rc=9e
B -    486139 - REG0x07, rc=
B -    488854 - REG0x08, rc=a4
B -    491721 - REG0x09, rc=80
B -    494130 - bq24295_probe: vflt/vsys/vprechg=0mV/0mV/0mV, tprechg/tfastchg=0Min/0Min, [0C, 0C]
B -    511546 - come to calculate vol and temperature!!
B -    511637 - ##############battery_core_convert_vntc: NTC_voltage=1785690
B -    517280 - battery_core_convert_vntc: <-44C, 1785690uV>, present=0
B -    529358 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    534360 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    539636 - bq24295_enable_charge: setting=0, chg_enable=-1, otg_enable=0
B -    546072 - bq24295_enable_charging: enable_charging=0
B -    552172 - bq24295_set_current_limit: setting=0mA, mode=-1, input/fastchg/prechg/termchg=-1mA/0mA/0mA/0mA
B -    561566 - bq24295_set_charge_current, rc=0,reg_val=0,i=0
B -    567056 - bq24295_enable_charge: setting=0, chg_enable=0, otg_enable=0
B -    579286 - come to calculate vol and temperature!!
B -    579378 - ##############battery_core_convert_vntc: NTC_voltage=1785777
B -    585539 - battery_core_convert_vntc: <-44C, 1785777uV>, present=0
B -    597617 - charge_main: battery is plugout!!
B -    597678 - Welcome to pca955x_probe!!!
B -    601063 - pca955x_probe: PCA955X probed successfully!
D -     27511 - APPSBL Image Loaded, Delta - (179348 Bytes)
B -    633271 - QSEE Execution, Start
D -       213 - QSEE Execution, Delta
B -    638944 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting JRD RECOVERY BOOT
B -    650107 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Start writting  RECOVERY BOOT
B -    653218 - >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>read_buf[0] == 0
B -    659044 - SBL1, End
D -    556137 - SBL1, Delta
S - Throughput, 2000 KB/s  (782884 Bytes,  278155 us)
S - DDR Frequency, 240 MHz
littlekernel aboot log
Android Bootloader - UART_DM Initialized!!!
[0] welcome to lk

[0] SCM call: 0x2000601 failed with :fffffffc
[0] Failed to initialize SCM
[10] platform_init()
[10] target_init()
[10] smem ptable found: ver: 4 len: 17
[10] ERROR: No devinfo partition found
[10] Neither 'config' nor 'frp' partition found
[30] voltage of NTC  is 1789872!
[30] voltage of BAT  is 3179553!
[30] usb present is 1!
[30] Loading (boot) image (4171776): start
[530] Loading (boot) image (4171776): done
[540] DTB Total entry: 25, DTB version: 3
[540] Using DTB entry 0x00000129/00010000/0x00000008/0 for device 0x00000129/00010000/0x00010008/0
[560] JRD_CHG_OFF_FEATURE!
[560] come to jrd_target_pause_for_battery_charge!
[570] power_on_status.hard_reset = 0x0
[570] power_on_status.smpl = 0x0
[570] power_on_status.rtc = 0x0
[580] power_on_status.dc_chg = 0x0
[580] power_on_status.usb_chg = 0x0
[580] power_on_status.pon1 = 0x1
[590] power_on_status.cblpwr = 0x0
[590] power_on_status.kpdpwr = 0x0
[590] power_on_status.bugflag = 0x0
[590] cmdline: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=b
[620] Updating device tree: start
[720] Updating device tree: done
[720] booting linux @ 0x80008000, ramdisk @ 0x80008000 (0), tags/device tree @ 0x81e00000
Linux kernel console boot log
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.18.20 (linux3@linux3) (gcc version 4.9.2 (GCC) ) #1 PREEMPT Thu Aug 10 11:57:07 CST 2017
[    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c53c7d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] Machine model: Qualcomm Technologies, Inc. MDM 9607 MTP
[    0.000000] Early serial console at I/O port 0x0 (options '')
[    0.000000] bootconsole [uart0] enabled
[    0.000000] Reserved memory: reserved region for node 'modem_adsp_region@0': base 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: reserved region for node 'external_image_region@0': base 0x87c00000, size 4 MiB
[    0.000000] Removed memory: created DMA memory pool at 0x82a00000, size 56 MiB
[    0.000000] Reserved memory: initialized node modem_adsp_region@0, compatible id removed-dma-pool
[    0.000000] Removed memory: created DMA memory pool at 0x87c00000, size 4 MiB
[    0.000000] Reserved memory: initialized node external_image_region@0, compatible id removed-dma-pool
[    0.000000] cma: Reserved 4 MiB at 0x87800000
[    0.000000] Memory policy: Data cache writeback
[    0.000000] CPU: All CPU(s) started in SVC mode.
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 17152
[    0.000000] Kernel command line: noinitrd  rw console=ttyHSL0,115200,n8 androidboot.hardware=qcom ehci-hcd.park=3 msm_rtb.filter=0x37 lpm_levels.sleep_disabled=1  earlycon=msm_hsl_uart,0x78b3000  androidboot.serialno=7e6ba58c androidboot.baseband=msm rootfstype=ubifs rootflags=bulk_read root=ubi0:rootfs ubi.mtd=16
[    0.000000] PID hash table entries: 512 (order: -1, 2048 bytes)
[    0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
[    0.000000] Memory: 54792K/69632K available (5830K kernel code, 399K rwdata, 2228K rodata, 276K init, 830K bss, 14840K reserved)
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xffc00000 - 0xfff00000   (3072 kB)
[    0.000000]     vmalloc : 0xc8800000 - 0xff000000   ( 872 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xc8000000   ( 128 MB)
[    0.000000]     modules : 0xbf000000 - 0xc0000000   (  16 MB)
[    0.000000]       .text : 0xc0008000 - 0xc07e6c38   (8060 kB)
[    0.000000]       .init : 0xc07e7000 - 0xc082c000   ( 276 kB)
[    0.000000]       .data : 0xc082c000 - 0xc088fdc0   ( 400 kB)
[    0.000000]        .bss : 0xc088fe84 - 0xc095f798   ( 831 kB)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] Preemptible hierarchical RCU implementation.
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] GIC CPU mask not found - kernel will fail to boot.
[    0.000000] mpm_init_irq_domain(): Cannot find irq controller for qcom,gpio-parent
[    0.000000] MPM 1 irq mapping errored -517
[    0.000000] Architected mmio timer(s) running at 19.20MHz (virt).
[    0.000011] sched_clock: 56 bits at 19MHz, resolution 52ns, wraps every 3579139424256ns
[    0.007975] Switching to timer-based delay loop, resolution 52ns
[    0.013969] Switched to clocksource arch_mem_counter
[    0.019687] Console: colour dummy device 80x30
[    0.023344] Calibrating delay loop (skipped), value calculated using timer frequency.. 38.40 BogoMIPS (lpj=192000)
[    0.033666] pid_max: default: 32768 minimum: 301
[    0.038411] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.044902] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes)
[    0.052445] CPU: Testing write buffer coherency: ok
[    0.057057] Setting up static identity map for 0x8058aac8 - 0x8058ab20
[    0.064242]
[    0.064242] **********************************************************
[    0.071251] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.077817] **                                                      **
[    0.084302] ** trace_printk() being used. Allocating extra memory.  **
[    0.090781] **                                                      **
[    0.097320] ** This means that this is a DEBUG kernel and it is     **
[    0.103802] ** unsafe for produciton use.                           **
[    0.110339] **                                                      **
[    0.116850] ** If you see this message and you are not debugging    **
[    0.123333] ** the kernel, report this immediately to your vendor!  **
[    0.129870] **                                                      **
[    0.136380] **   NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE   **
[    0.142865] **********************************************************
[    0.150225] MSM Memory Dump base table set up
[    0.153739] MSM Memory Dump apps data table set up
[    0.168125] VFP support v0.3: implementor 41 architecture 2 part 30 variant 7 rev 5
[    0.176332] pinctrl core: initialized pinctrl subsystem
[    0.180930] regulator-dummy: no parameters
[    0.215338] NET: Registered protocol family 16
[    0.220475] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.284034] cpuidle: using governor ladder
[    0.314026] cpuidle: using governor menu
[    0.344024] cpuidle: using governor qcom
[    0.355452] msm_watchdog b017000.qcom,wdt: wdog absent resource not present
[    0.361656] msm_watchdog b017000.qcom,wdt: MSM Watchdog Initialized
[    0.371373] irq: no irq domain found for /soc/pinctrl@1000000 !
[    0.381268] spmi_pmic_arb 200f000.qcom,spmi: PMIC Arb Version-2 0x20010000
[    0.389733] platform 4080000.qcom,mss: assigned reserved memory node modem_adsp_region@0
[    0.397409] mem_acc_corner: 0 <--> 0 mV
[    0.401937] hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.
[    0.408966] hw-breakpoint: maximum watchpoint size is 8 bytes.
[    0.416287] __of_mpm_init(): MPM driver mapping exists
[    0.420940] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.427235] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.432977] smd_open() before smd_init()
[    0.437544] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.445730] smd_channel_probe_now: allocation table not initialized
[    0.453100] mdm9607_s1: 1050 <--> 1350 mV at 1225 mV normal idle
[    0.458566] spm_regulator_probe: name=mdm9607_s1, range=LV, voltage=1225000 uV, mode=AUTO, step rate=4800 uV/us
[    0.468817] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.475353] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.481345] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.490124] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.497342] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.504979] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.513125] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.518335] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.524478] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.531111] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.536788] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.542886] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.549618] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.555202] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.561374] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.570613] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.583049] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal, key=qcom,therm-reset-temp err=-22. KTM continues
[    0.596926] msm_thermal:msm_thermal_dev_probe Failed reading node=/soc/qcom,msm-thermal, key=qcom,online-hotplug-core. err:-517
[    0.609370] sps:sps is ready.
[    0.613137] msm_rpm_glink_dt_parse: qcom,rpm-glink compatible not matches
[    0.619020] msm_rpm_dev_probe: APSS-RPM communication over SMD
[    0.625773] mdm9607_s2: 750 <--> 1275 mV at 750 mV normal idle
[    0.631584] mdm9607_s3_level: 0 <--> 0 mV at 0 mV normal idle
[    0.637085] mdm9607_s3_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.643092] mdm9607_s3_floor_level: 0 <--> 0 mV at 0 mV normal idle
[    0.649512] mdm9607_s3_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.655750] mdm9607_s4: 1800 <--> 1950 mV at 1800 mV normal idle
[    0.661791] mdm9607_l1: 1250 mV normal idle
[    0.666090] mdm9607_l2: 1800 mV normal idle
[    0.670276] mdm9607_l3: 1800 mV normal idle
[    0.674541] mdm9607_l4: 3075 mV normal idle
[    0.678743] mdm9607_l5: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.684904] mdm9607_l6: 1700 <--> 3050 mV at 1700 mV normal idle
[    0.690892] mdm9607_l7: 1700 <--> 1900 mV at 1700 mV normal idle
[    0.697036] mdm9607_l8: 1800 mV normal idle
[    0.701238] mdm9607_l9: 1200 <--> 1250 mV at 1200 mV normal idle
[    0.707367] mdm9607_l10: 1050 mV normal idle
[    0.711662] mdm9607_l11: 1800 mV normal idle
[    0.716089] mdm9607_l12_level: 0 <--> 0 mV at 0 mV normal idle
[    0.721717] mdm9607_l12_level_ao: 0 <--> 0 mV at 0 mV normal idle
[    0.727946] mdm9607_l12_level_so: 0 <--> 0 mV at 0 mV normal idle
[    0.734099] mdm9607_l12_floor_lebel: 0 <--> 0 mV at 0 mV normal idle
[    0.740706] mdm9607_l13: 1800 <--> 2850 mV at 2850 mV normal idle
[    0.746883] mdm9607_l14: 2650 <--> 3000 mV at 2650 mV normal idle
[    0.752515] msm_mpm_dev_probe(): Cannot get clk resource for XO: -517
[    0.759036] cpr_efuse_init: apc_corner: efuse_addr = 0x000a4000 (len=0x1000)
[    0.765807] cpr_read_fuse_revision: apc_corner: fuse revision = 2
[    0.771809] cpr_parse_speed_bin_fuse: apc_corner: [row: 37]: 0x79e8bd327e6ba58c, speed_bits = 4
[    0.780586] cpr_pvs_init: apc_corner: pvs voltage: [1050000 1100000 1275000] uV
[    0.787808] cpr_pvs_init: apc_corner: ceiling voltage: [1050000 1225000 1350000] uV
[    0.795443] cpr_pvs_init: apc_corner: floor voltage: [1050000 1050000 1150000] uV
[    0.803094] cpr_init_cpr_parameters: apc_corner: up threshold = 2, down threshold = 3
[    0.810752] cpr_init_cpr_parameters: apc_corner: CPR is enabled by default.
[    0.817687] cpr_init_cpr_efuse: apc_corner: [row:65] = 0x15000277277383
[    0.824272] cpr_init_cpr_efuse: apc_corner: CPR disable fuse = 0
[    0.830225] cpr_init_cpr_efuse: apc_corner: Corner[1]: ro_sel = 0, target quot = 631
[    0.837976] cpr_init_cpr_efuse: apc_corner: Corner[2]: ro_sel = 0, target quot = 631
[    0.845703] cpr_init_cpr_efuse: apc_corner: Corner[3]: ro_sel = 0, target quot = 899
[    0.853592] cpr_config: apc_corner: Timer count: 0x17700 (for 5000 us)
[    0.860426] apc_corner: 0 <--> 0 mV
[    0.864044] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    0.869261] i2c-msm-v2 78b8000.i2c: error on clk_get(core_clk):-517
[    0.875492] i2c-msm-v2 78b8000.i2c: error probe() failed with err:-517
[    0.882225] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    0.887775] i2c-msm-v2 78b7000.i2c: error on clk_get(core_clk):-517
[    0.893941] i2c-msm-v2 78b7000.i2c: error probe() failed with err:-517
[    0.900719] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    0.906256] i2c-msm-v2 78b9000.i2c: error on clk_get(core_clk):-517
[    0.912430] i2c-msm-v2 78b9000.i2c: error probe() failed with err:-517
[    0.919472] msm-thermal soc:qcom,msm-thermal: msm_thermal:Failed reading node=/soc/qcom,msm-thermal, key=qcom,core-limit-temp. err=-22. KTM continues
[    0.932372] msm-thermal soc:qcom,msm-thermal: probe_therm_reset:Failed reading node=/soc/qcom,msm-thermal,
key=qcom,therm-reset-temp err=-22. KTM continues
[    0.946361] msm_thermal:get_kernel_cluster_info CPU0 topology not initialized.
[    0.953824] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.960300] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.968533] msm_thermal:vdd_restriction_reg_init Defer vdd rstr freq init.
[    0.975846] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.982219] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    0.991378] cpu cpu0: dev_pm_opp_get_opp_count: device OPP not found (-19)
[    0.997544] msm_thermal:get_cpu_freq_plan_len Error reading CPU0 freq table len. error:-19
[    1.013642] qcom,gcc-mdm9607 1800000.qcom,gcc: Registered GCC clocks
[    1.019451] clock-a7 b010008.qcom,clock-a7: Speed bin: 4 PVS Version: 0
[    1.025693] a7ssmux: set OPP pair(400000000 Hz: 1 uV) on cpu0
[    1.031314] a7ssmux: set OPP pair(1305600000 Hz: 7 uV) on cpu0
[    1.038805] i2c-msm-v2 78b8000.i2c: probing driver i2c-msm-v2
[    1.043587] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.052935] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.062006] irq: no irq domain found for /soc/wcd9xxx-irq !
[    1.069884] i2c-msm-v2 78b7000.i2c: probing driver i2c-msm-v2
[    1.074814] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.083716] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.093850] i2c-msm-v2 78b9000.i2c: probing driver i2c-msm-v2
[    1.098889] AXI: msm_bus_scale_register_client(): msm_bus_scale_register_client: Bus driver not ready.
[    1.107779] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0 (not a problem)
[    1.167871] KPI: Bootloader start count = 24097
[    1.171364] KPI: Bootloader end count = 48481
[    1.175855] KPI: Bootloader display count = 3884474147
[    1.180825] KPI: Bootloader load kernel count = 16420
[    1.185905] KPI: Kernel MPM timestamp = 105728
[    1.190286] KPI: Kernel MPM Clock frequency = 32768
[    1.195209] socinfo_print: v0.10, id=297, ver=1.0, raw_id=72, raw_ver=0, hw_plat=8, hw_plat_ver=65536
[    1.195209]  accessory_chip=0, hw_plat_subtype=0, pmic_model=65539, pmic_die_revision=131074 foundry_id=0 serial_number=2120983948
[    1.216731] sdcard_ext_vreg: no parameters
[    1.220555] rome_vreg: no parameters
[    1.224133] emac_lan_vreg: no parameters
[    1.228177] usbcore: registered new interface driver usbfs
[    1.233156] usbcore: registered new interface driver hub
[    1.238578] usbcore: registered new device driver usb
[    1.244507] cpufreq: driver msm up and running
[    1.248425] ION heap system created
[    1.251895] msm_bus_fabric_init_driver
[    1.262563] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0 Power-on reason: Triggered from PON1 (secondary PMIC) and 'cold' boot
[    1.273747] qcom,qpnp-power-on qpnp-power-on-c7303800: PMIC@SID0: Power-off reason: Triggered from UVLO (Under Voltage Lock Out)
[    1.285430] input: qpnp_pon as /devices/virtual/input/input0
[    1.291246] PMIC@SID0: PM8019 v2.2 options: 3, 2, 2, 2
[    1.296706] Advanced Linux Sound Architecture Driver Initialized.
[    1.302493] Add group failed
[    1.305291] cfg80211: Calling CRDA to update world regulatory domain
[    1.311216] cfg80211: World regulatory domain updated:
[    1.317109] Switched to clocksource arch_mem_counter
[    1.334091] cfg80211:  DFS Master region: unset
[    1.337418] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp), (dfs_cac_time)
[    1.354087] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.361055] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm), (N/A)
[    1.370545] NET: Registered protocol family 2
[    1.374082] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm), (N/A)
[    1.381851] cfg80211:   (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.389876] cfg80211:   (5250000 KHz - 5330000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.397857] cfg80211:   (5490000 KHz - 5710000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.405841] cfg80211:   (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm), (N/A)
[    1.413795] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm), (N/A)
[    1.422355] TCP established hash table entries: 1024 (order: 0, 4096 bytes)
[    1.428921] TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
[    1.435192] TCP: Hash tables configured (established 1024 bind 1024)
[    1.441528] TCP: reno registered
[    1.444738] UDP hash table entries: 256 (order: 0, 4096 bytes)
[    1.450521] UDP-Lite hash table entries: 256 (order: 0, 4096 bytes)
[    1.456950] NET: Registered protocol family 1
[    1.462779] futex hash table entries: 256 (order: -1, 3072 bytes)
[    1.474555] msgmni has been set to 115
[    1.478551] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[    1.485041] io scheduler noop registered
[    1.488818] io scheduler deadline registered
[    1.493200] io scheduler cfq registered (default)
[    1.502142] msm_rpm_log_probe: OK
[    1.506717] msm_serial_hs module loaded
[    1.509803] msm_serial_hsl_probe: detected port #0 (ttyHSL0)
[    1.515324] AXI: get_pdata(): Error: Client name not found
[    1.520626] AXI: msm_bus_cl_get_pdata(): client has to provide missing entry for successful registration
[    1.530171] msm_serial_hsl_probe: Bus scaling is disabled
[    1.535696] 78b3000.serial: ttyHSL0 at MMIO 0x78b3000 (irq = 153, base_baud = 460800)
[    1.544155] msm_hsl_console_setup: console setup on port #0
[    1.548727] console [ttyHSL0] enabled
[    1.548727] console [ttyHSL0] enabled
[    1.556014] bootconsole [uart0] disabled
[    1.556014] bootconsole [uart0] disabled
[    1.564212] msm_serial_hsl_init: driver initialized
[    1.578450] brd: module loaded
[    1.582920] loop: module loaded
[    1.589183] sps: BAM device 0x07984000 is not registered yet.
[    1.594234] sps:BAM 0x07984000 is registered.
[    1.598072] msm_nand_bam_init: msm_nand_bam_init: BAM device registered: bam_handle 0xc69f6400
[    1.607103] sps:BAM 0x07984000 (va:0xc89a0000) enabled: ver:0x18, number of pipes:7
[    1.616588] msm_nand_parse_smem_ptable: Parsing partition table info from SMEM
[    1.622805] msm_nand_parse_smem_ptable: SMEM partition table found: ver: 4 len: 17
[    1.630391] msm_nand_version_check: nand_major:1, nand_minor:5, qpic_major:1, qpic_minor:5
[    1.638642] msm_nand_scan: NAND Id: 0x1590aa98 Buswidth: 8Bits Density: 256 MByte
[    1.646069] msm_nand_scan: pagesize: 2048 Erasesize: 131072 oobsize: 128 (in Bytes)
[    1.653676] msm_nand_scan: BCH ECC: 8 Bit
[    1.657710] msm_nand_scan: CFG0: 0x290408c0,           CFG1: 0x0804715c
[    1.657710]             RAWCFG0: 0x2b8400c0,        RAWCFG1: 0x0005055d
[    1.657710]           ECCBUFCFG: 0x00000203,      ECCBCHCFG: 0x42040d10
[    1.657710]           RAWECCCFG: 0x42000d11, BAD BLOCK BYTE: 0x000001c5
[    1.684101] Creating 17 MTD partitions on "7980000.nand":
[    1.689447] 0x000000000000-0x000000140000 : "sbl"
[    1.694867] 0x000000140000-0x000000280000 : "mibib"
[    1.699560] 0x000000280000-0x000000e80000 : "efs2"
[    1.704408] 0x000000e80000-0x000000f40000 : "tz"
[    1.708934] 0x000000f40000-0x000000fa0000 : "rpm"
[    1.713625] 0x000000fa0000-0x000001000000 : "aboot"
[    1.718582] 0x000001000000-0x0000017e0000 : "boot"
[    1.723281] 0x0000017e0000-0x000002820000 : "scrub"
[    1.728174] 0x000002820000-0x000005020000 : "modem"
[    1.732968] 0x000005020000-0x000005420000 : "rfbackup"
[    1.738156] 0x000005420000-0x000005820000 : "oem"
[    1.742770] 0x000005820000-0x000005f00000 : "recovery"
[    1.747972] 0x000005f00000-0x000009100000 : "cache"
[    1.752787] 0x000009100000-0x000009a40000 : "recoveryfs"
[    1.758389] 0x000009a40000-0x00000aa40000 : "cdrom"
[    1.762967] 0x00000aa40000-0x00000ba40000 : "jrdresource"
[    1.768407] 0x00000ba40000-0x000010000000 : "system"
[    1.773239] msm_nand_probe: NANDc phys addr 0x7980000, BAM phys addr 0x7984000, BAM IRQ 164
[    1.781074] msm_nand_probe: Allocated DMA buffer at virt_addr 0xc7840000, phys_addr 0x87840000
[    1.791872] PPP generic driver version 2.4.2
[    1.801126] cnss_sdio 87a00000.qcom,cnss-sdio: CNSS SDIO Driver registered
[    1.807554] msm_otg 78d9000.usb: msm_otg probe
[    1.813333] msm_otg 78d9000.usb: OTG regs = c88f8000
[    1.820702] gbridge_init: gbridge_init successs.
[    1.826344] msm_otg 78d9000.usb: phy_reset: success
[    1.830294] qcom,qpnp-rtc qpnp-rtc-c7307000: rtc core: registered qpnp_rtc as rtc0
[    1.838474] i2c /dev entries driver
[    1.842459] unable to find DT imem DLOAD mode node
[    1.846588] unable to find DT imem EDLOAD mode node
[    1.851332] unable to find DT imem dload-type node
[    1.856921] bq24295-charger 4-006b: bq24295 probe enter
[    1.861161] qcom,iterm-ma = 128
[    1.864476] bq24295_otg_vreg: no parameters
[    1.868502] charger_core_register: Charger Core Version 5.0.0(Built at 20151202-21:36)!
[    1.877007] i2c-msm-v2 78b8000.i2c: msm_bus_scale_register_client(mstr-id:86):0x3 (ok)
[    1.885559] bq24295-charger 4-006b: bq24295_set_bhot_mode 3
[    1.890150] bq24295-charger 4-006b: power_good is 1,vbus_stat is 2
[    1.896588] bq24295-charger 4-006b: bq24295_set_thermal_threshold 100
[    1.902952] bq24295-charger 4-006b: bq24295_set_sys_min 3700
[    1.908639] bq24295-charger 4-006b: bq24295_set_max_target_voltage 4150
[    1.915223] bq24295-charger 4-006b: bq24295_set_recharge_threshold 300
[    1.922119] bq24295-charger 4-006b: bq24295_set_terminal_current_limit iterm_disabled=0, iterm_ma=128
[    1.930917] bq24295-charger 4-006b: bq24295_set_precharge_current_limit bdi->prech_cur=128
[    1.940038] bq24295-charger 4-006b: bq24295_set_safty_timer 0
[    1.945088] bq24295-charger 4-006b: bq24295_set_input_voltage_limit 4520
[    1.972949] sdhci: Secure Digital Host Controller Interface driver
[    1.978151] sdhci: Copyright(c) Pierre Ossman
[    1.982441] sdhci-pltfm: SDHCI platform and OF driver helper
[    1.989092] sdhci_msm 7824900.sdhci: sdhci_msm_probe: ICE device is not enabled
[    1.995473] sdhci_msm 7824900.sdhci: No vreg data found for vdd
[    2.001530] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse_irq: error -22 reading irq cpu
[    2.009809] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for IRQ will be disabled
[    2.018600] sdhci_msm 7824900.sdhci: sdhci_msm_pm_qos_parse: PM QoS voting for cpu group will be disabled
[    2.030541] sdhci_msm 7824900.sdhci: sdhci_msm_probe: sdiowakeup_irq = 353
[    2.036867] sdhci_msm 7824900.sdhci: No vmmc regulator found
[    2.042027] sdhci_msm 7824900.sdhci: No vqmmc regulator found
[    2.048266] mmc0: SDHCI controller on 7824900.sdhci [7824900.sdhci] using 32-bit ADMA in legacy mode
[    2.080401] Welcome to pca955x_probe!!
[    2.084362] leds-pca955x 3-0020: leds-pca955x: Using pca9555 16-bit LED driver at slave address 0x20
[    2.095400] sdhci_msm 7824900.sdhci: card claims to support voltages below defined range
[    2.103125] i2c-msm-v2 78b7000.i2c: msm_bus_scale_register_client(mstr-id:86):0x5 (ok)
[    2.114183] msm_otg 78d9000.usb: Avail curr from USB = 1500
[    2.120251] come to USB_SDP_CHARGER!
[    2.123215] Welcome to sn3199_probe!
[    2.126718] leds-sn3199 5-0064: leds-sn3199: Using sn3199 9-bit LED driver at slave address 0x64
[    2.136511] sn3199->led_en_gpio=21
[    2.139143] i2c-msm-v2 78b9000.i2c: msm_bus_scale_register_client(mstr-id:86):0x6 (ok)
[    2.150207] usbcore: registered new interface driver usbhid
[    2.154864] usbhid: USB HID core driver
[    2.159825] sps:BAM 0x078c4000 is registered.
[    2.163573] bimc-bwmon 408000.qcom,cpu-bwmon: BW HWmon governor registered.
[    2.171080] devfreq soc:qcom,cpubw: Couldn't update frequency transition information.
[    2.178513] coresight-fuse a601c.fuse: QPDI fuse not specified
[    2.184242] coresight-fuse a601c.fuse: Fuse initialized
[    2.192407] coresight-csr 6001000.csr: CSR initialized
[    2.197263] coresight-tmc 6026000.tmc: Byte Counter feature enabled
[    2.203204] sps:BAM 0x06084000 is registered.
[    2.207301] coresight-tmc 6026000.tmc: TMC initialized
[    2.212681] coresight-tmc 6025000.tmc: TMC initialized
[    2.220071] nidnt boot config: 0
[    2.224563] mmc0: new ultra high speed SDR50 SDIO card at address 0001
[    2.231120] coresight-tpiu 6020000.tpiu: NIDnT on SDCARD only mode
[    2.236440] coresight-tpiu 6020000.tpiu: TPIU initialized
[    2.242808] coresight-replicator 6024000.replicator: REPLICATOR initialized
[    2.249372] coresight-stm 6002000.stm: STM initialized
[    2.255034] coresight-hwevent 606c000.hwevent: Hardware Event driver initialized
[    2.262312] Netfilter messages via NETLINK v0.30.
[    2.266306] nf_conntrack version 0.5.0 (920 buckets, 3680 max)
[    2.272312] ctnetlink v0.93: registering with nfnetlink.
[    2.277565] ip_set: protocol 6
[    2.280568] ip_tables: (C) 2000-2006 Netfilter Core Team
[    2.285723] arp_tables: (C) 2002 David S. Miller
[    2.290146] TCP: cubic registered
[    2.293915] NET: Registered protocol family 10
[    2.298740] ip6_tables: (C) 2000-2006 Netfilter Core Team
[    2.303407] sit: IPv6 over IPv4 tunneling driver
[    2.308481] NET: Registered protocol family 17
[    2.312340] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
[    2.325094] Bridge firewalling registered
[    2.328930] Ebtables v2.0 registered
[    2.333260] NET: Registered protocol family 27
[    2.341362] battery_core_register: Battery Core Version 5.0.0(Built at 20151202-21:36)!
[    2.348466] pmu_battery_probe: vbat_channel=21, tbat_channel=17
[    2.420236] ubi0: attaching mtd16
[    2.723941] ubi0: scanning is finished
[    2.732997] ubi0: attached mtd16 (name "system", size 69 MiB)
[    2.737783] ubi0: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[    2.744601] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[    2.751333] ubi0: VID header offset: 2048 (aligned 2048), data offset: 4096
[    2.758540] ubi0: good PEBs: 556, bad PEBs: 2, corrupted PEBs: 0
[    2.764305] ubi0: user volume: 3, internal volumes: 1, max. volumes count: 128
[    2.771476] ubi0: max/mean erase counter: 192/64, WL threshold: 4096, image sequence number: 35657280
[    2.780708] ubi0: available PEBs: 0, total reserved PEBs: 556, PEBs reserved for bad PEB handling: 38
[    2.789921] ubi0: background thread "ubi_bgt0d" started, PID 96
[    2.796395] android_bind cdev: 0xC6583E80, name: ci13xxx_msm
[    2.801508] file system registered
[    2.804974] mbim_init: initialize 1 instances
[    2.809228] mbim_init: Initialized 1 ports
[    2.815074] rndis_qc_init: initialize rndis QC instance
[    2.819713] jrd device_desc.bcdDevice: [0x0242]
[    2.823779] android_bind scheduled usb start work: name: ci13xxx_msm
[    2.830230] android_usb gadget: android_usb ready
[    2.834845] msm_hsusb msm_hsusb: [ci13xxx_start] hw_ep_max = 32
[    2.840741] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_RESET_EVENT received
[    2.847433] msm_hsusb msm_hsusb: CI13XXX_CONTROLLER_UDC_STARTED_EVENT received
[    2.855851] input: gpio-keys as /devices/soc:gpio_keys/input/input1
[    2.861452] qcom,qpnp-rtc qpnp-rtc-c7307000: setting system clock to 1970-01-01 06:36:41 UTC (23801)
[    2.870315] open file error /usb_conf/usb_config.ini
[    2.876412] jrd_usb_start_work open file erro /usb_conf/usb_config.ini, retry_count:0
[    2.884324] parse_legacy_cluster_params(): Ignoring cluster params
[    2.889468] ------------[ cut here ]------------
[    2.894186] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    2.914366] Modules linked in:
[    2.917339] CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.20 #1
[    2.923171] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    2.931092] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    2.939175] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    2.947895] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    2.956189] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    2.963527] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    2.971380] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    2.980118] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    2.988467] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    2.996626] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.004786] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.012739] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.021459] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.030217] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.038818] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.046888] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.054432] ---[ end trace e9ec50b1ec4c8f73 ]---
[    3.059012] ------------[ cut here ]------------
[    3.063604] WARNING: CPU: 0 PID: 1 at /home/linux3/jrd/yanping.an/ee40/0810/MDM9607.LE.1.0-00130/apps_proc/oe-core/build/tmp-glibc/work-shared/mdm9607/kernel-source/drivers/cpuidle/lpm-levels-of.c:739 parse_cluster+0xb50/0xcb4()
[    3.083858] Modules linked in:
[    3.086870] CPU: 0 PID: 1 Comm: swapper Tainted: G        W      3.18.20 #1
[    3.093814] [<c00132ac>] (unwind_backtrace) from [<c0011460>] (show_stack+0x10/0x14)
[    3.101575] [<c0011460>] (show_stack) from [<c001c6ac>] (warn_slowpath_common+0x68/0x88)
[    3.109641] [<c001c6ac>] (warn_slowpath_common) from [<c001c75c>] (warn_slowpath_null+0x18/0x20)
[    3.118412] [<c001c75c>] (warn_slowpath_null) from [<c034e180>] (parse_cluster+0xb50/0xcb4)
[    3.126745] [<c034e180>] (parse_cluster) from [<c034b6b4>] (lpm_probe+0xc/0x1d4)
[    3.134126] [<c034b6b4>] (lpm_probe) from [<c024857c>] (platform_drv_probe+0x30/0x7c)
[    3.141906] [<c024857c>] (platform_drv_probe) from [<c0246d54>] (driver_probe_device+0xb8/0x1e8)
[    3.150702] [<c0246d54>] (driver_probe_device) from [<c0246f30>] (__driver_attach+0x68/0x8c)
[    3.159120] [<c0246f30>] (__driver_attach) from [<c02455d0>] (bus_for_each_dev+0x6c/0x90)
[    3.167285] [<c02455d0>] (bus_for_each_dev) from [<c02465a4>] (bus_add_driver+0xe0/0x1c8)
[    3.175444] [<c02465a4>] (bus_add_driver) from [<c02477bc>] (driver_register+0x9c/0xe0)
[    3.183398] [<c02477bc>] (driver_register) from [<c080c3d8>] (lpm_levels_module_init+0x14/0x38)
[    3.192107] [<c080c3d8>] (lpm_levels_module_init) from [<c0008980>] (do_one_initcall+0xf8/0x1a0)
[    3.200877] [<c0008980>] (do_one_initcall) from [<c07e7d4c>] (kernel_init_freeable+0xf0/0x1b0)
[    3.209475] [<c07e7d4c>] (kernel_init_freeable) from [<c0582d48>] (kernel_init+0x8/0xe4)
[    3.217542] [<c0582d48>] (kernel_init) from [<c000dda0>] (ret_from_fork+0x14/0x34)
[    3.225090] ---[ end trace e9ec50b1ec4c8f74 ]---
[    3.229667] /soc/qcom,lpm-levels/qcom,pm-cluster@0: No CPU phandle, assuming single cluster
[    3.239954] qcom,cc-debug-mdm9607 1800000.qcom,debug: Registered Debug Mux successfully
[    3.247619] emac_lan_vreg: disabling
[    3.250507] mem_acc_corner: disabling
[    3.254196] clock_late_init: Removing enables held for handed-off clocks
[    3.262690] ALSA device list:
[    3.264732]   No soundcards found.
[    3.274083] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" started, PID 102
[    3.305224] UBIFS (ubi0:0): recovery needed
[    3.466156] UBIFS (ubi0:0): recovery completed
[    3.469627] UBIFS (ubi0:0): UBIFS: mounted UBI device 0, volume 0, name "rootfs"
[    3.476987] UBIFS (ubi0:0): LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[    3.486876] UBIFS (ubi0:0): FS size: 45838336 bytes (43 MiB, 361 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[    3.497417] UBIFS (ubi0:0): reserved for root: 0 bytes (0 KiB)
[    3.503078] UBIFS (ubi0:0): media format: w4/r0 (latest is w4/r0), UUID 4DBB2F12-34EB-43B6-839B-3BA930765BAE, small LPT model
[    3.515582] VFS: Mounted root (ubifs filesystem) on device 0:12.
[    3.520940] Freeing unused kernel memory: 276K (c07e7000 - c082c000)
INIT: version 2.88 booting

Worse Than FailureCodeSOD: Spacious Backup

Today's anonymous submitter works on a project which uses Apache Derby to provide database services. Derby is a tiny database you can embed into your Java application, like SQLite. Even though it's part of the application, that doesn't mean it doesn't need to be backed up from time to time.

Our submitter was handed the code because the backup feature was "peculiar", and failed for reasons no one had figured out yet. It didn't take too long to figure out that the failures were triggered by not having enough space on the device for a backup. But they definitely had an enoughFreeSpaceForBackup check, so what was going wrong?

private boolean enoughFreeSpaceForBackup() throws IOException {
    final Path databaseDir = workingDirectory.resolve("database"); // resolve folder where database is written to
    final FileStore store = Files.getFileStore(workingDirectory.getRoot()); // get the filestore
    final long backupFreeSpace = store.getTotalSpace(); // get the ... yes ... total size of the file store
    final long databaseSpace = sizeInBytes(databaseDir); // determine the complete size of the database
    return backupFreeSpace > databaseSpace / 2; // if our databasesize, divided by 2, is smaller than the total size of the file store ... we have enough space for a backup.
}

As our submitter's comments highlight, it's the call to store.getTotalSpace(), which helpfully tells us the total size of the file store. Not, however, how much of that space is unallocated, which is what getUnallocatedSpace tells us. Or getUsableSpace, which tells us how much of the device is actually accessible by the JVM.

That is to say, there are three methods pertaining to size, and the developer managed to pick the one which was totally wrong.
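
For contrast, a minimal corrected sketch (not the actual fix from the submitter's codebase; the databaseSpace value is assumed to be computed elsewhere, e.g. via their sizeInBytes helper) might look like this:

import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

public class BackupSpaceCheck {
    // Require room for a full copy of the database, measured against the space
    // the JVM can actually use rather than the device's total capacity.
    static boolean enoughFreeSpaceForBackup(Path workingDirectory, long databaseSpace) throws IOException {
        final FileStore store = Files.getFileStore(workingDirectory.getRoot());
        final long usable = store.getUsableSpace(); // free space available to us, not getTotalSpace()
        return usable > databaseSpace;              // and no dividing by two
    }
}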

But that's not the WTF. It's an easy mistake to make, after all, especially if you're not super familiar with the API. I mean, getTotalSpace sounds, well, like it's getting the total amount of space, not the amount I can use, but this is a simple enough mistake that it could happen to anyone.

But that's why you test. That's why you have code reviews. This software had been shipped, it was installed on people's devices, it was doing work. And in production environments, it was absolutely certain that it always had enough room to take a backup of itself, because the only time this would return false was if the database were more than twice the size of the drive it was on.


Planet DebianRuss Allbery: Review: Solutions and Other Problems

Review: Solutions and Other Problems, by Allie Brosh

Publisher: Gallery Books
Copyright: September 2020
ISBN: 1-9821-5694-5
Format: Hardcover
Pages: 519

Solutions and Other Problems is the long-awaited second volume of Allie Brosh's work, after the amazing Hyperbole and a Half. The first collection was a mix of original material and pieces that first appeared on her blog. This is all new work, although one of the chapters is now on her blog as a teaser.

As with all of Brosh's previous work, Solutions and Other Problems is mostly drawings (in her highly original, deceptively simple style) with a bit of prose in between. It's a similar mix of childhood stories, off-beat interpretations of day-to-day life, and deeper and more personal topics. But this is not the same type of book as Hyperbole and a Half, in a way that is hard to capture in a review.

When this book was postponed and then temporarily withdrawn, I suspected that something had happened to Brosh. I was hoping that it was just the chaos of her first book publication, but, sadly, no. We find out about some of what happened in Solutions and Other Problems, in varying amounts of detail, and it's heart-wrenching. That by itself gives the book a more somber tone.

But, beyond that, I think Solutions and Other Problems represents a shift in mood and intention. The closest I can come to it is to say that Hyperbole and a Half felt like Brosh using her own experiences as a way to tell funny stories, and this book feels like Brosh using funny stories to talk about her experiences. There are still childhood hijinks and animal stories mixed in, but even those felt more earnest, more sad, and less assured or conclusive. This is in no way a flaw, to be clear; just be aware that if you were expecting more work exactly like Hyperbole and a Half, this volume is more challenging and a bit more unsettling.

This does not mean Brosh's trademark humor is gone. Chapter seventeen, "Loving-Kindness Exercise," is one of the funniest things I've ever read. "Neighbor Kid" captures my typical experience of interacting with children remarkably well. And there are, of course, more stories about not-very-bright pets, including a memorable chapter ("The Kangaroo Pig Gets Drunk") on just how baffling our lives must be to the animals around us. But this book is more serious, even when there's humor and absurdity layered on top, and anxiety felt like a constant companion.

As with her previous book, many of the chapters are stories from Brosh's childhood. I have to admit this is not my favorite part of Brosh's work, and the stories in this book in particular felt a bit less funny and somewhat more uncomfortable and unsettling. This may be a very individual reaction; you can judge your own in advance by reading "Richard," the second chapter of the book, which Brosh posted to her blog. I think it's roughly typical of the childhood stories here.

The capstone of Hyperbole and a Half was Brosh's fantastic two-part piece on depression, which succeeded in being hilarious and deeply insightful at the same time. I think the capstone of Solutions and Other Problems is the last chapter, "Friend," which is about being friends with yourself. For me, it was a good encapsulation of both the merits of this book and the difference in tone. It's less able to find obvious humor in a psychological struggle, but it's just as empathetic and insightful. The ending is more ambiguous and more conditional; the tone is more wistful. It felt more personal and more raw, and therefore a bit less generalized. Her piece on depression made me want to share it with everyone I knew; this piece made me want to give Brosh a virtual hug and tell her I'm glad she's alive and exists in the world. That about sums up my reaction to this book.

I bought Solutions and Other Problems in hardcover because I think this sort of graphic work benefits from high-quality printing, and I was very happy with that decision. Gallery Books used heavy, glossy paper and very clear printing. More of the text is outside of the graphic panels than I remember from the previous book. I appreciated that; I thought it made the stories much easier to read. My one quibble is that Brosh does use fairly small lettering in some of the panels and the color choices and the scrawl she uses for stylistic reasons sometimes made that text difficult for me to read. In those few places, I would have appreciated the magnifying capabilities of reading on a tablet.

I don't think this is as good as Hyperbole and a Half, but it is still very good and very worth reading. It's harder reading, though, and you'll need to brace yourself more than you did before. If you're new to Brosh, start with Hyperbole and a Half, or with the blog, but if you liked those, read this too.

Rating: 8 out of 10

Planet DebianDirk Eddelbuettel: dang 0.0.13: New intradayMarketMonitor

sp500 intraday monitor

A new release of the dang package got to CRAN earlier today, a few months after the last release. The dang package regroups a few functions of mine that had no other home: lsos() from a StackOverflow question from 2009 (!!) is one, and this overbought/oversold price band plotter from an older blog post is another.

This release adds one function I tweeted about one month ago. It takes a function Josh Ulrich originally tweeted about in November with a reference to this gist. I refactored this into a proper function and polished a few edges: the data now properly rolls off after a fixed delay (of two days), it should work with other symbols (though we both focused on ^GSPC as a free (!!) real-time SP500 index, albeit only during trading hours), it properly gaps between trading days, and more. You can simply invoke it via a call such as

intradayMarketMonitor()

and a chart just like the one here will grow (though there is no “state”: if you stop it, or reboot, or … the plot starts from scratch).

The short NEWS entry follows.

Changes in version 0.0.13 (2021-02-17)

  • New function intradayMarketMonitor based on an earlier gist-posted snippet by Josh Ulrich.

  • The CI setup was generalized as a test for 'r-ci' and is used essentially unchanged with three different providers.

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Krebs on Security: U.S. Indicts North Korean Hackers in Theft of $200 Million

The U.S. Justice Department today unsealed indictments against three men accused of working with the North Korean regime to carry out some of the most damaging cybercrime attacks over the past decade, including the 2014 hack of Sony Pictures, the global WannaCry ransomware contagion of 2017, and the theft of roughly $200 million and attempted theft of more than $1.2 billion from banks and other victims worldwide.

Investigators with the DOJ, U.S. Secret Service and Department of Homeland Security told reporters on Wednesday the trio’s activities involved extortion, phishing, direct attacks on financial institutions and ATM networks, as well as malicious applications that masqueraded as software tools to help people manage their cryptocurrency holdings.

Prosecutors say the hackers were part of an effort to circumvent ongoing international financial sanctions against the North Korean regime. The group is thought to be responsible for the attempted theft of approximately $1.2 billion, although it’s unclear how much of that was actually stolen.

Confirmed thefts attributed to the group include the 2016 hacking of the SWIFT payment system for Bangladesh Bank, which netted thieves $81 million; $6.1 million in a 2018 ATM cash out scheme targeting a Pakistani bank; and a total of $112 million in virtual currencies stolen between 2017 and 2020 from cryptocurrency companies in Slovenia, Indonesia and New York.

“The scope of the criminal conduct by the North Korean hackers was extensive and longrunning, and the range of crimes they have committed is staggering,” said Acting U.S. Attorney Tracy L. Wilkison for the Central District of California. “The conduct detailed in the indictment are the acts of a criminal nation-state that has stopped at nothing to extract revenge and obtain money to prop up its regime.”

The indictments name Jon Chang Hyok (a.k.a “Alex/Quan Jiang”), Kim Il (a.k.a. “Julien Kim”/”Tony Walker”), and Park Jin Hyok (a.k.a. Pak Jin Hek/Pak Kwang Jin). U.S. prosecutors say the men were members of the Reconnaissance General Bureau (RGB), an intelligence division of the Democratic People’s Republic of Korea (DPRK) that manages the state’s clandestine operations.

The Justice Department says those indicted were members of a DPRK-sponsored cybercrime group variously identified by the security community as the Lazarus Group and Advanced Persistent Threat 38 (APT 38). The government alleges the men reside in North Korea but were frequently stationed by the DPRK in other countries, including China and Russia.

Park was previously charged in 2018 in connection with the WannaCry and Sony Pictures attacks. But today’s indictments expanded the range of crimes attributed to Park and his alleged co-conspirators, including cryptocurrency thefts, phony cryptocurrency investment schemes and apps, and efforts to launder the proceeds of their crimes.

Prosecutors in California also today unsealed an indictment against Ghaleb Alaumary, a 37-year-old from Mississauga, Ontario who pleaded guilty in November 2020 to charges of laundering tens of millions of dollars stolen by the DPRK hackers.

The North Korean defendants also allegedly developed and marketed a series of cryptocurrency applications that were advertised as tools to help people manage their crypto holdings. In reality, prosecutors say, the programs were malware or downloaded malware after the applications were installed.

A joint cyber advisory from the FBI, the Treasury and DHS’s Cybersecurity and Infrastructure Security Agency (CISA) delves deeper into these backdoored cryptocurrency apps, a family of malware activity referred to as “AppleJeus.” “Hidden Cobra” is the collective handle assigned to the hackers behind the AppleJeus malware.

“In most instances, the malicious application—seen on both Windows and Mac operating systems—appears to be from a legitimate cryptocurrency trading company, thus fooling individuals into downloading it as a third-party application from a website that seems legitimate,” the advisory reads. “In addition to infecting victims through legitimate-looking websites, HIDDEN COBRA actors also use phishing, social networking, and social engineering techniques to lure users into downloading the malware.”

The alert notes that these apps have been posing as cryptocurrency trading platforms since 2018, and have been tied to cryptocurrency thefts in more than 30 countries.

Image: CISA.

For example, the DOJ indictments say these apps were involved in stealing $11.8 million in August 2020 from a financial services company based in New York. Warrants obtained by the government allowed the FBI to seize roughly $1.9 million from two different cryptocurrency exchanges used by the hackers, money that investigators say will be returned to the New York financial services firm.

Other moneymaking and laundering schemes attributed to the North Korean hackers include the development and marketing of an initial coin offering (ICO) in 2017 called Marine Chain Token.

That blockchain-based cryptocurrency offering promised early investors the ability to purchase “fractional ownership in marine shipping vessels,” which the government says was just another way for the North Korean government to “secretly obtain funds from investors, control interests in marine shipping vessels, and evade U.S. sanctions.”

A copy of the indictments is available here (PDF).

Planet Debian: Norbert Preining: Debian KDE/Plasma Status 2021-02-18

Lots of time has passed since the last status update, and Debian is going into pre-release freeze, so let us report a bit about the most recent changes: Debian/bullseye will have Plasma 5.20.5, Frameworks 5.78, Apps 20.12. Debian/experimental already carries Plasma 5.21 and Frameworks 5.79, and that is also the level of the OBS builds.

Debian Bullseye

We are in soft freeze now, and only targeted fixes are allowed, but Bullseye carries a good mixture: KDE Frameworks 5.78, including several backported fixes from 5.79 for smooth operation, and Plasma 5.20.5, again with several cherry-picked bug fixes. The KDE Apps are mostly at the 20.12 level, and the KDE PIM group packages (akonadi, kmail, etc.) are at 20.08.

Debian experimental

In the last days I have uploaded frameworks 5.79 and Plasma 5.21 to Debian/experimental. For Plasma there is still some NEW processing to be done, but in due time the packages will be available and installable from experimental.

OBS packages

The OBS packages as usual follow the latest release, and currently ship KDE Frameworks 5.79, KDE Apps 20.12.2, and Plasma 5.21.0. The package sources are as usual (note that the paths for the Plasma and Apps packages contain the release version!); for Debian/unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma521/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2012/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and the same with Testing instead of Unstable for Debian/testing.
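
Enabling one of these follows the usual apt mechanics; for instance, for the Plasma packages on unstable (a sketch only: the list file name is my own choice, and the repository signing key still needs to be trusted, as with any OBS home repository):

echo 'deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma521/Debian_Unstable/ ./' > /etc/apt/sources.list.d/npreining-plasma521.list   # as root
apt update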

Digikam beta

There is also a separate repository for the upcoming digikam release:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/digikam-beta/Debian_Unstable/ ./

just in case you want to test the rc release of digikam 7.2.0.

Planet Debian: Martin-Éric Racine: OpenWRT: WRT54GL: Backfire: IPv6 issues

While having a Debian boxen as a router feels nice, I kept on longing for something smaller and quieter. I then remembered that I still had my old WRT54GL somewhere. After upgrading the OpenWRT firmware to the latest supported version for that hardware (Backfire 10.03.1, r29592), I installed radvd and wide-dhcpv6-client. Configuring radvd to deliver consistent results was easy enough.

The issue I keep on experiencing is the external interface (wan) dropping the IPv6 address it received from the ISP via router advertisement, which in turn kills the default IPv6 route to the outside world. Logging in via SSH and manually running "rdisc6 eth0.1" restores the IPv6 gateway. I just honestly wish I didn't have to do this every time I reboot the router.
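
Pending a proper fix, one blunt stopgap would be to re-run the solicitation from cron (a sketch only: it assumes BusyBox crond is enabled on this Backfire image, and it simply re-sends router solicitations every few minutes rather than detecting the lost route):

# append to /etc/crontabs/root: re-run rdisc6 every 5 minutes
*/5 * * * * rdisc6 eth0.1

A smarter variant would first check whether the default IPv6 route is actually gone, but even this brute-force version keeps the gateway from staying dead after a reboot.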

Does this issue sound familiar to anyone? What was the solution?

PS: No, I won't just go and ditch this WRT54GL just because new toys exist on the market. This is obviously a software issue, so I need a software solution.

PPS: IPv6 pretty much works out of the box on the Debian boxen I had been using as my router. I previously wrote about this on my blog. Basically, it's unlikely to be an ISP issue.

Sam Varghese: Why Australia is the developed world’s COVID vaccine laggard

A timeline of Australia’s COVID-19 vaccine saga courtesy of Justin Stevens, executive producer of the ABC’s 7.30 program

19/8/20 PM media release: “Australians will be among the first in the world to receive a COVID-19 vaccine, if it proves successful, through an agreement between the Australian Government &… AstraZeneca.”

7/9/20 Govt announces $1.7 billion Uni of Oxford/AstraZeneca & the Uni of QLD/CSL Manufacturing agreements. PM says “a home-grown sovereign plan for vaccines is the hope I bring to Australians today.”

5/11/20 Govt announced 2 more vaccines, Novavax (40 million doses) & 10 million doses Pfizer/BioNTech. PM: “…our Strategy puts Australia at the front of the queue, if our medical experts give the vaccines the green light.”

5/11/20 PM: “our policy and program, led by Prof Murphy, on getting Australia at the front of the pack when it comes to vaccines.”

11/12/20 PM: “…what we can do is vaccinate our population twice over. And we have one of the highest ratios of availability of doses of any country in the world…”

1/1/2021 PM: “we’re being careful to ensure that we dot all the Is & we cross all the Ts…We’re moving promptly to do that…but we’re not cutting corners…On the vaccine, you don’t rush to failure. That’s very dangerous for Australians.”

5/1/21 PM: “I don’t think Australians want us just willy-nilly sending out vials of vaccine that haven’t had their batches tested, which is the normal process that occurs with any TGA approved vaccine.”

7/1/21 PM: “we are now in a position where we believe we’ll be able to commence vaccinations of high priority groups in mid to late February.”

22/1/21 PM: “no we haven’t announced any date…we’ve talked about getting things done in mid to late February. These things…are very conditional upon the supply arrangements coming out of Pfizer in particular.”

22/1/21 PM: “our process is world leading. It’s world class. It’s a process that I believe Australians can have a lot of confidence in. We’re not rushing this, nor are we delaying it. We are getting it right.”

25/1/21 TGA approves Pfizer vaccine. PM: “we remain on track to have those vaccines in Australia & ready to go from very small beginnings… very small beginnings, starting small…We are more looking at late February now than mid-February…”

1/2/21 PM: “from the end of February…we’ll be able to start vaccinating those in the most sensitive areas, those most vulnerable, those frontline health workers…& then over the course of the year, we expect to get through the population by October.”

12/2/21 PM: “our vaccination programme, it’s on track & it’s sovereign. We’re doing it here in Australia…within a matter of weeks, starting next week, as they finish, they do the final stage of that process. I call it the bottling process…”

15/2/21 Health Minister Greg Hunt announces “The Eagle has landed. I am pleased to be able to tell Australians that shortly after midday, the first shipment of Pfizer vaccines arrived in Australia.”

Worse Than Failure: CodeSOD: Shorely a Bad Choice

"This was developed by the offshore team," is usually spoken as a warning. There are a lot of reasons why the code-quality from offshore teams has such a bad reputation. You can list off a bunch of reasons why this is true, but it all boils down to variations on the Princpal-Agent Problem: the people writing the code (the agents) don't have their goals aligned with your company (the principal).

Magnus M recently inherited some C# code which came from the offshore team, and it got principal-agented all over.

/// <summary>
/// License Person CompanyName
/// </summary>
private string ErrorMsg
{
    get
    {
        return "It is not possible to connect to the license server at this time."
            + Environment.NewLine + Environment.NewLine
            + "Please try again later or contact customer service for help at info@domain.com"
            + Process.Start("mailto:info@domain.com");
    }
}

When I started reading this code, I just got annoyed at the Environment.NewLine concatenations, and was thinking about how formatting an error message like this right in the code is such an awful code smell, but it's hardly a WTF... until I got to Process.Start("mailto:info@domain.com").

As the name implies, Process.Start starts a process. It's normally used to execute external programs, but here we pass a URL to it. Since this software runs on Windows, it should trigger the OS to open the default mail program, if there's one assigned. If there isn't, your attempt to access the ErrorMsg property just threw an unhandled exception.

There is no sensible reason why accessing a read-only property should launch a mail program. Even if "hey, just start a mail program when things go wrong," were an acceptable UX choice (spoilers: it isn't), this is so far away from "single responsibility principle" that it makes my head hurt.

Magnus adds:

There are many, many issues with the code, but I thought this snippet was a good representation of the general quality. … My favorite detail is probably the comment.

I don't know about you, but "License Person CompanyName" is actually the name I put on all my Git commits.


Planet Debian: Louis-Philippe Véronneau: What are the incentive structures of Free Software?

When I started my Master's degree in January 2018, I was confident I would be done in a year and half. After all, I only had one year of classes and I figured 6 months to write a thesis would be plenty.

Three years later, I'm finally done: the final version of my thesis was accepted on January 22nd 2021.

My thesis, entitled What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model, can be found here1. If you care about such things, both the data and the final document can be built from source with the code in this git repository.

Results and analysis

My thesis is divided in four main sections:

  1. an introduction to FOSS
  2. a chapter discussing the incentive structures of Free Software (and arguing the so-called “Tragedy of the Commons” isn't inevitable)
  3. a chapter trying to use empirical data to validate the theories presented in the previous chapter
  4. an annex on the various FOSS business models

If you're reading this blog post, chances are you'll find both section 1 and 4 a tad boring, as you might already be familiar with these concepts.

Incentives

So, why do people contribute to Free Software? Unsurprisingly, it's complicated. Many economists have studied this topic, but for some reason, most research happened in the early 2000s.

Although the papers don't all agree with each other, most importantly about the relative importance of the variables, the main incentives2 can be summarized as:

  • expectation of monetary gain
  • writing FOSS as a hobby (that includes “scratching your own itch”)
  • liking the FOSS community and feeling a sense of belonging
  • altruism (writing FOSS for Good™)

Giving weights to these variables is not an easy thing: the FOSS ecosystem is highly heterogeneous and thus, people tend to write FOSS for different reasons. Moreover, incentives tend to shift with time as the ecosystem does. People writing Free Software in the 1990s probably did it for different reasons than people in 2021.

These four variables can also be divided in two general categories: extrinsic and intrinsic incentives. Monetary gain expectancy is an extrinsic incentive (its value is delayed and mediated), whereas the three other ones are intrinsic (they have an immediate value by themselves).

Empirical analysis

Theory is nice, but it's even better when you can back it up with data. Sadly, most of the papers on the economic incentives of FOSS are either purely theoretical, or use sample sizes so small they might as well be.

Using the data from the StackOverflow 2018 survey, I thus tried to see if I could somehow confirm my previous assumptions.

With 129 questions and more than 100 000 respondents (which after statistical processing yields between 28 000 and 39 000 observations per variable of interest), the StackOverflow 2018 survey is a very large dataset compared to what economists are used to working with.

Sadly, it wasn't entirely enough to come up with hard answers. There is a strong and significant correlation between writing Free Software and having a higher salary, but endogeneity problems3 made it hard to give a reliable estimate of how much money this would represent. The same goes for writing code as a hobby: it seems there is a strong and significant correlation, but the exact numbers I came up with cannot really be trusted.

The results on community as an incentive to write FOSS were the ones that surprised me the most. Although I expected the relation to be quite strong, the predicted coefficients were in fact quite small. I theorise this is partly because only 8% of the respondents declared they didn't feel like they belonged in the IT community. With such a high baseline sense of belonging, the margin for improvement is necessarily smaller.

As for altruism, I wasn't able to get any meaningful results. In my opinion this is mostly due to the fact that there was no explicit survey question on this topic, and I tried to make up for it by cobbling data together.

Kinda anticlimactic, isn't it? I would've loved to come up with decisive conclusions on this topic, but if there's one thing I learned while writing this thesis, it is that I don't know much after all.


  1. Note that the thesis is written in French. 

  2. Of course, life is complex and so are people's motivations. One could come up with a dozen more reasons why people contribute to Free Software. The "fun" of theoretical modelling is trying to make complex things somewhat simpler. 

  3. I'll spare you the details, but this means there is no way to know if this correlation is the result of a causal link between the two variables. There are ways to deal with this problem (using an instrumental variables model is a very popular one), but again, the survey didn't provide the proper instruments to do so. For example, it could very well be the correlation is due to omitted variables. If you are interested in this topic (and can read French), I talk about this issue in section 3.2.8. 


Planet Debian: Vincent Fourmond: QSoas tips and tricks: permanently storing meta-data

It is one thing to acquire and process data, but the data themselves are most often useless without the context: the conditions in which the experiments were made. This additional information can be called meta-data. In a previous post, we have already described how one can set meta-data on data that are already loaded, and how one can make use of them.

QSoas is already able to figure out some meta-data in the case of electrochemical data, most notably for files acquired by GPES, ECLab or CHI potentiostats. However, only a small number of manufacturers are supported as of now[1], and there are a number of experimental details that the software is never going to be able to figure out for you, such as the pH, the sample, or what you were doing...

The new version of QSoas provides a means to permanently store meta-data for experimental data files:

QSoas> record-meta pH 7 file.dat
This command uses record-meta to permanently store the information pH = 7 for the file file.dat. Any time QSoas loads the file again, either today or in one year, the meta-data will contain the value 7 for the field pH. Behind the scenes, QSoas creates a single small file, file.dat.qsm, in which the meta-data are stored (in the form of a JSON dictionary).

You can set the same meta-data to many files in one go, using wildcards (see load for more information). For instance, to set the pH=7 meta-data to all the .dat files in the current directory, you can use:

QSoas> record-meta pH 7 *.dat
You can only set one meta-data for each call to record-meta, but you can use it as many times as you like.

Finally, you can use the /for-which option to load or browse to select only the files which have the meta you need:

QSoas> browse /for-which=$meta.pH<=7
This command browses the files in the current directory, showing only the ones that have a pH meta-data which is 7 or below.

[1] I'm always ready to implement the parsing of other file formats that could be useful for you. If you need parsing of special files, please contact me, sending the given files and the meta-data you'd expect to find in those.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

Planet Debian: Michael Prokop: How to properly use 3rd party Debian repository signing keys with apt

(Blogging this, since this is a recurring anti-pattern I noticed at several customers and often comes up during deployments of 3rd party repositories.)

Update on 2021-02-19: clarified, that Signed-By requires apt >= 1.1, thanks Vincent Bernat

Many upstream projects provide Debian repository instructions like this:

curl -fsSL https://example.com/stable/debian.gpg | sudo apt-key add -

Do not follow this, for different reasons, including:

  1. You do not see what you get before adding the GPG key to your global apt trust store
  2. You can’t easily script this via your preferred configuration management (the apt-key manpage clearly discourages programmatic usage)
  3. The signing key is considered valid for all your enabled Debian repositories (instead of only a specific one)
  4. You need GnuPG (either gnupg2 or gnupg1) on your system for usage with apt-key

There’s a much better approach to this: download the GPG key, make sure it’s in the appropriate format, then use it via `deb [signed-by=/usr/share/keyrings/…]` in your apt’s sources list configuration. Note and FTR: the Signed-By feature is available starting with apt 1.1 (so apt in Debian jessie/8 and older does not support it).

TL;DR:

  • Install GPG keys in ascii-armored / old public key block format as /usr/share/keyrings/example.asc and use `deb [signed-by=/usr/share/keyrings/example.asc] https://example.com/…` in apt’s sources.list configuration
  • Install GPG keys in binary OpenPGP format as /usr/share/keyrings/example.gpg and use `deb [signed-by=/usr/share/keyrings/example.gpg] https://example.com/…` in apt’s sources.list configuration

As an example, let’s demonstrate this with the Tailscale Debian repository for buster.
Downloading the GPG file will give you an ascii-armored GPG file:

% curl -fsSL -o buster.gpg https://pkgs.tailscale.com/stable/debian/buster.gpg
% gpg --keyid-format long buster.gpg 
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
pub   rsa4096/458CA832957F5868 2020-02-25 [SC]
      2596A99EAAB33821893C0A79458CA832957F5868
uid                           Tailscale Inc. (Package repository signing key) <info@tailscale.com>
sub   rsa4096/B1547A3DDAAF03C6 2020-02-25 [E]
% file buster.gpg
buster.gpg: PGP public key block Public-Key (old)

If you have apt version >= 1.4 available (Debian >=stretch/9 and Ubuntu >=bionic/18.04), you can use this file directly as follows:

% sudo mv buster.gpg /usr/share/keyrings/tailscale.asc
% cat /etc/apt/sources.list.d/tailscale.list
deb [signed-by=/usr/share/keyrings/tailscale.asc] https://pkgs.tailscale.com/stable/debian buster main
% sudo apt update
[...]

And you’re done!
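
Equivalently, if your apt is recent enough to support the deb822 .sources format, the same entry can be expressed as follows (a sketch: the file name is arbitrary, everything else mirrors the one-line variant above):

% cat /etc/apt/sources.list.d/tailscale.sources
Types: deb
URIs: https://pkgs.tailscale.com/stable/debian
Suites: buster
Components: main
Signed-By: /usr/share/keyrings/tailscale.asc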

If your apt version really is older than 1.4, you need to convert the ascii-armored GPG file into a GPG key public ring file (AKA binary OpenPGP format), either by just dearmor-ing it (if you don’t care about checking ID + fingerprint):

% gpg --dearmor < buster.gpg > tailscale.gpg

or if you prefer to go via GPG, you can also use a temporary GPG home directory (if you don’t care about going through your personal GPG setup):

% mkdir --mode=700 /tmp/gpg-tmpdir
% gpg --homedir /tmp/gpg-tmpdir --import ./buster.gpg
gpg: keybox '/tmp/gpg-tmpdir/pubring.kbx' created
gpg: /tmp/gpg-tmpdir/trustdb.gpg: trustdb created
gpg: key 458CA832957F5868: public key "Tailscale Inc. (Package repository signing key) <info@tailscale.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
% gpg --homedir /tmp/gpg-tmpdir --output tailscale.gpg  --export-options=export-minimal --export 0x458CA832957F5868
% rm -rf /tmp/gpg-tmpdir

The resulting GPG key public ring file should look like that:

% file tailscale.gpg 
tailscale.gpg: PGP/GPG key public ring (v4) created Tue Feb 25 04:51:20 2020 RSA (Encrypt or Sign) 4096 bits MPI=0xc00399b10bc12858...
% gpg tailscale.gpg 
gpg: WARNING: no command supplied.  Trying to guess what you mean ...
pub   rsa4096/458CA832957F5868 2020-02-25 [SC]
      2596A99EAAB33821893C0A79458CA832957F5868
uid                           Tailscale Inc. (Package repository signing key) <info@tailscale.com>
sub   rsa4096/B1547A3DDAAF03C6 2020-02-25 [E]

Then you can use this GPG file on your system as follows:

% sudo mv tailscale.gpg /usr/share/keyrings/tailscale.gpg
% cat /etc/apt/sources.list.d/tailscale.list
deb [signed-by=/usr/share/keyrings/tailscale.gpg] https://pkgs.tailscale.com/stable/debian buster main
% sudo apt update
[...]

Such a setup ensures:

  1. You can verify the GPG key file (ID + fingerprint)
  2. You can easily ship files via /usr/share/keyrings/ and refer to it in your deployment scripts, configuration management,… (and can also easily update or get rid of them again!)
  3. The GPG key is valid only for the repositories with the corresponding `[signed-by=/usr/share/keyrings/…]` entry
  4. You don’t need to install GnuPG (neither gnupg2 nor gnupg1) on the system which is using the 3rd party Debian repository

Thanks: Guillem Jover for reviewing an early draft of this blog article.

Worse Than Failure: CodeSOD: Optimized

In modern times, there's almost no reason to use Assembly, outside of highly specific and limited cases. For example, I recently worked on a project that uses a PRU, and while you can program that in C, I wanted to be able to count instructions so that I could get extremely precise timings to control LEDs.

In modern times, there's also no reason to use Delphi, but Andre found this code a few years ago, and has been puzzling over it ever since.

procedure tvAdd(var a, b: timevectortype; Range: Integer); register;
var
  i: Integer;
  pa, pb: PDouble;
begin
  i := succ(LongRec(Range).Lo - LongRec(Range).Hi);
  pa := @a[LongRec(Range).Hi];
  pb := @b[LongRec(Range).Hi];
  asm
    mov ecx, i
    mov eax, [pa]
    mov edx, [pb]
  @loop:
    fld qword ptr [eax]
    fadd qword ptr [edx]
    fstp qword ptr [eax]
    add eax, 8
    add edx, 8
    dec ecx
    jnz @loop
    wait
  end;
  { for i:=starts to ends do a[i] := a[i] + b[i]; }
end;

The curly brackets at the end are a comment: they're telling us what the original Delphi code looked like, and it's pretty straightforward: loop across two lists, add them, and store the result in the first list. The Assembly code was used to replace that to "boost performance". This code is as optimized as it can possibly be… if you ignore that it's not.

Now, at its core, the real problem is that we've replaced something fairly readable with something nigh incomprehensible for what is likely to be a very minor speedup. But this is actually worse: the assembly version is between 2 and 5 times slower.

The Assembly version also has a pretty serious bug. If i, the length of the range we want to add across, is zero, we'll load that into the register ecx. We'll still attempt to add values from lists a and b together, even though we probably shouldn't, and then we'll decrement the contents of ecx. So now it's -1. The jnz, or "jump non-zero" will check that register, and since it's not zero, it'll pop back up to the @loop label, and keep looping until ecx wraps all the way around and eventually hits zero again.

Talk about a buffer overrun.

Now, as it happens, playing with the Range object did turn out to be kind of expensive, so Andre fixed the code with an optimization: he used plain integers instead.

procedure tvAdd(var a, b: timevectortype; afrom, ato: Integer); register;
var
  i: Integer;
begin
  for i := afrom to ato do
    a[i] := a[i] + b[i];
end;


Krebs on Security: Bluetooth Overlay Skimmer That Blocks Chip

As a total sucker for anything skimming-related, I was interested to hear from a reader working security for a retail chain in the United States who recently found Bluetooth-enabled skimming devices placed over top of payment card terminals at several stores. Interestingly, these skimmers interfered with the terminal’s ability to read chip-based cards, forcing customers to swipe the stripe instead.

The payment card skimmer overlay transmitted stolen data via Bluetooth, physically blocked chip-based transactions, and included a PIN pad overlay.

Here’s a closer look at the electronic gear jammed into these overlay skimmers. It includes a hidden PIN pad overlay that captures, stores and transmits via Bluetooth data from cards swiped through the machine, as well as PINs entered on the device:

The hidden magnetic stripe reader is in the bottom left, just below the Bluetooth circuit board. A PIN pad overlay (center) intercepts any PINs entered by customers; the cell phone battery (right) powers all of the components.

My reader source shared these images on condition that the retailer in question not be named. But it’s worth pointing out these devices can be installed on virtually any customer-facing payment terminal in the blink of an eye.

Newer, chip-based payment cards are more costly and difficult for thieves to clone, but virtually all cards still store card data on a magnetic stripe on the back of the cards — mainly for reasons of backwards compatibility. This overlay skimmer included a physical component designed to block the payment terminal from reading the chip, forcing the customer to swipe the stripe instead of dip the chip.

The magnetic stripe reader (top right) worked with a component designed to block the use of chip-based payment cards.

What’s remarkable is that these badboys went undetected for several weeks, particularly given that customers would have been forced to swipe.

“In this COVID19 world, with counter and terminal wipedowns frequent it was surprising that nobody noticed the overlay placements for a number of weeks,” the source said.

I realize a great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk, and I pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

Want to learn more about overlay skimmers? Check out these other posts:

How to Spot Ingenico Self-Checkout Skimmers

Self-Checkout Skimmers Go Bluetooth

More on Bluetooth Ingenico Overlay Skimmers

Safeway Self-Checkout Skimmers Up Close

Skimmers Found at Wal-Mart: A Closer Look

Cryptogram: US Cyber Command Valentine’s Day Cryptography Puzzles

The US Cyber Command has released a series of ten Valentine’s Day “Cryptography Challenge Puzzles.”

Slashdot thread. Reddit thread. (And here’s the archived link, in case Cyber Command takes the page down.)

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2021

A Debian LTS logo

Like every month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In January, we put aside 2175 EUR to fund Debian projects. As part of this, Carles Pina i Estany started to work on better no-dsa support for the PTS, which recently resulted in two merge requests that will hopefully be deployed soon.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In January, 13 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 9.0h (out of 14h assigned and 7h from December), thus carrying over 12h to February.
  • Adrian Bunk did 14h (out of 26h assigned), thus carrying over 12h to February, which he then gave back.
  • Ben Hutchings did 0.25h (out of 7h assigned and 8.5h from December), thus carrying over 15.25h to February.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did not report back about their work, so we assume they did nothing (out of 26h assigned plus 9.5h from December), and is thus carrying over 35.5h to February.
  • Holger Levsen did 6.5h coordinating/managing the LTS team.
  • Markus Koschany did 36.75h (out of 26h assigned and 10.75h from December).
  • Ola Lundqvist did 2.5h (out of 10.5h assigned and 11.5h from December) and gave back 9.5 hours, thus carrying over 10h to February.
  • Roberto C. Sánchez did 6h (out of 26h assigned), thus carrying over 20h to February, which he then gave back.
  • Sylvain Beucler did 26h (out of 26h assigned).
  • Thorsten Alteholz did 26h (out of 26h assigned).
  • Utkarsh Gupta did 26h (out of 26h assigned).

Evolution of the situation

In January we released 28 DLAs and held our first LTS team meeting for 2021 on IRC, with the next public IRC meeting coming up at the end of March. During that meeting Utkarsh shared that after he rolled out the python-certbot update (on December 8th 2020) the maintainer told him: “I just checked with Let’s Encrypt, and the stats show that you just saved 142,500 people from having their certificates start failing next month. I didn’t know LTS was still that used!”

Finally, we would like to welcome sipgate GmbH as a new silver sponsor. Also remember that we are constantly looking for new contributors. Please contact Holger if you are interested.

The security tracker currently lists 43 packages with a known CVE and the dla-needed.txt file has 23 packages needing an update.

Thanks to our sponsors


Kevin Rudd: Foreign Affairs: Short of War

Originally Published in Foreign Affairs Magazine in the March Edition 2021

Officials in Washington and Beijing don’t agree on much these days, but there is one thing on which they see eye to eye: the contest between their two countries will enter a decisive phase in the 2020s. This will be the decade of living dangerously. No matter what strategies the two sides pursue or what events unfold, the tension between the United States and China will grow, and competition will intensify; it is inevitable. War, however, is not. It remains possible for the two countries to put in place guardrails that would prevent a catastrophe: a joint framework for what I call “managed strategic competition” would reduce the risk of competition escalating into open conflict.

The Chinese Communist Party is increasingly confident that by the decade’s end, China’s economy will finally surpass that of the United States as the world’s largest in terms of GDP at market exchange rates. Western elites may dismiss the significance of that milestone; the CCP’s Politburo does not. For China, size always matters. Taking the number one slot will turbocharge Beijing’s confidence, assertiveness, and leverage in its dealings with Washington, and it will make China’s central bank more likely to float the yuan, open its capital account, and challenge the U.S. dollar as the main global reserve currency. Meanwhile, China continues to advance on other fronts, as well. A new policy plan, announced last fall, aims to allow China to dominate in all new technology domains, including artificial intelligence, by 2035. And Beijing now intends to complete its military modernization program by 2027 (seven years ahead of the previous schedule), with the main goal of giving China a decisive edge in all conceivable scenarios for a conflict with the United States over Taiwan. A victory in such a conflict would allow President Xi Jinping to carry out a forced reunification with Taiwan before leaving power—an achievement that would put him on the same level within the CCP pantheon as Mao Zedong.

Washington must decide how to respond to Beijing’s assertive agenda—and quickly. If it were to opt for economic decoupling and open confrontation, every country in the world would be forced to take sides, and the risk of escalation would only grow. Among policymakers and experts, there is understandable skepticism as to whether Washington and Beijing can avoid such an outcome. Many doubt that U.S. and Chinese leaders can find their way to a framework to manage their diplomatic relations, military operations, and activities in cyberspace within agreed parameters that would maximize stability, avoid accidental escalation, and make room for both competitive and collaborative forces in the relationship. The two countries need to consider something akin to the procedures and mechanisms that the United States and the Soviet Union put in place to govern their relations after the Cuban missile crisis—but in this case, without first going through the near-death experience of a barely avoided war.

Managed strategic competition would involve establishing certain hard limits on each country’s security policies and conduct but would allow for full and open competition in the diplomatic, economic, and ideological realms. It would also make it possible for Washington and Beijing to cooperate in certain areas, through bilateral arrangements and also multilateral forums. Although such a framework would be difficult to construct, doing so is still possible—and the alternatives are likely to be catastrophic.

BEIJING’S LONG VIEW

In the United States, few have paid much attention to the domestic political and economic drivers of Chinese grand strategy, the content of that strategy, or the ways in which China has been operationalizing it in recent decades. The conversation in Washington has been all about what the United States ought to do, without much reflection on whether any given course of action might result in real changes to China’s strategic course. A prime example of this type of foreign policy myopia was an address that then Secretary of State Mike Pompeo delivered last July, in which he effectively called for the overthrow of the CCP. “We, the freedom-loving nations of the world, must induce China to change,” he declared, including by “empower[ing] the Chinese people.”

The only thing that could lead the Chinese people to rise up against the party-state, however, is their own frustration with the CCP’s poor performance on addressing unemployment, its radical mismanagement of a natural disaster (such as a pandemic), or its massive extension of what is already intense political repression. Outside encouragement of such discontent, especially from the United States, is unlikely to help and quite likely to hinder any change. Besides, U.S. allies would never support such an approach; regime change has not exactly been a winning strategy in recent decades. Finally, bombastic statements such as Pompeo’s are utterly counterproductive, because they strengthen Xi’s hand at home, allowing him to point to the threat of foreign subversion to justify ever-tighter domestic security measures, thereby making it easier for him to rally disgruntled CCP elites in solidarity against an external threat.

That last factor is particularly important for Xi, because one of his main goals is to remain in power until 2035, by which time he will be 82, the age at which Mao passed away. Xi’s determination to do so is reflected in the party’s abolition of term limits, its recent announcement of an economic plan that extends all the way to 2035, and the fact that Xi has not even hinted at who might succeed him even though only two years remain in his official term. Xi experienced some difficulty in the early part of 2020, owing to a slowing economy and the COVID-19 pandemic, whose Chinese origins put the CCP on the defensive. But by the year’s end, official Chinese media were hailing him as the party’s new “great navigator and helmsman,” who had prevailed in a heroic “people’s war” against the novel coronavirus. Indeed, Xi’s standing has been aided greatly by the shambolic management of the pandemic in the United States and a number of other Western countries, which the CCP has highlighted as evidence of the inherent superiority of the Chinese authoritarian system. And just in case any ambitious party officials harbor thoughts about an alternative candidate to lead the party after Xi’s term is supposed to end in 2022, Xi recently launched a major purge—a “rectification campaign,” as the CCP calls it—of members deemed insufficiently loyal.

Meanwhile, Xi has carried out a massive crackdown on China’s Uighur minority in the region of Xinjiang; launched campaigns of repression in Hong Kong, Inner Mongolia, and Tibet; and stifled dissent among intellectuals, lawyers, artists, and religious organizations across China. Xi has come to believe that China should no longer fear any sanctions that the United States might impose on his country, or on individual Chinese officials, in response to violations of human rights. In his view, China’s economy is now strong enough to weather such sanctions, and the party can protect officials from any fallout, as well. Furthermore, unilateral U.S. sanctions are unlikely to be adopted by other countries, for fear of Chinese retaliation. Nonetheless, the CCP remains sensitive to the damage that can be done to China’s global brand by continuing revelations about its treatment of minorities. That is why Beijing has become more active in international forums, including the UN Human Rights Council, where it has rallied support for its campaign to push back against long-established universal norms on human rights, while also regularly attacking the United States for its own alleged abuses of those very norms.

Xi is also intent on achieving Chinese self-sufficiency to head off any effort by Washington to decouple the United States’ economy from that of China or to use U.S. control of the global financial system to block China’s rise. This push lies at the heart of what Xi describes as China’s “dual circulation economy”: its shift away from export dependency and toward domestic consumption as the long-term driver of economic growth and its plan to rely on the gravitational pull of the world’s biggest consumer market to attract foreign investors and suppliers to China on Beijing’s terms. Xi also recently announced a new strategy for technology R & D and manufacturing to reduce China’s dependence on imports of certain core technologies, such as semiconductors.

The trouble with this approach is that it prioritizes party control and state-owned enterprises over China’s hard-working, innovative, and entrepreneurial private sector, which has been primarily responsible for the country’s remarkable economic success over the last two decades. In order to deal with a perceived external economic threat from Washington and an internal political threat from private entrepreneurs whose long-term influence threatens the power of the CCP, Xi faces a dilemma familiar to all authoritarian regimes: how to tighten central political control without extinguishing business confidence and dynamism.

Xi faces a similar dilemma when it comes to what is perhaps his paramount goal: securing control over Taiwan. Xi appears to have concluded that China and Taiwan are now further away from peaceful reunification than at any time in the past 70 years. This is probably correct. But China often ignores its own role in widening the gulf. Many of those who believed that China would gradually liberalize its political system as it opened up its economic system and became more connected with the rest of the world also hoped that that process would eventually allow Taiwan to become more comfortable with some form of reunification. Instead, China has become more authoritarian under Xi, and the promise of reunification under a “one country, two systems” formula has evaporated as the Taiwanese look to Hong Kong, where China has imposed a harsh new national security law, arrested opposition politicians, and restricted media freedom.

With peaceful reunification off the table, Xi’s strategy now is clear: to vastly increase the level of military power that China can exert in the Taiwan Strait, to the extent that the United States would become unwilling to fight a battle that Washington itself judged it would probably lose. Without U.S. backing, Xi believes, Taiwan would either capitulate or fight on its own and lose. This approach, however, radically underestimates three factors: the difficulty of occupying an island that is the size of the Netherlands, has the terrain of Norway, and boasts a well-armed population of 25 million; the irreparable damage to China’s international political legitimacy that would arise from such a brutal use of military force; and the deep unpredictability of U.S. domestic politics, which would determine the nature of the U.S. response if and when such a crisis arose. Beijing, in projecting its own deep strategic realism onto Washington, has concluded that the United States would never fight a war it could not win, because to do so would be terminal for the future of American power, prestige, and global standing. What China does not include in this calculus is the reverse possibility: that the failure to fight for a fellow democracy that the United States has supported for the entire postwar period would also be catastrophic for Washington, particularly in terms of the perception of U.S. allies in Asia, who might conclude that the American security guarantees they have long relied on are worthless—and then seek their own arrangements with China.

As for China’s maritime and territorial claims in the East China and South China Seas, Xi will not concede an inch. Beijing will continue to sustain pressure on its Southeast Asian neighbors in the South China Sea, actively contesting freedom-of-navigation operations, probing for any weakening of individual or collective resolve—but stopping short of a provocation that might trigger a direct military confrontation with Washington, because at this stage, China is not fully confident it would win. In the meantime, Beijing will seek to cast itself in as reasonable a light as possible in its ongoing negotiations with Southeast Asian claimant states on the joint use of energy resources and fisheries in the South China Sea. Here, as elsewhere, China will fully deploy its economic leverage in the hope of securing the region’s neutrality in the event of a military incident or crisis involving the United States or its allies. In the East China Sea, China will continue to increase its military pressure on Japan around the disputed Diaoyu/Senkaku Islands, but as in Southeast Asia, here too Beijing is unlikely to risk an armed conflict, particularly given the unequivocal nature of the U.S. security guarantee to Japan. Any risk, however small, of China losing such a conflict would be politically unsustainable in Beijing and have massive domestic political consequences for Xi.

AMERICA THROUGH XI’S EYES

Underneath all these strategic choices lies Xi’s belief, reflected in official Chinese pronouncements and CCP literature, that the United States is experiencing a steady, irreversible structural decline. This belief is now grounded in a considerable body of evidence. A divided U.S. government failed to craft a national strategy for long-term investment in infrastructure, education, and basic scientific and technological research. The Trump administration damaged U.S. alliances, abandoned trade liberalization, withdrew the United States from its leadership of the postwar international order, and crippled U.S. diplomatic capacity. The Republican Party has been hijacked by the far right, and the American political class and electorate are so deeply polarized that it will prove difficult for any president to win support for a long-term bipartisan strategy on China. Washington, Xi believes, is highly unlikely to recover its credibility and confidence as a regional and global leader. And he is betting that as the next decade progresses, other world leaders will come to share this view and begin to adjust their strategic postures accordingly, gradually shifting from balancing with Washington against Beijing, to hedging between the two powers, to bandwagoning with China.

But China worries about the possibility of Washington lashing out at Beijing in the years before U.S. power finally dissipates. Xi’s concern is not just a potential military conflict but also any rapid and radical economic decoupling. Moreover, the CCP’s diplomatic establishment fears that the Biden administration, realizing that the United States will soon be unable to match Chinese power on its own, might form an effective coalition of countries across the democratic capitalist world with the express aim of counterbalancing China collectively. In particular, CCP leaders fear that President Joe Biden’s proposal to hold a summit of the world’s major democracies represents a first step on that path, which is why China acted rapidly to secure new trade and investment agreements in Asia and Europe before the new administration came into office.

Mindful of this combination of near-term risks and China’s long-term strengths, Xi’s general diplomatic strategy toward the Biden administration will be to de-escalate immediate tensions, stabilize the bilateral relationship as early as possible, and do everything possible to prevent security crises. To this end, Beijing will look to fully reopen the lines of high-level military communication with Washington that were largely cut off during the Trump administration. Xi might seek to convene a regular, high-level political dialogue, as well, although Washington will not be interested in reestablishing the U.S.-China Strategic and Economic Dialogue, which served as the main channel between the two countries until its collapse amid the trade war of 2018–19. Finally, Beijing may moderate its military activity in the immediate period ahead in areas where the People’s Liberation Army rubs up directly against U.S. forces, particularly in the South China Sea and around Taiwan—assuming that the Biden administration discontinues the high-level political visits to Taipei that became a defining feature of the final year of the Trump administration. For Beijing, however, these are changes in tactics, not in strategy.

As Xi tries to ratchet down tensions in the near term, he will have to decide whether to continue pursuing his hard-line strategy against Australia, Canada, and India, which are friends or allies of the United States. This has involved a combination of a deep diplomatic freeze and economic coercion—and, in the case of India, direct military confrontation. Xi will wait for any clear signal from Washington that part of the price for stabilizing the U.S.-Chinese relationship would be an end to such coercive measures against U.S. partners. If no such signal is forthcoming—there was none under President Donald Trump—then Beijing will resume business as usual.

Meanwhile, Xi will seek to work with Biden on climate change. Xi understands this is in China’s interests because of the country’s increasing vulnerability to extreme weather events. He also realizes that Biden has an opportunity to gain international prestige if Beijing cooperates with Washington on climate change, given the weight of Biden’s own climate commitments, and he knows that Biden will want to be able to demonstrate that his engagement with Beijing led to reductions in Chinese carbon emissions. As China sees it, these factors will deliver Xi some leverage in his overall dealings with Biden. And Xi hopes that greater collaboration on climate will help stabilize the U.S.-Chinese relationship more generally.

Adjustments in Chinese policy along these lines, however, are still likely to be tactical rather than strategic. Indeed, there has been remarkable continuity in Chinese strategy toward the United States since Xi came to power in 2013, and Beijing has been surprised by the relatively limited degree to which Washington has pushed back, at least until recently. Xi, driven by a sense of Marxist-Leninist determinism, also believes that history is on his side. As Mao was before him, Xi has become a formidable strategic competitor for the United States.

UNDER NEW MANAGEMENT

On balance, the Chinese leadership would have preferred to have seen the reelection of Trump in last year’s U.S. presidential election. That is not to say that Xi saw strategic value in every element of Trump’s foreign policy; he didn’t. The CCP found the Trump administration’s trade war humiliating, its moves toward decoupling worrying, its criticism of China’s human rights record insulting, and its formal declaration of China as a “strategic competitor” sobering. But most in the CCP’s foreign policy establishment view the recent shift in U.S. sentiment toward China as structural—an inevitable byproduct of the changing balance of power between the two countries. In fact, a number have been quietly relieved that open strategic competition has replaced the pretense of bilateral cooperation. With Washington having removed the mask, this thinking goes, China could now move more rapidly—and, in some cases, openly—toward realizing its strategic goals, while also claiming to be the aggrieved party in the face of U.S. belligerence.

But by far the greatest gift that Trump delivered to Beijing was the sheer havoc his presidency unleashed within the United States and between Washington and its allies. China was able to exploit the many cracks that developed between liberal democracies as they tried to navigate Trump’s protectionism, climate change denialism, nationalism, and contempt for all forms of multilateralism. During the Trump years, Beijing benefited not because of what it offered the world but because of what Washington ceased to offer. The result was that China achieved victories such as the massive Asia-Pacific free-trade deal known as the Regional Comprehensive Economic Partnership and the EU-China Comprehensive Agreement on Investment, which will enmesh the Chinese and European economies to a far greater degree than Washington would like.

China is wary of the Biden administration’s ability to help the United States recover from those self-inflicted wounds. Beijing has seen Washington bounce back from political, economic, and security disasters before. Nonetheless, the CCP remains confident that the inherently divisive nature of U.S. politics will make it impossible for the new administration to solidify support for any coherent China strategy it might devise.

Biden intends to prove Beijing wrong in its assessment that the United States is now in irreversible decline. He will seek to use his extensive experience on Capitol Hill to forge a domestic economic strategy to rebuild the foundations of U.S. power in the post-pandemic world. He is also likely to continue to strengthen the capabilities of the U.S. military and to do what it takes to sustain American global technological leadership. He has assembled a team of economic, foreign policy, and national security advisers who are experienced professionals and well versed in China—in stark contrast to their predecessors, who, with a couple of midranking exceptions, had little grasp of China and even less grasp of how to make Washington work. Biden’s advisers also understand that in order to restore U.S. power abroad, they must rebuild the U.S. economy at home in ways that will reduce the country’s staggering inequality and increase economic opportunities for all Americans. Doing so will help Biden maintain the political leverage he’ll need to craft a durable China strategy with bipartisan support—no mean feat when opportunistic opponents such as Pompeo will have ample incentive to disparage any plan he puts forward as little more than appeasement.

To lend his strategy credibility, Biden will have to make sure the U.S. military stays several steps ahead of China’s increasingly sophisticated array of military capabilities. This task will be made more difficult by intense budgetary constraints, as well as pressure from some factions within the Democratic Party to reduce military spending in order to boost social welfare programs. For Biden’s strategy to be seen as credible in Beijing, his administration will need to hold the line on the aggregate defense budget and cover increased expenses in the Indo-Pacific region by redirecting military resources away from less pressing theaters, such as Europe.

As China becomes richer and stronger, the United States’ largest and closest allies will become ever more crucial to Washington. For the first time in many decades, the United States will soon require the combined heft of its allies to maintain an overall balance of power against an adversary. China will keep trying to peel countries away from the United States—such as Australia, Canada, France, Germany, Japan, South Korea, and the United Kingdom—using a combination of economic carrots and sticks. To prevent China from succeeding, the Biden administration needs to commit itself to fully opening the U.S. economy to its major strategic partners. The United States prides itself on having one of the most open economies in the world. But even before Trump’s pivot to protectionism, that was not the case. Washington has long burdened even its closest allies with formidable tariff and nontariff barriers to trade, investment, capital, technology, and talent. If the United States wishes to remain the center of what until recently was called “the free world,” then it must create a seamless economy across the national boundaries of its major Asian, European, and North American partners and allies. To do so, Biden must overcome the protectionist impulses that Trump exploited and build support for new trade agreements anchored in open markets. To allay the fears of a skeptical electorate, he will need to show Americans that such agreements will ultimately lead to lower prices, better wages, more opportunities for U.S. industry, and stronger environmental protections and assure them that the gains won from trade liberalization can help pay for major domestic improvements in education, childcare, and health care.

The Biden administration will also strive to restore the United States’ leadership in multilateral institutions such as the UN, the World Bank, the International Monetary Fund, and the World Trade Organization. Most of the world will welcome this after four years of watching the Trump administration sabotage much of the machinery of the postwar international order. But the damage will not be repaired overnight. The most pressing priorities are fixing the World Trade Organization’s broken dispute-resolution process, rejoining the Paris agreement on climate change, increasing the capitalization of both the World Bank and the International Monetary Fund (to provide credible alternatives to China’s Asian Infrastructure Investment Bank and its Belt and Road Initiative), and restoring U.S. funding for critical UN agencies. Such institutions have not only been instruments of U.S. soft power since Washington helped create them after the last world war; their operations also materially affect American hard power in areas such as nuclear proliferation and arms control. Unless Washington steps up to the plate, the institutions of the international system will increasingly become Chinese satrapies, driven by Chinese finance, influence, and personnel.

MANAGED STRATEGIC COMPETITION

The deeply conflicting nature of U.S. and Chinese strategic objectives and the profoundly competitive nature of the relationship may make conflict, and even war, seem inevitable—even if neither country wants that outcome. China will seek to achieve global economic dominance and regional military superiority over the United States without provoking direct conflict with Washington and its allies. Once it achieves superiority, China will then incrementally change its behavior toward other states, especially when their policies conflict with China’s ever-changing definition of its core national interests. On top of this, China has already sought to gradually make the multilateral system more obliging of its national interests and values.

But a gradual, peaceful transition to an international order that accommodates Chinese leadership now seems far less likely to occur than it did just a few years ago. For all the eccentricities and flaws of the Trump administration, its decision to declare China a strategic competitor, formally end the doctrine of strategic engagement, and launch a trade war with Beijing succeeded in making clear that Washington was willing to put up a significant fight. And the Biden administration’s plan to rebuild the fundamentals of national U.S. power at home, rebuild U.S. alliances abroad, and reject a simplistic return to earlier forms of strategic engagement with China signals that the contest will continue, albeit tempered by cooperation in a number of defined areas.

The question for both Washington and Beijing, then, is whether they can conduct this high level of strategic competition within agreed-on parameters that would reduce the risk of a crisis, conflict, and war. In theory, this is possible; in practice, however, the near-complete erosion of trust between the two has radically increased the degree of difficulty. Indeed, many in the U.S. national security community believe that the CCP has never had any compunction about lying or hiding its true intentions in order to deceive its adversaries. In this view, Chinese diplomacy aims to tie opponents’ hands and buy time for Beijing’s military, security, and intelligence machinery to achieve superiority and establish new facts on the ground. To win broad support from U.S. foreign policy elites, therefore, any concept of managed strategic competition will need to include a stipulation by both parties to base any new rules of the road on a reciprocal practice of “trust but verify.”

The idea of managed strategic competition is anchored in a deeply realist view of the global order. It accepts that states will continue to seek security by building a balance of power in their favor, while recognizing that in doing so they are likely to create security dilemmas for other states whose fundamental interests may be disadvantaged by their actions. The trick in this case is to reduce the risk to both sides as the competition between them unfolds by jointly crafting a limited number of rules of the road that will help prevent war. The rules will enable each side to compete vigorously across all policy and regional domains. But if either side breaches the rules, then all bets are off, and it’s back to all the hazardous uncertainties of the law of the jungle.

The first step to building such a framework would be to identify a few immediate steps that each side must take in order for a substantive dialogue to proceed and a limited number of hard limits that both sides (and U.S. allies) must respect. Both sides must abstain, for example, from cyberattacks targeting critical infrastructure. Washington must return to strictly adhering to the “one China” policy, especially by ending the Trump administration’s provocative and unnecessary high-level visits to Taipei. For its part, Beijing must dial back its recent pattern of provocative military exercises, deployments, and maneuvers in the Taiwan Strait. In the South China Sea, Beijing must not reclaim or militarize any more islands and must commit to respecting freedom of navigation and aircraft movement without challenge; for its part, the United States and its allies could then (and only then) reduce the number of operations they carry out in the sea. Similarly, China and Japan could cut back their military deployments in the East China Sea by mutual agreement over time.

If both sides could agree on those stipulations, each would have to accept that the other will still try to maximize its advantages while stopping short of breaching the limits. Washington and Beijing would continue to compete for strategic and economic influence across the various regions of the world. They would keep seeking reciprocal access to each other’s markets and would still take retaliatory measures when such access was denied. They would still compete in foreign investment markets, technology markets, capital markets, and currency markets. And they would likely carry out a global contest for hearts and minds, with Washington stressing the importance of democracy, open economies, and human rights and Beijing highlighting its approach to authoritarian capitalism and what it calls “the China development model.”

Even amid escalating competition, however, there will be some room for cooperation in a number of critical areas. This occurred even between the United States and the Soviet Union at the height of the Cold War. It should certainly be possible now between the United States and China, when the stakes are not nearly as high. Aside from collaborating on climate change, the two countries could conduct bilateral nuclear arms control negotiations, including on mutual ratification of the Comprehensive Nuclear Test Ban Treaty, and work toward an agreement on acceptable military applications of artificial intelligence. They could cooperate on North Korean nuclear disarmament and on preventing Iran from acquiring nuclear weapons. They could undertake a series of confidence-building measures across the Indo-Pacific region, such as coordinated disaster-response and humanitarian missions. They could work together to improve global financial stability, especially by agreeing to reschedule the debts of developing countries hit hard by the pandemic. And they could jointly build a better system for distributing COVID-19 vaccines in the developing world.

That list is far from exhaustive. But the strategic rationale for all the items is the same: it is better for both countries to operate within a joint framework of managed competition than to have no rules at all. The framework would need to be negotiated between a designated and trusted high-level representative of Biden and a Chinese counterpart close to Xi; only a direct, high-level channel of that sort could lead to confidential understandings on the hard limits to be respected by both sides. These two people would also become the points of contact when violations occurred, as they are bound to from time to time, and the ones to police the consequences of any such violations. Over time, a minimum level of strategic trust might emerge. And maybe both sides would also discover that the benefits of continued collaboration on common planetary challenges, such as climate change, might begin to affect the other, more competitive and even conflictual areas of the relationship.

There will be many who will criticize this approach as naive. Their responsibility, however, is to come up with something better. Both the United States and China are currently in search of a formula to manage their relationship for the dangerous decade ahead. The hard truth is that no relationship can ever be managed unless there is a basic agreement between the parties on the terms of that management.

GAME ON

What would be the measures of success should the United States and China agree on such a joint strategic framework? One sign of success would be if by 2030 they have avoided a military crisis or conflict across the Taiwan Strait or a debilitating cyberattack. A convention banning various forms of robotic warfare would be a clear victory, as would the United States and China acting immediately together, and with the World Health Organization, to combat the next pandemic. Perhaps the most important sign of success, however, would be a situation in which both countries competed in an open and vigorous campaign for global support for the ideas, values, and problem-solving approaches that their respective systems offer—with the outcome still to be determined.

Success, of course, has a thousand fathers, while failure is an orphan. The clearest example of a failed approach to managed strategic competition, however, would be a crisis over Taiwan. If Xi were to calculate that he could call Washington’s bluff by unilaterally breaking out of whatever agreement had been privately reached, the result would be a world of pain. In one fell swoop, such a crisis would rewrite the future of the global order.

A few days before Biden’s inauguration, Chen Yixin, the secretary-general of the CCP’s Central Political and Legal Affairs Commission, stated that “the rise of the East and the decline of the West has become [a global] trend and changes of the international landscape are in our favor.” Chen is a close confidant of Xi and a central figure in China’s normally cautious national security apparatus, and so the hubris in his statement is notable. In reality, there is a long way to go in this race. China has multiple domestic vulnerabilities that are rarely noted in the media. The United States, on the other hand, always has its weaknesses on full public display—but has repeatedly demonstrated its capacity for reinvention and restoration. Managed strategic competition would highlight the strengths and test the weaknesses of both great powers—and may the best system win.

The post Foreign Affairs: Short of War appeared first on Kevin Rudd.

Cory DoctorowPrivacy Without Monopoly: Data Protection and Interoperability (Part 1)

This week on my podcast, Part One of “Privacy Without Monopoly: Data Protection and Interoperability,” a major new EFF paper by my colleague Bennett Cyphers and me.

It’s a paper that tries to resolve the tension between demanding that tech platforms gather, retain and mine less of our data, and the demand that platforms allow alternatives (nonprofits, co-ops, tinkerers, startups) to connect with their services.

MP3

Cryptogram On Vulnerability-Adjacent Vulnerabilities

At the virtual Enigma Conference, Google Project Zero’s Maddie Stone gave a talk about zero-day exploits in the wild. In it, she talked about how often vendors fix vulnerabilities only to have the attackers tweak their exploits to work again. From an MIT Technology Review article:

Soon after they were spotted, the researchers saw one exploit being used in the wild. Microsoft issued a patch and fixed the flaw, sort of. In September 2019, another similar vulnerability was found being exploited by the same hacking group.

More discoveries in November 2019, January 2020, and April 2020 added up to at least five zero-day vulnerabilities being exploited from the same bug class in short order. Microsoft issued multiple security updates: some failed to actually fix the vulnerability being targeted, while others required only slight changes that required just a line or two to change in the hacker’s code to make the exploit work again.

[…]

“What we saw cuts across the industry: Incomplete patches are making it easier for attackers to exploit users with zero-days,” Stone said on Tuesday at the security conference Enigma. “We’re not requiring attackers to come up with all new bug classes, develop brand new exploitation, look at code that has never been researched before. We’re allowing the reuse of lots of different vulnerabilities that we previously knew about.”

[…]

Why aren’t they being fixed? Most of the security teams working at software companies have limited time and resources, she suggests — and if their priorities and incentives are flawed, they only check that they’ve fixed the very specific vulnerability in front of them instead of addressing the bigger problems at the root of many vulnerabilities.

Another article on the talk.

This is an important insight. It’s not enough to patch existing vulnerabilities. We need to make it harder for attackers to find new vulnerabilities to exploit. Closing entire families of vulnerabilities, rather than individual vulnerabilities one at a time, is a good way to do that.
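
To make that distinction concrete, here is a minimal illustrative sketch, my own example rather than anything from the talk, contrasting a patch that blocks one specific exploit with a fix that closes the underlying bug class, using path traversal as a stand-in. All names and paths are hypothetical.

    import os

    BASE = "/srv/files"

    def open_path_spot_patched(name):
        # "Incomplete patch": rejects the exact string the last exploit
        # used, but an absolute path slips straight past the check,
        # because os.path.join() discards BASE when its second argument
        # is absolute; symlinks inside BASE also remain a problem.
        if "../" in name:
            raise ValueError("path traversal attempt")
        return os.path.join(BASE, name)

    def open_path_class_fixed(name):
        # Root-cause fix: canonicalize first, then verify the result is
        # still contained in BASE. One check closes the whole bug family.
        path = os.path.realpath(os.path.join(BASE, name))
        if os.path.commonpath([path, BASE]) != BASE:
            raise ValueError("path escapes base directory")
        return path

    print(open_path_spot_patched("/etc/passwd"))    # "patched", yet returns /etc/passwd
    print(open_path_class_fixed("docs/notes.txt"))  # /srv/files/docs/notes.txt

The first function will be "fixed" again and again as new bypasses surface; the second addresses the property that made all of them possible.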

Chaotic IdealismWe Give Words Their Power

Recently, a friend of mine posted a meme that recommended we should use the term “enslaved people” rather than “slaves”, because being a slave is a circumstance, rather than an identity. I did not think it was particularly useful to do so; it misses the point. The important thing, when teaching about slavery, is to teach from the perspective of the slaves themselves, so that the student never forgets that the slaves are fellow humans rather than objects, and that they have been made property despite their intrinsic human equality with their legal masters.

I have often been confronted with the need to change my language this way. It happened when I was told that I could no longer say “all lives matter”, because it now meant that the only lives that did matter were the white, neurotypical ones. It happened when I was told that, in mourning the genocide of the Roma during the Holocaust, I could not call them Gypsies, but must call them Roma instead, because that is what they call themselves.

Most of the time when this happens, I change my language, because I recognize that neurotypicals burden words with all sorts of things not in the words’ actual definition, and then when I say them, they hear all those extra meanings too. If I want to communicate, I have to keep up. But it bothers me a great deal, for several reasons.

First, it seems that people substitute a change in language for a change in behavior. One simply cannot say the n-word without being immediately branded a racist (for an experiment, imagine what you might think of me if I had not censored it). With the extra meanings loaded onto that word, that is exactly what it means now: “I am a racist.”

But although this word has become a taboo, many other things that are more hurtful to black people than a word will ever be are not taboos. White people say they want to live in a good neighborhood; they mean they want to live outside a poor black neighborhood. They send their child to a “good school”, and leave the underfunded, crowded public schools for the black children. White people casually hire other white people for jobs, choose them as friends, date them, and generally perpetuate, informally, segregation. None of this is taboo, the way the n-word is. People who would never say the n-word will happily act in ways that say, “I am a racist”.

Because of this language taboo, saying you are a racist has become more shunned than actually acting like a racist.

Second, language is being used as a password into liberal, socially-conscious circles. If one does not say the right words, one is assumed not to care about human rights. The focus has changed. Instead of policing one another’s actions, people police one another’s language. A person who has not lifted a finger to help empower the minority groups in their own community can, with the full consensus of their social circle, brand another person as the enemy–even if the other person has been spending a great deal of time and effort working toward equality. Saying the right words has become a substitute for doing the right thing.

I’ve seen the same phenomenon in a very different milieu–that of fundamentalist Christianity. One must say the right words, pray the right prayers, or one is an outsider. Words are given near-magical power.

In fundamentalist circles, to use any kind of “bad language” is to be immediately castigated (and I don’t mean using God or Jesus as swear words, which would be understandable as it shows a lack of respect. Rather, it is the simple scatological and sexual language that is considered most sinful). But it is completely permitted to insult, belittle, or bully someone without that sort of language, especially if one can put it in polite terms. I have heard “God bless you” being used as a patronizing insult–multiple times.

There are superstitions surrounding language. People use “In Jesus’s name,” to close out a prayer, with the belief that if one does not pray in Jesus’s name, God will not hear. They talk about becoming a Christian by saying the right words–that one repents of sin, asks for forgiveness, and asks Christ into one’s heart–and believe that one cannot be a Christian unless one has said those words, whether or not one lives according to them.

Fundamentalists also identify one another, and exclude outsiders, by the use of language. There are so many words that are loaded with a ton of meaning outside their literal definition that communicating with a fundamentalist, in their own language, is like crossing a minefield. Terms like “God’s will”, “persecution”, “sinner”, or “end times” come so loaded with meaning that anyone who hasn’t spent years in that culture will immediately sound like an outsider when they open their mouths. They too have fallen into the trap of policing one another’s language rather than their behavior.

It is so very similar to what I see in liberal circles, and that troubles me. Groups can lose sight of their purpose in this endless quest to affirm and reinforce their group identity, because they give language so much power.

For me, as an autistic person, language is not my first language. Language is only what I translate my thoughts into when I want to communicate them to others. Yet neurotypicals seem convinced that words are thoughts and language is reality. Some even believe they can affect reality by saying the right words: Every tradition of magic, whether cultural or fictional, has to do with saying the right words, making the right gestures, and/or creating the right symbols. Does that sound familiar? It should; the ways of magic are also the ways of language, whether written, gestured, or spoken.

Neurotypicals give language power, and because culture is as real as any other idea, language is indeed granted the power they give it. But this is not intrinsic power. Language has only the power we give it, and we are giving it too much power.

As a language-user, I have no choice but to tiptoe across the minefield of connotation. If I say the wrong word, people hear things I am not saying or believe things of me that are not true. I have to spend a lot of time and effort on updating my language rather than actually doing useful things to mitigate or overturn the social systems that created the desire to linguistically distance ourselves from the atrocities associated with them. But it bothers me, because the more we focus on linguistic distance, the more we seem to lose focus on the need to actually change the way the world works.

If only we did not give language so much power, we would be much better off.

LongNowEvan “Skytree” Snyder on Atomic Priests and Crystal Synthesizers

Evan “Skytree” Snyder in his studio. Source: Facebook.

Evan “Skytree” Snyder straddles two worlds: by day, he is a robotics engineer. By night, he produces electronic music that drops listeners into lush atmospheres evocative of both the ancient world and distant future.

We had a chance to speak with Snyder about his 02020 album Infraplanetary and his recent experiments with piezoelectric musical synthesis. Both projects ratchet up themes of deep time, inviting listeners to meditate on singing rocks and post-historic correspondences.

Our discussion has been edited for clarity and length.


Let’s talk about the lyrics to “Atomic Priest” off Infraplanetary.

An excerpt:

“This is for the humans living ten thousand years from now
With radioactive capsules, thousands of feet underground
Grabbin’ the mic to warn you of these hazardous sites
For those who lack in the sight in the black of the night
The least good that we could do is form an Atomic Priesthood
To keep the future species from going where no one should
We’ve buried the mistakes of past nuclear waste
Hidden underground for future races to face
It’s our task to leave signs for civilization to trace
But who’s to say what language these generations will embrace?
Basic symbols up for vast interpretation
Disasters resulting from grave mistranslation
This is not a place of honor and glory
This is a deep geological nuclear repository
Reaching through millennia to give some education
And preserve the evolution of beings and vegetation.”

These are hip-hop artist Jackson Whalan’s words, but you prompted him to write a fairly specific piece about communicating to the distant future. What motivated you to make this, and how does it fit into the way you consider and communicate deep time concerns in the rest of your work?

Skytree: I really appreciate the opportunity to discuss this with you. “Atomic Priest” is definitely inspired by my lifelong fascination with deep time — specifically its effect on design principles, engineering challenges, and bridging cultures. I’m intrigued by things that endure, how they endure, and why. The simple practice of considering the long-term is uniquely inspiring, and compared to the relative chaos of the present I find some refuge and meditative calm when reflecting on the decamillennial scale.

The long view also shows up in my process as a music producer. Building compositions is a months-long solo endeavor within my audio workstation. It’s an obsessive, detailed, and laborious process, and the time spent reflecting on deeper timescales while composing comes through in the product. I’m mindful of making the end result feel timeless, or out of sync with everyday chronology.

Collaboration makes the work less lonely. The lyrics to “Atomic Priest” were indeed written by Jackson. When I sent him the instrumental to record over I already had a title and theme, and included an article describing the unique challenges of the Department of Energy’s Waste Isolation Pilot Plant (WIPP) — an attempt to contain nuclear materials that remain lethal for over 300,000 years. When I first read the article in 02006 I was captivated by the project’s concept sketches of how one might warn unknown future civilizations about nuclear contamination. I then researched the Human Interference Task Force and the work of linguist Thomas Sebeok, which I also provided to Jackson for reference. I was thrilled with the result. Combining something as contemporary and human as hip-hop with a subject so immense in scale feels very satisfying to me.

Skytree’s song, “Atomic Priest,” was inspired in part by the Waste Isolation Pilot Plant in Carlsbad, New Mexico. Source: Center for Land Use Interpretation.

You’re touching on something that goes deeper than the future-chic aesthetic of many other electronic artists.

Futurism has always been an inspiration, but on this album I tried to go a bit deeper with it than just sounds or spaces. I often stepped back and reflected on what it might sound like to someone in the deep future, in the unlikely chance they’d find it. What sort of “message in a bottle” might surprise them, excite them, deviate from what they’d expect to find, or feel like a knowing hand-shake from the past?

This potential for a two-way dialogue between entities separated by eons is one of the most tantalizing aspects of thinking in deep time scales. The Voyager craft are of course excellent literal vehicles for this potential, designed in the hope of one day being found, perhaps light years from our star system and far, far in the future, by intelligences we may never meet or learn of but who realize we intended them to find this message. That is perhaps about as close to a real time machine as we may ever get. I’d like to think this album is the best result I’ve achieved to that end.

A still from Skytree’s music video for “Out There.” Source: Instagram / Skytreemusic.

I want to talk with you more about your current project, linking up piezoelectric sensors to crystals to send CV signals to modular synthesizers. As someone who actually ate Moon dust as a kid, can you please wax philosophical about making music from stones, and what it is about this that stimulates your artistic or scientific imagination?

My grandfather was the chief of security at NASA during Apollo, and served there for 25 years. One of his most recognizable accomplishments was that he was personally responsible for safely transporting Moon specimens for public viewing and analysis from the NASA archives to the Smithsonian, where many of them are still on display today. He accompanied them on the last leg of their journey to the public’s eye. As a kid, I remember visiting the Smithsonian with my family and marveling at how he was a small but notable part of that incredible accomplishment.

Shortly thereafter, I took that a step too far and snuck a small taste of his personal sample of Moon dust while he was mowing his lawn. I was 8 years old. I remember carefully observing how long it took him to mow the lawn, when it obstructed his view into his house, where he kept his display case keys in his home office, and noting where the small step stool was that I needed to reach the top shelf. It wasn’t so much out of mischief, though outfoxing NASA’s former chief of security, as a child, on the very artifacts he was duty-bound to protect…feels pretty funny now. Rather, it was more out of a genuine need to try it. Something in me just had to see if I could eat part of the Moon. I did. It tasted chalky, powdery — about what you’d expect. If he were still alive today I wouldn’t dare share this story. He was a hardass and not someone to cross. (Rest in peace, Grandpa.)

So, my love of rocks goes pretty deep. For years, my artist bio has read, “sounds generated by minerals, plants, animals and artifacts.” This used to be tongue-in-cheek, avoiding genres, but I am now quite literally making sounds generated by minerals and plants, plus my already extensive use of animals and artifacts.

This series of experiments scratches a very particular itch. My favorite areas of any museum have always been geology and mineralogy. I remember staring into displays filled with crystals for so long my parents would have to pull me away — especially if they were interactive, illustrating principles like stratification, fossilization, or piezoelectricity. Ever since learning about the use of piezoelectric resonators and components in everyday electronics like radios and computers, I couldn’t help but wonder…could this same effect be demonstrated on a raw quartz point? It turns out it’s not even that difficult.

Just weeks ago, I found a successful method for turning raw quartz pieces in my collection into surprisingly effective piezoelectric pickups. Though I’d used standard factory-made piezos for years, making vibrations onto the surface of a crystal and hearing them come ringing through my headphones was an absolutely magical moment. All that’s needed is some copper tape, copper wire, the right leads, some amplification and signal processing to remove noise. Two electrodes are taped on opposite faces of the crystal point — one out of three sets of faces tends to work best and provides the greatest voltage output. Some crystals work better than others.

Skytree uses a transducer to vibrate a crystal and records the output via piezoelectric signal. Source: Instagram / Skytreemusic.

At first I went for the tried-and-true approach of simply whacking on these specimens with a mallet, but I’ve gotten more refined with it. Using a function generator (output from a fancy oscilloscope) and a transducer (effectively a speaker without the cone), I’ve been able to impart specific frequencies onto quartz specimens, find resonant points, and record the resulting audio. Moreover, I’ve been able to use this piezoelectric signal as control voltage for my modular synth. I can’t underscore enough how much excitement and motivation this brings me and how happy I am to share this. There’s something incredible about using relatively unaltered geological specimens, perhaps hundreds of thousands of years old, in a modular synthesizer in 02021. It feels like a very raw and timeless dialogue between my creative self and immense forces of nature.

Still from a video of Skytree explaining his modular “geosonification” rig. Source: Instagram / Skytreemusic.

I’m already imagining the crystal keyboard in the dash of Carl Sagan’s Ship of the Imagination, only it’s a Moog.

I’ve also been experimenting with using conductive specimens like meteorite and native copper as crude theremin antennae, to send control voltage to synth modules. This is far easier to set up than the piezoelectric experiments, but nonetheless highlights important and useful physical principles of these materials. My next experiments will involve pyrite, which shifts from an insulator to semiconductor to conductor depending on the strength of the magnetic field it’s exposed to. An electromagnet is sitting on my desk and ready to aid my continued explorations of literal rock music. For the time being, I’m calling this process “geosonification” as a nod to using plants in synthesis under the guise of “biosonification.”

It gives me a way to integrate my loves of music and science and make mutually reinforcing discoveries. With music, I often discover more about myself. With science experiments, I discover more about the world. Combining the two, I get both. It keeps me playing and interested. I’m not an exceptionally talented instrumentalist, but this gives me a way to tread new ground using some of the oldest tricks on Earth.

Since you mentioned plants, and as far as leaving a record for the future is concerned, we’re having this exchange in the context of the growing popularity of attaching sensors and MIDI converters to plants, and sonifying data in general. Data sonification seems key in the ongoing work of making multiple spatiotemporal scales easier to grasp and work with. And “letting plants speak” in music seems par for the course right now, as the Wood Wide Web becomes a colloquial idea and we collectively grapple with the ideas of personhood for companies or ecosystems operating on vastly different timescales.

Yeah, to the point of piezoelectricity and plants, I have a synth module that turns subtle variations in capacitance from a plant, person or other semiconductor into usable control voltages. My dad has been a huge inspiration with all this. He recently retired after 27 years in the National Park Service as midwest region radio manager. Growing up, there were always electronics around; I was exposed to the fundamentals of these technologies pretty early on and first burned my hand on a soldering iron when I was ten.

One of the most fascinating stories my dad ever told me was about an unexplained vast radio deadzone in National Park land. It turned out that a miles-long row of trees had grown into an old line of forgotten barbed wire fence. This grid of metal wire turned the electrolytic trees into a giant capacitor, which significantly disrupted radio propagation in the entire region. That’s a pretty seamless, unintended, and unexpected blend of nature and technology. It’s also a reminder there really is a hidden dimension of energy running through things, and sometimes you find it by accident.

That’s a fine place to end this.

Thanks, Michael and Long Now, for your inspiring work, and thank you to all the long-view thinkers out there that share a sense of wonder, awe, and stillness when gazing into the unknowable future.


Worse Than FailureThe Therac-25 Incident

A few months ago, someone noted in the comments that they hadn't heard about the Therac-25 incident. I was surprised, and went off to do an informal survey of developers I know, only to discover that only about half of them knew what it was without searching for it.

I think it's important that everyone in our industry know about this incident, and upon digging into the details I was stunned by how much of a WTF there was.

Today's article is not fun, or funny. It describes incidents of death and maiming caused by faulty software engineering processes. If that's not what you want today, grab a random article from our archive, instead.

When you're strapping a patient to an electron gun capable of delivering a 25MeV particle beam, following procedure is vitally important. The technician operating the Therac-25 radiotherapy machine at the East Texas Cancer Center (ETCC) had been running this machine, and those like it, long enough that she had the routine down.

On March 21, 1986, the technician brought a patient into the treatment room. She checked their prescription, and positioned them onto the bed of the Therac-25. Above the patient was the end-point of the emitter, a turntable which allowed her to select what kind of beam the device would emit. First, she set the turntable to a simple optical laser mode, and used that to position the patient so that the beam struck a small section of his upper back, just to one side of his spine.

[Image: the Therac-25. By Ajzh2074 - Own work, CC BY-SA 4.0]

With the patient in the correct position, she rotated the turntable again. There were two other positions. One would position an array of magnets between the beam and the patient; these would shape and aim the beam. The other placed a block of metal between the beam and the patient. When struck by a 25MeV beam of electrons, the metal would radiate X-rays.

This patient's prescription was for an electron beam, so she positioned the turntable and left the room. In the room next door, shielded from the radiation, was the control terminal. The technician started keying in the prescription to begin the treatment.

If things were following the routine exactly, she'd be able to communicate with the patient via an intercom, and monitor the patient via a video camera. Sadly, that system had broken down today. Still, this patient had already had a number of treatments and knew what to expect, so communication seemed hardly necessary. In fact, the Therac-25 and all the supporting equipment were always finicky, so "something doesn't work" was practically part of the routine.

The technician had run this process so many times that entering a prescription was pure routine. She'd become an extremely fast typist, at least on this device, and perhaps too fast. In the field for beam type, she accidentally keyed in "X", for "x-ray". It was a natural mistake, as most patients got x-ray treatments, and it wasn't much of a problem: the computer would see that the turntable was in the wrong position and refuse to dose the patient. She quickly tapped the "UP" arrow on the keyboard to return to the field, corrected the value to "E", for electron, and confirmed the other parameters.

Her finger hovered over the "B" key on the keyboard while she confirmed her data entry. Once she was sure everything was correct, she pressed "B" for "beam start". There was no noise, there never was, but after a moment, the terminal read: "Malfunction 54", and then "Treatment Pause".

Error codes were no surprise. The technicians kept a chart next to the console, which documented all the error codes. In this case, "Malfunction 54" meant a "dose input 2" error.

That may not have explained anything, but the technician was used to the error codes being cryptic. And this was a "treatment pause", which meant the next step was to resume treatment. According to the terminal, no radiation had been delivered yet, so she hit the "P" key to unpause the beam.

That's when she heard the screaming.

The patient had been through a number of these sessions already, and knew they shouldn't feel a thing. The first time the technician activated the beam, however, he felt a burning sensation, which he later described like "hot coffee" being poured on his back. Without any intercom to call for help, he started to get off the treatment table. He was still extricating himself, screaming for help, when the technician unpaused the beam, at which point he felt something like a massive electric shock.

That, at first, was the diagnosis. A malfunction in the machine must have delivered an electric shock. The patient was sent home, and the hospital physicist examined the Therac-25, confirming everything was in working order and there were no signs of trouble. It didn't seem like it would happen again.

The patient had been prescribed a dose of 180 rads as part of a six-week treatment program that would deliver 6,000 rads in total. According to the Therac-25, the patient had received an underdose, a fraction of that radiation. No one knew it yet, but the malfunction had actually delivered between 16,000 and 25,000 rads. The patient seemed fine, but in fact he was already a dead man walking.


The ETCC incident was not the first, and sadly was not the last malfunction of the Therac-25 system. Between June 1985 and July 1987, there were six accidents involving the Therac-25, manufactured by Atomic Energy Canada Limited (AECL). Each was a severe radiation overdose, which resulted in serious injuries, maimings, and deaths.

As the first incidents started to appear, no one was entirely certain what was happening. Radiation poisoning is hard to diagnose, especially if you don't expect it. As with the ETCC incident, the machine reported an underdose despite overdosing the patient. Hospital physicists even contacted AECL when they suspected an overdose, only to be told such a thing was impossible.

A few weeks later, there was a second overdose at ETCC, and it was around that time that the FDA and the press started to get involved. Early on, there was a great deal of speculation about the cause. Of interest is this comment from the RISKS mailing list from 1986.

Here is my speculation of what happened: I suspect that the current in the electron beam is probably much greater in X-ray mode (because you want similar dose rates in both modes, and the production of X-rays is more indirect). So when you select X-rays, I'll bet the target drops into place and the beam current is boosted. I suspect in this case, the current was boosted before the target could move into position, and a very high current electron beam went into the patient.

How could this be allowed to happen? My guess is that the software people would not have considered it necessary to guard against this failure mode. Machine designers have traditionally used electromechanical interlocks to ensure safety. Computer control of therapy machines is a fairly recent development and is layered on top of, rather than substituting for, the old electromechanical mechanisms.


The Therac-25 was the first entirely software-controlled radiotherapy device. As the RISKS comment quoted above points out: most such systems use hardware interlocks to prevent the beam from firing when the targets are not properly configured. The Therac-25 did not.

The software included a number of key modules that ran on a PDP-11. First, there were separate processes for handling each key function of the system: user input, beam alignment, dosage tracking, etc. Each of these processes was implemented in PDP-11 Assembly. Governing these processes was a real-time OS, also implemented in Assembly. All of this software, from the individual processes to the OS itself, was the work of a single software developer.

AECL had every confidence in this software, though, because it wasn't new. The earliest versions of the software appeared on the Therac-6. Development started in 1972, and the software was adapted to the Therac-25 in 1976. The same core was also used on the Therac-20. Within AECL, the attitude was that the software must be safe because they'd been using it for so long.

In fact, when AECL performed their own internal safety analysis of the Therac-25 in 1983, they did so with the following assumptions:

1) Programming errors have been reduced by extensive testing on a hardware simulator, and under field conditions on teletherapy units. Any residual software errors are not included in the analysis.

2) Program software does not decay due to wear, fatigue, or reproduction errors.

3) Computer software errors are caused by faulty hardware components, and "soft" (random) errors induced by alpha particles or electromagnetic noise.

In other words: we've used the software for a long time and software always copies and deploys perfectly. So, any bugs we see would have to be transient bugs caused by radiation or hardware errors.


After the second incident at ETCC, the hospital physicist took the Therac-25 out of service and worked with the technician to replicate the steps that caused the overdose. It wasn't easy to trigger the "Malfunction 54" error message, especially when they were trying to methodically replicate the exact steps, because as it turned out, if you entered the data slowly, there were no problems.

To trigger the overdose, you needed to type quickly, the kind of speed that an experienced operator might have. The physicist practiced until he could replicate the error, then informed AECL. While he was taking measurements to see how large the overdoses were, AECL called back. They couldn't replicate the issue. "It works on my machine," essentially.

After being coached on the required speed, the AECL technicians went back to it, and confirmed that they could trigger an overdose. When the hospital physicist took measurements, they found roughly 4,000 rads in the overdose. AECL, doing similar tests, triggered overdoses of 25,000 rads. The reality is that, depending on the timing, the output was potentially random.

With that information, the root cause was easier to understand: there was a race condition. Specifically, when the technician mistyped "X" for x-ray, the computer would calculate out the beam activation sequence to deliver a high-energy beam to create x-rays. When the technician hit the "UP" arrow to correct their mistake, it should've forced a recalculation of that activation sequence—but if the user typed too quickly, the UI would update and the recalculation would never happen.
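
For readers who want to see the shape of that failure, here is a minimal sketch of the same class of race condition in Python. It is illustrative only: the real code was PDP-11 Assembly, and every name here is hypothetical.

    import threading
    import time

    beam_mode = "X"          # shared state: the mode the operator typed
    entry_complete = False   # shared flag: set when the form is "done"

    def operator():
        # The operator finishes the form with the wrong mode...
        global beam_mode, entry_complete
        entry_complete = True
        time.sleep(0.1)      # ...then quickly arrows up and corrects it.
        beam_mode = "E"      # The display now shows "E"...

    def setup_beam():
        while not entry_complete:
            time.sleep(0.01)
        mode = beam_mode     # ...but setup sampled the mode before the edit
        time.sleep(1.0)      # and is busy with slow hardware configuration.
        print(f"hardware configured for mode {mode!r}")  # prints 'X'

    t = threading.Thread(target=setup_beam)
    t.start()
    operator()
    t.join()

A fast typist wins the race against the hardware setup and the edit is silently lost; a slow one never sees the bug, which is exactly why the error was so hard to reproduce on demand.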


By the middle of 1986, the Food and Drug Administration (FDA) was involved, and demanded that AECL provide a Corrective Action Plan (CAP). What followed was a lengthy process of revisions as AECL would provide their CAP and the FDA would follow up with questions, resulting in new revisions to the CAP.

For example, the FDA reviewed the first CAP revision and noted that it was incomplete. Specifically, it did not include a test plan. AECL responded:

no single test plan and report exists for the software since both hardware and software were tested and exercised separately or together for many years.

The FDA was not pleased with that, and after more back and forth, replied:

We also expressed our concern that you did not intend to perform the [test] protocol to future modifications to the software. We believe that rigorous testing must be performed each time a modification is made to ensure the modification does not adversely affect the safety of the system.

While AECL struggled to include complex tasks like testing in their CAP, they had released instructions that allowed for a temporary fix to prevent future incidents. Unfortunately, in January 1987, there was another incident, caused by a different software bug.

In this bug, there was a variable shared by multiple processes, meant as a flag to decide whether or not the beam collimator in the turntable needs to be checked to ensure everything is in the correct position. If the value is non-zero, the check needs to be performed. If the value is zero, it does not. Unfortunately, the software would increment the field, and the field was only one byte wide. This meant every 256th increment, the variable would be zero when it should have been non-zero. If that incorrect zero lined up with an operator action, the beam would fire at full energy without the turntable in the right position.
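
The arithmetic of that failure is easy to demonstrate. Here is a simplified sketch in Python; the real code was PDP-11 Assembly and the names are invented, but it shows why an incremented one-byte flag silently skips the safety check every 256th pass:

    class3 = 0  # one-byte shared flag: nonzero means "run the position check"

    for pass_number in range(1, 600):
        # The setup loop incremented the flag on every pass. In one byte,
        # every 256th increment wraps back around to zero...
        class3 = (class3 + 1) & 0xFF  # the documented fix: assign 1 instead
        if class3 != 0:
            pass  # verify collimator/turntable position before firing
        else:
            # ...and on that pass the check is silently skipped.
            print(f"pass {pass_number}: safety check skipped")

Running this prints the warning at passes 256 and 512: a rare, timing-dependent window, which is exactly what made the bug so dangerous.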

AECL had a fix for that (stop incrementing and just set the value), and amended their CAP to include that fix. The FDA recognized that was probably going to fix the problem, but still had concerns. In an internal memo:

We are in the position of saying that the proposed CAP can reasonably be expected to correct the deficiencies for which they were developed (Tyler). We cannot say that we are [reasonably] confident about the safety of the entire system…

This back-and-forth continued through a number of CAP revisions. At each step in the process, the FDA found issues with testing. AECL's test process up to this point was simply to run the machine and note if anything went wrong. Since the software had been in use, in some version, for over a decade, they did not see any reason to test the software, and thus had no capacity or plan for actually testing the software when the FDA required it.

The FDA, reviewing some test results, noted:

Amazingly, the test data presented to show that the software changes to handle the edit problems in the Therac-25 are appropriate prove the opposite result. … I can only assume the fix is not right, or the data were entered incorrectly.


Eventually, the software was fixed. Legislative and regulatory changes were made to ensure incidents like this couldn't happen in the future, at least not the same way.

It's worth noting that there was one developer who wrote all of this code. They left AECL in 1986, and thankfully for them, no one has ever revealed their identity. And while it may be tempting to lay the blame at their feet—they made every technical choice, they coded every bug—it would be wildly unfair to do that.

With AECL's continued failure to explain how to test their device, it should be clear that the problem was a systemic one. It doesn't matter how good your software developer is; software quality doesn't appear just because you have good developers. It's the end result of a process, and that process informs not only your software development practices but also your testing. Your management. Even your sales and servicing.

While the incidents at the ETCC finally drove changes, they weren't the first incidents. Hospital physicists had already reported problems to AECL. At least one patient had already initiated a lawsuit. But that information didn't propagate through the organization; no one put those pieces together to recognize that the device was faulty.

On this site, we joke a lot at the expense of the Paula Beans and Roys of this world. But no matter how incompetent, no matter how reckless, no matter how ignorant the antagonist of a TDWTF article may be, they're part of a system, and that system put them in that position.

Failures in IT are rarely individual failures. They are process failures. They are systemic failures. They are organizational failures. The story of AECL and the Therac-25 illustrates how badly organizational failures can end up.

AECL did not have a software process. They didn't view software as anything more than a component of a larger whole. In that kind of environment, working on safety critical systems, no developer could have been entirely successful. Given that this was a situation where lives were literally on the line, building a system that produced safe, quality software seems like it should have been a priority. It wasn't.

While the Therac-25 incident is ancient history, software has become even more important. While we would hope safety-critical software has rigorous processes, we know that isn't always true. The 737MAX is an infamous, recent example. But with the importance of software in the modern world, even more trivial software problems can get multiplied at scale. Whether it's machine learning reinforcing racism, social networks turning into cesspools of disinformation or poorly secured IoT devices turning into botnets, our software exists and interacts with the world, and has real world consequences.

If nothing else, I hope this article makes you think about the process you use to create software. Is the process built to produce quality? What obstacles to quality are there? Is quality a priority, and if not, why not? Does your process consider quality at scale? You may know your software's failure modes, but do you understand your organization's failure modes? Its blind spots? The assumptions it makes which may not be valid in all cases?


Let's return for a moment to the race condition that caused the ETCC incidents. This was caused by users hitting the up arrow too quickly, preventing the system from properly registering their edits. While the FDA CAP process was grinding along, AECL wanted to ensure that people could still use the Therac-25 safely, and that meant publishing quick fixes that users could apply to their devices.

This is the letter AECL sent out to address that bug:

SUBJECT: CHANGE IN OPERATING PROCEDURES FOR THE THERAC-25 LINEAR ACCELERATOR
Effective immediately, and until further notice, the key used for moving the cursor back through the prescription sequence (i.e., cursor "UP" inscribed with an upward pointing arrow) must not be used for editing or any other purpose.
To avoid accidental use of this key, the key cap must be removed and the switch contacts fixed in the open position with electrical tape or other insulating material.
For assistance with the latter you should contact your local AECL service representative.
Disabling this key means that if any prescription data entered is incorrect, then the "R" reset command must be used and the whole prescription reentered.
For those users of the Multiport option, it also means that editing of dose rate, dose, and time will not be possible between ports.

On one hand, this is a simple instruction that would effectively prevent the ETCC incidents from reoccurring. On the other, it's terrifying to imagine a patient's life hanging on a ripped up keycap and electrical tape.


This article is intended as a brief summary of the incident. Most of the technical details in this article come from this detailed account of the Therac-25 incident. That is the definitive source on the subject, and I recommend reading the whole thing. It contains much more detail, including deeper dives into the software choices and organizational failures.


Planet DebianAxel Beckert: Starting a GNU Screen session via SSH's ~/.ssh/config

This is more or less a follow-up to this blog posting of mine about AutoSSH and GNU Screen from nearly ten years ago — which by the way is still valid and still the way I use SSH and GNU Screen.

Recently a friend asked me how to automatically start or reconnect to a GNU Screen session directly via OpenSSH’s configuration file. Here’s how to do it:

Add an entry to ~/.ssh/config similar to this one:

Host screen_on_server
    Hostname server.example.org
    # screen needs a pseudo-terminal, which SSH won't allocate for a
    # remote command by default
    RequestTTY yes
    # -RD: reattach to a running session (detaching it elsewhere first
    # if necessary), or create a new session if none exists
    RemoteCommand screen -RD

and then just call ssh screen_on_server and you'll be connected to an existing screen session if one is present; otherwise you'll get a new one.

This should work with tmux, too, but might need different command-line options.
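For reference, here is a sketch of what the tmux variant might look like, assuming tmux 1.8 or later (where new-session -A attaches to the named session if it exists and creates it otherwise); the session name main is arbitrary:

Host tmux_on_server
    Hostname server.example.org
    RequestTTY yes
    RemoteCommand tmux new-session -A -s main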

Planet DebianRuss Allbery: Review: Spheres of Influence

Review: Spheres of Influence, by Ryk E. Spoor

Series: Arenaverse #2
Publisher: Baen
Copyright: November 2013
ISBN: 1-4516-3937-6
Format: Kindle
Pages: 576

Spheres of Influence is a direct sequel to Grand Central Arena, which introduces the world background and is also a much better book. There is a detailed recap of the previous volume (thank you!) and a summary of things that happened between the volumes (that was odd), so it's easy to refresh your memory, but there's no point in reading this book if you've not read the first one.

In this series, Spoor is explicitly writing a throw-back space adventure inspired by E.E. "Doc" Smith and similar SF from the 1920s to the 1950s. Grand Central Arena was the discovery and exploration story, which in my opinion is where that literary tradition is at its strongest. Spheres of Influence veers into a different and less appealing part of that tradition: the moment when the intrepid space explorer is challenged by the ignorant Powers That Be at home, who don't understand the importance of anything that's happening.

Captain Ariane Austin and her crew made a stunning debut into the Arena, successfully navigated its politics (mostly via sheer audacity and luck), and achieved a tentatively strong position for humanity. However, humanity had never intended them to play that role. There isn't much government in Spoor's (almost entirely unexplained) anarcho-libertarian future, but there is enough for political maneuvering and the appointment of a more official ambassador to the Arena who isn't Ariane. But the Arena has its own rules that care nothing about human politics, which gives Ariane substantial leverage to try to prevent Earth politicians from making a mess of things.

This plot could be worse. Unlike his source material, Spoor is not entirely invested in authoritarian politics, and the plot resolution is a bit friendlier to government oversight than one might expect. (It's disturbing, though, that this oversight seems to consist mostly of the military, and it's not clear how those people are selected.) But the tradition of investing vast powers in single people of great moral character is one of the less defensible tropes of early American SF, and Spoor chooses to embrace it to an unfortunate degree. Clearing out all the bureaucratic second-guessing to let the honorable person who has stumbled across vast power make all the decisions is a type of simplistic politics with a long, bad history in US fiction. The author can make it look like a good idea by yanking hard on the scales; Ariane makes all the right decisions because she's the heroine and therefore of course she does. I was unsettled, in this year of 2021, by the book's apparent message that her major failing is her unwillingness to consolidate her power.

This isn't the only problem I had with this book. Before we get to the political maneuvering, the plot takes a substantial digression into the Hyperion Project.

The Hyperion Project showed up in the first book as part of the backstory of one of the characters. I'll omit the details to avoid spoilers, but in the story it functioned as an excuse to model a character directly on E.E. "Doc" Smith characters. The details never seemed that interesting, but as background it was easy to read past, and the character in question was still moderately enjoyable.

Unfortunately, the author was more invested in this bit of background than I was. Spheres of Influence introduces four more characters from the same project, including Wu Kong, a cliched mash-up of numerous different Monkey King stories who becomes a major viewpoint character. (The decision to focus on a westernized, exoticized version of a Chinese character didn't seem that wise to me.) One problem is that Spoor clearly thinks Wu Kong is a more interesting character than I do, but my biggest complaint is that introducing these new characters was both unnecessary and pulled the story away from the pieces I was interested in. I want to read more about the Arena and its politics, alien technology, and awesome vistas, not about some significantly less interesting historical human project devoted to bringing fictional characters to life.

And that's the third problem with this book: not enough happens. Grand Central Arena had a good half-dozen significant plot developments set among some great sense-of-wonder exploration and alien first contact. There are only two major plot events in Spheres of Influence; both are dragged out with unnecessary description and posturing, and neither shows us much that's exciting or new. The exploration of the Arena grinds nearly to a halt, postponing the one offered bit of deep exploration for the third book. There are some satisfying twists and turns in the bits of plot we do get, but nothing that justifies nearly 600 pages.

This is not a very good book, and a huge step down from the first book of the series. In its defense, it still offers the sort of optimistic (and, to be honest, simplistic) adventure that I was looking for after reading a book full of depressing politics. It's hard not to enjoy the protagonists taking audacious risks, revealing hidden talents, and winning surprising victories. But I wanted the version with more exploration, more new sights, less authoritarian and militaristic politics, and less insertion of fictional characters.

Also, yes, we know that one of the characters is an E.E. "Doc" Smith character. Please give the cliched Smith dialogue tics a rest. All of the "check to nine decimal places" phrases are hard enough to handle in Smith's short and fast-moving books. They're agonizing in a slow-moving book three times as long.

Not recommended, although I'm still invested enough in the setting that I'll probably read the third book when I'm feeling in the mood for some feel-good adventure. It appears to have the plot developments I was hoping would be in this one.

Followed by Challenges of the Deeps.

Rating: 5 out of 10

,

Planet DebianEnrico Zini: Mindustry links

Mindustry is a really well made computer game that I enjoyed playing a lot. It is Free Software.

Here are two guides that give a deeper idea of the details of the game:

r/Mindustry - How unattended sector defense works (most effective turrets and such) is a useful explanation I found of what happens in Mindustry 6 when I leave a sector to itself.

And #959466 is the RFP (wink, wink!)

Planet DebianChris Lamb: The Silence of the Lambs: 30 Years On

No doubt it was someone's idea of a joke to release Silence of the Lambs on Valentine's Day, thirty years ago today. Although it references Valentines at one point and hints at a deeper relationship between Starling and Lecter, it was clearly too tempting to jeopardise so many date nights. After all, how many couples were going to enjoy their ribeyes medium-rare after watching this?

Given the muted success of Manhunter (1986), Silence of the Lambs was our first real introduction to Dr. Lecter. Indeed, many of the best scenes in this film are introductions: Starling's first encounter with Lecter is probably the best introduction in the whole of cinema, but our preceding introduction to the asylum's factotum carries a lot of cultural weight too, if only because the camera's measured pan around the environment before alighting on Barney has been emulated by so many first-person video games since.

We first see Buffalo Bill at the thirty-two minute mark. (Or, more tellingly, he sees us.) Delaying the viewer's introduction to the film's villain is the mark of a secure and confident screenplay, even if it was popularised by the budget-restricted Jaws (1975) which hides the eponymous shark for one hour and 21 minutes.

It is no mistake that the first thing we see Starling do is, quite literally, pull herself up out of the unknown. With all of the focus on the Starling—Lecter repartee, the viewer's first introduction to Starling is as underappreciated as she herself is to the FBI. Indeed, even before Starling tells Lecter her innermost dreams, we learn almost everything we need to about Starling in the first few minutes: we see her training on an obstacle course in the forest, the unused rope telling us that she is here entirely voluntarily. And we can surely guess why; the passing grade for a woman in the FBI is to top the class, and Starling's not going to let an early February in Virginia get in the way of that.

We need to wait a full three minutes before we get our first line of dialogue, and in just eight words ("Crawford wants to see you in his office...") we get our confirmation about the FBI too. With no information other than that he can send a messenger out into the cold, we can intuit that Crawford tends to get what Crawford wants. It's just plain "Crawford" too; everyone knows his actual title, his power, "his" office.

The opening minutes also introduce us to the film's use of visual hierarchy. Our Hermes towers above Starling throughout the brief exchange (she must push herself even to stay within the camera's frame). Later, Starling always descends to meet her demons: to the asylum's basement to visit Lecter and down the stairs to meet Buffalo Bill. Conversely, she feels safe enough to reveal her innermost self to Lecter on the fifth floor of the courthouse. (Bong Joon-ho's Parasite (2019) uses elevation in an analogous way, although a little more subtly.)

The messenger turns to watch Starling run off to Crawford. Are his eyes involuntarily following the movement, or is he impressed by Starling's gumption? Or, almost two decades after John Berger's male gaze, is he simply checking her out? The film, thankfully, leaves it to us.

Crawford is our next real introduction, and our glimpse into the film's sympathetic treatment of law enforcement. Note that the first thing that the head of the FBI's Behavioral Science Unit does is to lie to Starling about the reason for interviewing Lecter, despite it being coded as justified within the film's logic. We learn in the book that even Barney deceives Starling, recording her conversations with Lecter and selling her out to the press. (Buffalo Bill always lies to Starling, of course, but I think we can forgive him for that.) Crawford's quasi-compliment of "You grilled me pretty hard on the Bureau's civil rights record in the Hoover years..." then encourages the viewer to conclude that the FBI has been a paragon of virtue since 1972... Given all this (as well as her stellar academic record, Crawford's wielding of Starling's fragile femininity at the funeral home and the cool reception she receives from a power-suited Senator Ruth Martin), Starling must be constantly asking herself what it takes for anyone to take her seriously. Indeed, it would be unsurprising if she takes unnecessary risks to make that happen.

The cold open of Hannibal (2001) makes for a worthy comparison. The audience remembers they loved the dialogue between Starling and Lecter, so it is clumsily mentioned. We remember Barney too, so he is shoehorned in as well. Red Dragon (2002) aside, the sequels lack the confidence to introduce new signifiers to their universe, and the hollow, 'clip show' feel of Hannibal is a taste of the zero-calorie sequels to come in the next two decades.

The film is not perfect, and likely never was. Much has been written on the fairly transparent transphobia in Buffalo Bill's desire to wear a suit made out of women's skin, but the film then doubles down on its unflattering portrayal by trying to have it both ways. Starling tells the camera that "there's no correlation between transsexualism and violence," and Lecter (the film's psychoanalytic authority, remember) assures us that Buffalo Bill is "not a real transsexual" anyway. Yet despite those caveats, we are continually shown a TERFy cartoon of a man in a wig tucking his "precious" between his legs and an absurdly phallic gun. And, just in case we didn't quite get the message, a decent collection of Nazi memorabilia.

The film's director repeated the novel's contention that Buffalo Bill is not actually transgender, but someone so damaged that they are seeking some kind of transformation. This, for a brief moment, almost sounds true, and the film's deranged depiction of what it might be like to be transgender combined with its ambivalence feels distinctly disingenuous to me, especially given that — on an audience and Oscar-adjusted basis — Silence of the Lambs may very well be the most transphobic film to come out of Hollywood. Still, I remain torn on the death of the author, especially when I discover that Jonathan Demme went on to direct Philadelphia (1993), likely the most sympathetic Hollywood film about homophobia and HIV.


§


Nevertheless, as an adaptation of Thomas Harris' original novel, the movie is almost flawless. The screenplay excises red herrings and turns down the volume on some secondary characters. Crucially for the format, it amplifies Lecter's genius by not revealing that he knew everything all along, and cuts Buffalo Bill's origin story for good measure too — good horror, after all, does not achieve its effect on the screen, but in the mind of the viewer. The added benefit of removing material from the original means that the film has time to slowly ratchet up the tension, and can remain patient and respectful of the viewer's intelligence throughout: it is, you could almost say, "Ready when you are, Sgt. Pembury". Otherwise, the film does not deviate too far from the original, taking the most liberty when it interleaves two narratives for the famous 'two doorbells' feint.

Dr. Lecter's upright stance when we meet him reminds me of the third act of Alfred Hitchcock's Notorious (1946), another picture freighted with meaningful stairs. Stanley Kubrick's The Killing (1956) began the now-shopworn trope of concealing a weapon in a flower box.

Two other points of deviation from the novel might be worthy of mention. In the book, a great deal is made of Dr. Lecter's penchant for Bach's Goldberg Variations, inducing a cultural resonance with other cinematic villains who have a taste for high art. It is also stressed in the book that it is the Canadian pianist Glenn Gould's recording too, although this is likely an attempt by Harris to demonstrate his own refined sensibilities — Lecter would surely have preferred a more historically-informed performance on the harpsichord. Yet it is glaringly obvious that it isn't Gould playing in the film at all; Gould's hypercanonical 1955 recording is faster and more focused, whilst his 1981 release is much slower and contemplative. No doubt tedious issues around rights prevented the use of either recording, but I like to imagine that Gould himself nixed the idea.

The second change revolves around the film's most iconic quote. Deep underground, Dr. Lecter tries to spook Starling:

A census taker once tried to test me. I ate his liver with some fava beans and a nice Chianti.

The novel has this as "some fava beans and a big Amarone". No doubt the movie-going audience could not be trusted to know what an Amarone was, just as they were not capable of recognising a philosopher. Nevertheless, substituting Chianti works better here as it cleverly foreshadows Tuscany (we discover that Lecter is living in Florence in the sequel), and it avoids the un-Lecterian tautology of 'big' — Amarones, I am reliably informed, are big-bodied wines. Like Buffalo Bill's victims.

Yet that's not all. "The audience", according to TV Tropes:

... believe Lecter is merely confessing to one of his crimes. What most people would not know is that a common treatment for Lecter's "brand of crazy" is to use drugs of a class known as MAOIs (monoamine oxidase inhibitors). There are several things one must not eat when taking MAOIs, as they can cause fatally high blood pressure, and as a physician and psychiatrist himself, Dr. Lecter would be well aware of this. These things include liver, fava beans, and red wine. In short, Lecter was telling Clarice that he was off his medication.

I could write more, but as they say, I'm having an old friend for dinner. The starling may be a common bird, but The Silence of the Lambs is that extremely rara avis indeed — the film that's better than the book. Ta ta...

Planet DebianSteinar H. Gunderson: plocate 1.1.4 released

I made a minor release of plocate; as usual, https://plocate.sesse.net/ has the tarballs and such. The changelog reads:

plocate 1.1.4, February 14th, 2021

  - updatedb now uses ~15% less CPU time.

  - Installs a file CACHEDIR.tag into /var/lib/plocate, to mark the directory
    as autogenerated. Suggested by Marco d'Itri.

  - Manpage fixes; patch by Jakub Wilk.

The CPU time reduction is, as usual, nothing really clever, just a bunch of 1% optimizations. The plocate database is 0.1% larger or so, but that shouldn't really be noticed. There isn't io_uring support for updatedb yet, simply because I haven't bothered (it runs from cron/systemd anyway).

Also, upgrading libzstd from stable to unstable will make updatedb a few percent faster yet :-)

Planet DebianBits from Debian: I love Free Software Day 2021: Show your love for Free Software

ILoveFS banner

On this day, February 14th, Debian joins the Free Software Foundation Europe in celebration of "I Love Free Software" day. This day is a time to appreciate and applaud all those who contribute to the many areas of Free Software.

Debian sends all of our love and a giant “Thank you” to the upstream and downstream creators and maintainers, hosting providers, partners, and of course all of the Debian Developers and Contributors.

Thank you for all that you do in making Debian truly the Universal Operating System and for keeping and making Free Software Free!

Send some love and show some appreciation for Free Software by spreading the message around the world; if you share on social media, the hashtag to use is #ilovefs.

Planet DebianSteinar H. Gunderson: Idle language musings

PHP makes the easy things easy, and in the process makes the wrong things easy and the right things hard.

C makes the easy things hard, the hard things possible, and in the process makes the wrong things just as easy as the right things.

Rust makes everything hard, but the wrong things even harder.

Cryptogram SonicWall Zero-Day

Hackers are exploiting a zero-day in SonicWall:

In an email, an NCC Group spokeswoman wrote: “Our team has observed signs of an attempted exploitation of a vulnerability that affects the SonicWall SMA 100 series devices. We are working closely with SonicWall to investigate this in more depth.”

In Monday’s update, SonicWall representatives said the company’s engineering team confirmed that the submission by NCC Group included a “critical zero-day” in the SMA 100 series 10.x code. SonicWall is tracking it as SNWLID-2021-0001. The SMA 100 series is a line of secure remote access appliances.

The disclosure makes SonicWall at least the fifth large company to report in recent weeks that it was targeted by sophisticated hackers. Other companies include network management tool provider SolarWinds, Microsoft, FireEye, and Malwarebytes. CrowdStrike also reported being targeted but said the attack wasn’t successful.

Neither SonicWall nor NCC Group said that the hack involving the SonicWall zero-day was linked to the larger hack campaign involving SolarWinds. Based on the timing of the disclosure and some of the details in it, however, there is widespread speculation that the two are connected.

The speculation is just that — speculation. I have no opinion in the matter. This could easily be part of the SolarWinds campaign, which targeted other security companies. But there are a lot of “highly sophisticated threat actors” — that’s how NCC Group described them — out there, and this could easily be a coincidence.

Were I working for a national intelligence organization, I would try to disguise my operations as being part of the SolarWinds attack.

EDITED TO ADD (2/9): SonicWall has patched the vulnerability.

Planet DebianFrançois Marier: Creating a Kodi media PC using a Raspberry Pi 4

Here's how I set up a media PC using Kodi (formerly XBMC) and a Raspberry Pi 4.

Hardware

The hardware is fairly straightforward, but here's what I ended up getting:

You'll probably want to add a remote control to that setup. I used an old Streamzap I had lying around.

Installing the OS on the SD card

Plug the SD card into a computer using a USB adapter.

Download the imager and use it to install Raspbian on the SD card.

Then you can simply plug the SD card into the Pi and boot.

System configuration

Using sudo raspi-config, I changed the following:

  • Set hostname (System Options)
  • Wait for network at boot (System Options): needed for NFS
  • Disable screen blanking (Display Options)
  • Enable ssh (Interface Options)
  • Configure locale, timezone and keyboard (Localisation Options)
  • Set WiFi country (Localisation Options)

Then I enabled automatic updates:

apt install unattended-upgrades anacron

echo 'Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Raspbian,codename=${distro_codename},label=Raspbian";
        "origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-raspbian

Headless setup

Should you need to do the setup without a monitor, you can enable ssh by inserting the SD card into a computer and then creating an empty file called ssh in the boot partition.
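For example, something like this (a sketch; the boot partition's mount point varies, this assumes a typical Linux desktop that auto-mounts it under /media):

touch /media/$USER/boot/ssh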

Plug it into your router and boot it up. Check the IP that it received by looking at the active DHCP leases in your router's admin panel.

Then login:

ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no pi@192.168.1.xxx

using the default password of raspberry.

Hardening

In order to secure the Pi, I followed most of the steps I usually take when setting up a new Linux server.

I created a new user account for admin and ssh access:

adduser francois
addgroup sshuser
adduser francois sshuser
adduser francois sudo

and changed the pi user password to a random one:

pwgen -sy 32
sudo passwd pi

before removing its admin permissions:

deluser pi adm
deluser pi sudo
deluser pi dialout
deluser pi cdrom
deluser pi lpadmin

Finally, I enabled the Uncomplicated Firewall by installing its package:

apt install ufw

and only allowing ssh connections.
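That rule amounts to something like the following sketch (assuming the default SSH port, which matches the status output below):

ufw allow 22/tcp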

After starting ufw using systemctl start ufw.service, you can check that it's configured as expected using ufw status. It should display the following:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)

Installing Kodi

Kodi is very straightforward to install since it's now part of the Raspbian repositories:

apt install kodi

To make it start at boot/login, while still being able to exit and use other apps if needed:

mkdir -p ~/.config/lxsession/LXDE-pi
cp /etc/xdg/lxsession/LXDE-pi/autostart ~/.config/lxsession/LXDE-pi/
echo "@kodi" >> ~/.config/lxsession/LXDE-pi/autostart

Network File System

In order to avoid having to have all media storage connected directly to the Pi via USB, I setup an NFS share over my local network.

First, give static IP allocations to the server and the Pi in your DHCP server, then add the Pi to the /etc/hosts file on your NFS server:

192.168.1.3    pi

Install the NFS server package:

apt install nfs-kernel-server

Setup the directories to share in /etc/exports:

/pub/movies    pi(ro,insecure,all_squash,subtree_check)
/pub/tv_shows  pi(ro,insecure,all_squash,subtree_check)

Open the right ports on your firewall by putting this in /etc/network/iptables.up.rules:

-A INPUT -s 192.168.1.3 -p udp -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT

Finally, apply all of these changes:

iptables-apply
systemctl restart nfs-kernel-server.service

On the Pi, put the server's static IP in /etc/hosts:

192.168.1.2    fileserver

and this in /etc/fstab:

fileserver:/data/movies  /kodi/movies  nfs  ro,bg,hard,noatime,async,nolock  0  0
fileserver:/data/tv      /kodi/tv      nfs  ro,bg,hard,noatime,async,nolock  0  0

Then create the mount points and mount everything:

mkdir -p /kodi/movies
mkdir /kodi/tv
mount /kodi/movies
mount /kodi/tv

,

Planet DebianDirk Eddelbuettel: RcppFastFloat 0.0.2: New Function

The second release of RcppFastFloat is now on CRAN. The package wraps fast_float, another nice library by Daniel Lemire, who showed in a recent arXiv paper that one can convert character representations of ‘numbers’ into floating point at rates at or exceeding one gigabyte per second.

Thanks to Brendan, this release adds a helper function as.double2() modeled after the base R function but using, of course, the features from fast_float in RcppFastFloat.
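A quick illustrative call (a sketch: the inputs are arbitrary, and the expected values in the comment follow from the description above rather than from the release notes):

library(RcppFastFloat)
as.double2(c("42", "3.14", "1e3"))   # expected: 42, 3.14, 1000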

Release notes follow.

Changes in version 0.0.2 (2021-02-13)

  • New function as.double2() demonstrating fast_float (Brendan in #1)

Courtesy of my CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Chinese Supply-Chain Attack on Computer Systems

Bloomberg News has a major story about Chinese hacking of computer motherboards made by Supermicro, Lenovo, and others. It’s been going on since at least 2008. The US government has known about it for almost as long, and has tried to keep the attack secret:

China’s exploitation of products made by Supermicro, as the U.S. company is known, has been under federal scrutiny for much of the past decade, according to 14 former law enforcement and intelligence officials familiar with the matter. That included an FBI counterintelligence investigation that began around 2012, when agents started monitoring the communications of a small group of Supermicro workers, using warrants obtained under the Foreign Intelligence Surveillance Act, or FISA, according to five of the officials.

There’s lots of detail in the article, and I recommend that you read it through.

This is a follow on, with a lot more detail, to a story Bloomberg reported on in fall 2018. I didn’t believe the story back then, writing:

I don’t think it’s real. Yes, it’s plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.

I seem to have been wrong. From the current Bloomberg story:

Mike Quinn, a cybersecurity executive who served in senior roles at Cisco Systems Inc. and Microsoft Corp., said he was briefed about added chips on Supermicro motherboards by officials from the U.S. Air Force. Quinn was working for a company that was a potential bidder for Air Force contracts, and the officials wanted to ensure that any work would not include Supermicro equipment, he said. Bloomberg agreed not to specify when Quinn received the briefing or identify the company he was working for at the time.

“This wasn’t a case of a guy stealing a board and soldering a chip on in his hotel room; it was architected onto the final device,” Quinn said, recalling details provided by Air Force officials. The chip “was blended into the trace on a multilayered board,” he said.

“The attackers knew how that board was designed so it would pass” quality assurance tests, Quinn said.

Supply-chain attacks are the flavor of the moment, it seems. But they’re serious, and very hard to defend against in our deeply international IT industry. (I have repeatedly called this an “insurmountable problem.”) Here’s me in 2018:

Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government.

We need some fundamental security research here. I wrote this in 2019:

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.

It seems that supply-chain attacks are constantly in the news right now. That’s good. They’ve been a serious problem for a long time, and we need to take the threat seriously. For further reading, I strongly recommend this Atlantic Council report from last summer: “Breaking trust: Shades of crisis across an insecure software supply chain.”

Planet DebianMolly de Blanc: Proprietary (definition) – 02

I’ve had some good conversations about this attempt to define proprietary software. In many of these conversations, people focused explicitly on what I’m trying not to do (i.e. define “proprietary” by saying it’s not FOSS). Some people helped me clarify what I’m really looking to do, which is to have a pithy way to explain proprietary software to people who are never going to look at source code or pay someone to write new code for them. How do you explain it to people who don’t care about technical matters and don’t have the language to discuss them? How do you talk about licenses to people who may not have the language for it? (In a past life I explained Creative Commons licenses to academics and educators.)

Talking about licensing seemed very important to people, as licenses are what define freedoms, restrictions, and restrictions that protect freedoms. With these points in mind, I present the following:

Proprietary software is software that comes with restrictions that retain control of how software can be used, shared, and changed through the use of copyright and licensing.

I worry that this is “too technical” and then I worry that I’m worrying too much about that. In this I added a truncated version of a common explanation of the Four Freedoms (typically use, study, modify, share). This is in part because I believe “study” is included in “modify.”

I included “copyright and licensing” in hopes that a reader would understand at least one of them. I also wanted to take into account that communities may have other policies (e.g. community guidelines) that might in some way restrict how software is used, shared, and changed. I don’t like “retain control” as a phrase, but it was suggested to me (thanks! If you want credit, just ping me). I think it’s pretty clear about the intention and consequence of proprietary licensing.

A potential criticism I see is that it’s not clear enough that you must be able to do all three (use, share, and change) in order for software to be FOSS and that restrictions on any of them renders software proprietary.

,

Planet DebianDirk Eddelbuettel: RcppSimdJson 0.1.4 on CRAN: New Improvements

Brendan and I are happy to share that a new RcppSimdJson release 0.1.4 arrived on CRAN earlier today. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).

This version brings a new option to always return list types, tweaks to setting options in the request, and other small improvements. The NEWS entry follows.

Changes in version 0.1.4 (2021-02-12)

  • Support additional headers in fload (Dirk in #60).

  • Enable continuous integration via GitHub Actions using run.sh from r-ci repo (Dirk in #61, #62).

  • Add option to always return list to fparse()/fload() (Brendan in #65 closing #64).
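For orientation, a minimal parsing call (a sketch; it does not exercise the new always-return-list option, whose exact argument name is not quoted in the NEWS entry above):

library(RcppSimdJson)
fparse('{"x": [1, 2, 3], "y": "hello"}')   # parses the JSON string into R objects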

Courtesy of my CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Browser Tracking Using Favicons

Interesting research on persistent web tracking using favicons. (For those who don’t know, favicons are those tiny icons that appear in browser tabs next to the page name.)

Abstract: The privacy threats of online tracking have garnered considerable attention in recent years from researchers and practitioners alike. This has resulted in users becoming more privacy-cautious and browser vendors gradually adopting countermeasures to mitigate certain forms of cookie-based and cookie-less tracking. Nonetheless, the complexity and feature-rich nature of modern browsers often lead to the deployment of seemingly innocuous functionality that can be readily abused by adversaries. In this paper we introduce a novel tracking mechanism that misuses a simple yet ubiquitous browser feature: favicons. In more detail, a website can track users across browsing sessions by storing a tracking identifier as a set of entries in the browser’s dedicated favicon cache, where each entry corresponds to a specific subdomain. In subsequent user visits the website can reconstruct the identifier by observing which favicons are requested by the browser while the user is automatically and rapidly redirected through a series of subdomains. More importantly, the caching of favicons in modern browsers exhibits several unique characteristics that render this tracking vector particularly powerful, as it is persistent (not affected by users clearing their browser data), non-destructive (reconstructing the identifier in subsequent visits does not alter the existing combination of cached entries), and even crosses the isolation of the incognito mode. We experimentally evaluate several aspects of our attack, and present a series of optimization techniques that render our attack practical. We find that combining our favicon-based tracking technique with immutable browser-fingerprinting attributes that do not change over time allows a website to reconstruct a 32-bit tracking identifier in 2 seconds. Furthermore, our attack works in all major browsers that use a favicon cache, including Chrome and Safari. Due to the severity of our attack we propose changes to browsers’ favicon caching behavior that can prevent this form of tracking, and have disclosed our findings to browser vendors who are currently exploring appropriate mitigation strategies.
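To make the mechanism concrete, here is a toy sketch of the encoding idea in Python. This is not the researchers' code, and the domain and naming scheme are made up:

# Each subdomain carries one bit of a 32-bit identifier. On a first visit,
# the site redirects the browser through the subdomains whose bit is 1, so
# only those favicons get cached. On a return visit the site redirects
# through all subdomains; the browser only requests favicons it does NOT
# already have, so the missing requests reveal the 1-bits.

N_BITS = 32
DOMAIN = "tracker.example"  # hypothetical domain

def subdomains_to_visit(tracking_id):
    """First visit: subdomains whose favicons should end up cached."""
    return ["b%d.%s" % (i, DOMAIN) for i in range(N_BITS)
            if (tracking_id >> i) & 1]

def reconstruct_id(favicons_requested):
    """Return visit: a bit is 1 iff that favicon was already cached,
    i.e. the browser did not request it again."""
    return sum(1 << i for i in range(N_BITS)
               if "b%d.%s" % (i, DOMAIN) not in favicons_requested)

# Round-trip check
uid = 0xDEADBEEF
cached = set(subdomains_to_visit(uid))
requested_on_return = {"b%d.%s" % (i, DOMAIN) for i in range(N_BITS)} - cached
assert reconstruct_id(requested_on_return) == uid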

Another researcher has implemented this proof of concept:

Strehle has set up a website that demonstrates how easy it is to track a user online using a favicon. He said it’s for research purposes, has released his source code online, and detailed a lengthy explanation of how supercookies work on his website.

The scariest part of the favicon vulnerability is how easily it bypasses traditional methods people use to keep themselves private online. According to Strehle, the supercookie bypasses the “private” mode of Chrome, Safari, Edge, and Firefox. Clearing your cache, surfing behind a VPN, or using an ad-blocker won’t stop a malicious favicon from tracking you.

Cryptogram Virginia Data Privacy Law

Virginia is about to get a data privacy law, modeled on California’s law.

Cryptogram WEIS 2021 Call for Papers

The 20th Annual Workshop on the Economics of Information Security (WEIS 2021) will be held online in June. We just published the call for papers.

Cryptogram Malicious Barcode Scanner App

Interesting story about a barcode scanner app that has been pushing malware on to Android phones. The app is called Barcode Scanner. It’s been around since 2017 and is owned by the Ukrainian company Lavabird Ltd. But a December 2020 update included some new features:

However, a rash of malicious activity was recently traced back to the app. Users began noticing something weird going on with their phones: their default browsers kept getting hijacked and redirected to random advertisements, seemingly out of nowhere.

Generally, when this sort of thing happens it’s because the app was recently sold. That’s not the case here.

It is frightening that with one update an app can turn malicious while going under the radar of Google Play Protect. It is baffling to me that an app developer with a popular app would turn it into malware. Was this the scheme all along, to have an app lie dormant, waiting to strike after it reaches popularity? I guess we will never know.

Cryptogram Deliberately Playing Copyrighted Music to Avoid Being Live-Streamed

Vice is reporting on a new police hack: playing copyrighted music when being filmed by citizens, trying to provoke social media sites into taking the videos down and maybe even banning the filmers:

In a separate part of the video, which Devermont says was filmed later that same afternoon, Devermont approaches [BHPD Sgt. Billy] Fair outside. The interaction plays out almost exactly like it did in the department — when Devermont starts asking questions, Fair turns on the music.

Devermont backs away, and asks him to stop playing music. Fair says “I can’t hear you” — again, despite holding a phone that is blasting tunes.

Later, Fair starts berating Devermont’s livestreaming account, saying “I read the comments [on your account], they talk about how fake you are.” He then holds out his phone, which is still on full blast, and walks toward Devermont, saying “Listen to the music”.

In a statement emailed to VICE News, Beverly Hills PD said that “the playing of music while accepting a complaint or answering questions is not a procedure that has been recommended by Beverly Hills Police command staff,” and that the videos of Fair were “currently under review.”

However, this is not the first time that a Beverly Hills police officer has done this, nor is Fair the only one.

In an archived clip from a livestream shared privately to VICE Media that Devermont has not publicly reposted but he says was taken weeks ago, another officer can be seen quickly swiping through his phone as Devermont approaches. By the time Devermont is close enough to speak to him, the officer’s phone is already blasting “In My Life” by the Beatles — a group whose rightsholders have notoriously sued Apple numerous times. If you want to get someone in trouble for copyright infringement, the Beatles are quite possibly your best bet.

As Devermont asks about the music, the officer points the phone at him, asking, “Do you like it?”

Clever, really, and an illustration of the problem with context-free copyright enforcement.

Kevin RuddRemarks: National Apology Breakfast – Kevin Rudd

THE HON KEVIN RUDD AC

Co-chair, National Apology Foundation

26th Prime Minister of Australia

 

Remarks: National Apology Breakfast

12 February 2021

It’s been my pleasure every year since 2011 to be in Sydney for the anniversary of the Apology.

First at Government House.

Then at Parliament House.

Both within a stone’s throw of where European occupation and settlement of Australia began 232 years ago and the long saga of Indigenous dispossession began.

And I look forward to the day when all Australians – black, white and people of every colour – share a national day which brings us together rather than tears us apart.

I’m pleased to join you today from Meeanjin – or, as we call it, Brissie – and the land of the Turrbul and Jagera nations, whose land this has been across the millennia and I pay my deepest respects to all their elders past, present and still to come.

I also pay tribute to Message Stick and Michael McLeod for their continued commitment to ensuring this important occasion is not forgotten after what has been a year of living dangerously for all of us.

 
Closing the Gap

One year ago, with great caution and considerable reservation, I welcomed the Morrison Government’s move to adjust the Closing the Gap targets.

He used the word “refresh”. I wasn’t so sure about “refresh” because it sounded too much like a marketing makeover to me.

Back then, I set out three benchmarks by which I would judge it.

One, what would the new targets be?

Two, what resources will be allocated to meet those targets?

And three, how will we access the data to review success or failure against those targets?

In terms of what the new targets are, there is still no clarity. We now have sixteen targets, with another four under negotiation.

When he announced this revamp, Prime Minister Morrison said he wanted “realistic or achievable” targets.

As it has transpired, many of them are fuzzy in the extreme. Deliberately so.

They call for a “sustained increase” in this.

Or a “significant and sustained” trend in that.

Others again appear designed to be met without any government action at all – at least according to the ANU’s Centre for Aboriginal Economic Policy Research.

Being “achievable” doesn’t mean we make the targets so small that we get to give ourselves a giant pat on the back to say the job is done.

That’s not an “achievable” goal. That’s just a bullshit goal.

I nonetheless wait to see what the different jurisdictions have to say when they publish their implementation plans mid-year.

In terms of resources, my second criteria for evaluating the Morrison “refresh”, the situation is grim.

When my government first pushed the Closing the Gap framework through the Council of Australian Governments, we jointly committed $4.6 billion to achieve the six targets we set.

$4.6 billion.

After I came second in the 2013 election, Prime Minister Abbott added an additional target but cut more than half a billion dollars from programs to advance the lot of Indigenous Australians.

That funding still hasn’t been restored.

Now we have sixteen targets, and four more under negotiation, and the government has committed less than $47 million to achieving them.

$47 million. That’s not even 10% of what they’ve cut.

It’s not like this government has been clutching parsimoniously the purse-strings of the national budget.

Nope. They’ve been spending like drunken sailors. John Maynard Keynes would blush.

Australia’s net debt is on-track for $1.3 trillion – about seven times what the Coalition inherited from Labor.

And the deficit this year will be $214 billion – also seven times bigger than we left them.

And yet – despite shovelling money out the door at a rate of knots – there wasn’t a dollar spent on social housing (including indigenous housing) in this country, despite it being a proven economic multiplier.

During the global financial crisis and the great global recession that followed, we invested $5.6 billion to build or refurbish almost 200,000 homes for people in need.

Reviving that program could put roofs over the heads of our First Australians, get others out of over-crowded homes, and provide good jobs for Indigenous people on the way through.

Instead, we get the HomeBuilder program – a renovations subsidy for people on six-figure salaries who already have $150,000 to spend.

That’s not to discount the jobs supported by that program, but is it really the best use of taxpayers’ money? I don’t think so.

My third metric was this: how will we access the data to review success or failure against those targets?

Some governments have shown they’re pretty effective at dodging scrutiny and hiding data.

On the day of the Apology, we built a simple integrity measure to make sure that the Closing the Gap data doesn’t go missing.

Every year, for the last 13 years, the Prime Minister of the day has stood up in parliament to mark the anniversary of the Apology and presented a report on their government’s successes and failures to Close the Gap.

It’s called accountability. I did it. Julia Gillard did it. Tony Abbott did it. And Malcolm Turnbull did it.

It’s a very simple integrity measure.

Now Canberra is awash with rumour that for the first time since the Apology, it’s not going to happen this year.

If that happens – and I hope it doesn’t – Prime Minister Morrison is going to dodge that modest exercise in annual accountability.

There’s no good reason why he shouldn’t present a comprehensive report on the commonwealth government’s progress on each of these targets next week.

We managed to do it during the Global Financial Crisis, when the world was spiralling into a global recession.

Mr Morrison can surely do it during the current crisis.

If he isn’t prepared to face annual scrutiny like all his predecessors have – Labor and Liberal – it is the beginning of a crabwalk away from accountability to our First Australians.

I hope he has a change of heart over the weekend.

Uluru Statement
 
The Apology was one step in a long journey towards reconciliation as the arc of history slowly bends towards justice.

Closing the Gap is another.

Responding to the Uluru Statement from the Heart is the next.

The Uluru Statement remains, at just 439 words, a most remarkable document.

Its core demands are threefold: a national Voice to parliament, enshrined in the constitution, part of a wider act of constitutional recognition of the First Australians, and a Makarrata Commission to oversee truth-telling and treaty-making.

On a national Voice, Mr Morrison appears more focused on navigating the internal political shoals of the far right than he is on delivering effective constitutional change.

He’s tilting towards legislating a form of national Voice but without any form of constitutional entrenchment.

I’ve been around the political block a few times. And I don’t agree.

I can tell you that, without the deep change that comes with a referendum which enjoys bipartisan support, the political right will always seize every opportunity to trash such a limited national Voice as illegitimate.

If you doubt me, I only point out the examples of the Australian Broadcasting Corporation, or the Australian Human Rights Commission.

Both of these exist under legislation.

Both are subjected to rolling campaigns of delegitimisation, the withdrawal of funding and threats of abolition.

A Voice to parliament that lacks constitutional protection will be no more permanent than the dozens of quasi-autonomous government agencies that are created and abolished by governments every year.

There’s a reason why the Indigenous leaders who framed the Uluru Statement chose a constitutionally entrenched national Voice. They want it to be permanent. They want it to be above the ebb and flow of partisan politics. They want it to be part of the established national political architecture. That’s why so many Indigenous leaders are sticking by the idea of constitutional entrenchment. And I stand with them.

Meanwhile, this “legislation versus referendum” debate is designed to distract from the other great matter outlined at Uluru: a Makarrata Commission leading to a treaty with our First Nations.

To those who set out to attack, challenge or water-down the Uluru Statement, answer me this…

What core Australian economic interest is diminished by formally making peace with our Indigenous Australians along the terms of the statement?

I can think of several core economic interests that might be enhanced – but can you tell me any that would be diminished?

They can’t.

Because there are none.

There was an arguable case with land rights. But there are none with this.

The uncomfortable truth is the debate about Uluru isn’t really about substance.

It’s all about the symbols of the white identity politics of the far right.

Just another battle in the seemingly endless culture wars in which any advancement – no matter how modest – in the cause of reconciliation must be opposed in order to throw more raw meat to the extreme right, thereby sustaining the wider coalition of interests that makes up the fragile fabric of Australian conservative politics.

For them, it’s no to constitutional recognition.

No to a constitutionally entrenched national Voice.

And it’s no to a treaty.

Do they really believe any of it?

Who can tell?

But the arc of history will continue to bend towards justice.

Because ultimately the arguments advanced against our great cause of a fully reconciled Australia will collapse.

And that’s because there is so little substance lying behind them.

The post Remarks: National Apology Breakfast – Kevin Rudd appeared first on Kevin Rudd.

Planet DebianSylvain Beucler: Godot GDScript REPL

When experimenting with Godot and its GDScript language, I realized that I missed a good old REPL (Read-Eval-Print Loop) to familiarize myself with the language and API.

This is now possible with this new Godot Editor plugin :)

Try it at:
https://godotengine.org/asset-library/asset/857

Worse Than FailureError'd: Sweet Sweet Summertime

Gastronome Carl hungrily drools "I haven't measured the speed of a snail but it's gotta be close."

His Name Is BRIAN

 

While Dan B. rats out Petco's dwindling discount

I Had a Pet Snail Named Oscar

 

And William Blair wonders "Does this mean I'm actually UP 2%?"

But When He Laid Eggs I Changed His Name To Robin

 

Comics Fan Ken Mitchell seeks caped cave-cleaning customer support but "can't find NaN-aN-aN on my calendar!"

All for a crumby Batman gag

 

 

Yet at the end of the day, amateur meteorologist Esther lets us know "I noticed that summer will start early this year" because Brian needed a head start with Carl's dinner, Gary!

Remember kids, punctuation saves lives

 


Dave HallA Lost Parcel Results in a New Website

When Australia Post lost a parcel, we found a lot of problems with one of their websites.

,

Cryptogram Medieval Security Techniques

Sonja Drummer describes (with photographs) two medieval security techniques. The first is for authentication: a document has been cut in half with an irregular pattern, so that the two halves can be brought together to prove authenticity. The second is for integrity: hashed lines written above and below a block of text ensure that no one can add additional text at a later date.

Cryptogram Attack against Florida Water Treatment Facility

A water treatment plant in Oldsmar, Florida, was attacked last Friday. The attacker took control of one of the systems, and increased the amount of sodium hydroxide — that’s lye — by a factor of 100. This could have been fatal to people living downstream, if an alert operator hadn’t noticed the change and reversed it.

We don’t know who is behind this attack. Despite its similarities to a Russian attack on a Ukrainian power plant in 2015, my bet is that it’s a disgruntled insider: either a current or former employee. It just doesn’t make sense for Russia to be behind this.

ArsTechnica is reporting on the poor cybersecurity at the plant:

The Florida water treatment facility whose computer system experienced a potentially hazardous computer breach last week used an unsupported version of Windows with no firewall and shared the same TeamViewer password among its employees, government officials have reported.

Brian Krebs points out that the fact that we know about this attack is what’s rare:

Spend a few minutes searching Twitter, Reddit or any number of other social media sites and you’ll find countless examples of researchers posting proof of being able to access so-called “human-machine interfaces” — basically web pages designed to interact remotely with various complex systems, such as those that monitor and/or control things like power, water, sewage and manufacturing plants.

And yet, there have been precious few known incidents of malicious hackers abusing this access to disrupt these complex systems. That is, until this past Monday, when Florida county sheriff Bob Gualtieri held a remarkably clear-headed and fact-filled news conference about an attempt to poison the water supply of Oldsmar, a town of around 15,000 not far from Tampa.

Planet DebianDirk Eddelbuettel: td 0.0.1 on CRAN: New Finance Data Package

Thrilled to announce that a new package of mine just made it to CRAN: the td package accesses the twelvedata API for financial data.

Currently only the time_series REST access point is supported, but it already covers all meaningful options (we skipped only ‘JSON or CSV’, which makes no sense here), so for example any resolution between 1 minute and 1 month can be requested for any stock, etf or currency symbol for a wide array of exchanges. Historical access is available too via (optional) start and end dates. We return either raw JSON or a data.frame or an xts object, making it trivial to call high-end plotting functions on the data–the project and repo pages show several examples.

As just one example, here is GME during the follies. We simply request via
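something like the following (a reconstruction rather than the original snippet: time_series is the package's documented access point, but the argument names here are assumptions)

gme <- td::time_series("GME", interval = "1day")  # argument names assumed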

(where the API key is either in a user-local config file accessed via the new-ish R function tools::R_user_dir("td") pointing at this package’s directory, or via an environment variable; either is accessed at package load or attachment) from which we can then plot via quantmod

> quantmod::chart_Series(gme, name=paste0(attr(gme, "symbol"), "/", attr(gme, "exchange")))

which shows how we also helpfully store metadata returned by twelvedata as extra attributes of the object; the resulting chart can be seen among the examples on the project page.

You will need an API key to have up to 800 daily accesses for free; higher-performance plans (including websocket access) are available for paying customers too. I have only used the free API so far myself.

I plan to add quote and price support this weekend, and generalize the time series access to returning lists of objects as the API does in fact support multi-security access. As always, feedback is welcomed. Please post comments and suggestions at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJulien Danjou: Debugging C code on macOS

Debugging C code on macOS

I started to write C 25 years ago now, with many different tools over the years. Like many open source developers, I have spent most of my life working with the GNU tools out there.

As I've been using an Apple computer over the last years, I had to adapt to this environment and learn the tricks of the trade. Here are some of my notes so a search engine can index them — and I'll be able to find them later.

Debugger: lldb

I was used to `gdb` for most of my years doing C. I never managed to install gdb correctly on macOS, as it needs certificates, authorization, you name it, to work properly.

macOS provides a native debugger named lldb, which really looks like gdb to me — it runs in a terminal with a prompt.

I had to learn the few commands I mostly use, which are:

  • lldb -- myprogram -options to run the program with options
  • r to run the program
  • bt or bt N to get a backtrace of the latest N frames
  • f N to select frame N
  • p V to print some variable value or memory address

Those commands cover 99% of my use case with a debugger when writing C, so once I lost my old gdb habits, I was good to go.

Debugging Memory Overflows

On GNU/Linux

One of my favorite tools when writing C has always been Electric Fence (and DUMA more recently). It's a library that overrides the standard memory manipulation functions (e.g., malloc) and instantly makes the program crash when an out-of-bounds access occurs, rather than letting it silently corrupt the heap.

Heap corruption issues are hard to debug without such tools as they can happen at any time and stay unnoticed for a while, crashing your program in a totally different location later.

There's no need to compile your program against those libraries. By using the dynamic loader, you can preload them and override the standard C library functions.

LD_PRELOAD=/usr/lib/libefence.so.0.0 my-program
Run my-program with Electric Fence loaded on GNU/Linux

My gdb configuration has been sprinkled with my friends efence and duma, and I could activate them from gdb easily with this configuration in ~/.gdbinit:

define efence
        set environment EF_PROTECT_BELOW 0
        set environment EF_ALLOW_MALLOC_0 1
        set environment LD_PRELOAD /usr/lib/libefence.so.0.0
        echo Enabled Electric Fence\n
end
document efence
Enable memory allocation debugging through Electric Fence (efence(3)).
        See also nofence and underfence.
end


define underfence
        set environment EF_PROTECT_BELOW 1
        set environment EF_ALLOW_MALLOC_0 1
        set environment LD_PRELOAD /usr/lib/libefence.so.0.0
        echo Enabled Electric Fence for underflow detection\n
end
document underfence
Enable memory allocation debugging for underflows through Electric Fence
(efence(3)).
        See also nofence and efence.
end

define nofence
        unset environment LD_PRELOAD
        echo Disabled Electric Fence\n
end
document nofence
Disable memory allocation debugging through Electric Fence (efence(3)).
end

define duma
        set environment DUMA_PROTECT_BELOW 0
        set environment DUMA_ALLOW_MALLOC_0 1
        set environment LD_PRELOAD /usr/lib/libduma.so
        echo Enabled DUMA\n
end
document duma
Enable memory allocation debugging through DUMA (duma(3)).
        See also noduma and underduma.
end

On macOS

I've been looking for equivalent features in macOS, and after many hours of research, I found out that this feature is shipped natively with libgmalloc. It works in the same way, and its features are documented by Apple.

My ~/.lldbinit file now contains the following:

command alias gm _regexp-env DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib

This command alias allows enabling gmalloc by just typing gm at the lldb prompt and then running the program again to see if it crashes with gmalloc enabled.
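
In practice, a crash hunt then starts with just two commands at the prompt (a minimal sketch):

(lldb) gm
(lldb) r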

Debugging CPython

It's not a mystery that I spend a lot of time writing Python code — that's the main reason I've been doing C lately.

When playing with CPython, it can be useful to, e.g., dump the content of PyObject structs on the heap or get the Python backtrace.

I've been using cpython-lldb for this with great success. It adds a few bells and whistles when debugging CPython or extensions inside lldb. For example, the alias py-bt is handy to get the Python traceback of your calls rather than a bunch of cryptic C frames.

Now, you should be ready to debug your nasty issues and memory problems on macOS efficiently!

Worse Than FailureCodeSOD: Self Improvement in Stages

Jake has a co-worker named "Eddie". Eddie is the kind of person who is always hoping to change and get better. They're gonna start eating healthier… after the holidays. They're gonna start doing test driven development… on the next project. They'll stop just copying and pasting code… someday.

At least, that's what we can get from this blob of code.

//TODO make this recursive, copy paste works for now though
if (website_description != null) {
    if (website_description.length() > 25) {
        int i = website_description.indexOf(" ", 20);
        if (i != -1) {
            String firstsplit = website_description.substring(0, i);
            String secondsplit = website_description.substring(i);
            websiteWrapped = firstsplit + "<br/>" + secondsplit;
            if (secondsplit.length() > 25) {
                int split_two = secondsplit.indexOf(" ", 20);
                String part1 = secondsplit.substring(0, split_two);
                String part2 = secondsplit.substring(split_two);
                websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part2;
                if (part2.length() > 25) {
                    int split_three = part2.indexOf(" ", 20);
                    String part3 = part2.substring(0, split_three);
                    String part4 = part2.substring(split_three);
                    websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part3 + "<br/>" + part4;
                    if (part4.length() > 25) {
                        int split_four = part4.indexOf(" ", 20);
                        String part5 = part4.substring(0, split_four);
                        String part6 = part4.substring(split_four);
                        websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part3 + "<br/>" + part5 + "<br/>" + part6;
                        if (part6.length() > 25) {
                            int split_five = part6.indexOf(" ", 20);
                            String part7 = part6.substring(0, split_five);
                            String part8 = part6.substring(split_five);
                            websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part3 + "<br/>" + part5 + "<br/>" + part7 + "<br/>" + part8;
                            if (part8.length() > 25) {
                                int split_six = part8.indexOf(" ", 20);
                                String part9 = part8.substring(0, split_six);
                                String part10 = part8.substring(split_six);
                                websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part3 + "<br/>" + part5 + "<br/>" + part7 + "<br/>" + part9 + "<br/>" + part10;
                                if (part10.length() > 25) {
                                    int split_seven = part10.indexOf(" ", 20);
                                    String part11 = part10.substring(0, split_seven);
                                    String part12 = part10.substring(split_seven);
                                    websiteWrapped = firstsplit + "<br/>" + part1 + "<br/>" + part3 + "<br/>" + part5 + "<br/>" + part7 + "<br/>" + part9 + "<br/>" + part11 + "<br/>" + part12;
                                }
                            }
                        }
                    }
                }
            }
        } else {
            websiteWrapped = website_description;
        }
    } else {
        websiteWrapped = website_description;
    }
}

It is, of course, the comment which makes this sample: //TODO make this recursive, copy paste works for now though. But I would argue that recursion wouldn't actually help that much, not if we're gonna keep building every string via string concatenation.
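
For what it's worth, a plain loop does the job for any length. Here is a minimal sketch (keeping the original's 25/20 thresholds and <br/> separator; the method name is made up), using a StringBuilder to avoid rebuilding the whole string on every pass:

static String wrapDescription(String websiteDescription) {
    if (websiteDescription == null) {
        return null;
    }
    StringBuilder wrapped = new StringBuilder();
    String rest = websiteDescription;
    // Break at the first space at or after index 20, for as long as
    // the remainder is longer than 25 characters.
    while (rest.length() > 25) {
        int i = rest.indexOf(" ", 20);
        if (i == -1) {
            break; // no space left to break at
        }
        wrapped.append(rest, 0, i).append("<br/>");
        rest = rest.substring(i); // keeps the leading space, like the original
    }
    return wrapped.append(rest).toString();
}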

I'm sure the comment is accurate: it works for now. I'm afraid, though, that it's probably going to keep working like this for a much, much longer period of time.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Dave HallWe Have a New Website (Finally)

After 15 years we rebuilt our website. Learn more about the new site.

,

Planet DebianDirk Eddelbuettel: RcppSMC 0.2.3 on CRAN: Updated Snapshot

A new release 0.2.3 of the RcppSMC package arrived on CRAN earlier today. Once again it progressed as a very quick pretest-publish within minutes of submission—thanks CRAN!

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release somewhat belatedly merges a branch Leah had been working on, which we all realized “is ready”. We now have a good snapshot to base new work on, perhaps with the Google Summer of Code 2021.

Changes in RcppSMC version 0.2.3 (2021-02-10)

  • Addition of a Github Action CI runner (Dirk)

  • Switching to inheritance for the moveset rather than pointers to functions (Leah in #45).

Courtesy of my CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityWhat’s most interesting about the Florida water system hack? That we heard about it at all.

Stories about computer security tend to go viral when they bridge the vast divide between geeks and luddites, and this week’s news about a hacker who tried to poison a Florida town’s water supply was understandably front-page material. But for security nerds who’ve been warning about this sort of thing for ages, the most surprising aspect of the incident seems to be that we learned about it at all.

Spend a few minutes searching Twitter, Reddit or any number of other social media sites and you’ll find countless examples of researchers posting proof of being able to access so-called “human-machine interfaces” — basically web pages designed to interact remotely with various complex systems, such as those that monitor and/or control things like power, water, sewage and manufacturing plants.

And yet, there have been precious few known incidents of malicious hackers abusing this access to disrupt these complex systems. That is, until this past Monday, when Florida county sheriff Bob Gualtieri held a remarkably clear-headed and fact-filled news conference about an attempt to poison the water supply of Oldsmar, a town of around 15,000 not far from Tampa.

Gualtieri told the media that someone (they don’t know who yet) remotely accessed a computer for the city’s water treatment system (using Teamviewer) and briefly increased the amount of sodium hydroxide (a.k.a. lye, which is used to control acidity in the water) to 100 times the normal level.

“The city’s water supply was not affected,” The Tampa Bay Times reported. “A supervisor working remotely saw the concentration being changed on his computer screen and immediately reverted it, Gualtieri said. City officials on Monday emphasized that several other safeguards are in place to prevent contaminated water from entering the water supply and said they’ve disabled the remote-access system used in the attack.”

In short, a likely inexperienced intruder somehow learned the credentials needed to remotely access Oldsmar’s water system, did little to hide his activity, and then tried to change settings by such a wide margin that the alterations would be hard to overlook.

“The system wasn’t capable of doing what the attacker wanted,” said Joe Weiss, managing partner at Applied Control Solutions, a consultancy for the control systems industry. “The system isn’t capable of going up by a factor of 100 because there are certain physics problems involved there. Also, the changes he tried to make wouldn’t happen instantaneously. The operators would have had plenty of time to do something about it.”

Weiss was just one of a half-dozen experts steeped in the cybersecurity aspects of industrial control systems that KrebsOnSecurity spoke with this week. While all of those interviewed echoed Weiss’s conclusion, most also said they were concerned about the prospects of a more advanced adversary.

Here are some of the sobering takeaways from those interviews:

  • There are approximately 54,000 distinct drinking water systems in the United States.
  • The vast majority of those systems serve fewer than 50,000 residents, with many serving just a few hundred or thousand.
  • Virtually all of them rely on some type of remote access to monitor and/or administer these facilities.
  • Many of these facilities are unattended, underfunded, and do not have someone watching the IT operations 24/7.
  • Many facilities have not separated operational technology (the bits that control the switches and levers) from safety systems that might detect and alert on intrusions or potentially dangerous changes.

So, given how easy it is to search the web for and find ways to remotely interact with these HMI systems, why aren’t there more incidents like the one in Oldsmar making the news? One reason may be that these facilities don’t have to disclose such events when they do happen.

NO NEWS IS GOOD NEWS?

The only federal law that applies to the cybersecurity of water treatment facilities in the United States is America’s Water Infrastructure Act of 2018, which requires water systems serving more than 3,300 people “to develop or update risk assessments and emergency response plans.”

There is nothing in the law that requires such facilities to report cybersecurity incidents, such as the one that happened in Oldsmar this past weekend.

“It’s a difficult thing to get organizations to report cybersecurity incidents,” said Michael Arceneaux, managing director of the Water ISAC, an industry group that tries to facilitate information sharing and the adoption of best practices among utilities in the water sector. The Water ISAC’s 450 members serve roughly 200 million Americans, but its membership comprises less than one percent of the overall water utility industry.

“Some utilities are afraid that if their vulnerabilities are shared the hackers will have some inside knowledge on how to hack them,” Arceneaux said. “Utilities are rather hesitant to put that information in a public domain or have it in a database that could become public.”

Weiss said the federal agencies are equally reluctant to discuss such incidents.

“The only reason we knew about this incident in Florida was that the sheriff decided to hold a news conference,” Weiss said. “The FBI, Department of Homeland Security, none of them want to talk about this stuff publicly. Information sharing is broken.”

By way of example, Weiss said that not long ago he was contacted by a federal public defender representing a client who’d been convicted of hacking into a drinking water system. The attorney declined to share his client’s name, or divulge many details about the case. But he wanted to know if Weiss would be willing to serve as an expert witness who could help make the actions of a client sound less scary to a judge at sentencing time.

“He was defending this person who’d hacked into a drinking water system and had gotten all the way to the pumps and control systems,” Weiss recalled. “He said his client had only been in the system for about an hour, and he wanted to know how much damage his client really could have done in that short a time. He was trying to get a more lenient sentence for the guy.”

Weiss said he’s tried to get more information about the defendant, but suspects the details of the case have been sealed.

Andrew Hildick-Smith is a consultant who served nearly 20 years managing remote access systems for the Massachusetts Water Resources Authority. Hildick-Smith said his experience working with numerous smaller water utilities has driven home the reality that most are severely under-staffed and underfunded.

“A decent portion of small water utilities depend on their community or town’s IT person to help them out with stuff,” he said. “When you’re running a water utility, there are so many things to take care of to keep it all running that there isn’t really enough time to improve what you have. That can spill over into the remote access side, and they may not have an IT person who can look at whether there’s a better way to do things, such as securing remote access and setting up things like two-factor authentication.”

Hildick-Smith said most of the cybersecurity incidents that he’s aware of involving water facilities fall into two categories. The most common are compromises where the systems affected were collateral damage from more opportunistic intrusions.

“There’ve been a bunch of times where water systems have had their control system breached, but it’s most often just sort of by chance, meaning whoever was doing it used the computer for setting up financial transactions, or it was a computer of convenience,” Hildick-Smith said. “But the list of attacks that involved actually manipulating things is pretty short.”

The other, increasingly common category, he said, is ransomware attacks on the business side of water utilities.

“Separate from the sort of folks who wander into a SCADA system by mistake on the water side are a bunch of ransomware attacks against the business side of the water systems,” he said. “But even then you generally don’t get to hear the details of the attack.”

Hildick-Smith recalled a recent incident at a fairly large water utility that got hit with the Egregor ransomware strain.

“Things worked out internally for them, and they didn’t need to talk to the outside world or the press about it,” he said. “They made contact with the Water ISAC and the FBI, but it certainly didn’t become a press event, and any lessons they learned haven’t been able to be shared with folks.”

AN INTERNATIONAL CHALLENGE

The situation is no different in Europe and elsewhere, says Marcin Dudek, a control systems security researcher at CERT Polska, the computer emergency response team which handles cyber incident reporting in Poland.

Marcin said if water facilities have not been a major target of profit-minded criminal hackers, it is probably because most of these organizations have very little worth stealing and usually no resources for paying extortionists.

“The access part is quite easy,” he said. “There’s no business case for hacking these types of systems. Quite rarely do they have a proper VPN [virtual private network] for secure remote connection. I think it’s because there is not enough awareness of the problems of cybersecurity, but also because they are not financed enough. This goes not only for the US. It’s very similar here in Poland and different countries as well.”

Many security professionals have sounded off on social media saying public utilities have no business relying on remote access tools like Teamviewer, which by default allows complete control over the host system and is guarded by a simple password.

But Marcin says Teamviewer would actually be an improvement over the types of remote access systems he commonly finds in his own research, which involves HMI systems designed to be used via a publicly-facing website.

“I’ve seen a lot of cases where the HMI was directly available from a web page, where you just log in and are then able to change some parameters,” Marcin said. “This is particularly bad because web pages can have vulnerabilities, and those vulnerabilities can give the attacker full access to the panel.”

According to Marcin, utilities typically have multiple safety systems, and in an ideal environment those are separated from control systems so that a compromise of one will not cascade into the other.

“In reality, it’s not that easy to introduce toxins into the water treatment so that people will get sick, it’s not as easy as some people say,” he said. Still, he worries about more advanced attackers, such as those responsible for multiple incidents last year in which attackers gained access to some of Israel’s water treatment systems and tried to alter water chlorine levels before being detected and stopped.

“Remote access is something we cannot avoid today,” Marcin said. “Most installations are unmanned. If it is a very small water or sewage treatment plant, there will be no people inside and they just login whenever they need to change something.”

SELF EVALUATION TIME

Many smaller water treatment systems may soon be reevaluating their approach to securing remote access. Or at least that’s the hope of America’s Water Infrastructure Act of 2018, which gives utilities serving fewer than 50,000 residents until the end of June 2021 to complete a cybersecurity risk and resiliency assessment.

“The vast majority of these utilities have yet to really even think about where they stand in terms of cybersecurity,” said Hildick-Smith.

The only problem with this process is there aren’t any consequences for utilities that fail to complete their assessments by that deadline.

Hildick-Smith said while water systems are required to periodically report data about water quality to the U.S. Environmental Protection Agency (EPA), the agency has no real authority to enforce the cybersecurity assessments.

“The EPA has made some kind of vague threats, but they have no enforcement ability here,” he said. “Most water systems are going to wait until close to the deadline, and then hire someone to do it for them. Others will probably just self-certify, raise their hands and say, ‘Yeah, we’re good.'”

Update, Feb. 11, 4:15 p.m. ET: Hildick-Smith has asked to qualify his last statement about the EPA’s authority. He says while the EPA is not collecting copies of the risk and resilience assessments and emergency response plans, or enforcing quality controls on the documents, they can fine utilities for not complying with the process and certifying that they have completed the requirements. The EPA explains more here (PDF).

Planet DebianBastian Venthur: Installing Debian on a Thinkpad T14s

Recently, I got a new work laptop. I opted for Lenovo's Thinkpad T14s, which comes with AMD's Ryzen 7 PRO 4750U processor. Once everything is installed, it works like a charm: all hardware is supported and works out of the box with Debian. However, the tricky part is actually installing Debian onto that machine: the laptop lacks a standard Ethernet port and comes with Intel's Wi-Fi 6 AX200 module. So if you don't happen to have a docking station or an Ethernet adapter available during the install, you'll have to install everything over WiFi. The WiFi module, however, requires non-free firmware, and this is where the fun starts.

First, I downloaded an official netinst image and copied it onto a USB drive. Halfway through the installation, it complained that firmware for the WiFi module was missing, and I was stuck as I couldn't continue the installation without network access.

Ok, then -- missing non-free firmware it is. The wiki suggests using an unofficial image instead, as it supposedly contains "all non-free firmware packages directly".

So I tried an unofficial netinst image with non-free firmware. That also did not work, with the same error as above: the required firmware was missing. I checked the image later and actually couldn't find the non-free firmware either. Hum.

In the end, I had to prepare a second USB drive with the firmware downloaded from here. I unpacked the firmware into /firmware on the second USB. The installer checks at some point during the installation for firmware installed on other removable media and (hopefully) finds it. In my case it did, and I finally could install everything.
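
For reference, preparing that second drive is just a matter of unpacking the downloaded archive onto it (a sketch; the mount point and archive name are assumptions):

$ mkdir -p /mnt/usb/firmware
$ tar -C /mnt/usb/firmware -xzf firmware.tar.gz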

I'm quite happy with the laptop, but I have to admit how incredibly difficult it still is to install Debian on recent hardware. As a Debian Developer, I do understand Debian's position on non-free firmware; on the other hand, a less technical person would probably have given up at some point and just installed some other operating system.

Worse Than FailureCodeSOD: Stocking Up

Sometimes, you find some code that almost works, that almost makes sense. In a way, that's worse than just plain bad code. René was recently going through some legacy JavaScript code for their warehouse management system.

Like any such warehousing system, there's a problem you have to solve: sometimes, the number of units you need to pick to complete the order is larger than the stock you have available. At that point, you need to make a decision: do you hold the order until stock comes in, do you partially fill it and then follow up with a second shipment, or do you perhaps just cancel the order?

René found a line like this:

pick.qty_to_pick -= pick.qty_to_pick - stock.available

So, if I want to pick 100 units, but only have 25 in stock, I'll decrement the qty_to_pick by 75. Which vaguely makes sense, but also is a weird and awkward way of saying "make the qty_to_pick equal to the stock.available".

I assume there are guards around this line which make sure it's executed only if the qty_to_pick is greater than the stock.available. At least, I hope so, because if not, some customers are going to be surprised by the quantity when their order arrives.
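
That guard would presumably look something like this (a sketch; the guard itself is the assumption, since it isn't in the code René shared):

if (pick.qty_to_pick > stock.available) {
    // only shrink the pick when stock can't cover it
    pick.qty_to_pick -= pick.qty_to_pick - stock.available;
}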

In the end, this code isn't strictly wrong; it's just the weirdest, most awkward way of copying one value to another variable.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.10.2.1.0: New Upstream Release

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 823 other packages on CRAN.
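
As a small sketch of what that integration looks like in practice (not an excerpt from the package documentation; the function name is made up), Rcpp attributes let you compile an Armadillo function inline from R:

library(Rcpp)
cppFunction(depends = "RcppArmadillo", code = '
arma::mat matmul(const arma::mat& A, const arma::mat& B) {
    return A * B;  // an Armadillo expression, returned to R as a matrix
}')
matmul(diag(2), matrix(c(1, 2, 3, 4), 2, 2))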

This release brings us Armadillo 10.2 with a few updates as detailed below in the list of changes. Upstream release 10.2 was made a couple of days ago, but we need to balance new upstream updates with a responsible release cadence at CRAN. As we needed a maintenance release in early January, I opted to wait four weeks with this one, which hence gets us 10.2.0 and 10.2.1 at once. As tweeted (with a follow-up), it had yet another very smooth passage at CRAN, so we again appreciate the excellent work of the CRAN maintainers and say Thank You!

Anybody who desires more frequent updates should look at the RcppCore drat repo, which provides more frequent interim updates. There, for example, we also had 0.10.2.0.0 available for your testing pleasure.

Also of note is that there is now a Python variant, pyarma, for those who might want to enjoy Armadillo with Python.

The full set of changes follows.

Changes in RcppArmadillo version 0.10.2.1.0 (2021-02-09)

  • Upgraded to Armadillo release 10.2.1 (Cicada Swarm)

    • faster handling of subcubes

    • added tgamma()

    • added .brief_print() for abridged printing of matrices & cubes

    • expanded forms of trimatu() and trimatl() with diagonal specification to handle sparse matrices

    • expanded eigs_sym() and eigs_gen() with optional shift-invert mode

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Krebs on SecurityMicrosoft Patch Tuesday, February 2021 Edition

Microsoft today rolled out updates to plug at least 56 security holes in its Windows operating systems and other software. One of the bugs is already being actively exploited, and six of them were publicized prior to today, potentially giving attackers a head start in figuring out how to exploit the flaws.

Nine of the 56 vulnerabilities earned Microsoft’s most urgent “critical” rating, meaning malware or miscreants could use them to seize remote control over unpatched systems with little or no help from users.

The flaw being exploited in the wild already — CVE-2021-1732 — affects Windows 10, Server 2016 and later editions. It received a slightly less dire “important” rating, mainly because it is a vulnerability that lets an attacker increase their authority and control on a device, which means the attacker needs to already have access to the target system.

Two of the other bugs that were disclosed prior to this week are critical and reside in Microsoft’s .NET Framework, a component required by many third-party applications (most Windows users will have some version of .NET installed).

Windows 10 users should note that while the operating system installs all monthly patch roll-ups in one go, that rollup does not typically include .NET updates, which are installed on their own. So when you’ve backed up your system and installed this month’s patches, you may want to check Windows Update again to see if there are any .NET updates pending.

A key concern for enterprises is another critical bug in the DNS server on Windows Server 2008 through 2019 versions that could be used to remotely install software of the attacker’s choice. CVE-2021-24078 earned a CVSS Score of 9.8, which is about as dangerous as they come.

Recorded Future says this vulnerability can be exploited remotely by getting a vulnerable DNS server to query for a domain it has not seen before (e.g. by sending a phishing email with a link to a new domain or even with images embedded that call out to a new domain). Kevin Breen of Immersive Labs notes that CVE-2021-24078 could let an attacker steal loads of data by altering the destination for an organization’s web traffic — such as pointing internal appliances or Outlook email access at a malicious server.

Windows Server users also should be aware that Microsoft this month is enforcing the second round of security improvements as part of a two-phase update to address CVE-2020-1472, a severe vulnerability that first saw active exploitation back in September 2020.

The vulnerability, dubbed “Zerologon,” is a bug in the core “Netlogon” component of Windows Server devices. The flaw lets an unauthenticated attacker gain administrative access to a Windows domain controller and run any application at will. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

Microsoft’s initial patch for CVE-2020-1472 fixed the flaw on Windows Server systems, but did nothing to stop unsupported or third-party devices from talking to domain controllers using the insecure Netlogon communications method. Microsoft said it chose this two-step approach “to ensure vendors of non-compliant implementations can provide customers with updates.” With this month’s patches, Microsoft will begin rejecting insecure Netlogon attempts from non-Windows devices.

A couple of other, non-Windows security updates are worth mentioning. Adobe today released updates to fix at least 50 security holes in a range of products, including Photoshop and Reader. The Acrobat/Reader update tackles a critical zero-day flaw that Adobe says is actively being exploited in the wild against Windows users, so if you have Adobe Acrobat or Reader installed, please make sure these programs are kept up to date.

There is also a zero-day flaw in Google’s Chrome Web browser (CVE-2021-21148) that is seeing active attacks. Chrome downloads security updates automatically, but users still need to restart the browser for the updates to fully take effect. If you’re a Chrome user and notice a red “update” prompt to the right of the address bar, it’s time to save your work and restart the browser.

Standard reminder: While staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Keep in mind that Windows 10 by default will automatically download and install updates on its own schedule. If you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches, see this guide.

And as always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Chaotic IdealismSocial rules for fat people, as observed by a fat person

  1. Never be seen eating, unless you are eating something exceedingly low-calorie and tasteless, such as a plain rice cake or a dry salad.
  2. Never admit to enjoying food.
  3. Never talk about your favorite food, your favorite restaurant, your favorite flavor, etc. You are not allowed to have these.
  4. Always be on a diet. Always.
  5. You cannot eat too little; fat people can never suffer from malnutrition or starvation.
  6. Never admit to over-eating.
  7. You are not allowed to exercise in public. People don’t want to see you moving, especially if you are wearing tight clothing.
  8. Do not go to a gym. You do not belong there.
  9. Do not participate in team sports or any form of athletic competition. You do not belong there.
  10. Do not go to a swimming pool, or the beach. You do not belong there.
  11. You are not allowed to complain when your doctor treats you as a second-class citizen. You deserve it.
  12. You are not allowed to complain when you physically cannot fit into a small seat. This is your fault.
  13. Do not acquire a physical disability that forces you to use any form of mobility assistance, especially a motorized scooter. This will be judged to be the result of your fat, and your refusal to lose your fat.
  14. Do not point out that losing weight has a lower success rate than quitting heroin. That doesn’t matter. Besides, it probably isn’t true because all those people on the infomercials lost weight, so it must actually be easy.
  15. Do not ever claim to be disciplined or responsible. You are obviously neither.
  16. You are not allowed to enjoy any part of your looks or your body, especially not anything related to your fat. For example, you are not allowed to appreciate your curves, your ability to move heavy objects, or your ability to stay put even if someone tries to move you.
  17. You are required to endanger your health with any and all weight-loss supplements, medications, or fad diets that come your way. Otherwise, you are not trying.
  18. You are required to appreciate others’ wise recommendations, such as, “It’s easy; just eat less,” or, “You should go jogging once in a while,” and act like you never thought of them before. You haven’t, right? After all, if you had, you’d be thin.
  19. You are not allowed to have an eating disorder. You’re obviously too fat for any kind of anorexia or bulimia; and binge-eating disorder is just another way to say “undisciplined”.
  20. You are not allowed to eat tasty food, even in private, without feeling guilty about it.
  21. Anything you could possibly eat can be interpreted as the cause of your fat. If you eat rice, you’re fat because you are eating carbs; if you eat chicken, you’re fat because you’re eating meat; if you eat salads, you are obviously getting fat from salad dressing. You can never eat the right thing.
  22. When accused of eating too much fast food, never claim that you think fast food is bland and you practically never eat it. You obviously eat way too much fast food, because you are fat.
  23. If you drink diet Coke, that is why you are fat. If you drink Coke with sugar in it, that is why you are fat.
  24. When your large size causes a problem of any sort, it is your fault, not the fault of the person who designed your environment not to be accessible to fat people.
  25. You are not allowed to wear revealing or tight clothing.
  26. If you wear loose clothing, you must admit that it is because you are ashamed of being fat, rather than because you find loose clothing comfortable.
  27. “You’ve lost weight” is a compliment, even if it comes after a two-week bout with the stomach flu and you’re feeling like death warmed over.
  28. If you get cancer, people will a.) assume it is your fault because of your fat, and b.) reassure you that at least chemo will make you lose weight (even though quite the opposite may be true). You are required to act as though this is encouraging.
  29. If you get sick, it is because you are fat. It cannot be due to germs, your genetics, your environment, or your simple bad luck.
  30. If you are injured, you should lose weight; your fat is preventing the injury from healing.
  31. If you take medicine to stay healthy, it is because you are fat.
  32. If you have a mental illness, it would resolve if you lost weight.
  33. If you have a physical disability, you would be cured if you lost weight.
  34. If you are mocked for a reason completely unrelated to being fat, you will also be mocked for being fat.
  35. Thin people will get the job you wanted. This is just, because you are obviously less responsible.
  36. Thin people will also get their diabetes, heart disease, high cholesterol, sleep apnea, etc., diagnosed way too late, because they are thin and could not possibly have diabetes, heart disease, high cholesterol, sleep apnea, etc. Despite this, few thin people will join you when you insist that the medical community stop assuming that diseases like this are inevitable in fat people and never found in thin people.
  37. If you are athletic and can lift more, work longer, or hike circles around your thin friends, you are not allowed to admit this, because you are fat and you obviously cannot.
  38. You are not allowed to find a loving relationship with someone who honestly loves you and your body. They are obviously a chubby-chaser, or a desperate case settling for less.
  39. Anyone who is thin is automatically superior to you.
  40. Anyone who is thin is automatically healthier than you.
  41. You are a second-class citizen, and you deserve it. Stay in your place.

There is a reason I recommend breaking social rules.

Planet DebianGunnar Wolf: And now, Bullseye images are also built for the RPi

Public service announcement

In case you want to run our latest release (still cooking, of course) in your Raspberries — I have enabled builds for both Debian 10 (Stable, Buster) and Debian 11 (Testing, Bullseye). Go grab it!

Oh… Yes, we were failing the ARM64 builds (RPi3 and RPi4) ☹ due to python3-minimal being unwilling to get installed right; it seems to have been fixed now, woohoo! For the curious, the [build log for RPi3, Bullseye](https://raspi.debian.net/daily/raspi_3_bullseye.log) shows the step where it broke:

Setting up python3-minimal (3.9.1-1) ...

2021-02-09 08:56:38 DEBUG STDERR: E: Can not write log (Is /dev/pts mounted?) - posix_openpt (19: No such device)
Segmentation fault
dpkg: error processing package python3-minimal (--configure):
 installed python3-minimal package post-installation script subprocess returned error exit status 139
Errors were encountered while processing:
 python3-minimal
E: Sub-process /usr/bin/dpkg returned an error code (1)

Anyway, as you can see, the eight images that did build work fine and are tested, at least for basic support!

Planet DebianMolly de Blanc: Proprietary (definition)

I recently had the occasion to try and find a definition of “proprietary” in terms of software that is not on Wikipedia. Most of the discussion on the issue I found was focused on what free and open source software is, and that anything that isn’t FOSS is proprietary. I don’t think the debate is as simple as this, especially if you want to get into conversations about nuance around things like Open Core.

The problem with defining proprietary software by what it isn’t, or at least that it isn’t FOSS, means that we cannot concisely communicate what makes something proprietary. Instead, we leave it up to the people we’re trying to communicate with to dig through a history of rhetoric, copyright law, and licensing in order to understand what it actually means for something to be FOSS, and what it means for something to be anything else. It is also just less satisfying, in my opinion, to define something by what it lacks rather than by what it is.

I’ll start by proposing the following definition:

Proprietary software is software that comes with restrictions on what users can do with the software and the source code that constitutes said software.

I think the most controversial part of this sentence is the wording “software that comes with restrictions.” In earlier attempts at this I wrote “software that restricts.” This sort of active wording, which I used for years in my capacity at work, is misleading. In the case of proprietary software, it is the licensing and laws around it that restrict what you can do. For the software itself to restrict you, it would have to be the way the software is implemented or used that restricts you.

To be clear, this is my first proposal. I look forward to discussing this further!

Worse Than FailureThe Economic Problem

One of the main tasks any company needs to do is allocate resources. Regardless of the product or the industry they're in, they have to decide how to employ the assets they have to make money. No one has really "solved" this problem, and that's why there are swarms of resource planning systems, project management tools, and cultish trend-following.

After a C-suite shuffle at James B's employer, one of the newly installed C-level execs had some big ideas. They were strongly influenced by one of the two life-changing books, and not the one involving orcs. A company needs to allocate resources. The economy, as a whole, needs to allocate resources. If, on the economic level, we use markets to allocate resources because they're more efficient than planning, then we should use markets internally as well.

For the most part, and for most groups in the company, this was just a book-keeping change. Everyone kept doing the same thing, but now instead of each department getting email accounts for every employee, each department got a pile of money, and used that to pay for email accounts for each employee. Instead of just getting a computer as part of the hiring process, departments "rented" a computer from IT. It created a surprising amount of paperwork for the supposedly "efficient" market, but at least at first, it wasn't a problem.

Before long, though, the C-suite started to notice that a lot of money flowed in to the IT department, but very little flowed back out. The obvious solution, then, was to cut the IT budget entirely. It would fund itself using the internal market, selling its services to other departments in the company.

The head of IT reacted in a vaguely reasonable way: they jacked the internal billing rates as high as they could. Since they technically owned the PCs, they installed them with physical locks on the cases. If you wanted a hard drive replacement, you needed to go through IT. The problem is that IT had exclusive contracts with vendors, and those vendor SLAs were pretty generous- to the vendors. One HDD failure could take a PC down for weeks while you waited for a replacement.

James was a victim of one such incident. While using a loaner PC to do his work, he and his boss Krista got to talking about how frustrating this was. They were, after all, a software development team, and "having access to a computer, with all our software installed" was a priority.

"It makes me want to break the lock and replace the drive myself," James said. "It'd probably be cheaper too."

Krista laughed. "It'd be a lot cheaper. Heck, I could just buy you a new computer for what they charge to replace a hard drive."

Krista paused, then started mentally running the numbers. "Actually… I could do that." She immediately called a local vendor, a small company, and ordered a laptop for James. It arrived the next day, and once James set it up with his network credentials, he had full access to all the other IT services, like the shared drives.

Krista's team was one of the smaller teams in the company, but they needed a lot of IT services. Billed at the internal billing rates, that was a significant amount of money, and a big chunk of IT's budget came straight from Krista. But if she shopped around on her own, she could get everything- hardware, software licenses, basically everything but company email addresses and login credentials, for a fraction of the price.

And that's exactly what Krista did. She went through her department and found every piece of hardware they "leased" from IT, from PCs to network switches to even the cables, and replaced them.

The IT department wasn't happy about this. Most of their monthly spend was overhead that didn't change just because one tiny department stopped using their services. With Krista's team cutting off their funding, this meant IT had a budget crunch. Worse, other teams were starting to grumble.

This led to a call where the head of IT laid out an ultimatum to Krista: "If you don't purchase your infrastructure from us, we will cut off your team's access to the network entirely. You can't just be plugging in any device you like to the network, it's bad for security."

"That's fine," Krista replied. "We can work on our own private LAN, and when we need to give software releases to the distribution team, we'll just walk down the hall and drop off a thumb drive or a CD, instead of using the network drive."

"You can't do that!"

"Why not? You're trying to bill me six figures a year to deliver a service I can replace with a short walk down the hall."

While the war between Krista and IT raged, elsewhere in the company, similar battles played out. Krista may have fired the first shot, but the internal market became a war zone.

The division which made Product Line A had no interest in selling Product Line B, despite the products being complementary; their budget only made money when they sold A. Other departments tried to internalize other corporate functions- one department tried to spin up its own HR department, another stopped doing its primary job and just started selling accounting services to other departments. One of their hardware departments discovered that they could shift to reselling competitors' products and make more money that way, so they did.

Within a year, the internal market was canceled. The C-level executive who had pushed for it had already moved on to another C-suite in another giant company, and was still preaching the gospel of the internal market. Without that influence, James's company instituted a new "Company Family" policy, which promised "no departmental boundaries". People still used internal budgeting to help them allocate resources, but gone were the big piles of money that could just be spent however. No department was trying to make money off other departments. The grand experiment in internal capitalism was over.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Krebs on SecurityArrest, Raids Tied to ‘U-Admin’ Phishing Kit

Cyber cops in Ukraine carried out an arrest and several raids last week in connection with the author of U-Admin, a software package used to administer what’s being called “one of the world’s largest phishing services.” The operation was carried out in coordination with the FBI and authorities in Australia, which was particularly hard hit by phishing scams perpetrated by U-Admin customers.

The U-Admin phishing panel interface. Image: fr3d.hk/blog

The Ukrainian attorney general’s office said it worked with the nation’s police force to identify a 39-year-old man from the Ternopil region who developed a phishing package and special administrative panel for the product.

“According to the analysis of foreign law enforcement agencies, more than 50% of all phishing attacks in 2019 in Australia were carried out thanks to the development of the Ternopil hacker,” the attorney general’s office said, noting that investigators had identified hundreds of U-Admin customers.

Brad Marden, superintendent of cybercrime operations for the Australian Federal Police (AFP), said their investigation into who was behind U-Admin began in late 2018, after Australian citizens began getting deluged with phishing attacks via mobile text messages that leveraged the software.

“It was rampant,” Marden said, noting that the AFP identified the suspect and referred the case to the Ukrainians for prosecution. “At one stage in 2019 we had a couple of hundred SMS phishing campaigns tied to just this particular actor. Pretty much every Australian received a half dozen of these phishing attempts.”

U-Admin, a.k.a. “Universal Admin,” is a crimeware platform that first surfaced in 2016. U-Admin was sold by an individual who used the hacker handle “Kaktys” on multiple cybercrime forums.

According to this comprehensive breakdown of the phishing toolkit, the U-Admin control panel isn’t sold on its own, but rather it is included when customers contact the developer and purchase a set of phishing pages designed to mimic a specific brand — such as a bank website or social media platform.

Cybersecurity threat intelligence firm Intel 471 describes U-Admin as an information stealing framework that uses several plug-ins in one location to help users pilfer victim credentials more efficiently. Those plug-ins include a phishing page generator, a victim tracker, and even a component to help manage money mules (for automatic transfers from victim accounts to people who were hired in advance to receive and launder stolen funds).

Perhaps the biggest selling point for U-Admin is a module that helps phishers intercept multi-factor authentication codes. This core functionality is what’s known as a “web inject,” because it allows phishers to inject content into the phishing page that prompts the victim to enter additional information. The video below, produced by the U-Admin developer, shows a few examples.

A demonstration video showing the real-time web injection capabilities of the U-Admin phishing kit. Credit: blog.bushidotoken.net

There are multiple recent reports that U-Admin has been used in conjunction with malware — particularly Qakbot (a.k.a. Qbot) — to harvest one-time codes needed for multi-factor authentication.

“Paired with [U-Admin’s 2FA harvesting functionality], a threat actor can remotely connect to the Qakbot-infected device, enter the stolen credentials plus the 2FA token, and begin initiating transactions,” explains this Nov. 2020 blog post on an ongoing Qakbot campaign that was first documented three months earlier by Check Point Research.

In the days following the Ukrainian law enforcement action, several U-Admin customers on the forums where Kaktys was most active began discussing whether the product was still safe to use following the administrator’s arrest.

The AFP’s Marden hinted that the suspicions raised by U-Admin’s customer base might be warranted.

“I wouldn’t be unhappy with the crooks continuing to use that piece of kit, without saying anything more on that front,” Marden said.

While Kaktys’s customers may be primarily concerned about the risks of using a product supported by a guy who just got busted, perhaps they should be more worried about other crooks [or perhaps the victim banks themselves] moving in on their turf: It appears the U-Admin package being sold in the underground has long included a weakness that could allow anyone to view or alter data that was phished with the help of this kit.

The security flaw was briefly alluded to in a 2018 writeup on U-Admin by the SANS Internet Storm Center.

“Looking at the professionality of the code, the layout and the functionality I’m giving this control panel 3 out of 5 stars,” joked SANS guest author Remco Verhoef. “We wanted to give them 4 stars, but we gave one star less because of an SQL injection vulnerability” [link added].

That vulnerability was documented in more detail at exploit archive Packet Storm Security in March 2020 and indexed by Check Point Software in May 2020, suggesting it still persists in current versions of the product.

The best advice to sidestep phishing scams is to avoid clicking on links that arrive unbidden in emails, text messages and other mediums. This advice is the same whether you’re using a mobile or desktop device. In fact, this phishing framework specialized in lures specifically designed to be loaded on mobile devices.

Most phishing scams invoke a temporal element that warns of dire consequences should you fail to respond or act quickly. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.

Further reading:

uAdmin Show & Tell
Gathering Intelligence on the Qakbot banking Trojan

Planet DebianKees Cook: security things in Linux v5.8

Previously: v5.7

Linux v5.8 was released in August, 2020. Here’s my summary of various security things that caught my attention:

arm64 Branch Target Identification
Dave Martin added support for ARMv8.5’s Branch Target Identification (BTI) instructions, which are enabled in userspace at execve() time, and all the time in the kernel (which required manually marking up a lot of non-C code, like assembly and JIT code).

With this in place, Jump-Oriented Programming (JOP, where code gadgets are chained together with jumps and calls) is no longer available to the attacker. An attacker’s code must make direct function calls. This basically reduces the “usable” code available to an attacker from every word in the kernel text to only function entries (or jump targets). This is a “low granularity” forward-edge Control Flow Integrity (CFI) feature, which is important (since it greatly reduces the potential targets that can be used in an attack) and cheap (implemented in hardware). It’s a good first step to strong CFI, but (as we’ve seen with things like CFG) it isn’t usually strong enough to stop a motivated attacker. “High granularity” CFI (which uses a more specific branch-target characteristic, like function prototypes, to track expected call sites) is not yet a hardware supported feature, but the software version will be coming in the future by way of Clang’s CFI implementation.

arm64 Shadow Call Stack
Sami Tolvanen landed the kernel implementation of Clang’s Shadow Call Stack (SCS), which protects the kernel against Return-Oriented Programming (ROP) attacks (where code gadgets are chained together with returns). This backward-edge CFI protection is implemented by keeping a second dedicated stack pointer register (x18) and keeping a copy of the return addresses stored in a separate “shadow stack”. In this way, manipulating the regular stack’s return addresses will have no effect. (And since a copy of the return address continues to live in the regular stack, no changes are needed for back trace dumps, etc.)

It’s worth noting that unlike BTI (which is hardware based), this is a software defense that relies on the location of the Shadow Stack (i.e. the value of x18) staying secret, since the memory could be written to directly. Intel’s hardware ROP defense (CET) uses a hardware shadow stack that isn’t directly writable. ARM’s hardware defense against ROP is PAC (which is actually designed as an arbitrary CFI defense — it can be used for forward-edge too), but that depends on having ARMv8.3 hardware. The expectation is that SCS will be used until PAC is available.

Kernel Concurrency Sanitizer infrastructure added
Marco Elver landed support for the Kernel Concurrency Sanitizer, which is a new debugging infrastructure to find data races in the kernel, via CONFIG_KCSAN. This immediately found real bugs, with some fixes having already landed too. For more details, see the KCSAN documentation.

new capabilities
Alexey Budankov added CAP_PERFMON, which is designed to allow access to perf(). The idea is that this capability gives a process access to only read aspects of the running kernel and system. No longer will access be needed through the much more powerful abilities of CAP_SYS_ADMIN, which has many ways to change kernel internals. This allows for a split between controls over the confidentiality (read access via CAP_PERFMON) of the kernel vs control over integrity (write access via CAP_SYS_ADMIN).

Alexei Starovoitov added CAP_BPF, which is designed to separate BPF access from the all-powerful CAP_SYS_ADMIN. It is designed to be used in combination with CAP_PERFMON for tracing-like activities and CAP_NET_ADMIN for networking-related activities. For things that could change kernel integrity (i.e. write access), CAP_SYS_ADMIN is still required.

network random number generator improvements
Willy Tarreau made the network code’s random number generator less predictable. This will further frustrate any attacker’s attempts to recover the state of the RNG externally, which might lead to the ability to hijack network sessions (by correctly guessing packet states).

fix various kernel address exposures to non-CAP_SYSLOG
I fixed several situations where kernel addresses were still being exposed to unprivileged (i.e. non-CAP_SYSLOG) users, though usually only through odd corner cases. After refactoring how capabilities were being checked for files in /sys and /proc, the kernel modules sections, kprobes, and BPF exposures got fixed. (Though in doing so, I briefly made things much worse before getting it properly fixed. Yikes!)

RISCV W^X detection
Following up on his recent work to enable strict kernel memory protections on RISCV, Zong Li has now added support for CONFIG_DEBUG_WX as seen for other architectures. Any writable and executable memory regions in the kernel (which are lovely targets for attackers) will be loudly noted at boot so they can get corrected.

execve() refactoring continues
Eric W. Biederman continued working on execve() refactoring, including getting rid of the frequently problematic recursion used to locate binary handlers. I used the opportunity to dust off some old binfmt_script regression tests and get them into the kernel selftests.

multiple /proc instances
Alexey Gladkov modernized /proc internals and provided a way to have multiple /proc instances mounted in the same PID namespace. This allows for having multiple views of /proc, with different features enabled. (Including the newly added hidepid=4 and subset=pid mount options.)
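
For instance, a second, more restricted view of /proc could be mounted roughly like this (a sketch: it assumes an existing empty directory at /tmp/proc and sufficient privilege, typically CAP_SYS_ADMIN):

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Mount an extra procfs instance that hides other users' PIDs
         * entirely (hidepid=4) and exposes only the per-PID directories
         * (subset=pid). */
        if (mount("proc", "/tmp/proc", "proc", 0,
                  "hidepid=4,subset=pid") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }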

set_fs() removal continues
Christoph Hellwig, with Eric W. Biederman, Arnd Bergmann, and others, have been diligently working to entirely remove the kernel’s set_fs() interface, which has long been a source of security flaws due to weird confusions about which address space the kernel thought it should be accessing. Beyond things like the lower-level per-architecture signal handling code, this has needed to touch various parts of the ELF loader, and networking code too.

READ_IMPLIES_EXEC is no more for native 64-bit
The READ_IMPLIES_EXEC flag was a work-around for dealing with the addition of non-executable (NX) memory when x86_64 was introduced. It was designed as a way to mark a memory region as “well, since we don’t know if this memory region was expected to be executable, we must assume that if we need to read it, we need to be allowed to execute it too”. It was designed mostly for stack memory (where trampoline code might live), but it would carry over into all mmap() allocations, which would mean sometimes exposing a large attack surface to an attacker looking to find executable memory. While normally this didn’t cause problems on modern systems that correctly marked their ELF sections as NX, there were still some awkward corner-cases. I fixed this by splitting READ_IMPLIES_EXEC from the ELF PT_GNU_STACK marking on x86 and arm/arm64, and declaring that a native 64-bit process would never gain READ_IMPLIES_EXEC on x86_64 and arm64, which matches the behavior of other native 64-bit architectures that correctly didn’t ever implement READ_IMPLIES_EXEC in the first place.
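
A process can check whether the flag applies to itself through the personality(2) interface; the query-only idiom below is standard, though whether the bit is set depends on the kernel version, architecture, and ELF markings discussed above:

    #include <stdio.h>
    #include <sys/personality.h>

    int main(void)
    {
        /* Passing 0xffffffff queries the current personality without
         * changing it. */
        int p = personality(0xffffffff);

        if (p == -1) {
            perror("personality");
            return 1;
        }
        printf("READ_IMPLIES_EXEC is %s\n",
               (p & READ_IMPLIES_EXEC) ? "set" : "clear");
        return 0;
    }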

array index bounds checking continues
As part of the ongoing work to use modern flexible arrays in the kernel, Gustavo A. R. Silva added the flex_array_size() helper (as a cousin to struct_size()). The zero/one-member into flex array conversions continue with over a hundred commits as we slowly get closer to being able to build with -Warray-bounds.
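
The pattern being converged on looks roughly like this kernel-style sketch (the struct and function are invented for illustration, not taken from the actual conversions):

    #include <linux/overflow.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct report {
        size_t count;
        u32 data[]; /* modern flexible array member, not data[0] or data[1] */
    };

    static struct report *report_alloc(size_t n, const u32 *src)
    {
        /* struct_size() computes sizeof(*r) + n * sizeof(u32) with
         * overflow checking, saturating so the allocation fails cleanly
         * instead of wrapping to a too-small size. */
        struct report *r = kzalloc(struct_size(r, data, n), GFP_KERNEL);

        if (!r)
            return NULL;
        r->count = n;
        /* flex_array_size() safely computes just the trailing array's
         * size. */
        memcpy(r->data, src, flex_array_size(r, data, n));
        return r;
    }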

scnprintf() replacement continues
Chen Zhou joined Takashi Iwai in continuing to replace potentially unsafe uses of sprintf() with scnprintf(). Fixing all of these will make sure the kernel avoids nasty buffer concatenation surprises.
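
The difference matters when concatenating: snprintf() returns the length that would have been written, so the common "len +=" idiom can walk past the end of the buffer, while scnprintf() returns what was actually written. Since scnprintf() is a kernel helper, this runnable sketch supplies a userspace stand-in to show the idiom:

    #include <stdarg.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's scnprintf(): like snprintf(),
     * but returns the number of characters actually written (at most
     * size - 1), never the "would have been written" length. */
    static int scnprintf(char *buf, size_t size, const char *fmt, ...)
    {
        va_list args;
        int i;

        if (size == 0)
            return 0;
        va_start(args, fmt);
        i = vsnprintf(buf, size, fmt, args);
        va_end(args);
        if (i < 0)
            return 0;
        return (size_t)i < size ? i : (int)(size - 1);
    }

    int main(void)
    {
        char buf[16];
        size_t len = 0;

        /* Chained concatenation stays in bounds even when truncated;
         * with the same idiom built on snprintf(), len could exceed
         * sizeof(buf) and the next write would start out of bounds. */
        len += scnprintf(buf + len, sizeof(buf) - len, "hello ");
        len += scnprintf(buf + len, sizeof(buf) - len, "world, again");
        printf("%zu: %s\n", len, buf);
        return 0;
    }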

That’s it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.9.

© 2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

,

Sam VargheseABC claims no funds, but sends Ferguson to the US to make worthless programs

The ABC is always whinging about the funding cuts it has had to suffer, with the Federal Government having cut the public broadcaster’s annual handout by a sizeable amount.

But the corporation makes its case weaker by splurging on cosmetic exercises to keep its big names happy, a case in point being Sarah Ferguson’s Four Corners program on the riot in the US capital on January 6.

Sarah Ferguson: vanity programs.

Had Ferguson’s effort offered some context about the incident, instead of being a straight news program, it would have made some sense. But what is the point in having an Australian reporter file a 45-minute piece about an incident that occurred nearly a month prior?

Audiences in Australia have been enduring a surfeit of coverage of the riot, with no shortage of news clips from US broadcasters shown on the ABC, the other public broadcaster SBS, and the three Australian commercial channels.

Add to this the fact that the ABC already has three staff reporters in the US. The corporation’s news channel has a weekly program devoted to the US called Planet America so there is no dearth of coverage of the country.

Ferguson and her husband, Tony Jones, went to the US on January 15 along with a production crew. The ABC must have spent a decent sum on their upkeep and travel. So how can the corporation justify its grumbling about funding cuts when it seemingly has plenty of cash to indulge Ferguson and let her make vanity broadcasts?

This is nothing but an ego trip, an effort by Ferguson to pump up her tyres.

Ferguson has form in this regard. In 2018, she went to the US for more than three months and produced a three-part series on the alleged Russian involvement in the US presidential poll. The program was touted as the “story of the century”.

The three episodes of this program were a rehash of all the claims against former US president Donald Trump, which the American TV stations had gone over with a fine-toothed comb. However, Ferguson seemed convinced there was still something hidden for her to uncover.

At the time, a special counsel, former FBI chief Robert Mueller, was conducting an investigation into claims that Trump colluded with Russia to win the presidential election.

But in 2019, when Mueller announced the results of his probe, he had nothing to show. Zilch. Zero. Nada. Nothing. A big cipher.

And there was the famous interview with Hillary Clinton in 2017, when there was simply no reason to justify such an interview. It was a puff piece, where Ferguson gave her guest a pass on everything, though there were numerous issues which could have been raised.

It is time for the ABC to either put up or shut up: either pull these vanity cases into line, or else keep quiet about the funding cuts.

Worse Than FailureNews Roundup: Flash Point

With nearly one month of 2021 in the books and the spectre of Covid-19 exhausting all of us, let’s do a quick inventory of the memorable moments of the past three months, shall we?

  1. 11/3/2020:  US Presidential Election occurs (election called by media for Joe Biden on 11/7)
  2. 11/11/2020:  Rudy Giuliani, acting as personal lawyer for President Donald Trump, holds press conference at Four Seasons Total Landscaping to dispute election results
  3. 12/31/2020:  Adobe Flash support discontinued and officially sunset
  4. 1/5/2021:  Runoff for both Georgia US Senate seats
  5. 1/6/2021:  Twitter suspends and later kicks Donald Trump off of its platform for inciting the US Capitol riot, resulting in a surge in downloads of Twitter-alternatives like Gab and Parler
  6. 1/22/2021:  r/WallStreetBets leads an army of day traders to dramatically drive up the price of Gamestop and AMC theaters (among others), which will eventually cause two hedge funds to close their short positions and fall into near bankruptcy, or put more succinctly by @ParikPatelCFA on Twitter:

Wait...what was #3!?!? No it can’t be! Cue Elton John’s ‘Candle in the Wind’. RIP Flash.

As a child of the ’90s, I vividly remember merging onto the information superhighway and spending hours playing games on Newgrounds. And what technology made it possible to play audio and video in-browser back then? Flash.

The sheer number of games available, combined with my bad memory, makes it impossible for me to remember the names of any of these particular games, but check out the first Flash game ever created, a zombie game called AEvil. (For a full list of games click here.)

Flash’s ability to bring much-needed interactivity to websites allowed it to stick around much longer than anyone could have predicted. (In fact, YouTube used the technology in its first iteration back in 2005.) Eventually Flash developers just could not keep up with the demands of a rapidly evolving internet; security vulnerabilities, browser slowdowns, and mobile web issues caught up with it.

To make matters worse, in 2007 Flash’s mobile incompatibility forced YouTube to abandon the technology in order to be included in the launch of the iPhone. Steve Jobs may have put the nail in the coffin for good in 2010 with his (in)famous ‘Thoughts on Flash’ open letter. The rest is history; now we have HTML5 (along with CSS and JavaScript) to fill the gap that Flash left behind.

I did my own digging, and while interest in Flash has fallen since the late 2000s, I think the real “flash point” (if you will) occurs around the summer of 2015, when the CW show ‘The Flash’ overtook ‘Adobe Flash’ in Google search interest. (Sadly, Flash Gordon and its amazing theme song have never really attracted much interest since 2004, as far as Google’s search trend data goes.)

But fear not. There are still ways to scratch that itch of ‘90s and ‘00s Flash game nostalgia. And we will be left with years of IT hilarity. 

Like the story from Dalian, China, where a 20-hour battle was waged at the local train station to revert a Flash update and get their systems back up and running, all for locals to follow along via WeChat. The story has ups and downs, from when the team noticed something was wrong:

“1411 hours: The station is back in crisis. Once again, we cannot use the printer.”

To when the team identified the source of the problem:

“0816 hours: After calls and online searches, we confirmed the source of the issue is American company Adobe’s comprehensive ban of Flash content.”

To when they banded together to slay their common enemy:

“Jan. 13, 0113 hours: ‘Wan Jia Ling station is fixed!’ Ling Ma shouted…we all gathered and confirmed. The room burst with cheers and applause.”

What a ride. The best part is that they installed a pirated version of Flash to solve the problem.

Or how about the South African tax office having to build a custom web browser with Flash built in so that people could still file their taxes. If you fail to plan, you plan to fail, people!

Enough of the hilarity; I think Mike Davidson puts Flash in perspective best in his obituary for the technology:

Flash, from the very beginning, was a transitional technology. It was a language that compiled into a binary executable. This made it consistent and performant, but was in conflict with how most of the web works. It was designed for a desktop world which wasn’t compatible with the emerging mobile web. Perhaps most importantly, it was developed by a single company. This allowed it to evolve more quickly for awhile, but goes against the very spirit of the entire internet. Long-term, we never want single companies — no matter who they may be — controlling the very building blocks of the web. The internet is a marketplace of technologies loosely tied together, each living and dying in rhythm with the utility it provides.

Most technology is transitional if your window is long enough. Cassette tapes showed us that taking our music with us was possible. Tapes served their purpose until compact discs and then MP3s came along. Then they took their rightful place in history alongside other evolutionary technologies. Flash showed us where we could go, without ever promising that it would be the long-term solution once we got there.

So here lies Flash. Granddaddy of the rich, interactive internet. Inspiration for tens of thousands of careers in design and gaming. Loved by fans, reviled by enemies, but forever remembered for pushing us further down this windy road of interactive design, lighting the path for generations to come.


Kevin RuddAFR: Rod Sims’ big tech fixation blinds him to Murdoch’s monopoly

First published in the Australian Financial Review on 8 February 2021
It’s understandable that Rod Sims worries about emergent digital monopolies. They are huge. But the Australian Competition and Consumer Commission chairman’s fixation on new media monopolies has blinded him to existing ones – especially Rupert Murdoch’s 70 per cent stranglehold on print readership.

As Sims recently told The Australian Financial Review, Murdoch isn’t such a ‘‘big, bad guy’’. After all, he says, News Corp’s global market capitalisation is just a fraction of Google’s or Facebook’s. This same attitude guided Sims to green-light Murdoch’s cementing of his undisputed domination in Queensland in 2016 through the acquisition of APN Australian Regional Media’s 12 daily newspapers, 60 community titles and 30 websites.

Murdoch already owned Brisbane’s The Courier-Mail, Gold Coast Bulletin, Townsville Bulletin and Cairns Post, plus local papers and The Australian. What was Sims’ rationale? Readers were ‘‘increasingly reading online sources of news, where there are alternatives’’. Sims’ decision had disastrous consequences. Only one of the APN dailies, the Toowoomba Chronicle, has survived. The others – spanning the Sunshine Coast, Fraser Coast, Ipswich, Gympie, Bundaberg, Gladstone, Mackay, Warwick, Stanthorpe, Lismore and Grafton – have all stopped, their newsrooms slashed and websites packed with non-local news.

Readers are now referred to The Courier-Mail, which is produced up to 800km away.

News Corp vowed to maintain APN’s ‘‘vibrant newspaper operations’’. In reality, Murdoch bought out his competitors, kept up appearances for a few years, then unceremoniously slaughtered them under the cover of COVID-19.

Sims should share responsibility for that result. Sims’ experience as an economist is acknowledged, but he is way out of his depth on the critical question of media diversity. The APN sale wasn’t just about transfer pricing or market capitalisation; it was about preserving the flow of copious, accurate, local information.

Murdoch’s predatory behaviour continues through his assault on AAP Newswire, which he seeks to replace with his own NCA NewsWire. If he is allowed to succeed, Murdoch’s content will be seeded throughout the media, including at the ABC. Yet the ACCC’s response to this attempted power grab has been barely audible.

Sims doesn’t realise that Murdoch’s print monopoly remains the feedstock for most broadcast media, telling TV and radio stations which stories are important, and framing the issues. Murdoch’s Sky News Australia is now broadcast free-to-air across 30 regional markets and nationwide through taxpayer-funded Foxtel, and it has a bigger YouTube base than channels Nine, Ten and Seven combined.

Its agenda-driven programs are steadily radicalising the Liberal Party and Nationals base, dragging more MPs to the far right.

He should ask Rupert’s own son, James Murdoch, who quit the family business after decades in its inner sanctum. He accuses it of abusing its power to deny science, pursue hidden agendas, legitimise disinformation and sow doubt into public debate.

A company with a record of bribery, hacking into innocent people’s devices, and tolerating sexual predators such as Roger Ailes should not be allowed to dominate the flow of information in our democracy.

This attitude persists even after the attempted insurrection in Washington, which was in part the consequence of Fox News spreading disinformation about a ‘‘stolen election’’, and its weaponisation of grievance-based identity politics.

Murdoch’s outlets indulged the QAnon cult before the insurrection, and make apologies for it now. Fox News has charted the path for even more extreme voices. It remains Murdoch’s template for Australia.

We should act against digital monopolies and their aggressive data harvesting. Indeed, the national petition calling for a royal commission to ensure a strong, diverse news media identified Google and Facebook as warranting scrutiny. The record-breaking petition gathered more than 500,000 signatures, including mine and Malcolm Turnbull’s. But I remain wary of Sims’ proposed news media bargaining code, which establishes an income stream for news outlets that would shrink and grow with the ‘‘shareability’’ of their articles.

The incentive for news outlets is clear: produce greater quantities of sensational, less reliable content in pursuit of social media clicks. This TikTokisation of news content risks the degradation of reliable journalism over time.

The code also promises news organisations advanced notice of algorithmic changes – a distinct advantage for the few organisations that have the resources to capitalise on such information. Murdoch’s market power is set to grow under the Sims code. Unlike Sims, I don’t believe Australia must be paralysed working out which monopolistic demon to slay. Successive regulatory failures over many decades have delivered us monopolies on multiple fronts. The course is clear: a royal commission to focus a light on the problem and offer options for the future.

Given time and resources, a commissioner could review the latest evidence and best examples of media governance around the world. Only then would Parliament have an appropriate evidence base from which to make informed decisions to ensure a free and reliable media as the lifeblood of our democracy.


Planet DebianRuss Allbery: Review: The Future of Another Timeline

Review: The Future of Another Timeline, by Annalee Newitz

Publisher: Tor
Copyright: September 2019
ISBN: 0-7653-9212-7
Format: Kindle
Pages: 350

Tess is a time traveler from 2022, a member of the semi-secret Daughters of Harriet who are, under the cover of an academic research project, attempting to modify the timeline to improve women's rights in the United States. Beth is a teenager in suburban Irvine in Alta California, with an abusive father, a tight-knit group of friends, and a love of feminist punk rock. The story opens with both of them at a Grape Ape concert in 1992. Beth is hanging out with her friends, and Tess is looking for signs of a conspiracy to alter the timeline to further restrict the rights of women.

The Future of Another Timeline has a great science fiction premise. There are time machines buried in geologically-stable bedrock that have been there since before any current species evolved. The first was discovered by humans thousands of years before the start of the story. They can be controlled with vibrations in the rock and therefore don't need any modern technology to operate. Humanity has therefore lived with time travel for much of recorded history, albeit with a set of rules strictly imposed by these mysterious machines: individuals can only travel to their own time or earlier, and cannot carry any equipment with them. The timeline at the start of the book is already not ours, and it shifts further over the course of the plot.

Time travel has a potentially devastating effect on the foundations of narrative, so most SF novels that let the genie of time travel out of the bottle immediately start trying to stuff it back in again. Newitz does not, which is a refreshing change. The past is not immutable, there is no scientific or magical force that prevents history from changing, and people do not manage to keep something with a history of thousands of years either secret or well-controlled. It's not a free-for-all: There is a Chronology Academy that sets some rules for time travelers, the Machines themselves have rules that prevent time travel from being too casual, and most countries have laws about what time travelers are allowed to do. But it's also not horribly difficult to travel in time, not horribly uncommon to come across someone from the future, and most of the rules are not strictly enforced.

This does mean there are some things that one has to agree to not think about. (To take the most obvious example, the lack of government and military involvement in time travel is not believable, even given its constraints. One has to accept this as a story premise.) But it removes the claustrophobic rules-lawyering that's so common in time travel stories and lets Newitz tell a more interesting political story about the difficulty of achieving lasting social change.

Unfortunately, this is also one of those science fiction novels that is much less interested in its premise and machinery than I was as a reader. The Machines are fascinating objects: ancient, mysterious, and as we learn more about them over the course of the story, rich with intriguing detail. After reading this summary, you're probably curious where they came from, what they can do, and how they work. So am I, after reading the book. The Future of Another Timeline is completely uninterested in that or any related question. About halfway through the book, a time traveler from the future demonstrates interfaces in the time machines that no one knew existed, the characters express some surprise, and then no one asks any meaningful questions for the rest of the book. At another point, the characters have the opportunity to see a Machine in something closer to its original form before aspects of its interface have eroded away. They learn just enough to solve their immediate plot problem and show no further curiosity.

I found this immensely frustrating, in part due to the mixed signaling. Normally if an author is going to use a science fiction idea as pure plot device, they avoid spending much time on it, implicitly warning the reader that this isn't where the story is going. Newitz instead provides the little details and new revelations that normally signal that understanding these objects will be a key to the plot, and then shrugs and walks away, leaving every question unanswered.

Given how many people enjoyed Rendezvous with Rama, this apparently doesn't bother other readers as much as it bothers me. If you are like me, though, be warned.

But, fine, this is a character story built around a plot device rather than a technology story. That's a wholly valid mode of science fiction, and that part of the book has heft. It reminded me of the second-wave feminist science fiction of authors like Russ and Charnas, except updated to modern politics. The villains are a projection forward of the modern on-line misogynists (incels, specifically), but Newitz makes the unusual choice of not focusing on their motives or interior lives. They simply exist as a malevolent hostile force, much the way that women experience them today on-line. They have to be defeated, the characters of the book set out to defeat them, and this is done without melodrama, hand-wringing, or psychoanalysis. It's refreshingly straightforward and unambiguous, and it keeps the focus on the people trying to make the world a better place rather than on the redemption arc of some screaming asshole.

The part I was less enamored of is that these are two of the least introspective first-person protagonists that I've seen in a book. Normally, first-person perspective is used to provide a rich internal monologue about external events, but both Tess and Beth tell their stories as mostly-dry sequences of facts. Sometimes this includes a bit of what they're feeling, but neither character delves much into the why or how. This improves somewhat towards the end of the book, but I found the first two-thirds of the story oddly flat and had a hard time generating much interest in or sympathy for the characters. There are good in-story reasons for both Tess and Beth to heavily suppress their emotions, so I will not argue this is unrealistic, but character stories work better for me with more of an emotional hook.

Hand-in-hand with that is the problem that the ending didn't provide the catharsis that I was hoping for. Beth goes through absolute hell over the course of the book, and while that does reach a resolution that I know intellectually is the best type of resolution that her story can hope for, it felt wholly insufficient. Tess's story reaches a somewhat more satisfying conclusion, but one that reverses an earlier moral imperative in a way that I found overly sudden. And everything about this book is highly contingent and temporary in a way that is true to its theme and political statement but that left me feeling more weary than satisfied.

That type of ending is a valid authorial choice, and to some extent my complaint is only that this wasn't the book for me at the time I read it. But I have read other books with similarly conditional endings and withdrawn characters that still carried me along with the force and power of the writing (Daughters of the North comes to mind). The Future of Another Timeline is not poorly written, but neither do I think it achieves that level of skill. The writing is a bit wooden, the flow of sentences is a touch cliched and predictable, and the characters are a bit thin. It would have been serviceable writing had there been something else (such as a setting-as-character exploration of the origins and purpose of the Machines) to grab my attention and pull me along. But if the weight of the story has to be borne by the quality of the writing, I don't think it was quite up to the task.

Overall, I think The Future of Another Timeline has a great premise that it treats with frustrating indifference, a satisfyingly different take on time travel with some obvious holes, some solid political ideas reminiscent of an earlier age of feminist SF, a refreshing unwillingness to center evil on its own terms, characters that took more than half the book to develop much depth, and a suitable but frustrating ending. I can see why other people liked it more than I did, but I can't recommend it.

Content warning: Rape, graphic violence, child abuse, gaslighting, graphic medical procedure, suicide, extreme misogyny, and mutilation, and this is spread throughout the book, not concentrated in one scene. I'm not very squeamish about non-horror fiction and it was still rather a lot, so please read with care.

Rating: 6 out of 10

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 31 – CONCLUSION)

Here’s part thirty-one, the conclusion of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Planet DebianEnrico Zini: Language links

In English

In Italiano

Planet DebianChris Lamb: Favourite books of 2020

I won't reveal precisely how many books I read in 2020, but it was definitely an improvement on 74 in 2019, 53 in 2018 and 50 in 2017. But not only did I read more in a quantitative sense, the quality seemed higher as well. There were certainly fewer disappointments: given its cultural resonance, I was nonplussed by Nick Hornby's Fever Pitch, and whilst Ian Fleming's The Man with the Golden Gun was a little thin (again, given the obvious influence of the Bond franchise), the book lacked 'thinness' in a way that made it interesting to critique. The weakest novel I read this year was probably J. M. Berger's Optimal, but even this hybrid of Ready Player One and late-period Black Mirror wasn't that cringeworthy, all things considered. Alas, graphic novels continue to not quite be my thing, I'm afraid.

I perhaps experienced more disappointments in the non-fiction section. Paul Bloom's Against Empathy was frustrating, particularly in that it expended unnecessary energy battling its misleading title and accepted terminology (and it could so easily have been a 20-minute video essay instead). (Elsewhere in the social sciences, David and Goliath will likely be the last Malcolm Gladwell book I voluntarily read.) After so many positive citations, I was also more than a little underwhelmed by Shoshana Zuboff's The Age of Surveillance Capitalism, and after Ryan Holiday's many engaging reboots of Stoic philosophy, his Conspiracy (on Peter Thiel and Hulk Hogan taking on Gawker) was slightly wide of the mark for me.

Anyway, here follows a selection of my favourites from 2020, in no particular order:


§


Fiction

Wolf Hall & Bring Up the Bodies & The Mirror and the Light

Hilary Mantel

During the early weeks of 2020, I re-read the first two parts of Hilary Mantel's Thomas Cromwell trilogy in time for the March release of The Mirror and the Light. I had actually spent the last few years eagerly following any news of the final instalment, feigning outrage whenever Mantel appeared to be spending time on other projects.

Wolf Hall turned out to be an even better book than I remembered, and when The Mirror and the Light finally landed at midnight on 5th March, I began in earnest the next morning. Note that date carefully; this was early 2020, and the book swiftly became something of a heavy-handed allegory about the world at the time. That is to say — and without claiming that I am Monsieur Cromuel in any meaningful sense — it was an uneasy experience to be reading about a man whose confident grasp on his world, friends and life was slipping beyond his control, and at least in Cromwell's case, was heading inexorably towards its denouement.

The final instalment in Mantel's trilogy is not perfect, and despite my love of her writing I would concur with the judges who decided against awarding her a third Booker Prize. For instance, there is something of the longueur that readers dislike in the second novel, although this might not be entirely Mantel's fault — after all, the rise of the "ugly" Anne of Cleves and laborious trade negotiations for an uninspiring mineral (this is no Herbertian 'spice') will never match the court intrigues of Anne Boleyn, Jane Seymour and that man for all seasons, Thomas More. Still, I am already looking forward to returning to the verbal sparring between King Henry and Cromwell when I read the entire trilogy once again, tentatively planned for 2022.


§


The Fault in Our Stars

John Green

I came across John Green's The Fault in Our Stars via a fantastic video by Lindsay Ellis discussing Roland Barthes' famous 1967 essay on authorial intent. However, I might have eventually come across The Fault in Our Stars regardless, not because of Green's status as an internet celebrity of sorts but because I'm a complete sucker for this kind of emotionally-manipulative bildungsroman, likely due to reading Philip Pullman's His Dark Materials a few too many times in my teens.

Although its title is taken from Shakespeare's Julius Caesar, The Fault in Our Stars is actually more Romeo & Juliet. Hazel, a 16-year-old cancer patient, falls in love with Gus, an equally ill teen from her cancer support group. Hazel and Gus share the same acerbic (and distinctly unteenage) wit and a love of books, centred around Hazel's obsession with An Imperial Affliction, a novel by the meta-fictional author Peter Van Houten. Through a kind of American version of Jim'll Fix It, Gus and Hazel go and visit Van Houten in Amsterdam.

I'm afraid it's even cheesier than I'm describing it. Yet just as there is a time and a place for Michelin stars and Haribo Starmix, there's surely a place for this kind of well-constructed but altogether maudlin literature. One test for emotionally manipulative works like this is how well it can mask its internal contradictions — while Green's story focuses on the universalities of love, fate and the shortness of life (as do almost all of his works, it seems), The Fault in Our Stars manages to hide, for example, that this is an exceedingly favourable treatment of terminal illness that is only possible for the better off. The 2014 film adaptation does somewhat worse in peddling this fantasy (and has a much weaker treatment of the relationship between the teens' parents too, an underappreciated subtlety of the book).

The novel, however, is pretty slick stuff, and it is difficult to fault it for what it is. For some comparison, I later read Green's Looking for Alaska and Paper Towns which, as I mention, tug at many of the same strings, but they don't come together nearly as well as The Fault in Our Stars. James Joyce claimed that "sentimentality is unearned emotion", and in this respect, The Fault in Our Stars really does earn it.


§


The Plague

Albert Camus

P. D. James' The Children of Men, George Orwell's Nineteen Eighty-Four, Arthur Koestler's Darkness at Noon ... dystopian fiction was already a theme of my reading in 2020, so given world events it was an inevitability that I would end up with Camus's novel about a plague that swept through the Algerian city of Oran.

Is The Plague an allegory about the Nazi occupation of France during World War Two? Where are all the female characters? Where are the Arab ones? Since its original publication in 1947, there's been so much written about The Plague that it's hard to say anything new today. Nevertheless, I was taken aback by how well it captured so much of the nuance of 2020. Whilst we were saying just how 'unprecedented' these times were, it was eerie how a novel written in the 1940s could accurately capture how many of us were feeling well over seventy years later: the attitudes of the people; the confident declarations from the institutions; the misaligned conversations that led to accidental misunderstandings. The disconnected lovers.

The only thing that perhaps did not work for me in The Plague was the 'character' of the church. Although I could appreciate most of the allusion and metaphor, it was difficult for me to relate to the significance of Father Paneloux, particularly regarding his change of view on the doctrinal implications of the virus, and — spoiler alert — that he finally died of a "doubtful case" of the disease, beyond the idea that Paneloux's beliefs are in themselves "doubtful". Answers on a postcard, perhaps.

The Plague even seemed to predict how we, at least speaking of the UK, would react when the waves of the virus waxed and waned as well:

The disease stiffened and carried off three or four patients who were expected to recover. These were the unfortunates of the plague, those whom it killed when hope was high

It somehow captured the nostalgic yearning for high-definition videos of cities and public transport; one character even visits the completely deserted railway station in Oran simply to read the timetables on the wall.


§


Tinker, Tailor, Soldier, Spy

John le Carré

There's absolutely none of the Mad Men glamour of James Bond in John le Carré's icy world of Cold War spies:

Small, podgy, and at best middle-aged, Smiley was by appearance one of London's meek who do not inherit the earth. His legs were short, his gait anything but agile, his dress costly, ill-fitting, and extremely wet.

Almost a direct rebuttal to Ian Fleming's 007, Tinker, Tailor has broken-down cars, bad clothes, women with their own internal and external lives (!), pathetically primitive gadgets, and (contra Mad Men) hangovers that last significantly longer than ten minutes. In fact, the main aspect that the mostly excellent 2011 film adaptation doesn't really capture is the smoggy and run-down nature of 1970s London — this is not your proto-Cool Britannia of Austin Powers or GTA:1969; the city is truly 'gritty' in the sense there is a thin film of dirt and grime on every surface imaginable.

Another angle that the film cannot capture well is just how purposefully the novel does not mention the United States. Despite the US obviously being the dominant power, the British vacillate between pretending it doesn't exist and implying its irrelevance to the matter at hand. This is no mistake on Le Carré's part, as careful readers are rewarded by finding this denial of US hegemony in metaphor throughout; pace Ian Fleming, there is no obvious Felix Leiter to loudly throw money at the problem or a Sheriff Pepper to serve as cartoon racist for the Brits to feel superior about. By contrast, I recall that a clever allusion to "dusty teabags" is subtly mirrored a few paragraphs later with a reference to the installation of a coffee machine in the office, likely symbolic of the omnipresent and unavoidable influence of America. (The officer class convince themselves that coffee is a European import.) Indeed, Le Carré communicates a feeling of being surrounded on all sides by the peeling wallpaper of Empire.

Oftentimes, the writing style matches the gracelessness and inelegance of the world it depicts. The sentences are dense and you find your brain performing a fair amount of mid-flight sentence reconstruction, reparsing clauses, commas and conjunctions to interpret Le Carré's intended meaning. In fact, in his eulogy-cum-analysis of Le Carré's writing style, William Boyd, himself a ventriloquist of Ian Fleming, named this intentional technique 'staccato'. Like the musical term, I suspect the effect of this literary staccato is as much about the impact it makes on a sentence as the imperceptible space it generates after it.

Lastly, the large cast in this sprawling novel is completely believable, all the way from the Russian spymaster Karla to minor schoolboy Roach — the latter possibly a stand-in for Le Carré himself. I got through the 500-odd pages in just a few days, somehow managing to hold the almost-absurdly complicated plot in my head. This is one of those classic books of the genre that made me wonder why I had not got around to it before.


§


The Nickel Boys

Colson Whitehead

According to the judges who awarded it the Pulitzer Prize for Fiction, The Nickel Boys is "a devastating exploration of abuse at a reform school in Jim Crow-era Florida" that serves as a "powerful tale of human perseverance, dignity and redemption". But whilst there is plenty of this perseverance and dignity on display, I found little redemption in this deeply cynical novel.

It could almost be read as a follow-up book to Whitehead's popular The Underground Railroad, which itself won the Pulitzer Prize in 2017. Indeed, each book focuses on a young protagonist who might be euphemistically referred to as 'downtrodden'. But The Nickel Boys is not only far darker in tone, it feels much closer and more connected to us today. Perhaps this is unsurprising, given that it is based on the story of the Dozier School in northern Florida which operated for over a century before its long history of institutional abuse and racism was exposed by a 2012 investigation. Nevertheless, if you liked the social commentary in The Underground Railroad, then there is much more of that in The Nickel Boys:

Perhaps his life might have veered elsewhere if the US government had opened the country to colored advancement like they opened the army. But it was one thing to allow someone to kill for you and another to let him live next door.

Sardonic aperçus of this kind are pretty relentless throughout the book, but it never tips its hand too far into nihilism, especially when some of the visual metaphors are often first-rate: "An American flag sighed on a pole" is one I can easily recall from memory. In general though, The Nickel Boys is not only more world-weary in tenor than his previous novel, the United States it describes seems almost too beaten down to have the energy to conjure up the Swiftian magical realism that prevented The Underground Railroad from being overly lachrymose. Indeed, even when Whitehead transports us to present-day New York City, we can't indulge in another kind of fantasy, the one where America has solved its problems:

The Daily News review described the [Manhattan restaurant] as nouveau Southern, "down-home plates with a twist." What was the twist — that it was soul food made by white people?

It might be overly reductionist to connect Whitehead's tonal downshift with the racial justice movements of the past few years, but whatever the reason, we've ended up with a hard-hitting, crushing and frankly excellent book.


§


True Grit & No Country for Old Men

Charles Portis & Cormac McCarthy

It's one of the most tedious cliches to claim the book is better than the film, but these two books are of such high quality that even the Coen Brothers at their best cannot transcend them. I'm grouping these books together here though, not because their respective adaptations will exemplify some of the best cinema of the 21st century, but because of their superb treatment of language.

Take the use of dialogue. Cormac McCarthy famously does not use any punctuation — "I believe in periods, in capitals, in the occasional comma, and that's it" — but the conversations in No Country for Old Men together feel familiar and commonplace, despite being relayed through this unconventional technique. In lesser hands, McCarthy's written-out Texan drawl would be the novelistic equivalent of white rap or Jar Jar Binks, but not only is the effect entirely gripping, it helps you to believe you are physically present in the many intimate and domestic conversations that hold this book together. Perhaps the cinematic familiarity helps, as you can almost hear Tommy Lee Jones' voice as Sheriff Bell from the opening page to the last.

Charles Portis' True Grit excels in its dialogue too, but in this book it is not so much in how it flows (although that is delightful in its own way) but in how forthright and sardonic Maddie Ross is:

"Earlier tonight I gave some thought to stealing a kiss from you, though you are very young, and sick and unattractive to boot, but now I am of a mind to give you five or six good licks with my belt."

"One would be as unpleasant as the other."

Perhaps this should be unsurprising. Maddie, a fourteen-year-old girl from Yell County, Arkansas, can barely fire her father's heavy pistol, so she only has words to wield as her weapon. Anyway, it's not just me who treasures this book. In her encomium that presages most modern editions, Donna Tartt of The Secret History fame traces the novel's origins through Huckleberry Finn, praising its elegance and economy: "The plot of True Grit is uncomplicated and as pure in its way as one of the Canterbury Tales". I haven't read any Chaucer, but I am inclined to agree.

Tartt also recalls that True Grit vanished almost entirely from the public eye after the release of John Wayne's flimsy cinematic vehicle in 1969 — this earlier film was, Tartt believes, "good enough, but doesn't do the book justice". As it happens, reading a book with its big screen adaptation as a chaser has been a minor theme of my 2020, including P. D. James' The Children of Men, Kazuo Ishiguro's Never Let Me Go, Patricia Highsmith's Strangers on a Train, James Ellroy's The Black Dahlia, John Green's The Fault in Our Stars, John le Carré's Tinker, Tailor, Soldier, Spy and even a staged production of Charles Dickens' A Christmas Carol streamed from The Old Vic. For an autodidact with no academic background in literature or cinema, I've been finding this an effective and enjoyable means of getting closer to these fine books and films — it is precisely where they deviate (or perhaps where they are deficient) that offers a means by which one can see how they were constructed. I've also found that adaptations can tell you a lot about the culture in which they were made: take the 'straightwashing' in the film version of Strangers on a Train (1951) compared to the original novel, for example. It is certainly true that adaptations rarely (as Tartt put it) "do the book justice", but she might also be right to alight on a legal metaphor, for as the saying goes, to judge a movie in comparison to the book is to do both a disservice.


§


The Glass Hotel

Emily St. John Mandel

In The Glass Hotel, Mandel somehow pulls off the impossible: writing a loose roman-à-clef on Bernie Madoff, a Ponzi scheme and the ephemeral nature of finance capital that is tranquil and shimmeringly beautiful. Indeed, don't get the wrong idea about the subject matter; this is no over-caffeinated The Big Short, as The Glass Hotel is less about Madoff or coked-up finance bros than about the fragile unreality of the late 2010s, a time which was, as we indeed discovered in 2020, one event away from almost shattering completely.

Mandel's prose has that translucent, phantom quality to it where the chapters slip through your fingers when you try to grasp at them, and the plot is like a ghost ship that slips silently, like the Mary Celeste, onto the Canadian waters beside which the eponymous 'Glass Hotel' resides. Indeed, not unlike The Overlook Hotel, the novel so overflows with symbolism that even the title needs to evoke the idea of impermanence — permanently living in a hotel might serve as a house, but it won't provide a home. It's risky to generalise about such things post-2016, but the whole story sits in the infinitesimally small distance between perception and reality, a self-constructed culture that is not so much 'post-truth' as somewhere between the two.

There's something to consider in almost every character too. Take the stand-in for Bernie Madoff: no caricature of Wall Street out of a 1920s political cartoon or Brechtian satire, Jonathan Alkaitis has none of the oleaginous sleaze of a Dominic Strauss-Kahn, the cold sociopathy of a Marcus Halberstam nor the well-exercised sinuses of, say, Jordan Belford. Alkaitis is — dare I say it? — eminently likeable, and the book is all the better for it. Even the C-level characters have something to say: Enrico, trivially escaping from the regulators (who are pathetically late to the fraud without Mandel ever telling us explicitly), is daydreaming about the girlfriend he abandoned in New York: "He wished he'd realised he loved her before he left". What was in his previous life that prevented him from doing so? Perhaps he was never in love at all, or is love itself just as transient as the imaginary money in all those bank accounts? Maybe he fell in love just as he crossed safely into Mexico? When, precisely, do we fall in love anyway?

I went on to read Mandel's Last Night in Montreal, an early work where you can feel her reaching for that other-worldly quality that she so masterfully achieves in The Glass Hotel. Her fêted Station Eleven is on my must-read list for 2021. "What is truth?" asked Pontius Pilate. Not even Mandel can give us the answer, but this will certainly do for now.


§


Running the Light

Sam Tallent

Although it trades in all of the clichés and stereotypes of the stand-up comedian (the triumvirate of drink, drugs and divorce), Sam Tallent's debut novel depicts an extremely convincing fictional account of a touring road comic.

The comedian Doug Stanhope (who himself released a fairly decent No Encore for the Donkey memoir in 2020) hyped Sam's book relentlessly on his podcast during lockdown... and justifiably so. I ripped through Running the Light in a few short hours, the only disappointment being that I can't seem to find videos online of Sam that come anywhere close to matching his writing style. If you liked the rollercoaster energy of Paul Beatty's The Sellout, the cynicism of George Carlin and the car-crash inevitability of final-season Breaking Bad, check this great book out.


§


Non-fiction

Inside Story

Martin Amis

This was my first introduction to Martin Amis's work after hearing that his "novelised autobiography" contained a fair amount about Christopher Hitchens, an author with whom I had one of those rather clichéd parasocial relationships in the early days of YouTube. (Hey, it could have been much worse.) Amis calls his book a "novelised autobiography", and just as much has been made of its quasi-fictional nature as the many diversions into didactic writing advice that appear betwixt each chapter: "Not content with being a novel, this book also wants to tell you how to write novels", complained Tim Adams in The Guardian.

I suspect that reviewers who grew up with Martin since his debut book in 1973 rolled their eyes at yet another demonstration of his manifest cleverness, but as my first exposure to Amis's gift of observation, I confess that I thought it was actually kinda clever. Try, for example, "it remains a maddening truth that both sexual success and sexual failure are steeply self-perpetuating" or "a hospital gym is a contradiction – like a young Conservative", etc. Then again, perhaps I was experiencing a form of nostalgia for a pre-Gamergate YouTube, when everything in the world was a lot simpler... or at least things could be solved by articulate gentlemen who honed their art of rhetoric at the Oxford Union.

I went on to read Martin's first novel, The Rachel Papers (is it 'arrogance' if you are, indeed, that confident?), as well as his 1997 Night Train. I plan to read more of him in the future.


§


The Collected Essays, Journalism and Letters: Volume 1 & Volume 2 & Volume 3 & Volume 4

George Orwell

These deceptively bulky four volumes contain all of George Orwell's essays, reviews and correspondence, from his teenage letters sent to local newspapers to notes to his literary executor on his deathbed in 1950. Reading this was part of a larger, multi-year project of mine to cover the entirety of his output.

By including this here, however, I'm not recommending that you read everything that came out of Orwell's typewriter. The letters to friends and publishers will only be interesting to biographers or hardcore fans (although I would recommend Dorian Lynskey's The Ministry of Truth: A Biography of George Orwell's 1984 first). Furthermore, many of his book reviews will be of little interest today. Still, some insights can be gleaned; if there is any inconsistency in this huge corpus, it is that his best work is almost 'too' good and too impactful, making his merely-average writing appear like hackwork. There are some gems that don't make the usual essay collections too, and some of Orwell's most astute social commentary came out of a series of articles he wrote for the left-leaning newspaper Tribune, related in many ways to the US Jacobin. You can also see some of his most famous ideas start to take shape years — if not decades — before they appear in his novels in these prototype blog posts.

I also read Dennis Glover's novelised account of the writing of Nineteen Eighty-Four called The Last Man in Europe, and I plan to re-read some of Orwell's earlier novels during 2021 too, including A Clergyman's Daughter and his 'antebellum' Coming Up for Air that he wrote just before the Second World War; his most under-rated novel in my estimation. As it happens, and with the exception of the US and Spain, copyright in the works published in his lifetime ends on 1st January 2021. Make of that what you will.


§


Capitalist Realism & Chavs: The Demonisation of the Working Class

Mark Fisher & Owen Jones

These two books are not natural companions to one another and there is likely much that Jones and Fisher would vehemently disagree on, but I am pairing these books together here because they represent the best of the 'political' books I read in 2020.

Mark Fisher was a dedicated leftist whose first book, Capitalist Realism, marked an important contribution to political philosophy in the UK. However, since his suicide in early 2017, the currency of his writing has markedly risen, and Fisher is now frequently referenced due to his belief that the prevalence of mental health conditions in modern life is a side-effect of various material conditions, rather than a natural or unalterable fact "like weather". (Of course, our 'weather' is being determined more by a combination of politics, economics and petrochemistry than by pure randomness.) Still, Fisher wrote on all manner of topics, from the 2012 London Olympics to the "weird and eerie" electronic music that yearns for a lost future that will never arrive, possibly prefiguring or influencing the Fallout video game series.

Saying that, I suspect Fisher will resonate better with a UK audience more than one across the Atlantic, not necessarily because he was minded to write about the parochial politics and culture of Britain, but because his writing often carries some exasperation at the suppression of class in favour of identity-oriented politics, a viewpoint not entirely prevalent in the United States outside of, say, Touré F. Reed or the late Michael Brooks. (Indeed, Fisher is likely best known in the US as the author of the controversial 2013 essay, Exiting the Vampire Castle, but that does not figure greatly in this book). Regardless, Capitalist Realism is an insightful, damning and deeply unoptimistic book, best enjoyed in the warm sunshine — I found it an ironic compliment that I had quoted so many paragraphs that my Kindle's copy protection routines prevented me from clipping any further.

Owen Jones needs no introduction to anyone who regularly reads a British newspaper, especially since 2015 where he unofficially served as a proxy and punching bag for expressing frustrations with the then-Labour leader, Jeremy Corbyn. However, as the subtitle of Jones' 2012 book suggests, Chavs attempts to reveal the "demonisation of the working class" in post-financial crisis Britain. Indeed, the timing of the book is central to Jones' analysis, specifically that the stereotype of the "chav" is used by government and the media as a convenient figleaf to avoid meaningful engagement with economic and social problems on an austerity-ridden island. (I'm not quite sure what the US equivalent to 'chav' might be. Perhaps Florida Man without the implications of mental health.)

Anyway, Jones certainly has a point. From Vicky Pollard to the attacks on Jade Goody, there is an ignorance and prejudice at the heart of the 'chav' backlash, and that would be bad enough even if it was not being co-opted or criminalised for ideological ends.

Elsewhere in political science, I also caught Michael Brooks' Against the Web and David Graeber's Bullshit Jobs, although they are not quite methodical enough to recommend here. However, Graeber's award-winning Debt: The First 5000 Years will be read in 2021. Matt Taibbi's Hate Inc: Why Today's Media Makes Us Despise One Another is worth a brief mention here though, but its sprawling nature felt very much like I was reading a set of Substack articles loosely edited together. And, indeed, I was.


§


The Golden Thread: The Story of Writing

Ewan Clayton

A recommendation from a dear friend, Ewan Clayton's The Golden Thread is a journey through the long history of writing from the Dawn of Man to the present day. Whether you are a linguist, a graphic designer, a visual artist, a typographer, an archaeologist or 'just' a reader, there is probably something in here for you. I was already dipping my quill into calligraphy this year, so I suspect I would have liked this book in any case, but highlights would definitely include the changing role of writing due to the influence of textual forms in the workplace as well as a digression on the ergonomic desks employed by monks and scribes in the Middle Ages.

A lot of books by otherwise-sensible authors overstretch themselves when they write about computers or other technology from the Information Age, resulting in bizarre non-sequiturs at best and dangerously Panglossian viewpoints at worst. But Clayton surprised me by writing extremely cogently and accurately on the role of text in this new and unpredictable era. After finishing it I realised why — for a number of years, Clayton was a consultant for the legendary Xerox PARC where he worked in a group focusing on documents and contemporary communications whilst his colleagues were busy inventing the graphical user interface, laser printing, text editors and the computer mouse.


§


New Dark Age & Radical Technologies: The Design of Everyday Life

James Bridle & Adam Greenfield

I struggled to describe these two books to friends, so I doubt I will suddenly do a better job here. Allow me to quote from Will Self's review of James Bridle's New Dark Age in the Guardian:

We're accustomed to worrying about AI systems being built that will either "go rogue" and attack us, or succeed us in a bizarre evolution of, um, evolution – what we didn't reckon on is the sheer inscrutability of these manufactured minds. And minds is not a misnomer. How else should we think about the neural network Google has built so its translator can model the interrelation of all words in all languages, in a kind of three-dimensional "semantic space"?

New Dark Age also turns its attention to the weird, algorithmically-derived products offered for sale on Amazon, as well as the disturbing and abusive videos that are automatically uploaded by bots to YouTube. It should, by rights, be a mess of disparate ideas and concerns, but Bridle has a flair for introducing topics which reveals that he comes to computer science from another discipline altogether; indeed, in a four-part series he made for Radio 4, he is primarily referred to as "an artist".

Whilst New Dark Age has rather abstract section topics, Adam Greenfield's Radical Technologies is a rather different book altogether. Each chapter dissects one of the so-called 'radical' technologies that condition the choices available to us, asking how they work, what challenges they present to us, and who ultimately benefits from their adoption. Greenfield takes his scalpel to smartphones, machine learning, cryptocurrencies, artificial intelligence, etc., and I don't think it would be unfair to say that he starts and ends with a cynical point of view. He is no reactionary Luddite, though: the book is both informed and extremely well explained, and it lacks the lazy, affected, Private Eye-like cynicism of, say, Attack of the 50 Foot Blockchain.

The books aren't a natural pair, for Bridle's writing contains quite a bit of air in places, ironically mimicking the very 'clouds' he inveighs against. Greenfield's book, by contrast, has little air and a much lower pH value. Still, it was more than refreshing to read two technology books that do not limit themselves to platitudinal booleans, be those dangerously naive (e.g. Kevin Kelly's The Inevitable) or relentlessly nihilistic (Shoshana Zuboff's The Age of Surveillance Capitalism). Sure, they are both anti-technology screeds, but they tend to make arguments about systems of power rather than specific companies, and avoid viewing 'Big Tech' through a narrower, Silicon Valley-obsessed lens — for that (dipping into some other 2020 reading of mine) I might suggest Wendy Liu's Abolish Silicon Valley or Scott Galloway's The Four.

Still, both books are superlatively written. In fact, Adam Greenfield produces some of the best non-fiction writing around, both in how he explains complicated concepts (particularly the smart-contract mechanism of the Ethereum cryptocurrency) and in his extremely finely-crafted sentences. I often felt that the writing had no need to be quite so poetic, and I particularly enjoyed his fictional scenarios at the end of the book.


§


The Algebra of Happiness & Indistractable: How to Control Your Attention and Choose Your Life

Scott Galloway & Nir Eyal

A cocktail of insight, informality and abrasiveness makes NYU Professor Scott Galloway uncannily appealing to guys around my age. Although Galloway definitely has his own wisdom and experience, I suspect that, as with Joe Rogan, a crucial part of Galloway's appeal is that you feel you are learning right alongside him. Thankfully, 'Prof G' is far less — err — problematic than Rogan (Galloway is more of a well-meaning, spirited centrist), although he, too, has some pretty awful takes at times. This is a shame, because removed from the whirlwind of social media he can be really quite considered, such as in this long-form interview with Stephanie Ruhle.

In fact, it is this kind of sentiment that he captured in his 2019 Algebra of Happiness. When I look over my highlighted sections, it's clear that it's rather schmaltzy out of context ("Things you hate become just inconveniences in the presence of people you love..."), but his one-two punch of cynicism and saccharine ("Ask somebody who purchased a home in 2007 if their 'American Dream' came true...") is weirdly effective, especially when he uses his own family experiences as part of his story:

A better proxy for your life isn't your first home, but your last. Where you draw your last breath is more meaningful, as it's a reflection of your success and, more important, the number of people who care about your well-being. Your first house signals the meaningful—your future and possibility. Your last home signals the profound—the people who love you. Where you die, and who is around you at the end, is a strong signal of your success or failure in life.

Nir Eyal's Indistractable, however, is a totally different kind of 'self-help' book. The important background story is that Eyal was the author of the widely-read Hooked, which turned into a secular Bible of so-called 'addictive design'. (If you've ever been cornered by a techbro wielding a Wikipedia-thin knowledge of B. F. Skinner's behaviourist psychology and how it can get you to click 'Like' more often, it ultimately came from Hooked.) However, Eyal's latest effort is actually an extended mea culpa for his previous sin, and he offers both high- and low-level palliative advice on how to avoid falling for the tricks he so studiously espoused before. I suppose we should be thankful to capitalism for selling both the cause and the cure.

Speaking of markets, there appears to be a growing appetite for books in this 'anti-distraction' category, and whilst I cannot claim to have made an exhaustive study of this nascent field, Indistractable argues its points well without relying on accurate-but-dry "studies show..." citations or, worse, Gladwellian gotchas. My main criticism, however, would be that Eyal doesn't acknowledge the limits of a self-help approach to this problem; it seems that many of the issues he outlines are an inescapable part of the alienation of modern Western society, and the only way one can really avoid distraction is to move up the income ladder or move out to a 500-acre ranch.

Planet DebianNorbert Preining: New job: Fujitsu Research Labs

I am excited to announce that I joined Fujitsu Research Labs at the beginning of February.

My job will comprise, among other things, research and development in machine learning, open source strategy, and the development of, and representation of Fujitsu in, the scikit-learn consortium. We are doing a lot of topological data analysis, so if you are interested in these kinds of topics, don’t hesitate to contact me.
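As an aside for readers who have never touched the library: below is a minimal, illustrative sketch of the scikit-learn estimator API (the fit/predict convention). It has nothing to do with Fujitsu's actual research; the dataset and classifier are arbitrary choices of mine for demonstration.

    # Minimal scikit-learn sketch: train a classifier on a bundled
    # dataset and report accuracy on a held-out split.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)  # 8x8 handwritten-digit images
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)    # every estimator exposes fit()...
    preds = clf.predict(X_test)  # ...and predict()

    print(f"held-out accuracy: {accuracy_score(y_test, preds):.3f}")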

I am still settling into a completely new world of a “big and Japanese company”, with lots of on-boarding seminars, applications, paperwork and meetings, but I am looking forward to starting the actual work as soon as possible.

As a long, long time Linux user, I am a bit in trouble now, since it seems everything at Fujitsu requires Windows. I will try hard to improve this situation – including my dream of having Fujitsu machines with Debian pre-installed 😉

,

Cryptogram Friday Squid Blogging: Live Giant Squid Found in Japan

A giant squid was found alive in the port of Izumo, Japan. Not a lot of news, just this Twitter thread (with a couple of videos).

If confirmed, I believe this will be the THIRD time EVER a giant squid was filmed alive!

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Amazing Video of a Black-Eyed Squid Trying to Eat an Owlfish

From the Monterey Bay Aquarium.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Cryptogram Friday Squid Blogging: Flying Squid

How squid fly.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

LongNowThe Time Machine

Long Now co-founder Brian Eno in front of his 77 Million Paintings generative artwork (02007).

Editor’s Note: This paper was sent our way by its lead author, Henry McGhie. It was originally published in Museum & Society, July 2020. 18(2) 183-197. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. No changes have been made. 

The Time Machine: challenging perceptions of time and place to enhance climate change engagement through museums

By Henry McGhie*, Sarah Mander**, Asher Minns***

Abstract

This article proposes that applying time-related concepts in museum exhibitions and events can contribute constructively to people’s engagement with climate change. Climate change, now and in the future, presents particular challenges as it is perceived to be psychologically distant. The link between this distance and effective climate action is complex and presents an opportunity for museums, as sites where psychological distance can be explored in safe, consequence-free ways. This paper explores how museums can help people develop an understanding of their place within the rhetoric of climate change, and assist them with their personal or collective response to the climate challenge. To do so, we find that two time- and place-related concepts, Brian Eno’s the Big Here and Long Now and Foucault’s heterotopia, can provide useful framings through which museums can support constructive climate change engagement.

Key words: Museums, climate change, futures, engagement, psychological distance

1. Introduction

Climate change presents one of the most serious challenges to human society and the environment, where both reducing emissions and adapting to the impacts of climate change involve major systemic change to society and the economy. Given the scale, nature and speed of these systemic changes, greater public engagement has been considered to be essential for numerous reasons, including the building of democratic support for action (see for example Carvalho and Peterson 2012), and to improve policy making (Pidgeon and Fischhoff 2011), notably through the incorporation of diverse perspectives (Chilvers et al. 2018). From an international climate change policy perspective, the United Nations Framework Convention on Climate Change (UNFCCC) (1992) and Paris Agreement (2015) each include an article on education, training, public awareness, public participation and access to information (article 6, which also includes ‘international co-operation’, and article 12 respectively, referred to jointly as Action for Climate Empowerment).¹ The UN Sustainable Development Goals, a blueprint for international sustainable development from 2015–30, include a goal (13) to ‘Take urgent action to combat climate change and its impacts’; this goal includes a target to ‘Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning’.²

Climate change engagement may be defined as ‘an ongoing personal state of connection’ with the issue of climate change (Lorenzoni et al. 2007: 446; Whitmarsh et al. 2011). As connection incorporates a broad range of aspects that constitute what we think, feel and do about climate change — cognitive, socio-emotional and behavioural aspects — simply knowing more about climate change does not necessarily promote action and, where information provision does not provide people with an understanding of the actions that are needed or is demotivating, it can inadvertently disempower people (Moser and Dilling 2004; O’Neill and Nicholson-Cole 2009). The three elements of climate change engagement — cognitive, socio-emotional and behavioural — approximate to the three domains of the learning model used by UNESCO as a framework for Global Citizenship Education (GCED) and Education for Sustainable Development (ESD); GCED aims to educate people ‘to know, to do, to be, and to live together’, empowering learners of all ages to play an active role in overcoming global challenges (UNESCO 2015: 22; see also UNESCO 2017).

Cognitive, socio-emotional and behavioural aspects connect in non-linear, non-sequential ways, but are iterative and dialogical. Engaging constructively with all three aspects presents a plausible route towards constructive engagement with the topic, allowing people to make sense of climate change in their daily lives, connecting thoughts and concerns with choices and actions (Lorenzoni et al. 2007).

Museums have the potential to be important venues to promote public education, empowerment and action around climate change (see below), and were formally recognized at COP24 in Katowice (Poland) in December 2018 as key sites for supporting Action for Climate Empowerment.³ In this paper, we explore two questions: 1) how can museums help people develop their understanding of what climate change means to them? and 2) how can museums help facilitate a response to the climate challenge? These questions are explored using two concepts, Michel Foucault’s work on heterotopias and Brian Eno’s the Big Here and Long Now. We suggest that these can be used to challenge conventional ways of thinking about time and place, and frame climate change engagement in museums in a way that allows people to negotiate and navigate the psychological distance of climate change in constructive ways. In Section 2 we provide an overview of the potential roles of museums in responding to climate change; in Section 3 we discuss the literature on psychological distance. In Sections 4 and 5 we present Michel Foucault’s work on heterotopias, and Brian Eno’s the Big Here and Long Now, in relation to climate change focused exhibitions in museums.

2. Museums and climate change

Fiona Cameron and her colleagues have written extensively on the role[s] of museums in the context of climate change. They explored the current and potential roles of museums (specifically, natural history museums, science museums and science centres) in society in relation to climate change, in Australia and the US as part of the ‘Hot Science Global Citizens: The Agency of the Museum Sector in Climate Change Interventions’ project (2008–12). Their results demonstrated significant differences between the current and desired roles of museums in respect of climate change among the public and museum workers. The project suggested nine strategic positions for museums to adopt to better meet the desires of their publics, as well as key role changes for science centres and museums (based on large differences between public and museums’ desires for particular positions) (Cameron 2011, 2012). Results of the ‘Hot Science’ project were used to develop a set of nine principles intended to support museums and science centres to act meaningfully on climate change (Cameron et al. 2013).

Cameron (2010) introduced the concepts of ‘liquid museums’ and ‘liquid governmentalities’ to explore how museums can support action and empowerment around contemporary issues such as climate change, without exercising authoritarian control (see also Cameron 2007, 2011). Cameron et al. (2013: 9) wrote

The big task of the museum sector is not only to inform publics on the science of climate change but also to equip citizens with tactical knowledges that enable participation in actions and debates on climate change that affect their futures.

They also suggested that

museums and science centers can engage a future-oriented, forward thinking frame, as places to link the past to the far future through projections of what might happen as places to offer practical governance options and as places to present long-term temporal trajectories. They offer an antidote to short-term thinking and the failure of governments to act, by presenting the variable dispositions, ideologies, and governance options, thereby constructing a mediated view of the future as a series of creative pathways (Cameron et al. 2013: 11; see also Cameron and Neilson 2015).

Notwithstanding the wide potential of museums to contribute meaningfully to addressing the challenges of climate change, Canadian Robert Janes has noted that, for the most part, museums have been slow to incorporate climate change into their work, risking their own long-term relevance (Janes 2009, 2016).

In Curating the Future, Newell et al. proposed that museums can be effective places for supporting discussion and action to address climate change. Through a wide range of case studies that read or re-read objects and exhibitions in the context of rapid climate change, they explored how contemporary museums have been adjusting their conceptual, material and organizational structures to reposition themselves on four deeply rooted trajectories that separate colonized and colonizer, Nature and Culture, local and global, authority and uncertainty (Newell et al. 2017).

Rather than direct their attention to protecting material from the past, museums can direct their work (the full range of their work, including collecting and public-facing work) towards supporting and enabling better futures more actively. Natural history museums and science centres could readily engage around contemporary issues such as climate change and other environmental topics (as could many other kinds of museums) to become ‘natural futures museums’; military museums could focus on topics around the causes and consequences of contemporary wars in order to reduce future conflicts; and ethnographic museums could emphasize issues around cultural diversity and identity in the face of globalization and social inequality (see e.g. Basu and Modest 2015; Dorfman 2018). This approach recognizes the interconnectedness of different forms of heritage — material, natural, cultural and intangible — and connects with emerging ideas of heritage as a future-making practice, e.g.

heritage is not a passive process of simply preserving things from the past that we choose to hold up as a mirror to the present, associated with a particular set of values that we wish to take with us into the future. Thinking of heritage as a creative engagement with the past in the present focuses our attention on our ability to take an active and informed role in the production of our own ‘tomorrow’ (Harrison 2013: 4).

In previous work, we have proposed sets of recommendations for museums, to support them to develop constructive climate change engagement activities (McGhie et al. 2018; McGhie 2019). The present paper builds on these contributions, by providing a more theoretical framework drawing on applied social psychology perspectives.

3. Psychological distance, climate change and museums

From the perspective of many in the Global North, climate change is widely perceived to be a distant phenomenon, something which will happen in the future, in far-away places (so impacting most on those in the Global South), and which has great uncertainty associated with it in terms of the likelihood, scale and nature of impacts. The proximity of climate change can be usefully described in terms of ‘psychological distance’, a theoretical construct defined as ‘a subjective perception of distance between the self and some object, event, or person’ (Wang et al. 2019). Four dimensions of psychological distance have been identified: temporal distance (time), spatial distance (place), social distance (cultural difference), and hypothetical distance (certainty or uncertainty) (Trope and Liberman 2010). These, together, describe the ‘perception of when [an event] occurs, where it occurs, to whom it occurs and whether it occurs’ (Trope and Liberman 2010: 442, quoted in Wang et al. 2019: 2).

As the need to mitigate climate change becomes more urgent (Committee on Climate Change 2019a, 2019b) and climate impacts are felt more strongly (see for example Burke and Stott, 2017; Van Oldenborgh et al. 2017), the influence of the proximity of climate change on people’s decisions to reduce their greenhouse gas emissions or adapt to climate impacts has been suggested as ‘a promising strategy for increasing public engagement with climate change’ (Jones et al. 2017). Reducing psychological distance has frequently been suggested as a means of increasing public engagement with, and action to address, climate change (see Schuldt et al. 2018 for references). There is indeed evidence from several studies that public concern about climate change decreases as the psychological distance of climate change increases, but this is not a simple or straightforward panacea (see Wang et al. 2019 for references). Exploring whether pro-environmental behaviour was best predicted by concrete, close perceptions of climate change (psychological closeness), or abstract, distant perceptions (large psychological distance), Spence et al. (2012) found that, among a nationally representative cohort of people in Britain over fifteen years of age (N=1,822), psychological closeness with energy futures and climate change was associated with higher levels of concern and preparedness to reduce energy consumption; so, people who have direct experience of climate impacts, which brings it close in terms of time, place and certainty, have been reported as being more willing to take mitigation actions (Spence et al. 2012; Broomell et al. 2015). However, Spence et al. (2012) also found that greater distance on the social distance dimension was associated with higher preparedness to take personal action, with people expressing concern for people in the Global South who were likely to be personally more seriously impacted by climate change than the survey respondents considered they would be themselves.

Scholars have considered climate change and psychological distance in relation to Construal Level Theory (Brügger et al. 2016; Griffioen et al. 2016; Wang et al. 2019), which is concerned with the ways in which our mental representations depend on their closeness to our present situation. Phenomena of which we have direct experience, or which are close to our present situation, require little mental effort to interpret or construe (low-level construal). By contrast, phenomena which are spatially, temporally or socially distant, or where there is inherent uncertainty, require a greater amount of effort to be represented mentally, and will result in high-level construals which will be more abstract and less concrete (Brügger et al. 2016). According to this rationale, if climate change is perceived as distant, it may be conceived in an abstract way. Abstractness has been found to encourage a goal-centred mind-set, allowing for the exploration of more distant, creative solutions (Liberman and Trope 2008), and enhancing self-control (Trope and Liberman 2010, see Wang et al. 2019). However, a concrete construal of climate change may promote psychological closeness, which may foster concern (Trope and Liberman 2010; Van Boven et al. 2010). Wang et al. (2019) found that psychological closeness to climate change predicted pro-environmental behaviour, while construal level produced inconsistent results; manipulations of both features did not increase pro-environmental behaviour. They also found that the presumed close association between psychological distance and construal level may not hold true in the case of climate change.

In one study on construal level and environmental issues, interventions were most effective when participants were asked to find an abstract goal in a specific context, or a specific goal in an abstract context, in that they facilitated both a greater awareness and a consideration of how to take personal action (Rabinovich et al. 2009; see also Ejelöv et al. 2018). Moreover, McDonald et al. (2015) found a complex relationship, where direct experience (short psychological distance) did not necessarily lead to action, and that ‘the optimal framing of psychological distance depends on 1) the values, beliefs and norms of the audience, and 2) the need to avoid provoking fear and resulting avoidant emotional reactions’. To Wang et al., this ‘suggests that both psychological closeness and distance can promote pro-environmental action in different contexts’ (Wang et al. 2019: 3).

Overall, research in this area demonstrates that the relationship between psychological distance and climate change is complex, but many scholars have pointed out that inspiring more, or sufficient, action on climate change is not simply a matter of bringing climate change closer (see for example Brügger et al. 2015; McDonald et al. 2015; Brügger et al. 2016; Schuldt et al. 2018; Wang et al. 2019).

A role for museums

Clearly, climate change presents an especially complex topic when considering psychological distance and construal level. However, acknowledging this complexity and considering the dimensions of psychological distance and construal level within the design of, and intended outcomes from, climate change engagement activities has the potential to increase their effectiveness. This may help promote people’s constructive engagement with climate change as a result, and offers a distinctive role for museums to play.

Climate change engagement activities may provide opportunities to explore climate change considering the social, spatial (see for example Lorenzoni et al. 2007; Spence et al. 2012) and temporal dimensions of psychological distance and climate change (see for example Rabinovich et al. 2010). These we consider to be of particular relevance in a museum setting as museums use their artefacts, collections and exhibits to connect (‘engage’) visitors with other places and times. They use their collections to tell and create stories in formal, informal and non-formal educational activities that can resonate with, or challenge, the values and world views of their visitors (McGhie et al. 2018; McGhie 2019). Science museums and science centres can also play a particular role in supporting people to understand the key importance of uncertainty and probability in science, which relates to the hypothetical dimension of psychological distance and climate change. Increasing numbers of museums are also seeing themselves as place-makers or spaces for activism, and are actively trying to engage people with thinking about the future (e.g. Janes 2016; Janes and Sandell 2019).

We now move on to present the Big Here and Long Now and heterotopia, two concepts that provide alternative ways of thinking about time and place. We consider how these can usefully be ‘deployed’ to frame museum engagement on climate change and provide examples of where museums are using them.

4. The Big Here and Long Now

Reflecting on the fast pace of New York lifestyles, musician Brian Eno observed that ‘everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous’. Eno conceived of this as a ‘short now’, with a fast pace of life, and short timeframes for decisions and for considering the impacts of those decisions. However, this also suggested to Eno the possibility of the opposite, the ‘long now’. Eno also considered how people think about ‘here’: for some it is their immediate surroundings, a ‘small here’, while for others the spatial scale is wider, encompassing neighbourhoods, towns and indeed the world, a ‘big here’. Eno conceived of a ‘Big Here’ and ‘Long Now’, combining these considerations of place and time respectively.⁴

The idea of the Long Now became a manifesto for the Long Now Foundation, established in 1996 to encourage a long-term view and stewardship of the long-term (Brand 1999). The first project of the Foundation was the idea of a 10,000-year clock, which is currently being built in Texas (see Brand 1999 for background). Futurist Danny Hillis, who devised the concept of the clock, wrote:

I cannot imagine the future, but I care about it. I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me. I sense that I am alive at a time of important change, and I feel a responsibility to make sure that the change comes out well. I plant my acorns knowing that I will never live to harvest the oaks. I have hope for the future.⁵

Kevin Kelly, also of the Long Now Foundation, popularized a quiz developed by naturalist Peter Warshall, which aimed to encourage people to think in a larger geographical context, namely a river’s watershed.⁶ Kelly broadened the concept to encourage people to think on a macro scale, to constitute a Big Here, which could extend to a country, the planet or indeed beyond the planetary scale. The combination of the Big Here and Long Now has been adopted by the Long Now Foundation as a means for broadening both a sense of place and time, that ‘now’ is not a particular moment but a moment that connects with what has gone before and what will follow, and ‘here’ is bigger than the small piece of ground that we stand upon. ‘Now’ and ‘here’ become entirely subjective in terms of their scope.

Conceptualizing and framing climate change in terms of the Big Here and Long Now, in contrast to the Small Here and Short Now, opens a space for stretching our thinking about place from beyond our immediate surroundings and towards a broader conceptualization of society, both spatially and temporally. This draws our attention to processes, contexts and consequences of decisions — our individual and collective decisions — over a broad range of scales and timeframes. Such an approach may help promote climate change engagement in people’s everyday lives, and climate action through responsible, sustainable consumption.

5. Heterotopia

The Paris Agreement and the Sustainable Development Goals represent an idealized, desired future state. This is a utopia, in the properly ambiguous sense of the word: both an ‘ideal place’ (a ‘eutopia’) and, being in the future, a ‘nowhere place’ (an ‘outopia’) (see, especially, Marin 1984, 1992; Hetherington 1997). In exploring and envisioning this ‘other place’, we can draw on one of the most familiar time-related concepts relating to museums, Michel Foucault’s concept of museums as heterotopia. Foucault introduced the concept in 1967, during a period of work that was concerned with archaeology and archives (Foucault 1986, 1998; see Hetherington 2015). Foucault noted ‘we are in the epoch of simultaneity: we are in the epoch of juxtaposition, the epoch of the near and far, of the side-by-side, of the dispersed’ (Foucault 1986: 22). Foucault distinguished sites that have the ‘curious property’ that they ‘suspect, neutralize, or invert the set of relations that they happen to designate, mirror, or reflect’ (Foucault 1986: 24). He identified two such sites: firstly, utopias, sites with no real place that represent society in a perfected form. Secondly, there were sites,

something like counter-sites, a kind of effectively enacted utopia in which the real sites, all the other real sites that can be found within the culture, are simultaneously represented, contested, and inverted. Places of this kind are outside of all places, even though it may be possible to indicate their location in reality (Foucault 1986: 24).

These are, of course, Foucault’s heterotopia. Hetherington has built on this definition, to construe heterotopia as ‘spaces of alternate ordering. Heterotopia organize a bit of the social world in a way different to that which surrounds them’ (Hetherington 1997: viii). Foucault held there to be six principles of heterotopia: firstly, that they probably exist in every culture. Second, and importantly for our purposes, that heterotopia can be made to function in a very different fashion at different times. Third, the heterotopia is capable of juxtaposing several sites and spaces that are themselves incompatible. Fourth, heterotopia are most often linked to slices in time, and ‘the heterotopia begins to function at full capacity when men arrive at a sort of absolute break with their traditional time’ (Foucault 1986: 26). Most notably, in this respect, Foucault wrote:

…there are heterotopias of indefinitely accumulating time, for example museums and libraries. Museums and libraries have become heterotopias in which time never stops building up and topping its own summit, whereas in the seventeenth century, even at the end of the century, museums and libraries were the expression of an individual choice. By contrast, the idea of accumulating everything, of establishing a sort of general archive, the will to enclose in one place all times, all epochs, all forms, all tastes, the idea of constituting a place of all times that is itself outside of time and inaccessible to its ravages, the project of organizing in this way a sort of perpetual and indefinite accumulation of time in an immobile place, this whole idea belongs to our modernity. The museum and the library are heterotopias that are proper to western culture of the nineteenth century (Foucault 1986: 26).

Fifth, heterotopia are not freely accessible: there are limitations or rules around their openness. Finally, heterotopia have a function in relation to all remaining space: either ‘to create a space of illusion that exposes every real space’ or, on the contrary, ‘to create a space that is other, another real space, as perfect, as meticulous, as well arranged as ours is messy, ill constructed, and jumbled’ (Foucault 1986: 27). Hetherington notes that heterotopia are ambiguously articulated, whether as ‘other places / places of otherness / emplacements of the other’ (Hetherington 2015: 35).

While Foucault’s work on heterotopia has, understandably, been related to museums (see Lord 2006 for examples), as Lord points out, Foucault’s primary discussion of museums as heterotopia was in terms of the building of an archive: of the materiality of the museum that builds up and the knowledges associated with that material, rather than the constant creation and recreation of the past from an interrogation of that material (they ‘endlessly accumulate times in one space through the material objects they contain and the knowledge associated with them’ (Hetherington 2015: 35)). Lord expanded on Foucault’s work on heterotopia to emphasise the key importance of narrative and interpretation in museums’ function as heterotopia:

The museum is the space in which the difference inherent in its content is experienced. It is the difference between things and words, or between objects and conceptual structures: what Foucault calls the ‘space of representation’ (1970: 130)… the space of representation is the heterotopia (Lord 2006: 4–5).

It is worth noting that museums’ attempts to represent everything or to ‘constitute a place of all times that is itself outside time’, to draw on Foucault’s phrase (Foucault 1986: 26, see Hooper-Greenhill 2000; Lord 2006), are increasingly unsustainable or impossible. Their attempts to exist ‘outside of time and inaccessible to its ravages’ (Foucault 1986: 26) are similarly tested by social, economic and environmental challenges, including climate change.

Heterotopia can be repurposed to explore the time that does not yet exist, the future, drawing on Foucault’s brief mention of utopias as sites that ‘have a general relation of direct or inverted analogy with the real space of Society. They represent society itself in a perfected form, or else society turned upside down, but in any case these utopias are fundamentally unreal spaces’ (Foucault 1986: 24).⁷ Lord notes how ‘the definition of museum as heterotopia explains how the museum can be progressive without subscribing to politically problematic notions of universality or ‘total history’, but as a ‘growth of capabilities’’. She concludes that ‘museums are best placed to critique, contest and transgress those problematic notions, precisely on the basis of their Enlightenment lineage’ (Lord 2006: 12). Here, then, we can see potential for museums as sites for subverting and imagining other potential societies and futures; a ‘growth of capabilities’ speaks well to the language of a productive future where, in the words of the Sustainable Development Goals, ‘no-one is left behind’.

Figure 1. In Human Time exhibition, Climate Museum, New York, showing Peggy Weil’s film 88 Cores, image credit: Sari Goodfriend, courtesy of the Climate Museum.

6. Applying the Big Here and Long Now, and heterotopia in museums

In this section we consider how the two aforementioned concepts can be related to exhibitions and events linked to climate change, and how they can be factored into new developments. Museums typically have collections shown in exhibits that originate from different time periods and places, which speak to both the Big Here and the Long Now, extending the viewer’s or participant’s ‘here’ or ‘now’. Considering the Big Here and Long Now can provide a useful context for exploring issues such as climate change, sustainability and citizenship, and can be seen in many exhibitions about climate change. The Big Here and Long Now becomes a useful lens which, together with considerations of psychological distance and construal level, allows us to consider how museum interventions are aligning, or not, with these concepts.

To take one example, the recent exhibition Human Nature (2019–20) at the World Cultures Museum in Stockholm conveys the key message ‘it’s all connected. How we live our lives is closely related to the state of our earth’.⁸ This exhibition and this strapline extend our sense of the here and now; they seem to attempt to reduce psychological distance, linking our lives with their impacts; by giving form and voice to these relationships the museum appears to make our construal of the relationship more concrete. The Climate Museum in New York staged a two-part exhibition, In Human Time (2017–18) by Peggy Weil and Zaria Forman, to explore ‘intersections of polar ice, humanity, and time’ (fig. 1).⁹ A film, by Peggy Weil, shows close-ups of ice cores that were drilled down two miles into the Greenland Ice Sheet, spanning 110,000 years; the film pans very slowly over the ice core, revealing the subtle changes in colour, bubbles and texture of the ice. Weil wrote ‘The pace and scale of the work is a gesture towards deep time and the gravity of climate change’.¹⁰ Zaria Forman’s work consisted of a reproduction of a hyper-realistic image of an Antarctic iceberg, grounded in an ‘iceberg graveyard’ in Antarctica. The image was accompanied by a timelapse video, illustrating the process of the creation of the image. This single exhibition, in two parts, demonstrates a complex interplay of the concepts of the Big Here and Long Now, with the long timescale of the development of the ice in the ice core reflected in the slow pace of the film. The grounded, melting iceberg in the Antarctic reflects a concrete construal of the effects of climate change, while the far away nature of the Antarctic speaks of a large psychological distance.

Figure 2. Climate Control exhibition, Manchester Museum, UK, 2016, showing two entrances where visitors decided whether to explore the past or the future. Image credit: Gareth Gardner.

To take another example, the exhibition Climate Control was shown at Manchester Museum (University of Manchester) during the city’s time as European City of Science in 2015–16. Two of the authors (HM and SM) were involved in the development of the exhibition and accompanying programme. The exhibition was accompanied by a range of activities, developed in partnership with and involving academics from the University of Manchester and a range of NGOs and community organizations, as well as Manchester Climate Change Agency, which is responsible for developing and overseeing the city’s climate change mitigation and adaptation strategy. Through these partnerships, the exhibition was used as the inspiration for, and reinterpreted through, a range of engagement activities to promote climate change awareness, adaptation and mitigation.

The exhibition had two entrances where visitors could choose either to explore climate change in the past (and present) or the future (fig. 2). The section of the exhibit on the past (and present) included exhibits on fossil fuels and fossils from millions of years ago, a range of Arctic wildlife impacted by climate change today, and photographs of people impacted by climate change around the world. The exhibit emphasized the connection between events over very long timescales: the trapping of sunlight by plants millions of years ago, their preservation as fossils, and the burning of fossil fuels over the last three centuries. It also emphasized the connection between far-distant places: the burning of fossil fuels in industrial countries, and climate impacts in the Arctic and around the world. The connection was illustrated by birds that spend the summer in the Arctic and migrate to the UK in the winter, to foster a sense of shared wildlife. Images of people affected by flooding in Bangladesh, sea-level rise in Belize, and people who rely on meltwater from vanishing glaciers in Ladakh and Peru, showed the real-life impacts of climate change on people around the world. The exhibit explored climate change from a local, place-specific context, in terms of the industrial history of Manchester, a global dimension linking Manchester to the Arctic, and to a range of different communities around the world. A taxidermy mount of a Polar Bear was accompanied by the open-ended question ‘are we so different?’. This exhibition thus approached climate change from both an abstract and a concrete construal level, brought in various psychological distances, and was strongly linked with the Big Here and Long Now concept. The viewer or participant was always intended to be psychologically close to the place – the museum and exhibition gallery – where the exhibition was shown.

Seeking to empower visitors to the Climate Control exhibition to consider their place in this and the myriad of possible alternative future worlds, the other half of the exhibition was entitled ‘explore the future’. This part of the exhibition did not contain museum objects, but instead was a space with information on climate change action at local, national and international scales and activities which invited people to share ideas on ‘changing the future’ and to reflect on the ideas of others. The exhibition was intended to look unfinished when it first opened, as the future is not set in stone. This part of the exhibition was, we feel, a heterotopia in the sense that it asked people to create a place that is not a real place, but which has a role in relation to the external world.

The two halves of the exhibition were divided by a central wall. Visitors to the ‘explore the past’ section were invited to stick a small black sticker to a white wall to represent their carbon footprint, and to emphasize that together we make a large collective impact. This can be regarded as a concrete construal level. On the reverse side of the wall, in the ‘explore the future’ section, visitors were invited to add stickers on which they wrote their ideas on how to create a sustainable future. This, being abstract, we feel represented a higher construal level.

The accompanying engagement activities, developed in partnership with community organizations and academics, further sought to engage visitors to the museum with climate change in novel and multi-sensory ways, encouraging them to think about climate change in terms of time and place. During exhibition opening hours, researchers and practitioners invited visitors to take part in ‘Climate Conversations’, talking and telling their own climate change story. Each person took a different approach to their ‘climate conversation’ using experiments, computer simulations, stories, data and objects as the jumping off points for discussion; the purpose was not to provide information, but instead to present a diverse range of perspectives on the meaning of climate change in the lives of researchers and practitioners and, in so doing, invite visitors to think about what climate change meant to them. Climate Control sought to elicit new visions from the people of Manchester for their city, through the co-creation of alternative futures in the heterotopia of the museum. This took place in different ways including creative mapping and facilitated sessions based on Manchester’s Climate Change Strategy, where people built their visions for a sustainable Manchester from Lego, guided by policies on mitigation and adaptation from the city’s climate strategy.¹¹

The triangulation of academia, public engagement and public policy raised challenges of working together, but was aimed at supporting the development of climate change policies within the city, and promoting civic participation among the public. Climate Control drew upon Manchester’s industrial heritage and its inextricable link to climate change to create public opportunities directed towards shaping the future (McGhie et al. 2018; McGhie 2019).

7. Discussion

As the need for climate action becomes ever more urgent, we argue in this paper that museums have a key role to play, providing a space where people can work through the meaning of climate change in their own lives, and inspiring and supporting climate actions. More ambitiously, however, we argue that museums can support people’s constructive, meaningful and impactful climate change engagement beyond the museum, by developing exhibitions and other events which recognize the psychological distance of climate change. Whilst making climate change closer — more immediate, personal or concrete — is not a silver bullet for enhancing climate change awareness, empowerment and action, working with psychological distance in museums, in terms of time, place and uncertainty, helps to address the perceived distance of climate change from people’s everyday lives, which can be a barrier to climate action.

Framing climate engagement through the Big Here and Long Now offers the opportunity to change perceptions of time and place, enabling people to explore and question the relationship between the local and the global or national, and recognize that their ‘now’ is merely a stopping off point between the past and multiple possible futures which have yet to be created. Through their exhibitions, museums can develop narratives which align with the multiple values of their visitors, telling different stories at the same time. Depending on the narrative, climate change can be made less abstract, or alternatively a narrative could be framed around the abstract aspect of climate change to encourage people to reflect on rights, responsibilities and morality. We suggest that the combination of the Big Here and Long Now with the concept of the heterotopia presents a particularly powerful approach, combining a deep exploration of ‘where we are now’, from the Big Here and Long Now, with a vision-creating element from the heterotopia: where we are trying to get to. This enriched understanding provides opportunities to explore how we, individually and collectively, will bridge the difference between our current state and the state we desire, regarding climate change.

Museums have a unique role as trusted organizations and spaces where people come not only to be entertained but also to learn; increasingly museums are using their collections in creative ways as sites of social change. Working in collaboration with partners, museums can be part of a coalition of action on climate change, as Manchester Museum sought to do with the Climate Control exhibition and associated activities. For example, the co-creation of future visions for Manchester out of Lego allowed people to explore alternative visions, with such models having a ‘performative’ purpose, moving discussions away from targets to places, lives and communities. Working with different conceptions of time and place can give people a sense of agency, whereby transformation is something created by people, rather than happening to them (see Cameron and Deslandes 2011). Museums can aim to work with people, as individuals and communities, in co-production and co-creation, to give people agency in their future and its creation: ‘Rather than treating audiences as passive species bodies to be reformed, museums need to acknowledge the creative potential of their audiences as valued actors having valued opinions and expertise, skills, capacities, desires, expectations, reflexive capabilities and imagination’ (Cameron 2011: 100).

Museums have the potential to provide people with opportunities to explore alternative pasts, presents and futures, and to negotiate the connections (and disconnections) between local and global dimensions, and short and long-term temporalities; in other words, museums can help people (individually and collectively) negotiate the psychological distance dimensions of climate change, and connect them with their own lives. Focussing on local and immediate situations has perhaps the greatest potential to empower people and to consider personal contribution, community and citizenship; while long-term dimensions can provide greater opportunity for creative exploration of more radically different, structural changes to society. ‘Starting’ with the local may engage people who are not immediately concerned with exploring more abstract ideas of the future. The combination of creative, interactive experiences mentioned above, which draw on people’s own ideas as much as projecting ‘museum narrative’ for people to consume, provides a more plausible route for supporting people’s ongoing, constructive engagement and dialogue with climate change beyond the museum, going beyond ‘mere’ intellectual understanding to self-knowledge. Providing opportunities for people to understand, share and respond as part of museum experiences provides opportunities for people to explore and begin to create possible futures together in a safe environment.

If we are to transform society, and our lives, we need spaces that support transformation and that create opportunities to imagine, design and begin to create desirable futures. When we think about the future, we normally do so within the box of our town, our house, our lives. In a museum you are transported to a different place; accepting the museum’s function as heterotopia can free you to imagine new futures with different boundaries, and to explore different times and places (at least in some sense): surely a kind of ‘partly enacted utopia’ that can be put to work. By providing a space (physical and intellectual) and a frame to consider the present as a point on the journey from the past to one of a myriad of possible futures, museums can begin to reposition themselves to actively promote civic participation and action around climate change.

Received: 5 September 2018 

Finally accepted: 11 March 2020

Acknowledgments

An early version of this paper was presented (by HM) at the 25th International Congress of the History of Science and Technology (Rio de Janeiro, July 2017) in a symposium on ‘Narratives of Future Earth’. HM and SM are grateful to Dr. David Gelsthorpe, Anna Bunney (both Manchester Museum), Dr. Rebecca Cunningham (University of Technology, Sydney) and Jonny Sadler (Manchester Climate Change Agency) for help in developing the Climate Control programme at Manchester Museum.

Notes

[1] https://unfccc.int/resource/docs/convkp/conveng.pdf, https://unfccc.int/sites/default/files/english_paris_agreement.pdf accessed 24 March 2020.

[2] https://sdgs.un.org/2030agenda accessed 24 March 2020. 

[3] https://unfccc.int/sites/default/files/resource/cp24_auv_L.3_edu.pdf accessed 16 January 2020.

[4] Brian Eno, ‘The Big Here and Long Now’, https://longnow.org/essays/big-here-and-long-now/ accessed 16 January 2020.

[5] https://www.wired.com/1995/12/the-millennium-clock/ accessed 25 March 2020.

[6] https://kk.org/helpwanted/archives/001084.php accessed 31 May 2020.

[7] See also Peter Johnson, ‘Some reflections on the relationship between utopia and heterotopia’, Heterotopian Studies, 2012. http://www.heterotopiastudies.com, accessed 25 March 2020.

[8] https://www.varldskulturmuseet.se/en/varldskulturmuseet/ongoing-exhibitions/human-nature/about-the-exhibition/ accessed 15 January 2020.

[9] https://www.inhumantime.org/ accessed 15 January 2020.

[10] https://pweilstudio.com/project/88-cores/ accessed 15 January 2020.

[11] Manchester Climate Change Agency (2016), https://www.manchesterclimate.com/sites/default/files/MCCS%202017-50.pdf, accessed 25 March 2020.

References

Basu, P. and Modest, W. (2015) Museums, Heritage and International Development, London: Routledge.

Brand, S. (1999) The Clock of the Long Now. Time and Responsibility: The Ideas Behind the World’s Slowest Computer, New York: Basic Books.

Broomell, S.B., Budescu, D.V. and Por, H.-H. (2015) ‘Personal Experience with Climate Change Predicts Intentions to Act’, Global Environmental Change, 32 67–73.

Brügger, A., Dessai, S., Devine-Wright, P., Morton, T.A. and Pidgeon, N.F. (2015) ‘Psychological Responses to the Proximity of Climate Change’, Nature Climate Change, 5 1031–7.

Brügger, A., Morton, T.A. and Dessai, S. (2016) ‘‘Proximising’ Climate Change Reconsidered: A Construal Level Theory Perspective’, Journal of Environmental Psychology, 46 125–42.

Burke, C. and Stott, P. (2017) ‘Impact of Anthropogenic Climate Change on the East Asian Summer Monsoon’, Journal of Climate, 30 5205–20.

Cameron, F.R. (2007) ‘Moral Lessons and Reforming Agendas: History Museums, Science Museums, Contentious Topics and Contemporary Societies’, in Simon J. Knell, Suzanne MacLeod and Sheila Watson (eds) Museum Revolutions: How Museums Change and are Changed, 330–42, London: Routledge.

(2010) ‘Liquid Governmentalities, Liquid Museums and the Climate Crisis’, in Fiona Cameron and Lynda Kelly (eds) Hot Topics, Public Culture, Museums, 112–28, Newcastle upon Tyne: Cambridge Scholars.

(2011) ‘From Mitigation to Creativity: The Agency of Museums and Science Centres and the Means to Govern Climate Change’, Museum and Society, 9 (2) 90–106.

(2012) ‘Climate Change, Agencies, and the Museum for a Complex World’, Museum Management and Curatorship, 27 (4) 317–39.

Cameron, F.R. and Deslandes, A. (2011) ‘Museums and Science Centres as Sites for Deliberative Democracy on Climate Change’, Museum and Society, 9 (2) 136–53.

Cameron, F.R., Hodge, B. and Salazar, F. (2013) ‘Representing Climate Change in Museum Space and Places’, WIREs Climate Change, 4 (1) 9–21.

Cameron, F.R. and Neilson, B. (eds) (2015) Climate Change and Museum Futures, London: Routledge.

Carvalho, A. and Peterson, T.R. (2012) ‘Reinventing the Political: How Climate Change Can Breathe New Life into Democracies’, in Anabela Carvalho and Tarla Rai Peterson (eds) Climate Change Politics. Communication and Public Engagement, 1–28, New York: Cambria Press.

Chilvers, J., Pallett, H. and Hargreaves, T. (2018) ‘Ecologies of Participation in Socio-Technical Change: The Case of Energy System Transitions’, Energy Research and Social Science, 42 199–210.

Committee on Climate Change (2019a) Reducing UK Emissions: 2019 Progress Report to Parliament, London: Committee on Climate Change.
 (2019b) Progress in Preparing for Climate Change: 2019 Report to Parliament, London: Committee on Climate Change.

Dorfman, E. (ed) (2018) The Future of Natural History Museums, ICOM Advances in Museums Research, London: Routledge.

Ejelöv, E., Hansla, A., Bergquist, M. and Nilsson, A. (2018) ‘Regulating Emotional Responses to Climate Change — A Construal Level Perspective’, Frontiers in Psychology, 9 (629) doi.org/10.3389/fpsyg.2018.00629.

Foucault, M. (1986) ‘Of Other Spaces’, Diacritics, 16 (1) 22–7.
 (1998) ‘Different Spaces’, in James D. Faubion (ed) The Essential Works, vol. 2, Aesthetics, 175–85, London: Allen Lane.

Griffioen, A.M., van Beek, J., Lindhout, S.N. and Handgraaf, M.J.J. (2016) ‘Distance Makes the Mind Grow Broader: An Overview of Psychological Distance Studies in the Environmental and Health Domains’, Applied Studies in Agribusiness and Commerce, 10 (2–3) 33–46.

Harrison, R. (2013) Heritage: Critical Approaches, Abingdon: Routledge.

Hetherington, K. (1997) The Badlands of Modernity: Heterotopia and Social Ordering, London: Routledge.

(2015) ‘Foucault and the Museum’, in Andrea Witcomb and Kylie Message (eds) The International Handbooks of Museum Studies: Museum Theory, 21–40, Chichester: John Wiley and Sons.

Hooper-Greenhill, E. (2000) Museums and the Interpretation of Visual Culture, London: Routledge.

Janes, R.R. (2009) Museums in a Troubled World: Renewal, Irrelevance or Collapse?, London: Routledge.

(2016) Museums Without Borders, Abingdon: Routledge.

Janes, R.R. and Sandell, R. (2019) Museum Activism, Abingdon: Routledge.

Jones, C., Hine, D.W. and Marks, D.G. (2017) ‘The Future is Now: Reducing Psychological Distance to Increase Public Engagement with Climate Change’, Risk Analysis, 37 (2) 331–41.

Liberman, N. and Trope, Y. (2008) ‘The Psychology of Transcending the Here and Now’, Science, 322 (5905) 1201–5.

Lord, B. (2006) ‘Foucault’s Museum: Difference, Representation and Genealogy’, Museum and Society, 4 (1) 1–14.

Lorenzoni, I., Nicholson-Cole, S. and Whitmarsh, L. (2007) ‘Barriers Perceived to Engaging with Climate Change Among the UK Public and their Policy Implications’, Global Environmental Change, 17 (3–4) 445–59.

Marin, L. (1984) Utopics: Spatial Play, London: Macmillan.

(1992) ‘Frontiers of Utopia: Past and Present’, Critical Inquiry, 19 (3) 397–420.

McDonald, R.I., Chai, H.Y. and Newell, B.R. (2015) ‘Personal Experience and the ‘Psychological Distance’ of Climate Change: An Integrative Review’, Journal of Environmental Psychology, 44 109–18.

McGhie, H.A. (2019) ‘Climate Change: A Different Narrative’, in Walter Leal Filho, Bettina Lackner and Henry McGhie (eds) Addressing the Challenges in Communicating Climate Change Across Various Audiences, 13–29, Cham (Switzerland): Springer International.

McGhie, H.A., Mander, S. and Underhill, R. (2018) ‘Engaging People with Climate Change through Museums’, in Walter Leal Filho, Evangelos Manolas, Anabela Marisa Azul, Ulisses M. Azeiteiro and Henry McGhie (eds), A Handbook of Climate Change Communication, vol. 3, 329–48, Cham (Switzerland): Springer.

Moser, S. and Dilling, L. (2004) ‘Making Climate Hot: Communicating the Urgency and Challenge of Global Climate Change’, Environment, 46 (10) 32–46.

Newell, J., Robbin, L. and Wehner, K. (eds) (2017) Curating the Future: Museums, Communities and Climate Change, Abingdon: Routledge.

O’Neill, S. and Nicholson-Cole, S. (2009) ‘‘Fear Won’t Do It’: Promoting Positive Engagement with Climate Change through Visual and Iconic Representations’, Science Communication, 30 355–79.

Pidgeon, N. and Fischhoff, B. (2011) ‘The Role of Social and Decision Sciences in Communicating Uncertain Climate Risks’, Nature Climate Change, 1 35–41.

Rabinovich, A., Morton, T. and Postmes, T. (2010) ‘Time Perspective and Attitude- Behaviour Consistency in Future-Oriented Behaviours’, British Journal of Social Psychology, 49 (1) 69–89.

Rabinovich, A., Morton, T.A., Postmes, T. and Verplanken, B. (2009) ‘Think Global, Act Local: The Effect of Goal and Mindset Specificity on Willingness to Donate to an Environmental Organization’, Journal of Environmental Psychology, 29 (4) 391–9.

Schuldt, J.P., Rickard, L.N. and Yang, Z.J. (2018) ‘Does Reduced Psychological Distance Increase Climate Engagement? On the Limits of Localizing Climate Change’, Journal of Environmental Psychology, 55 147–53.

Spence, A., Poortinga, W. and Pidgeon, N. (2012) ‘The Psychological Distance of Climate Change’, Risk Analysis, 32 (6) 957–72.

Trope, Y. and Liberman, N. (2010) ‘Construal‐Level Theory of Psychological Distance’, Psychological Review, 117 (2) 440–63.

UNESCO (2015) Global Citizenship Education: Topics and Learning Objectives, Paris: UNESCO.

(2017) Education for Sustainable Development Goals: Learning Objectives, Paris: UNESCO.

Van Boven, L., Kane, J., McGraw, P.A. and Dale, J. (2010) ‘Feeling Close: Emotional Intensity Reduces Perceived Psychological Distance’, Journal of Personality and Social Psychology, 98 (6) 872–85.

Van Oldenborgh, G.J., Van der Wiel, K., Sebastian, A., Singh, R., Arrighi, J., Otto, F., Haustein, K., Li, S.H., Vecchi, G. and Cullen, H. (2017) ‘Attribution of Extreme Rainfall from Hurricane Harvey, August 2017’, Environmental Research Letters, 12.

Wang, S., Hurlstone, M., Leviston, Z., Walker, I., and Lawrence, C. (2019) ‘Climate Change from a Distance: An Analysis of Construal Level and Psychological Distance from Climate Change’, Frontiers in Psychology, 10 (230), doi.org/10.3389/ fpsyg.2019.00230.

Whitmarsh, L., O’Neill, S. and Lorenzoni, I. (2011) Engaging the Public with Climate Change: Behaviour Change and Communication, London: Earthscan.

Authors

*Henry McGhie, Curating Tomorrow, 40 Acuba Road, Liverpool UK, L15 7LR, henrymcghie@curatingtomorrow.co.uk
 Tel: 07402 659 372

Henry McGhie has a background as an ornithologist, museum curator and senior manager. He has been working on sustainability, climate change and museums for over 15 years, developing exhibitions, working with local and international policy workers, organizing international conferences and editing two books on the subject. He established Curating Tomorrow in 2019 as a consultancy for museums and the heritage sector, helping them draw on their unique resources to enhance their contributions to society and the natural environment, the Sustainable Development Goals, climate action and nature conservation. He is a member of the International Council of Museums Working Group on Sustainability.

**Sarah Mander, Tyndall Centre for Climate Change Research, University of Manchester, M13 9QL, s.mander@manchester.ac.uk
 Tel: 0161 3063259

Dr Sarah Mander is a Reader in Energy and Climate Policy and an interdisciplinary energy researcher, with over a decade’s experience using deliberative and participatory approaches to understand social, institutional and governance barriers to climate mitigation. For the past five years, she has coordinated Tyndall Manchester’s public engagement activities, working with museums, schools and community organizations to develop arts-based and creative approaches to climate change engagement, including theatre games and performance art. Dr Mander is a member of the Centre for Climate Change and Social Transformations (CAST), where her work combines her expertise in social responses to low-carbon technology with her belief that, in the absence of effective action on climate change from governments, innovation by grass-roots organizations is key to driving the low-carbon transition.

***Asher Minns, Tyndall Centre for Climate Change Research, University of East Anglia, Norwich, a.minns@uea.ac.uk

Asher Minns is a science communicator specialising in the knowledge transfer of climate change and other global change research to audiences outside of academia. He has over two decades of experience in practice, and is also the Executive Director of the Tyndall Centre for Climate Change Research.

Worse Than FailureError'd: We're Number 0th

Drinker Philip B. confesses "The first bottle went down fine but after the second my speech got a little schlurred ..."

 

"Fortunately, we can rely on this detection originally developed to fight the plague!" advises an anonymous time traveler

 

Joel G. asks "I wonder if the $1003.99 fee is a maths error, a shipping error, or my postcode is a variable in calculating it?"

I will just pick it up in person, thanks.

 

Polyglot Chris humblebrags "Duolingo thinks I'm better than number 1 in the league, I'm number 0!"

Everyone knows that counting things is one of the two hardest problems in computer science.

Airplane enthusiast Michael P. is sure he can make this toy fly for half the price. We must admit the gag went over our heads at first.

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Krebs on SecurityFacebook, Instagram, TikTok and Twitter Target Resellers of Hacked Accounts

Facebook, Instagram, TikTok, and Twitter this week all took steps to crack down on users involved in trafficking hijacked user accounts across their platforms. The coordinated action seized hundreds of accounts the companies say have played a major role in facilitating the trade and often lucrative resale of compromised, highly sought-after usernames.

At the center of the account ban wave are some of the most active members of OGUsers, a forum that caters to thousands of people selling access to hijacked social media and other online accounts.

Particularly prized by this community are short usernames, which can often be resold for thousands of dollars to those looking to claim a choice vanity name.

Facebook told KrebsOnSecurity it seized hundreds of accounts — mainly on Instagram — that have been stolen from legitimate users through a variety of intimidation and harassment tactics, including hacking, coercion, extortion, sextortion, SIM swapping, and swatting.

THE MIDDLEMEN

Facebook said it targeted a number of accounts tied to key sellers on OGUsers, as well as those who advertise the ability to broker stolen account sales.

Like most cybercrime forums, OGUsers is overrun with shady characters who are there mainly to rip off other members. As a result, some of the most popular denizens of the community are those who’ve earned a reputation as trusted “middlemen.”

These core members offer escrow services that — in exchange for a cut of the total transaction cost (usually five percent) — will hold the buyer’s funds until he is satisfied that the seller has delivered the credentials and any email account access needed to control the hijacked social media account.

For example, one of the most active accounts targeted in this week’s social network crackdown is the Instagram profile “Trusted,” self-described as “top-tier professional middleman/escrow since 2014.”

Trusted’s profile included several screenshots of his OGUsers persona, “Beam,” who warns members about an uptick in the number of new OGUsers profiles impersonating him and other middlemen on the forum. Beam currently has more reputation points or “vouches” than almost anyone on the forum, save for perhaps the current and former site administrators.

The now-banned Instagram account for the middleman @trusted/beam.

Helpfully, OGUsers has been hacked multiple times over the years, and its database of user details and private messages has been posted on competing crime forums. Those databases show Beam was just the 12th user account created on OGUsers back in 2014.

In his posts, Beam says he has brokered well north of 10,000 transactions. Indeed, the leaked OGUsers databases — which include private messages on the forum prior to June 2020 — offer a small window into the overall value of the hijacked social media account industry.

In each of Beam’s direct messages to other members who hired him as a middleman, he would include the address of the bitcoin wallet to which the buyer was to send the funds. Just two of the bitcoin wallets Beam used for middlemanning over the past couple of years recorded in excess of 6,700 transactions totaling more than 243 bitcoins — or roughly $8.5 million by today’s valuation (~$35,000 per coin). Beam would have earned roughly $425,000 in commissions on those sales.

Beam, a Canadian whose real name is Noah Hawkins, declined to be interviewed when contacted earlier this week. But his “Trusted” account on Instagram was taken down by Facebook today, as was “@Killer,” a personal Instagram account he used under the nickname “noah/beam.” Beam’s Twitter account — @NH — has been deactivated by Twitter; it was hacked and stolen from its original owner back in 2014.

Reached for comment, Twitter confirmed that it worked in tandem with Facebook to seize accounts tied to top members of OGUsers, citing its platform manipulation and spam policy. Twitter said its investigation into the people behind these accounts is ongoing.

TikTok confirmed it also took action to target accounts tied to top OGUsers members, although it declined to say how many accounts were reclaimed.

“As part of our ongoing work to find and stop inauthentic behavior, we recently reclaimed a number of TikTok usernames that were being used for account squatting,” TikTok said in a written statement. “We will continue to focus on staying ahead of the ever-evolving tactics of bad actors, including cooperating with third parties and others in the industry.”

‘SOCIAL MEDIA SPECIALISTS’

Other key middlemen caught up in this week’s ban wave, each of whom has brokered thousands more social media account transactions via OGUsers, included Farzad (OGUser #81), who used the Instagram accounts @middleman and @frzd; and @rl, a.k.a. “Amp,” a major middleman and account seller on OGUsers.

Naturally, the top middlemen in the OGUsers community get much of their business from sellers of compromised social media and online gaming accounts, and these two groups tend to cross-promote one another. Among the top seller accounts targeted in the ban wave was the Instagram account belonging to Ryan Zanelli (@zanelli), a 22-year-old self-described “social media marketing specialist” from Melbourne, Australia.

The leaked OGusers databases suggest Zanelli is better known to the OGusers community as “Verdict,” the fifth profile created on the forum and a longtime administrator of the site.

Reached via Telegram, Zanelli acknowledged he was an administrator of OGUsers, but denied being involved in anything illegal.

“I’m an early adaptor to the forum yes just like other countless members, and no social media property I sell is hacked or has been obtained through illegitimate means,” he said. “If you want the truth, I don’t even own any of the stock, I just resell off of people who do.”

This is not the first time Instagram has come for his accounts: As documented in this story in The Atlantic, some of his accounts totaling more than 1 million followers were axed in late 2018 when the platform took down 500 usernames that were stolen, resold, and used for posting memes.

“This is my full-time income, so it’s very detrimental to my livelihood,” Zanelli told The Atlantic, which identified him only by his first name. “I was trying to eat dinner and socialize with my family, but knowing behind the scenes everything I’ve built, my entire net worth, was just gone before my eyes.”

Another top seller account targeted in the ban wave was the Instagram account @h4ck, whose Telegram sales channel also advertises various services to get certain accounts banned and unbanned from different platforms, including Snapchat and Instagram.

Snippets from the Telegram sales channel for @h4ck, one of the Instagram handles seized by Facebook today.

Facebook said while this is hardly the first time it has reclaimed accounts associated with hijackers, it is the first time it has done so publicly. The company says it has no illusions that this latest enforcement action is going to put a stop to the rampant problem of account hijacking for resale, but views the effort as part of an ongoing strategy to drive up costs for account traffickers, and to educate potential account buyers about the damage inflicted on people whose accounts are hijacked.

In recognition of the scale of the problem, Instagram today rolled out a new feature called “Recently Deleted,” which seeks to help victims undo the damage wrought by an account takeover.

“We know hackers sometimes delete content when they gain access to an account, and until now people had no way of easily getting their photos and videos back,” Instagram explained in a blog post. “Starting today, we will ask people to first verify that they are the rightful account holders when permanently deleting or restoring content from Recently Deleted.”

Facebook wasn’t exaggerating about the hijacking community’s use of extortion and other serious threats to gain control over highly prized usernames. I wish I could get back the many hours spent reading private messages from the OGUsers community, but it is certainly not uncommon for targets to be threatened with swatting attacks, or to have their deeply personal and/or financial information posted publicly online unless they relinquish control over a desired account.

WHAT YOU CAN DO

Any accounts that you value should be secured with a unique and strong password, as well as the most robust form of multi-factor authentication available. Usually, this is a mobile app that generates a one-time code, but some sites like Twitter and Facebook now support even more robust options — such as physical security keys.

Whenever possible, avoid opting to receive the second factor via text message or automated phone calls, as these methods are prone to compromise via SIM swapping — a crime that is prevalent among people engaged in stealing social media accounts. SIM swapping involves convincing mobile phone company employees to transfer ownership of the target’s phone number to a device the attackers control.
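For the curious: the codes those authenticator apps generate are time-based one-time passwords (RFC 6238), derived entirely from a shared secret and the current time, so nothing ever has to be sent to your phone number. Here is a minimal sketch in TypeScript for Node.js, for illustration only; real authenticator apps add secret provisioning, clock-drift windows, and rate limiting.

import { createHmac } from "crypto";

// Compute the current RFC 6238 code from a shared secret.
function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
    // Number of time steps since the Unix epoch, as an 8-byte big-endian counter.
    const counter = Math.floor(Date.now() / 1000 / stepSeconds);
    const msg = Buffer.alloc(8);
    msg.writeBigUInt64BE(BigInt(counter));
    // HMAC the counter, then apply RFC 4226 dynamic truncation.
    const hmac = createHmac("sha1", secret).update(msg).digest();
    const offset = hmac[hmac.length - 1] & 0x0f;
    const code = ((hmac[offset] & 0x7f) << 24) |
                 ((hmac[offset + 1] & 0xff) << 16) |
                 ((hmac[offset + 2] & 0xff) << 8) |
                 (hmac[offset + 3] & 0xff);
    return (code % 10 ** digits).toString().padStart(digits, "0");
}

Since the secret never leaves your device, a SIM swapper who controls your phone number gains nothing from this scheme.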

These precautions are even more important for any email accounts you may have. Sign up with any service online, and it will almost certainly require you to supply an email address. In nearly all cases, the person who is in control of that address can reset the password of any associated services or accounts, merely by requesting a password reset email. Unfortunately, many email providers still let users reset their account passwords by having a link sent via text to the phone number on file for the account.

Most online services require users to supply a mobile phone number when setting up the account, but do not require the number to remain associated with the account after it is established. I advise readers to remove their phone numbers from accounts wherever possible, and to take advantage of a mobile app to generate any one-time codes for multifactor authentication.

LongNowPodcast: Queering the Future | Jason Tester

Jason Tester asks us to see the powerful potential of “queering the future” – how looking at the future through a lens of difference and openness can reveal unexpected solutions to wicked problems, and new angles on innovation. Might a queer perspective hold some of the keys to our seemingly intractable issues?

Tester brings his research in strategic foresight, speculative design work, and understanding of the activism and resiliency of LGBTQ communities together as he looks toward the future. Can we learn new ways of thinking, and thriving, from the creative approaches and adaptive strategies that have emerged from these historically marginalized groups?

Listen on Apple Podcasts.

Listen on Spotify.

Worse Than FailureComing to Grips

Regardless of what industry you're in, every startup hits that dangerous phase where you're nearing the end of your runway but you still haven't gotten to the point where you can actually make money with your product. The cash crunch starts, and what happens next can often make or break the company.

Nathan was working for a biotech company that had hit that phase. They had a product, but they couldn't produce enough of it, cheaply enough, to actually make a profit. What they needed was some automation, and laboratory robots were the solution. But laboratory robots were expensive, and for a company facing a cash crunch, "expensive" was too risky. They needed a cheaper solution.

So they found Roy. Roy was an engineer and software developer, and Roy could build them a custom robot for much cheaper. After all, what was a lab robot but an arm with a few stepper motors and some control software? Roy turned around and shipped them a 2-axis robotic gripper arm ahead of schedule and under budget.

When Nathan joined the team, the arm wasn't working. Or, well, it kinda worked. Like a lot of such systems, the gripper had a "home" position. Between tasks, the gripper needed to return to that home position, and the way it was supposed to know that it was there was by checking a limit switch- a physical "button" that the arm would touch, telling the motor-control board that it should stop moving the arm.

For some reason, during the homing operation, the arm would stutter its way over, constantly stopping at random intervals, jerking and making godawful noises along the way. Nathan got Roy on the phone to talk through the symptoms and what was going on.

"So, when I was testing these," Roy said, "I found a fault in the motor-control boards. I think it's the whole batch of them, because every one I tried had the exact same behavior."

Nathan asked Roy to repeat that. "You think the entire lot of motor controllers you ordered from the vendor have QA failures?"

"It's the only explanation," Roy said. "It's certainly not anything in my software or any of my custom parts."

Nathan was almost certain that wasn't true.

Nathan examined the hardware while Roy continued his explanation. "So, anyway, what I was seeing was that the motor controller will report the home switch is hit, even when absolutely nothing is touching the home switch. So I added a workaround; when the motor controller tells my software the switch is hit, I stop the motor, but then check to see if the switch is actually hit- and if it isn't, I keep moving."

"So, wait, when you stop the motor, the incorrect data goes away?"

"That's what I found in testing, yeah."

Nathan examined the wiring that connected the motor controller to the rest of the hardware- specifically the stepper motors and the limit switch. Roy had obviously wanted to keep his design "neat and clean", because he used a single, multi-conductor cable. From there, it wasn't hard for Nathan to figure out what was going on.

Some of the wires in that connector were just power and ground. One was for the limit switch- it'd read "high" if the switch were hit, and "low" otherwise. And the others were for the stepper motors. All of these wires were crammed together, with only a thin layer of insulation between them. The problem with that design was that stepper motors are controlled by sending PWM signals down the wire. Time-varying electrical signals have a seemingly magical power to induce currents in nearby conductors, which is a fancy way of saying "putting PWM signals on one wire can induce current in an adjacent wire"- one of the basic principles anyone doing electrical design should know.

When the homing operation told the steppers to move, the signal to the steppers created interference on the home switch wire, causing the motor controller to think the home switch had been hit. Stopping the steppers stopped the interference, so the spurious signal vanished and the system could continue.

The fix was also simple: Nathan replaced the single multi-conductor cable with two shielded cables, isolating the home switch wire from interference.

As a bonus example of what you get when you hire Roy, Nathan also supplied some of the sample code in Roy's custom robot scripting language. This sample promises to show you how to weigh every vial in a rack of vials:

For Vial = 1 to 96
    Rack.MoveToCell(XY, Vial)
    Gripper.PickUpVial()
    Balance.Tare()
    Balance.MoveToCell(XY, 1)
    MyWeight = Balance.Weight()
    PutVialBack(Vial)
Next

Roy calls this code "easy to understand and modify", but I'll let Nathan explain why this isn't true:

Why do the Rack and the Balance each have the ability to move the Gripper? Why is "PickUpVial()" a function on the gripper, but "PutVialBack()" is not part of a class? What's the logic and abstraction here? Much like the robot itself, this is code that looks easy to understand but in practice is hard to operate.
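To make that criticism concrete, here is a sketch of how the routine might read if movement belonged to a single component. This is TypeScript rather than Roy's scripting language, and every name in it is invented for illustration; it is not Roy's actual system.

// Illustrative only: one Arm owns all movement; the stations just
// expose their own capabilities.
interface Position { x: number; y: number; }

interface Arm {
    moveTo(pos: Position): void;  // drive the steppers
    pickUpVial(): void;           // close the gripper
    putBackVial(): void;          // open the gripper
}

interface Balance {
    position: Position;  // where the arm must go to reach the pan
    tare(): void;        // zero the scale
    weight(): number;    // read the scale
}

interface Rack {
    cellCount: number;
    cellPosition(cell: number): Position;
}

// Weigh every vial in the rack, returning one weight per cell.
function weighRack(arm: Arm, rack: Rack, balance: Balance): number[] {
    const weights: number[] = [];
    for (let cell = 1; cell <= rack.cellCount; cell++) {
        arm.moveTo(rack.cellPosition(cell));
        arm.pickUpVial();
        balance.tare();              // zero out before weighing
        arm.moveTo(balance.position);
        weights.push(balance.weight());
        arm.moveTo(rack.cellPosition(cell));
        arm.putBackVial();
    }
    return weights;
}

The particular decomposition matters less than the principle: one object moves, the others measure, and the loop reads as a plain statement of the procedure.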

As for what happened to the startup that was so focused on cutting costs they were buying equipment from Roy? Well, Nathan doesn't say, but whether they survived or failed, it's still not really a happy ending either way, is it?

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Cryptogram Ransomware Profitability

Analyzing cryptocurrency data, a research group has estimated a lower bound on 2020 ransomware revenue: $350 million, four times more than in 2019.

Based on the company’s data, among last year’s top earners, there were groups like Ryuk, Maze (now-defunct), Doppelpaymer, Netwalker (disrupted by authorities), Conti, and REvil (aka Sodinokibi).

Ransomware is now an established worldwide business.

Slashdot thread.

Cryptogram Web Credit Card Skimmer Steals Data from Another Credit Card Skimmer

MalwareBytes is reporting a weird software credit card skimmer. It harvests credit card data stolen by another, different skimmer:

Even though spotting multiple card skimmer scripts on the same online shop is not unheard of, this one stood out due to its highly specialized nature.

“The threat actors devised a version of their script that is aware of sites already injected with a Magento 1 skimmer,” Malwarebytes’ Head of Threat Intelligence Jérôme Segura explains in a report shared in advance with Bleeping Computer.

“That second skimmer will simply harvest credit card details from the already existing fake form injected by the previous attackers.”

Cryptogram NoxPlayer Android Emulator Supply-Chain Attack

It seems to be the season of sophisticated supply-chain attacks.

This one is in the NoxPlayer Android emulator:

ESET says that based on evidence its researchers gathered, a threat actor compromised one of the company’s official API (api.bignox.com) and file-hosting servers (res06.bignox.com).

Using this access, hackers tampered with the download URL of NoxPlayer updates in the API server to deliver malware to NoxPlayer users.

[…]

Despite evidence implying that attackers had access to BigNox servers since at least September 2020, ESET said the threat actor didn’t target all of the company’s users but instead focused on specific machines, suggesting this was a highly-targeted attack looking to infect only a certain class of users.

Until today, and based on its own telemetry, ESET said it spotted malware-laced NoxPlayer updates being delivered to only five victims, located in Taiwan, Hong Kong, and Sri Lanka.

I don’t know if there are actually more supply-chain attacks occurring right now. More likely is that they’ve been happening for a while, and we have recently become more diligent about looking for them.

Cryptogram Presidential Cybersecurity and Pelotons

President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.

Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and — yes — exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.

Not everyone is so lucky, and the rest of us need something more structural.

US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones — and possibly the wireless Internet connection.

President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.

The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.

Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure — but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.

This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.

Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.

That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor — and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”

And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: there were no options to omit that in the class of car I wanted. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.

Internet security is national security — not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.

Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies liable when they misrepresent the security of their products and services.

I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.

This essay previously appeared in the Washington Post.

Cryptogram Another SolarWinds Orion Hack

At the same time the Russians were using a backdoored SolarWinds update to attack networks worldwide, another threat actor — believed to be Chinese in origin — was using an already existing vulnerability in Orion to penetrate networks:

Two people briefed on the case said FBI investigators recently found that the National Finance Center, a federal payroll agency inside the U.S. Department of Agriculture, was among the affected organizations, raising fears that data on thousands of government employees may have been compromised.

[…]

Reuters was not able to establish how many organizations were compromised by the suspected Chinese operation. The sources, who spoke on condition of anonymity to discuss ongoing investigations, said the attackers used computer infrastructure and hacking tools previously deployed by state-backed Chinese cyberspies.

[…]

While the alleged Russian hackers penetrated deep into SolarWinds network and hid a “back door” in Orion software updates which were then sent to customers, the suspected Chinese group exploited a separate bug in Orion’s code to help spread across networks they had already compromised, the sources said.

Two takeaways: One, we are learning about a lot of supply-chain attacks right now. Two, SolarWinds’ terrible security is the result of a conscious business decision to reduce costs in the name of short-term profits. Economist Matt Stoller writes about this:

These private equity-owned software firms torture professionals with bad user experiences and shitty customer support in everything from yoga studio software to car dealer IT to the nightmarish ‘core’ software that runs small banks and credit unions, as close as one gets to automating Office Space. But they also degrade product quality by firing or disrespecting good workers, under-investing in good security practices, or sending work abroad and paying badly, meaning their products are more prone to espionage. In other words, the same sloppy and corrupt practices that allowed this massive cybersecurity hack made Bravo a billionaire. In a sense, this hack, and many more like it, will continue to happen, as long as men like Bravo get rich creating security vulnerabilities for bad actors to exploit.

SolarWinds increased its profits by increasing its cybersecurity risk, and then transferred that risk to its customers without their knowledge or consent.

LongNowSmithsonian acquires artwork based on Stewart Brand epigram

Alicia Eggert next to her artwork, This Present Moment (02019-20). Via The University of North Texas.

The Smithsonian’s Renwick Gallery has acquired a light sculpture based on a quote from Long Now co-founder Stewart Brand’s book on long-term thinking, The Clock of the Long Now (01999).

The epigram comes from the book’s final chapter, and has its origins in an exchange between Brand and his friend, the Beat poet Gary Snyder:

While I was completing this book, the poet Gary Snyder sent me an epigram that had come to him:

This present moment

That lives on to become

Long ago.

I felt it was The Clock of the Long Now that responded to him:

This present moment

Used to be

The unimaginable future.

Stewart Brand, The Clock of the Long Now (01999), 163-4.

In 02019, Alicia Eggert, an artist and professor of sculpture at The University of North Texas, created a light sculpture based on Brand’s epigram titled This Present Moment. The artwork cycles through two neon statements: “This present moment used to be the unimaginable future” and “This present moment used to be the future.”

This Present Moment (02019-20).

“My goal is always to say something that feels really meaningful but is always relevant — something that will be true today and 1,000 years from now,” Eggert said in a statement. “These statements from Brand are always true, but they mean different things at different times, and their meanings can vary from person to person.”

This is Eggert’s first museum acquisition. The artwork will debut at the museum as part of the Renwick Gallery’s 50th anniversary exhibition in 02022.

“Having my work in a museum was unimaginable to me for a long time,” Eggert said. “The idea that it’s going to be cared for and be viewed by people for generations to come is such an incredible thing.”

Kevin RuddABC TV: Myanmar coup

INTERVIEW VIDEO
ABC NEWS CHANNEL
AFTERNOON BRIEFING
WITH PATRICIA KARVELAS
2 FEBRUARY 2021

Topic: Aung San Suu Kyi and coup in Myanmar

The post ABC TV: Myanmar coup appeared first on Kevin Rudd.

Kevin RuddABC Radio: Myanmar Coup

INTERVIEW AUDIO
ABC NEWSRADIO
WITH JAMIE TRAVERS
2 FEBRUARY 2021

Topic: Myanmar and Aung San Suu Kyi

The post ABC Radio: Myanmar Coup appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Don't Do This

Let's say you were writing a type checker in TypeScript. At some point, you would find that you need to iterate across various lists of things, like for example, the list of arguments to a function.

Now, JavaScript (and thus TypeScript) gives you plenty of options for building the right loop for your specific problem. Or, if you look at the code our anonymous submitter sent, you could just choose the wrongest one.

var i = 0;
var func1: string[] = func_e.get(expr.name);
if (func1.length != 0) {
    do {
        var a = typeCheckExpr(expr.arguments[i], env);
        if (a != func1[i])
            throw new Error("Type mismatch in parameters");
        i += 1;
    } while (i != expr.arguments.length);
}

Now, it's tricky to reconstruct what the intent of the code is with these wonderfully vague variable names, but I'm fairly certain that func1 contains information about the definition of the function, while expr is the expression that we're type checking. So, if the definition in func1 doesn't contain any parameters, there's nothing to type-check, and we skip the loop. Then we use a do loop, because that if tells us we have at least one argument.

In the loop, we check the passed in argument against the function definition, and chuck an error if they don't match. Increment the counter, and then keep looping while there are more passed in arguments.

Our submitter claims "There's nothing strictly wrong about this snippet, and it all runs correctly," which may be true- I don't know enough about how this code is used, but I suspect that it's going to have weird and unexpected behaviors depending on the inputs, especially if the idea of "optional parameters" exists in the language they're type-checking (presumably TypeScript?).

But bugs aside, the core logic is: if the function takes parameters, iterate across the list of arguments and confirm they match the type. The do loop just confuses that logic, when the whole thing could be a much simpler for loop. As our submitter says, it's not wrong, but boy is it annoying. Annoying to read, annoying to parse, annoying because it should be a pretty simple block of code, but someone went and made it hard.
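For comparison, here's a minimal sketch of that simpler for loop, reusing the snippet's own names (func1, expr, env, and typeCheckExpr come from its context, not from any real API) and making the length check explicit:

// Hypothetical rewrite of the snippet above.
if (func1.length != expr.arguments.length)
    throw new Error("Wrong number of arguments");
for (let i = 0; i < expr.arguments.length; i++) {
    // Check each argument's type against the declared parameter type.
    if (typeCheckExpr(expr.arguments[i], env) != func1[i])
        throw new Error("Type mismatch in parameters");
}

Whether a length mismatch should actually be an error depends on how the checked language treats optional parameters, which is exactly the kind of question the original do loop buries.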

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Cryptogram More SolarWinds News

Microsoft analyzed details of the SolarWinds attack:

Microsoft and FireEye only detected the Sunburst or Solorigate malware in December, but Crowdstrike reported this month that another related piece of malware, Sunspot, was deployed in September 2019, at the time hackers breached SolarWinds’ internal network. Other related malware includes Teardrop aka Raindrop.

Details are in the Microsoft blog:

We have published our in-depth analysis of the Solorigate backdoor malware (also referred to as SUNBURST by FireEye), the compromised DLL that was deployed on networks as part of SolarWinds products, that allowed attackers to gain backdoor access to affected devices. We have also detailed the hands-on-keyboard techniques that attackers employed on compromised endpoints using a powerful second-stage payload, one of several custom Cobalt Strike loaders, including the loader dubbed TEARDROP by FireEye and a variant named Raindrop by Symantec.

One missing link in the complex Solorigate attack chain is the handover from the Solorigate DLL backdoor to the Cobalt Strike loader. Our investigations show that the attackers went out of their way to ensure that these two components are separated as much as possible to evade detection. This blog provides details about this handover based on a limited number of cases where this process occurred. To uncover these cases, we used the powerful, cross-domain optics of Microsoft 365 Defender to gain visibility across the entire attack chain in one complete and consolidated view.

This is all important, because MalwareBytes was penetrated through Office 365, and not SolarWinds. New estimates are that 30% of the SolarWinds victims didn’t use SolarWinds:

Many of the attacks gained initial footholds by password spraying to compromise individual email accounts at targeted organizations. Once the attackers had that initial foothold, they used a variety of complex privilege escalation and authentication attacks to exploit flaws in Microsoft’s cloud services. Another of the Advanced Persistent Threat (APT)’s targets, security firm CrowdStrike, said the attacker tried unsuccessfully to read its email by leveraging a compromised account of a Microsoft reseller the firm had worked with.

On attribution: Earlier this month, the US government stated the attack is “likely Russian in origin.” This echoes what then-Secretary of State Mike Pompeo said in December, and the Washington Post‘s reporting (both from December). (The New York Times has repeated this attribution — a good article that also discusses the magnitude of the attack.) More evidence comes from code forensics, which links it to Turla, another Russian threat actor.

And lastly, a long ProPublica story on an unused piece of government-developed tech that might have caught the supply-chain attack much earlier:

The in-toto system requires software vendors to map out their process for assembling computer code that will be sent to customers, and it records what’s done at each step along the way. It then verifies electronically that no hacker has inserted something in between steps. Immediately before installation, a pre-installed tool automatically runs a final check to make sure that what the customer received matches the final product the software vendor generated for delivery, confirming that it wasn’t tampered with in transit.
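The final check is easy to picture. The toy sketch below shows only the gist of that comparison; it is not in-toto's real interface, which records and cryptographically signs metadata for every step of the build:

import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hash the artifact the customer actually received.
function artifactHash(path: string): string {
    return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// expectedHash would come from the vendor's signed supply-chain metadata;
// a mismatch means the artifact was tampered with somewhere along the way.
function verifyDelivery(path: string, expectedHash: string): boolean {
    return artifactHash(path) === expectedHash;
}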

I don’t want to hype this defense too much without knowing a lot more, but I like the approach of verifying the software build process.

,

Krebs on Security‘ValidCC,’ a Major Payment Card Bazaar and Looter of E-Commerce Sites, Shuttered

ValidCC, a dark web bazaar run by a cybercrime group that for more than six years hacked online merchants and sold stolen payment card data, abruptly closed up shop last week. The proprietors of the popular store said their servers were seized as part of a coordinated law enforcement operation designed to disconnect and confiscate its infrastructure.

ValidCC, circa 2017.

There are dozens of online shops that sell so-called “card not present” (CNP) payment card data stolen from e-commerce stores, but most source the data from other criminals. In contrast, researchers say ValidCC was actively involved in hacking and pillaging hundreds of online merchants — seeding the sites with hidden card-skimming code that siphoned personal and financial information as customers went through the checkout process.

Cybersecurity firm Group-IB published a report last year detailing the activities of ValidCC, noting the gang behind the crime shop was responsible for plundering nearly 700 e-commerce sites. Group-IB dubbed the gang “UltraRank,” which it said had additionally compromised at least 13 third-party suppliers whose software components are used by countless online stores across Europe, Asia, North and Latin America.

Group-IB believes UltraRank is responsible for a slew of hacks that other security firms previously attributed to at least three distinct cybercrime groups.

“Over five years….UltraRank changed its infrastructure and malicious code on numerous occasions, as a result of which cybersecurity experts would wrongly attribute its attacks to other threat actors,” Group-IB wrote. “UltraRank combined attacks on single targets with supply chain attacks.”

ValidCC’s front man on multiple forums — a cybercriminal who uses the hacker handle “SPR” — told customers on Jan. 28 that the shop would close for good following what appeared to be a law enforcement takedown of its operations. SPR claims his site lost access to a significant inventory — more than 600,000 unsold stolen payment card accounts.

“As a result, we lost the proxy and destination backup servers,” SPR explained. “Besides, now it’s impossible to open and decrypt the backend. The database is in the hands of the police, but it’s encrypted.”

ValidCC had thousands of users, some of whom held significant balances of bitcoin stored in the shop when it ceased operations. SPR claims the site took in approximately $100,000 worth of virtual currency deposits each day from customers.

Many of those customers took to the various crime forums where the shop has a presence to voice suspicions that the proprietors had simply decided to walk away with their money at a time when Bitcoin was near record-high price levels.

SPR countered that ValidCC couldn’t return balances because it no longer had access to its own ledgers.

“We don’t know anything!,” SPR pleaded. “We don’t know users’ balances, or your account logins or passwords, or the [credit cards] you purchased, or anything else! You are free to think what you want, but our team has never conned or let anyone down since the beginning of our operations! Nobody would abandon a dairy cow and let it die in the field! We did not take this decision lightly!”

Group-IB said ValidCC was one of many cybercrime shops that stored some or all of its operational components at Media Land LLC, a major “bulletproof hosting” provider that supports a vast array of phishing sites, cybercrime forums and malware download servers.

Assuming SPR’s claims are truthful, it could be that law enforcement agencies targeted portions of Media Land’s digital infrastructure in some sort of coordinated action. However, so far there are no signs of any major uproar in the cybercrime underground directed at Yalishanda, the nickname used by the longtime proprietor of Media Land.

ValidCC’s demise comes close on the heels of the shuttering of Joker’s Stash, by some accounts the largest underground shop for selling stolen credit card and identity data. On Dec. 16, 2020, several of Joker’s long-held domains began displaying notices that the sites had been seized by the U.S. Department of Justice and Interpol. Less than a month later, Joker announced he was closing the shop permanently.

And last week, authorities across Europe seized control over dozens of servers used to operate Emotet, a prolific malware strain and cybercrime-as-service operation. While there are no indications that action targeted any criminal groups apart from the Emotet gang, it is often the case that multiple cybercrime groups will share the same dodgy digital infrastructure providers, knowingly or unwittingly.

Gemini Advisory, a New York-based firm that closely monitors cybercriminal stores, said ValidCC’s administrators recently began recruiting stolen card data resellers who previously had sold their wares to Joker’s Stash.

Stas Alforov, Gemini’s director of research and development, said other card shops will quickly move in to capture the customers and suppliers who frequented ValidCC.

“There are still a bunch of other shops out there,” Alforov said. “There’s enough tier one shops out there that sell card-not-present data that haven’t dropped a beat and have even picked up volumes.”

Update, Feb. 4, 6:01 p.m. ET: A previous version of this story said Group-IB was a Russian cybersecurity firm. The company says it moved its global headquarters to Singapore in 2019.

Cryptogram Georgia’s Ballot-Marking Devices

Andrew Appel discusses Georgia’s voting machines, how the paper ballots facilitated a recount, and the problem with automatic ballot-marking devices:

Suppose the polling-place optical scanners had been hacked (enough to change the outcome). Then this would have been detected in the audit, and (in principle) Georgia would have been able to recover by doing a full recount. That’s what we mean when we say optical-scan voting machines have “strong software independence”: you can obtain a trustworthy result even if you’re not sure about the software in the machine on election day.

If Georgia had still been using the paperless touchscreen DRE voting machines that they used from 2003 to 2019, then there would have been no paper ballots to recount, and no way to disprove the allegations that the election was hacked. That would have been a nightmare scenario. I’ll bet that Secretary of State Raffensperger now appreciates why the Federal Court forced him to stop using those DRE machines (Curling v. Raffensperger, Case 1:17-cv-02989-AT Document 579).

I have long advocated voter-verifiable paper ballots, and this is an example of why.

Kevin RuddCNN: Climate and Coronavirus

INTERVIEW VIDEO
TV INTERVIEW
CNN, FREDRICKA WHITFIELD
1 FEBRUARY 2021

The post CNN: Climate and Coronavirus appeared first on Kevin Rudd.

Worse Than FailureNot-so-Portable Document Format

Adrian worked for a document services company. Among other things, they provided high-speed printing services to clients in the financial services industry. This means providing on site service, which is how Adrian ended up with an office in the sub-sub-basement of a finance company. Adrian's boss, Lester, was too busy "developing high-end printing solutions on a Unix system" to spend any time in that sub-sub-basement, and instead embedded himself with the client's IT team.

"It's important that I'm working closely with them," Lester explained, "because it's the only way we can guarantee true inter-system compatibility." With disgust, he added, "They're mostly a Windows shop, and don't understand Unix systems, which is what drives our high-speed printing solution."

It was unclear to Adrian whether Lester was more interested in "working closely" or "getting access to the executive breakroom with free espressos", but that's what Lester got, while Adrian made do with a Mr. Coffee from 1987 and fielded emails from users trying to understand why their prints didn't work.

Bobbi was one such user. She was fairly technical, and had prepared some complex financial reports for printing. Because she was very aware of how she wanted these reports to look, she'd gone the extra step and made them as a PDF. She'd sent it over to the high-speed printer, and it got kicked back with an error about invalid files. Adrian reviewed her PDFs, couldn't see any errors or problems, tried submitting the job himself, and a few minutes later it got kicked back.

Eventually, he called Lester.

"Hey, I've got a user trying to send some files over to the high-speed printer, and it doesn't seem like it'll take them."

"Oh, is that where all these PDFs have been coming from?"

"Uh… yes?"

Lester sighed. "See, this is why I need to be embedded with the team, they're so Windows biased, and now it's even infecting you."

"Hunh?"

Adrian somehow could hear Lester rolling his eyes over the phone. "The high speed printer is a Unix system, you know this."

"I do know that," Adrian confirmed, still mystified.

"PDFs are only good for the Windows operating system," Lester said. "It's not going to print properly on a Unix operating system."

"Our… high speed printer can't print PDFs?"

"If your users want to print PDFs, they need to print on their Windows-based printers."

"I just want to confirm," Adrian said, "again, our printer can't handle PDFs, the most common print format in the world, which is 100% supported by CUPS, and probably supported directly by the printer itself?"

"Adrian, this is why you're down in the sub-sub-basement doing support, you have a lot to learn about cross-platform interoperability."

Adrian related this information to Bobbi, and worked with her to convert the files into one of the "Unix-friendly" file formats Lester approved. After that, though, he did his own digging, and tried to understand why PDFs were forbidden.

It didn't take long. Lester handled all the print jobs through a set of homebrew shell scripts. Their main job was to prepend a banner page for the print job, but they also handled details about copying files, managing the queue, and had grown into a gigantic, unmanageable mess. It wasn't that Unix couldn't print PDFs, it was that Lester couldn't hack his already hacked scripts any further to support the Portable Document Format, and thus their high-speed print system couldn't handle the standard Unix printing format of PDFs.

Adrian eventually left that job. Lester, however, was still there, and so were his scripts.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Chaotic IdealismBeauty and the Misogynist

You’ve probably heard it before: Beauty and the Beast is actually a pretty misogynistic story. A guy gets cursed to take the shape of a Beast because he’s rude and inhospitable; he captures an old man; the old man’s daughter offers to take his place; she manages to warm his cold, cold heart and change him, she loves him, the curse is broken, and they live happily ever after.

Modern perspective: Ouch. Stockholm Syndrome much?

There are countless blog posts and opinion pieces written about exactly why Belle is a victim, and quite possibly only loves him because she’s trapped and desperately wants to survive. And boy, have they got a point.

But that’s not the only perspective you can take. There are modern retellings of the story that change the focus from Belle to the Beast. He’s the true protagonist, going from villain to hero as the story progresses. She’s a constant: Always kind and polite, altruistic, assertive, and generally a stand-up sort of gal. The Beast is the one who grows and changes, and his being rewarded with love is only good luck–not something he expects, or should expect.

The newest live-action version has the Beast initially capturing her, then treating her well, then letting her go, then actively defending her freedom to the point of endangering himself–all before she ever says she loves him. The less he sees her as a way out of his curse, the kinder he becomes. It shows he’s changed for real, not in the way narcissistic abusers do with roses and apologies.

I have always seen it as a message to the bullies of the world–that yes, you can change; yes, you can love and be loved; but it’ll only happen once you stop believing that love–or a lover–is something you can own. Notably, the Beast’s change comes *before* her declaration of love. When he stops trying to buy her love, he truly starts to change.

It’s only a perspective you get if the story is told properly–as the Beast changing because he loves Belle, rather than Belle changing the Beast. The crucial thing is to make it clear that the Beast has come to the point that even if Belle never wanted to see him again, he would still defend her freedom.

Only at that point does he become the sort of person who can love and be loved in return. Because the story has a happy ending, they do get together in the end–but that isn’t the Beast’s “reward” for letting Belle go, and he didn’t expect to get it. He expected Belle to go back to her father, live happily there, and probably go on to find a non-Beastified husband.

It bothers me that so many people think that because Beast changed, he got the girl, and that if you change, you also deserve to get the girl. That’s not how it works–especially if it’s the same girl you’ve hurt in the past. There are too many Nice Guys who suffer under that misconception.

Many men (usually men, but not universally) on the autism spectrum believe that because they are kind, they deserve a relationship. But that isn’t true. Being kind is a prerequisite for a solid relationship–but it is not a guarantee that you will get one. Being kind is something you should strive for whether or not you ever get anything out of it.

Being kind has a lot of rewards. But if you are kind because of the rewards, you are not kind; you are manipulative. Be kind for its own sake. As the Beast learned, you will be happier as a kind person without a partner than as a manipulative person with a partner who cannot truly love you.

Krebs on SecurityU.K. Arrest in ‘SMS Bandits’ Phishing Service

Authorities in the United Kingdom have arrested a 20-year-old man for allegedly operating an online service for sending high-volume phishing campaigns via mobile text messages. The service, marketed in the underground under the name “SMS Bandits,” has been responsible for blasting out huge volumes of phishing lures spoofing everything from COVID-19 pandemic relief efforts to PayPal, telecommunications providers and tax revenue agencies.

The U.K.’s National Crime Agency (NCA) declined to name the suspect, but confirmed that the Metropolitan Police Service’s cyber crime unit had detained an individual from Birmingham in connection with a business that supplied “criminal services related to phishing offenses.”

The proprietors of the phishing service were variously known on cybercrime forums under handles such as SMSBandits, “Gmuni,” “Bamit9,” and “Uncle Munis.” SMS Bandits offered an SMS phishing (a.k.a. “smishing”) service for the mass sending of text messages designed to phish account credentials for different popular websites and steal personal and financial data for resale.

Image: osint.fans

Sasha Angus is a partner at Scylla Intel, a cyber intelligence startup that did a great deal of research into the SMS Bandits leading up to the arrest. Angus said the phishing lures sent by the SMS Bandits were unusually well-done and free of grammar and spelling mistakes that often make it easy to spot a phony message.

“Just by virtue of these guys being native English speakers, the quality of their phishing kits and lures were considerably better than most,” Angus said.

According to Scylla, the SMS Bandits made a number of operational security (or “opsec”) mistakes that made it relatively easy to find out who they were in real life, but the technical side of the SMS Bandits’ operation was rather advanced.

“They were launching fairly high-volume smishing campaigns from SMS gateways, but overall their opsec was fairly lousy,” Angus said. “But on the telecom front they were using fairly sophisticated tactics.”

The proprietor of the SMS Bandits, telling the world he lives in Birmingham.

For example, the SMS Bandits used automated systems to check whether the phone numbers provided by their customers were indeed tied to actual mobile numbers, and not landlines that might tip off telecommunications companies about mass spam campaigns.

“The telcos are monitoring for malicious SMS messages on a number of fronts,” Angus said. “One way to tip off an SMS gateway or wireless provider is to start blasting text messages to phone numbers that can’t receive them.”

Scylla gathered reams of evidence showing the SMS Bandits used email addresses and passwords stolen through its services to validate a variety of account credentials — from PayPal to bank accounts and utilities providers. They would then offload the working credentials onto marketplaces they controlled, and to third-party vendors. One of SMS Bandits’ key offerings: An “auto-shop” web panel for selling stolen account credentials.

SMS Bandits also provided their own “bulletproof hosting” service advertised as a platform that supported “freedom of speach” [sic] where customers could “host any content without restriction.” Invariably, that content constituted sites designed to phish credentials from users of various online services.

The “bulletproof” offerings of Muni Hosting (pronounced “Money Hosting”).

The SMS Bandits phishing service is tied to another crime-friendly service called “OTP Agency,” a bulk SMS provider that appears to cater to phishers: The service’s administrator stated on multiple forums that he worked directly with the SMS Bandits.

Otp[.]agency advertises a service designed to help intercept one-time passwords needed to log in to various websites. The customer enters the target’s phone number and name, and OTP Agency will initiate an automated phone call to the target that alerts them about unauthorized activity on their account.

The call prompts the target to enter a one-time password generated by their phone’s mobile app, and that code is then relayed back to the scammer’s user panel at the OTP Agency website.

“We call the holder with an automatic calling bot, with a very believable script, they enter the OTP on the phone, and you’ll see it in real time,” OTP Agency explained on their Telegram channel. The service, which costs anywhere from $40 to $125 per week, advertises unlimited international calling, as well as multiple call scripts and voice accents.

One of the pricing plans available to OTP Agency users.

The volume of SMS-based phishing skyrocketed in 2020 — by more than 328 percent — according to a recent report from Proofpoint, a security firm that processes more than 80 percent of North America’s mobile messages [Full disclosure: Proofpoint is currently an advertiser on this site].

Sociological ImagesThe Mask of the “Middle Class”

I love this podcast conversation with Rachel Sherman and Anne Helen Petersen about Sherman’s recent book, Uneasy Street: The Anxieties of Affluence. It is a great source for introduction to sociology courses looking to open up a conversation about differences in social class, especially because it draws attention to the fact that people do a lot of work to hide their social class position.

When we think about wealth, it is tempting to focus on flaunting riches through conspicuous consumption of flashy clothes, large homes, and other reality TV fodder. Sherman’s work makes an important point: phrases like “middle class” actually do a lot to hide our economic positions in society, and wealthy people often work to manage others’ perceptions of their wealth.

The podcast pairs well with a recent Twitter thread from John Holbein tracing research from around the world on how people’s perceptions of their economic position line up with their actual income and wealth. In case after case, many people report a social class that doesn’t line up with what they actually have.

This is a point I always try to make with my students: our social relationships are as much about the things we hide and avoid talking about as the things we openly share with each other. One of the most powerful points sociologists can make is to show these hidden patterns in the way we interact. The goal is not to call people out or to accuse them of lying, but rather to ask ourselves what it is about our economic lives that makes us want to work so hard to manage others’ perceptions in this way.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: Null and Terminated

There's plenty of room for debate about which specific poor choices in history led to the most bugs today. Was it the billion-dollar mistake of allowing null pointers? Is it the absolute mess that is C memory management? Or is it C-style strings and all the attendant functions and buffer-overruns they entail?

A developer at Jay's company had been porting some C++ code to a new platform. That developer left, and the wheel-of-you-own-this-now spun and landed on Jay. The code was messy, but mostly functional. Jay was able to get it building, running, and then added a new feature. It was during testing that Jay noticed that some fields in the UI weren't being populated.

Jay broke out a memory analyzer tool, and it popped out warnings on lines where strlcpy was being called. Now that was odd, as strlcpy is the "good" way to copy strings, with guarantees that it would never allow buffer overruns. The buffers were all correctly sized, which left Jay wondering what exactly was wrong with the calls to strlcpy.

A quick grep through the code later, and Jay knew exactly what was wrong:

#define strlcpy strncpy

The code originally had been targeting a platform which had strlcpy available, but the port was moving to a platform which did not. The previous developer, out of laziness, ignorance, carelessness, or some combination of all three, decided that since strlcpy and strncpy had the same calling semantics, a macro could solve all their problems.

If you haven't had to deal with C-strings, or just general C-style conventions, recently, it's important to note a few things. First, C doesn't actually have strings as a datatype; it just has arrays of characters. Second, arrays are effectively just pointers to the first item in the array, and C doesn't do anything to enforce the length, which means you're free to access element 11 in a 10-element array, and C will let you. Finally, since "knowing how long a string is" might actually be important, the way C-strings address the problems above is that the last character in the string should be a null terminator. All the string handling functions know that if they see a null terminator, that's the end of the string, and that keeps your code from reading off the end of the array into some other block of memory- or worse, writing to that arbitrary block of memory.
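
If it's been a while since you've touched C, here's a minimal sketch of those points in action (the array contents here are invented purely for illustration):

#include <stdio.h>
#include <string.h>

int main(void) {
    /* "hi" is really a 3-element char array: 'h', 'i', and a null terminator */
    char greeting[3] = {'h', 'i', '\0'};

    /* strlen walks the array until it finds the '\0' */
    printf("%zu\n", strlen(greeting)); /* prints 2 */

    /* Nothing stops an out-of-bounds access like greeting[10];
       it compiles fine, and the behavior is undefined. */
    return 0;
}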

Which brings us to the key difference between strlcpy and strncpy: the first one is "safer" and guarantees that the last character in the output buffer is going to be a null terminator. strncpy makes no such guarantee; if there isn't room in the buffer for a null terminator, it just doesn't put one in.
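
To see that difference in action, here's a minimal sketch (not from Jay's codebase; the buffer size and input string are invented for illustration):

#include <stdio.h>
#include <string.h>

int main(void) {
    char dst[8];
    const char *src = "a string longer than eight bytes";

    /* strncpy copies at most 8 bytes here, and because src doesn't fit,
       it writes no null terminator: dst is left unterminated, and
       printing it would read past the end of the buffer. */
    strncpy(dst, src, sizeof dst);

    /* The usual safe idiom on platforms without strlcpy:
       leave room for the terminator, then add it explicitly. */
    strncpy(dst, src, sizeof dst - 1);
    dst[sizeof dst - 1] = '\0';
    printf("%s\n", dst); /* prints "a strin" */

    return 0;
}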

In other words, with one macro, Jay's predecessor had created hundreds of buffer-overrun vulnerabilities. Jay removed the macro, properly updated the calls to safely copy strings, and the errors went away.

In any case, let's close with this quote, from the "Bugs" section of the strncpy/strcpy manpage, which is just a fun read:

If the destination string of a strcpy() is not large enough, then anything might happen. Overflowing fixed-length string buffers is a favorite cracker technique for taking complete control of the machine. Any time a program reads or copies data into a buffer, the program first needs to check that there's enough space. This may be unnecessary if you can show that overflow is impossible, but be careful: programs can get changed over time, in ways that may make the impossible possible.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 30)

Here’s part thirty of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:


Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Cory DoctorowTalking Attack Surface in the LA Review of Books

In an interview in the LA Review of Books, Technology and Politics Are Inseparable: An Interview with Cory Doctorow, Eliot Peper digs into the backstory and ethos of the Little Brother books in general and Attack Surface in particular:

Attack Surface explores how technology is not the solution to social problems, but a morally neutral accelerant to political action, and that ultimately only politics can solve social problems. How did you learn this lesson? How did it change your worldview? What does it mean for someone who wants to contribute to building a better future?

I started in politics — my parents are activists who started taking me to protests when I was in a stroller. But in 1977, when I was six, we got our first computer (a teletype terminal and acoustic coupler that let me connect to a DEC minicomputer at the university my dad was studying at). I never thought that computers on their own could solve our political problems — but I always thought that computers would play an important role in social and political struggles.

Technology and politics are inseparable. There’s a kind of nerd determinism that denies politics (“Our superior technology makes your inferior laws irrelevant”). But just as pernicious is the inverse, the politicos who insist that technology is irrelevant to struggle, sneering about “clicktivism” and “solutionism.” I have logged innumerable hours wheatpasting posters for demonstrations to telephone poles. I can’t believe that anyone who claims networked computers don’t change how politics work has ever wheatpasted a single handbill.

Cryptography cannot create a stable demimonde that is impregnable to oppressive, illegitimate states — over time, you and your co-dissidents will make a mistake, and the protection of math will vanish. But the fact that it’s not impregnable doesn’t disqualify cryptography from being significant to political struggle. Nothing is impregnable. Crypto is a tool — not a tool for obviating politics, but a tool for doing politics.

Moreover, the existence of crypto — the fact that everyday people can have secrets that can’t be read without their consent — changes the equilibrium in oppressive states. The privilege of the powerful — secrecy — has spread to the general public, which means that leaders who are tempted to take oppressive action have to take account of the possibility that the people they oppress will be able to plan their downfall in ways that they will struggle to detect.

Krebs on SecurityThe Taxman Cometh for ID Theft Victims

The unprecedented volume of unemployment insurance fraud witnessed in 2020 hasn’t abated, although news coverage of the issue has largely been pushed off the front pages by other events. But the ID theft problem is coming to the fore once again: Countless Americans will soon be receiving notices from state regulators saying they owe thousands of dollars in taxes on benefits they never received last year.

One state’s experience offers a window into the potential scope of the problem. Hackers, identity thieves and overseas criminal rings stole over $11 billion in unemployment benefits from California last year, or roughly 10 percent of all such claims the state paid out in 2020, the state’s labor secretary told reporters this week. Another 17 percent of claims — nearly $20 billion more — are suspected fraud.

California’s experience is mirrored at a somewhat smaller scale in dozens of other states, where chronically underfunded and technologically outdated unemployment insurance systems were caught flat-footed by an avalanche of fraudulent claims. The scammers typically use stolen identity data to claim benefits, and then have the funds credited to an online account that they control.

States are required to send out 1099-G forms reporting taxable income by Jan. 31, and under federal law unemployment benefits are considered taxable income. Unfortunately, many states have not reconciled their forms with confirmed incidences of fraudulent unemployment insurance claims, meaning many people are being told they owe a great deal more in taxes than they actually do.

In a notice posted Jan. 28, the U.S. Internal Revenue Service urged taxpayers who receive forms 1099-G for unemployment benefits they didn’t actually get because of ID theft to contact their appropriate state agency and request a corrected form.

But the IRS’s advice ignores two rather inconvenient realities. The first is that the same 1099-G forms which states are sending to their citizens also are reported to the IRS — typically at the same time the notices are mailed to residents. The other is that many state agencies are completely overwhelmed right now.

Karl Fava, a certified public accountant in Michigan, told KrebsOnSecurity two of his clients have received 1099-G forms from Michigan regarding thousands of dollars in unemployment payments that they had neither requested nor received.

Fava said Michigan recently stood up a website where victims of unemployment insurance fraud who’ve received incorrect 1099-Gs can report it, but said he’s not confident the state will issue corrected notices before the April 15 tax filing deadline.

“In both cases, the recipients contacted the state but couldn’t get any help,” Fava said. “We’re not getting a lot of traction in resolving this issue. But the fact that they’ve now created a web page where people can input information about receiving these tells you they have to know how prevalent this is.”

Fava said for now he’s advising his clients who are dealing with this problem to acknowledge the amount of fraudulent income on their federal tax returns, but also to subtract an equal amount on the return and note that the income reported by the state was due to fraud.

“That way, things can be consistent with what the IRS already knows,” Fava said. “Not to acknowledge an issue like this on a federal return is just asking for a notice from the IRS.”

The Taxpayer Advocate Service, an independent office of the U.S. Internal Revenue Service (IRS) that champions taxpayer advocacy issues, said it recently became aware that some taxpayers are receiving 1099-Gs that include reported income due to unemployment insurance identity theft. The office said it is hearing about a lot of such issues in Ohio particularly, but that the problem is happening nationally.

Another perennial (albeit not directly related) identity theft scourge involving taxes each year is refund fraud. Tax refund fraud involves the use of identity information and often stolen or misdirected W-2 forms to electronically file an unauthorized tax return for the purposes of claiming a refund in the name of a taxpayer.

Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.  

The best way to avoid tax refund fraud is to file your taxes as early as possible. This year, that date is Feb. 12. One way the IRS has sought to stem the flow of bogus tax refund applications is to issue the IP PIN, which is a six-digit number assigned to taxpayers that helps prevent the use of their Social Security number on a fraudulent income tax return. Each PIN is good only for the tax year for which it was issued.

Until recently the IRS restricted who could apply for an IP PIN, but the program has since been opened to all taxpayers. To create one, if you haven’t already done so you will need to plant your flag at the IRS by stepping through the agency’s “secure access authentication” process.

Creating an account requires supplying a great deal of personal data; the information that will be requested is listed here.

The signup process requires one to validate ownership of a mobile phone number in one’s name, and it will reject any voice-over-IP-based numbers such as those tied to Skype or Google Voice. If the process fails at this point, the site should offer to send an activation code via postal mail to your address on file.

Once you have an account at the IRS and are logged in, you can request an IP PIN by visiting this link and following the prompts. The site will then display a six-digit PIN that needs to be included on your federal return before it can be accepted. Be sure to print out a copy and save it in a secure place.

Cryptogram Including Hackers in NATO Wargames

This essay makes the point that actual computer hackers would be a useful addition to NATO wargames:

The international information security community is filled with smart people who are not in a military structure, many of whom would be excited to pose as independent actors in any upcoming wargames. Including them would increase the reality of the game and the skills of the soldiers building and training on these networks. Hackers and cyberwar experts would demonstrate how industrial control systems such as power supply for refrigeration and temperature monitoring in vaccine production facilities are critical infrastructure; they’re easy targets and should be among NATO’s priorities at the moment.

Diversity of thought leads to better solutions. We in the information security community strongly support the involvement of acknowledged nonmilitary experts in the development and testing of future cyberwar scenarios. We are confident that independent experts, many of whom see sharing their skills as public service, would view participation in these cybergames as a challenge and an honor.

Cryptogram New iMessage Security Features

Apple has added security features to mitigate the risk of zero-click iMessage attacks.

Apple did not document the changes but Groß said he fiddled around with the newest iOS 14 and found that Apple shipped a “significant refactoring of iMessage processing” that severely cripples the usual ways exploits are chained together for zero-click attacks.

Groß notes that memory-corruption-based zero-click exploits typically require exploitation of multiple vulnerabilities to create exploit chains. In most observed attacks, these could include a memory corruption vulnerability, reachable without user interaction and ideally without triggering any user notifications; a way to break ASLR remotely; a way to turn the vulnerability into remote code execution; and a way to break out of any sandbox, typically by exploiting a separate vulnerability in another operating system component (e.g. a userspace service or the kernel).

Worse Than FailureError'd: The Journey is the Destination

"As if my Uber ride wasn't expensive enough on its own, apparently I have to go sightseeing East for a little while first," writes Pascal.


Mike S. wrote, "Rumors have it that Apple will release a newer, sleeker, better circle next month."


"Oh cool! Future me is playing AdVenture Capitalist on my Google account," Luke writes.


"While everybody is complaining about extreme shipping delays, my great grandmother is getting my Etsy order just in time for New Year's," wrote Gord S.


Kevin O. writes, "Not sure whether this is supposed to be a joke, or if PHP is really crashing its own Wikipedia page."



Cryptogram Police Have Disrupted the Emotet Botnet

A coordinated effort has captured the command-and-control servers of the Emotet botnet:

Emotet establishes a backdoor onto Windows computer systems via automated phishing emails that distribute Word documents compromised with malware. Subjects of emails and documents in Emotet campaigns are regularly altered to provide the best chance of luring victims into opening emails and installing malware — regular themes include invoices, shipping notices and information about COVID-19.

Those behind Emotet lease their army of infected machines out to other cyber criminals as a gateway for additional malware attacks, including remote access tools (RATs) and ransomware.

[…]

A week of action by law enforcement agencies around the world gained control of Emotet’s infrastructure of hundreds of servers around the world and disrupted it from the inside.

Machines infected by Emotet are now directed to infrastructure controlled by law enforcement, meaning cyber criminals can no longer exploit the machines already compromised, and the malware can no longer spread to new targets, something which will cause significant disruption to cyber-criminal operations.

[…]

The Emotet takedown is the result of over two years of coordinated work by law enforcement operations around the world, including the Dutch National Police, Germany’s Federal Crime Police, France’s National Police, the Lithuanian Criminal Police Bureau, the Royal Canadian Mounted Police, the US Federal Bureau of Investigation, the UK’s National Crime Agency, and the National Police of Ukraine.

Worse Than FailureCodeSOD: Revenge of the Stream

It's weird to call Java's streams a "new" feature at this point, but given Java's prevalence in the "enterprise" space, it's not surprising that people are still learning how to incorporate them into their software. We've seen bad uses of streams before, notably thanks to Frenk, and his disciple Grenk.

Well, one of Antonio's other co-workers "learned" their lesson from Frenk and Grenk. Well, they learned a lesson, anyway. That lesson was "don't, under any circumstances, use streams".

Unfortunately, they were so against streams, they also forgot about basic things, like how lists and for-loops work, and created this:

private List<Integer> createListOfDays(String monthAndYear) {
    List<Integer> daysToRet = new ArrayList<>();
    Integer daysInMonth = DateUtils.daysInMonth(monthAndYear);
    for (Integer i = 1; i <= daysInMonth; i++) {
        daysToRet.add(i);
    }
    Set<Integer> dedupeCustomers = new LinkedHashSet<>(daysToRet);
    daysToRet.clear();
    daysToRet.addAll(dedupeCustomers);
    Collections.sort(daysToRet);
    return daysToRet;
}

The goal of this method is that, given a month, it returns a list of every day in that month, from 1…31, or whatever is appropriate. The date-handling is all taken care of by daysInMonth, which makes this the rare date-handling code where the date-handling isn't the WTF.

No, the goal here is simply to populate an array with numbers in order, which the for-loop handles perfectly well. It's there, it's done, it's an entirely acceptable solution. Just return right after the for loop, and there's no problem at all with this code. You could just stop.

But no, we need to dedupeCustomers, which- oh no, they just copied and pasted this code from somewhere else. In this case, to remove duplicates, they use a Set, specifically a LinkedHashSet, which is one of the many set implementations Java offers as a built-in. Unlike a plain HashSet, a LinkedHashSet does retain insertion order; a TreeSet, by contrast, keeps its elements in sorted order.

I bring that ordering thing up because we started with our list in sorted order, with no duplicates. We added it to a set which dutifully preserved that order while removing the duplicates we don't have. Then, we clear the original list, jam the same data back into it, and sort a list that was never out of order in the first place.

This code made Antonio angry, and dealing with Frenk's unholy streams also made him angry, so Antonio decided to not only fix this method, but use it to demonstrate a stream one-liner which wasn't a disaster:

private List<Integer> createListOfDays(String monthAndYear) {
    return IntStream.rangeClosed(1, DateUtils.daysInMonth(monthAndYear))
            .collect(ArrayList::new, List::add, List::addAll);
}

Krebs on SecurityArrest, Seizures Tied to Netwalker Ransomware

U.S. and Bulgarian authorities this week seized the darkweb site used by the NetWalker ransomware cybercrime group to publish data stolen from its victims. In connection with the seizure, a Canadian national suspected of extorting more than $27 million through the spreading of NetWalker was charged in a Florida court.

The victim shaming site maintained by the NetWalker ransomware group, after being seized by authorities this week.

NetWalker is a ransomware-as-a-service crimeware product in which affiliates rent access to the continuously updated malware code in exchange for a percentage of any funds extorted from victims. The crooks behind NetWalker used the now-seized website to publish personal and proprietary data stolen from their prey, as part of a public pressure campaign to convince victims to pay up.

NetWalker has been among the most rapacious ransomware strains, hitting at least 305 victims from 27 countries — the majority in the United States, according to Chainalysis, a company that tracks the flow of virtual currency payments.

“Chainalysis has traced more than $46 million worth of funds in NetWalker ransoms since it first came on the scene in August 2019,” the company said in a blog post detailing its assistance with the investigation. “It picked up steam in mid-2020, growing the average ransom to $65,000 last year, up from $18,800 in 2019.”

Image: Chainalysis

In a statement on the seizure, the Justice Department said the NetWalker ransomware has impacted numerous victims, including companies, municipalities, hospitals, law enforcement, emergency services, school districts, colleges, and universities. For example, the University of California, San Francisco paid $1.14 million last summer in exchange for a digital key needed to unlock files encrypted by the ransomware.

“Attacks have specifically targeted the healthcare sector during the COVID-19 pandemic, taking advantage of the global crisis to extort victims,” the DOJ said.

U.S. prosecutors say one of NetWalker’s top affiliates was Sebastien Vachon-Desjardins of Gatineau, Quebec, Canada. An indictment unsealed today in Florida alleges Vachon-Desjardins obtained at least $27.6 million from the scheme.

The DOJ’s media advisory doesn’t mention the defendant’s age, but a 2015 report on the Gatineau local news website ledroit.com suggests this may not be his first offense. According to the story, a then-27-year-old Sebastien Vachon-Desjardins was sentenced to more than three years in prison for drug trafficking: He was reportedly found in possession of more than 50,000 methamphetamine tablets.

The NetWalker action came on the same day that European authorities announced a coordinated takedown targeting the Emotet crimeware-as-a-service network. Emotet is a pay-per-install botnet that is used by several distinct cybercrime groups to deploy secondary malware — most notably the ransomware strain Ryuk and Trickbot, a powerful banking trojan.

The NetWalker ransomware affiliate program kicked off in March 2020, when the administrator of the crimeware project began recruiting people on the dark web. Like many other ransomware programs, NetWalker does not permit affiliates to infect systems physically located in Russia or in any other countries that are part of the Commonwealth of Independent States (CIS) — which includes most of the nations in the former Soviet Union. This is a prohibition typically made by cybercrime operations that are coordinated out of Russia and/or other CIS nations because it helps minimize the chances that local authorities will investigate their crimes.

The following advertisement (translated into English by cybersecurity firm Intel 471) was posted by the NetWalker affiliate program manager last year to a top cybercrime forum. It illustrates the allure of the ransomware affiliate model, which handles everything from updating the malware to slip past the latest antivirus updates, to leasing space on the dark web where affiliates can interact with victims and negotiate payment. The affiliate, on the other hand, need only focus on finding new victims.

We are recruiting affiliates for network processing and spamming.
We are interested in people whose priority is quality and not quantity.
We prefer candidates who can work with large networks and have their own access to them.
We are going to recruit a limited number of affiliates and then close the openings until they are available again.

We offer you prompt and flexible ransomware, a user-friendly admin panel in Tor, an automated service.

Encryption of shared accesses: if several users are logged in to the target computer, the ransomware will infect their mapped drives, as well as network resources where those users are logged in — shared accesses/NAS etc.

Powershell build. Each build is unique, in that the malware is inside the script – it is not downloaded from the internet. This makes bypassing antivirus protection easier, including Windows Defender (cloud+).

A fully automated blog where the victim’s dumped data is directed. The data is published according to your settings. Instant and automated payouts: initially 20 percent, no less than 16 percent.

Accessibility of a crypting service to avoid AV detections.

The ransomware has been in use since September 2019 and proved to be reliable. The files encrypted with it cannot be decrypted.

Targeting Russia or the CIS is prohibited.

You’ll get all the information about the ransomware as well as terms and conditions after you place an application via PM.

Application form:
1) The field you specialize in.
2) Your experience. What other affiliate programs have you been in and what was your profit?
3) How many accesses [to networks] do you have? When are you ready to start? How many accesses do you plan on monetizing?

Cryptogram Dutch Insider Attack on COVID-19 Data

Insider data theft:

Dutch police have arrested two individuals on Friday for allegedly selling data from the Dutch health ministry’s COVID-19 systems on the criminal underground.

[…]

According to Verlaan, the two suspects worked in DDG call centers, where they had access to official Dutch government COVID-19 systems and databases.

They were working from home:

“Because people are working from home, they can easily take photos of their screens. This is one of the issues when your administrative staff is working from home,” Victor Gevers, Chair of the Dutch Institute for Vulnerability Disclosure, told ZDNet in an interview today.

All of this remote call-center work brings with it additional risks.

Krebs on SecurityInternational Action Targets Emotet Crimeware

Authorities across Europe on Tuesday said they’d seized control over Emotet, a prolific malware strain and cybercrime-as-service operation. Investigators say the action could help quarantine more than a million Microsoft Windows systems currently compromised with malware tied to Emotet infections.

First surfacing in 2014, Emotet began as a banking trojan, but over the years it has evolved into one of the more aggressive platforms for spreading malware that lays the groundwork for ransomware attacks.

In a statement published Wednesday morning on an action dubbed “Operation Ladybird,” the European police agency Europol said the investigation involved authorities in the Netherlands, Germany, United States, the United Kingdom, France, Lithuania, Canada and Ukraine.

“The EMOTET infrastructure essentially acted as a primary door opener for computer systems on a global scale,” Europol said. “Once this unauthorised access was established, these were sold to other top-level criminal groups to deploy further illicit activities such as data theft and extortion through ransomware.”

Experts say Emotet is a pay-per-install botnet that is used by several distinct cybercrime groups to deploy secondary malware — most notably the ransomware strain Ryuk and Trickbot, a powerful banking trojan. It propagates mainly via malicious links and attachments sent through compromised email accounts, blasting out tens of thousands of malware-laced missives daily.

Emotet relies on several hierarchical tiers of control servers that communicate with infected systems. Those controllers coordinate the dissemination of second-stage malware and the theft of passwords and other data, and their distributed nature is designed to make the crimeware infrastructure more difficult to dismantle or commandeer.

In a separate statement on the malware takeover, the Dutch National police said two of the three primary servers were located in the Netherlands.

“A software update is placed on the Dutch central servers for all infected computer systems,” the Dutch authorities wrote. “All infected computer systems will automatically retrieve the update there, after which the Emotet infection will be quarantined. Simultaneous action in all the countries concerned was necessary to be able to effectively dismantle the network and thwart any reconstruction.”

A statement from the German Federal Criminal Police Office about their participation in Operation Ladybird said prosecutors seized 17 servers in Germany that acted as Emotet controllers.

“As part of this investigation, various servers were initially identified in Germany with which the malicious software is distributed and the victim systems are monitored and controlled using encrypted communication,” the German police said.

Sources close to the investigation told KrebsOnSecurity the law enforcement action included the arrest of several suspects in Europe thought to be connected to the crimeware gang. The core group of criminals behind Emotet are widely considered to be operating out of Russia.

A statement by the National Police of Ukraine says two citizens of Ukraine were identified “who ensured the proper functioning of the infrastructure for the spread of the virus and maintained its smooth operation.”

A video released to YouTube by the NPU this morning shows authorities there raiding a residence, seizing cash and computer equipment, and what appear to be numerous large bars made of gold or perhaps silver. The Ukrainian policeman speaking in that video said the crooks behind Emotet have caused more than $2 billion in losses globally. That is almost certainly a very conservative number.

Police in the Netherlands seized huge volumes of data stolen by Emotet infections, including email addresses, usernames and passwords. A tool on the Dutch police website lets users learn if their email address has been compromised by Emotet.

But because Emotet is typically used to install additional malware that gets its hooks deeply into infected systems, cleaning up after it is going to be far more complicated and may require a complete rebuild of compromised computers.

The U.S. Cybersecurity & Infrastructure Security Agency has labeled Emotet “one of the most prevalent ongoing threats” that is difficult to combat because of its “worm-like” features that enable network-wide infections. Hence, a single Emotet infection can often lead to multiple systems on the same network getting compromised.

It is too soon to say how effective this operation has been in fully wresting control over Emotet, but a takedown of this size is a significant action.

In October, Microsoft used trademark law to disrupt the Trickbot botnet. Around the same time, the U.S. Cyber Command also took aim at Trickbot. However, neither of those actions completely dismantled the crimeware network, which remains in operation today.

Roman Hüssy, a Swiss information technology expert who maintains Feodotracker — a site that lists the location of major botnet controllers — told KrebsOnSecurity that prior to January 25, some 98 Emotet control servers were active. The site now lists 20 Emotet controllers online, although it is unclear if any of those remaining servers have been commandeered as part of the quarantine effort.

A current list of Emotet control servers online. Source: Feodotracker.abuse.ch

Further reading: Team Cymru on taking down Emotet

LongNowNils Gilman Wins 12-Year Long Bet About Women in Sports, But It Was Closer Than The Final Score Suggests

Soccer player Carli Lloyd kicks a field goal during practice with the NFL’s Baltimore Ravens in 02019. Photo credit: Heather Khalifa.

Nils Gilman, VP of Programs at the Berggruen Institute, Deputy Editor of Noema magazine, and a Long Now Speaker, has won a 12-Year Long Bet about Women in Sports. In 02008, Gilman challenged a prediction by Thomas R. Leavens, a Chicago attorney, that by the end of 02020, a professional sports team that was part of either the National Football League, the National Basketball Association, Major League Baseball, the National Hockey League, or Major League Soccer would integrate and have a woman as a team member on its regular season roster. 

Leavens presented the following argument in favor of his prediction:

While there may be a rational basis for arranging competitive sporting events by gender when the competition is one-on-one, such as track, skiing, or tennis, that rationale starts to break down with respect to team sports, where gender physical differences may not have the same impact and women may not be viewed as being disadvantaged (or advantaged) by competing against men. Participation by women in all areas of sports has increased, with many entering areas previously occupied only by men. However, to my knowledge, no woman has been selected as a player with a major US professional football, soccer, hockey, basketball, or baseball team. My prediction is based on the belief that by 02020, a woman athlete will emerge as a member of such a team, based not only on her skill but also on the greater available pool of women playing such sports, the incentive of the greater talent compensation available to players on the major sports teams (as opposed to the compensation paid to current women-only sports teams), and the changing overall societal view of the role of gender that will make a team’s decision to add a woman player to a previously all-male team more compelling.

Gilman challenged Leavens’s prediction on the basis of the physical disparities in size, speed, strength and testosterone levels that advantage men in most sports—resulting, by some estimates, in an 8-12% performance gap between the sexes:

In many sports, men and women are able to compete at nearly equal levels. Sports that are primarily about eye-hand coordination, reflexes, and rapid decision making are ripe for gender integration. However, there are many sports for which strength — in terms of explosiveness, endurance, and sheer force — are predominant factors in determining excellence. At the elite, professional level, male athletes in these sports exceed the conceivable strength of all females. This applies to football, soccer, hockey, basketball and baseball. Genetic or chemical modification could conceivably change this, and if such technologies were to become available, they would presumably also be used by male athletes, thus leveling the playing field.

While these leagues made notable progress toward gender integration outside of the field of play, none came close to adding a woman as a team member.¹ Gilman’s $500 in winnings will go to the UC Berkeley History Department, where he completed his Bachelor’s, Master’s, and Doctoral degrees.

At first glance, Gilman appeared to win this bet handily. But it was closer than the final score suggests.

“At the time the bet was made, the categories of ‘men’ and ‘women’ were much more stable than they are now — the bet itself presumes that everyone (in men’s professional sports) will be cis-gendered,” Gilman tells Long Now. “So far, in fact, that has turned out to be true, but at this point I wouldn’t count on that lasting that much longer.” 

There have been significant advances in trans-visibility and trans-rights since 02008. These advances have occurred alongside a societal evolution regarding the fluidity of gender. This is especially evident in Gen Z, the demographic cohort whose members were born in the mid-to-late 01990s. 

According to a 02018 report on gender fluidity, nearly 25% of Gen Zers expect their gender to change throughout their lifetimes. Of those, “45% expect their gender identity to change 2-3 times.”

How would the bet have been resolved if a trans-woman had emerged on the roster of a team in one of the five professional sports leagues?

The bet’s terms made room for this possibility, stating that “a woman, or a person who identifies as a woman” would satisfy the bet. 

Gilman admits that when outlining the bet’s terms, he did not have transgender people in mind, but athletes along the lines of Dennis Rodman, the eccentric basketball player from the 01990s who once wore a wedding dress to promote his autobiography. Regardless, had a trans-woman made the roster of a team, Leavens would have won the bet. 

“I still think trans-phobia will prevent that from happening for quite some time, but I don’t know how long of a bet I’d want to make that for now — certainly not past the end of this decade,” Gilman says. “But at the time I made the original bet, I was so naively cis-centric that I didn’t even contemplate this possibility.”

Long Now’s Long Bets project was founded on the premise that we can improve our long-term thinking by holding ourselves accountable for the predictions we make about the future. By revisiting our forecasts as time goes by, we reveal the subtle mechanics of society’s evolution, and teach ourselves something about what kinds of visions might turn into reality. 

“One of the challenges of thinking long is that one focuses inevitably on a set of things that one thinks is going to change,” Gilman says. “And one makes implicit assumptions about things that aren’t going to change.”

Sometimes, what predictors miss is as illuminating as what they anticipate.

Notes

[1]  The NFL came closest, but that isn’t saying much. In 02013, Lauren Silberman became the first woman to participate in an NFL tryout at a regional combine. In 02019, Women’s World Cup hero Carli Lloyd was approached by several NFL teams after footage of her kicking field goals went viral, but nothing came of it.

Worse Than FailureJust Google It

Based on the glowing recommendations of a friend, Philip accepted a new job in a new city. The new city was a wonderful change of pace. The new job, on the other hand…

The company was a startup, "running lean" and "making the best use of our runway". The CEO was a distant figure, but the CTO, Trey, was a much more hands-on "leader". Trey was part of the interview process, and was the final decision-maker for hiring Philip. On Philip's first day, Trey commented on the fact that Philip specifically hadn't gotten a degree in software engineering, but had twenty years of work experience.

"Honestly, that really put you over the top," Trey said. He grinned the smile of someone who has spent a lot of money engineering the perfect smile, and clapped Philip on the back. "We tend to prefer candidates from, y'know, 'non-traditional' backgrounds. I mean, I have a degree in logic!"

Philip nodded awkwardly, not exactly sure what to make of that. Trey clapped him on the back again and added, "But as you can see, I've made quite the career in tech. I think my background gives me a better perspective than someone who's been too focused on the bits and bytes, you know? Broader. Why, it's certainly helped me make connections. I know people at Google! Maybe I'll introduce you, if you promise not to go running off!" Trey laughed at his own joke.

Or maybe it wasn't a joke. Philip's first few months at the company were mostly meetings. Some of those meetings were about company processes and standards, which Trey had copied from what he heard Google did. Sometimes, the meetings were more like propaganda sessions, focusing on how much this company was like Google, and would one day be as successful as Google, and how lucky you were to get in on the ground floor of the next Google.

A number of the meetings focused on security, and these meetings generally had a darker, more threatening tone. "Our intellectual property," Trey explained, "belongs to our investors. We must protect it at all costs."

Once Philip was properly indoctrinated into the company cult, and fully warned about security concerns, he was given access to the company's private Gitlab, hosted on the Google Cloud Platform. The first thing Philip noticed was that the installed version of Gitlab was from 2015, and there were a number of documented vulnerabilities that had since been patched in newer versions.

So much for security.

Their product was one gigantic Eclipse project. Emphasis on Eclipse. There were no automated builds. If you wanted to build, you used Eclipse. There were no automated deployments. If you wanted to deploy, you used Eclipse. Philip ran some automated analysis tools against the codebase, just to help him get a sense of what he was looking at.

About 45% of the code was duplicated from elsewhere in the codebase. One-letter variable names were apparently the standard. There was no testing code, whatsoever. And every single third-party library was included in source control, creating a git repo that was over 2GB in size.

Philip wasn't given much direction on what he should work on next. "We want to let smart people do smart things," Trey said, "like at Google. You set direction, and keep me in the loop so I can course correct."

Philip decided that the first thing they needed was some automation on the builds. Then they could move up to continuous integration. It'd also be nice to get dependency management cleaned up so instead of tracking all of your dependencies in git, you could have your build/deploy system handle that.

"So," Philip said to Trey after explaining this, "I thought I'd get started on building with Maven. That's pretty much the standard tool for Java projects like this, I'm already familiar with it, the rest of the team is too-"

"I don't think that's what they use at Google," Trey said.

"Well, I mean…"

"No, no, let me make a few calls. I know people at Google."

While Trey went ahead and made his calls, Philip got to work building a Maven build anyway. It turned out to be more complicated than he expected at first, especially because the code depended on some deprecated Google Cloud Platform libraries, which meant Philip had to also modernize some of that as part of the process, which meant starting on writing some unit tests to avoid introducing regressions, and it sorta turned into a yak shaving expedition.

"Hey," Trey said, "at Google they use Bazel, so we should use that."

"Um, I mean, none of us are really familiar, and Maven is really fit for purpo-"

"At Google," Trey repeated, "they use Bazel."

Philip and the rest of the team gave Bazel a fair shot, and spent the better part of a week trying to get their project configured in a way that worked with Bazel or configure Bazel in a way that worked with their project, and the end result was confused, angry and frustrated developers.

"I understand that this is what Google uses, and it might be a great fit for them, but it's really not a great fit for this project or this team," Philip explained to Trey. "I think we can get to a Maven version in just another day or two, and it'll give us all the benefits we want."

"Well, I think we should do what Google does, but we have another problem," Trey said. "It seems that you merged some code to master."

"Uh, yeah, just some unit tests, and the team reviewed them."

"But I didn't review them," Trey said. "So I'm going to have to call a code freeze until I get a chance to understand the changes you made. No more changes until I've done that."

"Do they do code freezes at Google?" Philip wondered.

"Of course they do."

That was the last time Philip saw the CTO in person. Instead of being busy studying the code, however, Trey was busy gladhanding with investors, showing up for photo-ops with trade mags, and scheduling media appearances to talk up how this was the next Google.

In the end, the code freeze lasted five months, which was longer than Philip lasted. He found another job. The startup is still running, but Trey is no longer the CTO. He's now the CEO, and in charge of the entire operation.


Cory DoctorowA free Danish ebook of Little Brother

Science Fiction Cirklen is a Danish collective that works to translate and publish science fiction in Danish; years ago, they published a print edition of the book and have now released a free, Creative Commons-licensed ebook edition in Epub and PDF!

Chaotic IdealismIf only I weren’t autistic

“If only I weren’t autistic, my life would be so much easier.”

You’re right; you could have avoided all the prejudice, all the ill-treatment, all the mismatches with your neurotypical environment, if you weren’t autistic. You could, however, also have avoided them if you lived in a world where autistic people were respected and welcome members of society.

Those of us in the “autism pride” camp are very much aware of how hard it is to be autistic. We acknowledge that it’s a disability. But we also draw a sharp distinction between what’s due to autism itself, and what’s due to the fact that some people mistreat us, many people misunderstand us, and the world itself is not made for us.

I hope you don’t see autism-pride folks as enemies. We want your life to get easier, too. If there were a way autism could be “fixed”, we’d want you to have that choice. But we know it’s not possible–not now, and not even theoretically possible (after early infancy) without literally rewiring the brain’s connections and changing the person’s fundamental identity. Autistic people are autistic to stay.

But we can learn and grow, and we can advocate for ourselves and for younger autistics. We can demand proper education and an environment that is more sensory-friendly. We can demand that people give us warning before things change, and that neurotypical children be educated about the assumptions they make and why those assumptions may not work for communicating with people who are different–whether from a different country or a different neurotype.

I can see you’re frustrated, and it’s legit frustration. But please don’t feel like your life is a dead end. You can’t change the autism, but you can learn new skills, and you can advocate for yourself. When people around you are prejudiced, cruel, or ignorant, you can remind yourself that it’s their failings, not your fault, that cause their behavior; and you can seek out healthier relationships. When you feel there is no way out, remember that there is an entire autism and disability-rights community out there who will back you up and help you fight for your rights, and that we are gradually making things better.

Worse Than FailureCodeSOD: Table This for a Moment

Relational databases have very different constraints on how they structure and store data than most programming languages- tables and relationships don't map neatly to objects. They also have very different ways in which they can be altered. With software, you can just release a new version, but a database requires careful alterations lest you corrupt your data.

There are many ways to try and address that mismatch in our software, but sometimes the mismatch isn't in our software, it's in our brains.

Peter was going through an older database, finally migrating its schema definition into scripts that can be source-controlled. This particular database had never received any care from the database team, and instead all of the data modeling was done by developers. Developers who might not have been quite ready to design a database system from scratch.

The result was this:

mysql> DESC `preferences`;
+--------------+--------------+------+-----+---------+----------------+
| Field        | Type         | Null | Key | Default | Extra          |
+--------------+--------------+------+-----+---------+----------------+
| id           | int(11)      | NO   | PRI | NULL    | auto_increment |
| user_id      | int(11)      | YES  | MUL | NULL    |                |
| time_format  | varchar(255) | YES  |     | 24      |                |
| date_format  | varchar(255) | YES  |     | eu      |                |
| boolean_val1 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val2 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val3 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val4 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val5 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val6 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val7 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val8 | tinyint(1)   | YES  | MUL | 0       |                |
| boolean_val9 | tinyint(1)   | YES  | MUL | 0       |                |
+--------------+--------------+------+-----+---------+----------------+

So, let's start with the boolean_val* fields. As you can see, they're helpfully named in a way that gives you absolutely no idea what they might be used for. Obviously, as this is the preferences table, they should be storing some sort of preference. Well, at least one of them is- boolean_val2 has a mix of ones and zeroes. All the other boolean_val* fields just store zeroes. Is that because they're unused? Or because all the users have left the corresponding preference at its default value? Nobody knows!

With that in mind, let's turn to time_format and date_format. If one were going "by the book" on normal forms, these should probably be foreign keys to a table which lists the options, but that means extra joins and boy, that just might be overkill if you only have a handful of options.

So it's not wrong that they store these as strings in the field. But it's worth noting that time_format has only two allowed values- 12 and 24. And date_format also has only two allowed values - eu and us. So again, not wrong, but with a solid understanding that there are only two possible values for each field, and that big pile of boolean_val* fields which may or may not be used, it's at least ironic that using booleans never occurred to them.

As a bonus, while time_format only has two possible values, the schema permits it to be null. That means there are definitely nulls:

mysql> SELECT DISTINCT(`time_format`) FROM `preferences`;
+-------------+
| time_format |
+-------------+
| 24          |
| 12          |
| NULL        |
+-------------+

There are definitely cases where it is null. Peter hadn't yet confirmed, but it's likely the front-end wasn't expecting nulls, and this accounts for a number of reported bugs in the UI.
